Who is Evgeny Morozov?


Morozov's first book, The Net Delusion, was described as "brilliant and courageous" by the New York Times. In his second book, To Save Everything, Click Here, Morozov critiques what he calls "solutionism": the idea that, given the right code, algorithms and robots, technology can solve all of mankind's problems, effectively making life "frictionless" and trouble-free.

Morozov argues that this drive to eradicate imperfection and make everything "efficient" shuts down other avenues of progress and leads ultimately to an algorithm-driven world where Silicon Valley, rather than elected governments, determines the shape of the future.

Many people find some of the technologies you describe as "solutionist" useful: for instance, self-tracking gadgets that encourage people to exercise, monitor their blood pressure, or warn them about their driving habits and reduce their insurance premiums.

The people who start self-tracking are successful and have nothing to lose. If you can self-track and prove you are better than the average person — are healthier or drive more safely — you can get a better deal and claim some benefits. Yet eventually we will reach the point where people who decide not to self-track are assumed to be people who have something to hide.

Then they have no choice but to start self-tracking. Very often the people of Silicon Valley who promote these technologies say we have the choice, we have complete autonomy, and I am saying this is a myth. Very often self-tracking solutions are marketed as ways to address a problem: you can monitor how many calories you consume, or how much electricity you are using. It sounds nice in theory, but I fear a lot of policymakers prefer to use the self-tracking option as an alternative to regulating the food industry or engaging in more structural reforms when it comes to climate change.

All solutions come with costs. Shifting so much responsibility to the individual is a very conservative approach that seeks to preserve the current system instead of reforming it. With self-tracking we end up optimising our behaviour within the existing constraints rather than changing the constraints to begin with.

It positions us as consumers rather than citizens. My fear is that policymakers will increasingly find it much easier, cheaper and sexier to invite the likes of Google to engage in some of this problem-solving than to do something much more ambitious and radical.

They are not bound to make us dumb, but the way they are currently implemented makes that a possibility. We need to know what we want from such devices: do we want them to obviate problem-solving? To make our lives frictionless? Or do we want these new devices to enhance our problem-solving, not to make problems disappear but to assist us in solving them?

A lot of these devices seek to reward or punish in social currency. For instance, people from Silicon Valley say one way to improve voter turnout is to give people points for checking in with their smartphones at the voting booth. It might even work: people will show up because you show them coupons. But it risks recasting politics in a way that would make any further appeal to ethical behaviour impossible. Once you use the language of coupons, you need to talk to people in that language in all walks of life, whether it be picking up litter or turning off the lights.

Do you want people to turn off the lights because they will get a coupon or because they have some ethical, environmental concerns? You don't hear people in Silicon Valley talk about the ethical and moral dimension. They are not concerned with anything like citizenship at all.

A lot of the services they build are useful services. I use Google products all the time. I'm quite okay with people building a service that I pay for with my privacy or money. But as time goes by they aspire to do many things that go beyond their business and their initial set of commercial concerns. We don't treat them with the level of criticism and scrutiny that they deserve; we assume they are in the business of information, which is a benign business, and that they are part of the enlightenment project.

We tend not to think they have shareholders, commercial agendas and are run by people who might not have a very deep appreciation of the human condition and the world around us.

I have a lot of respect for these people as engineers, but they are being asked to take on tasks that go far beyond engineering: tasks that have to do with human and social engineering rather than technical engineering. Those are the kinds of tasks I would prefer were taken on by people who are more well rounded, who know about philosophy and ethics, and who know something about things other than efficiency. Otherwise it will not end well.
