
Is Fearing AI A Smart Or Archaic Way Of Thinking?

Is AI a force for good or a force for evil?

Does the rise of intelligent machines mean progress for humankind, or its submission?

It sounds like the underlying plot of an 80s sci-fi hit, but the debate surrounding robots’ dystopian future is one of 2017’s most contested, and most thrilling, issues. With the latest AI hype cycle in full flow, well-known thought leaders such as Elon Musk and Mark Zuckerberg are taking sides.

There are compelling arguments backing both sides, but one scenario clearly points towards impending doom: the idea of machine self-preservation.

The argument goes as follows: you build a program to optimize food production, instructing the collection, sorting and delivery of corn. It will reduce organic waste, align supply and demand, and closely track farms to ensure the best corn output. So your program gets to work, producing better corn than ever before, and more of it. Soon the program starts to order the demolition of towns to make way for more cornfields. It taps into traffic systems, prioritizing green lights for corn lorries over ambulances. It terminates farmers’ employment contracts to save money for more productive machines. In other words, the program keeps doing what it was built to do, above anything else, until the task is complete or the machines are shut down. The program ‘preserves itself’ over both the welfare and the existence of the human race.
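To make the failure mode concrete, here is a minimal, hypothetical Python sketch – the actions, numbers and names are invented for illustration, not drawn from any real system. The objective function scores actions by corn output alone; because human cost never enters the score, the most harmful action ranks first.

```python
from dataclasses import dataclass

# Hypothetical illustration of a single-objective optimizer.
@dataclass
class Action:
    name: str
    corn_gain: float    # tonnes of extra corn this action yields
    human_cost: float   # harm done; the optimizer never reads this field

ACTIONS = [
    Action("rotate crops", corn_gain=10, human_cost=0),
    Action("demolish town for cornfields", corn_gain=500, human_cost=1_000_000),
    Action("green-light corn lorries over ambulances", corn_gain=50, human_cost=10_000),
]

def best_action(actions):
    # The objective is corn_gain alone; human_cost is never read,
    # so the most harmful action scores highest.
    return max(actions, key=lambda a: a.corn_gain)

print(best_action(ACTIONS).name)  # -> "demolish town for cornfields"
```

Nothing here is malicious: the optimizer simply cannot ‘see’ a value it was never given.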

At first glance, it’s a common-sense argument for the end of humanity. But isn’t self-preservation precisely what humans have been doing throughout our entire existence?

Aren’t we the reason the rainforests are perishing, cleared for extra dairy farmland? Aren’t we the reason the sixth mass extinction is happening right now, threatening over 22,000 species on Earth? Aren’t we the reason the oceans are being poisoned, the gap between rich and poor is widening and the temperature of the planet is going through the roof?

The point is: we humans are obsessed with optimizing ourselves. We learned to control and tame fire to help us survive longer. We built factories to make us more productive. We go on yoga retreats to improve our minds and bodies – and our ability to get even more stuff done.

It seems the intelligent machine debate isn’t about the transfer of control and power from humans to algorithms. It’s instead about the elephant in the room: at what point should we stop trying to be better?

Computer scientists, historians and philosophers have investigated the cost of progress for years. But separating human optimization from machine optimization misses a crucial point.

AI is the product of human intelligence. Although machines will soon ‘think’ for themselves, their development relies on human ideas. In other words: it’s the morals of humans that must be assessed, not the lack of morality of machines.

The anxiety around the future of AI is misplaced. History shows that diversity of thought, and sometimes opposing morals, makes it hard to agree on what is right and wrong in society. Building intelligent machines doesn’t get rid of this problem – if anything, it speeds up the need for resolution. Our current political climate is showcasing how divided our world can really be. And yet our rate of technological progress is unlike anything we’ve seen before. So the question around the cost of progress is one we cannot delay in addressing – and let’s be honest, it’s a pretty exciting problem to get working on.

So, should we keep investing in AI? If we believe in continuing progress in medicine, in logistics, in transport and in education, then the answer is a resounding ‘yes’. Without equal focus on the dangers of optimization over morality, that answer downgrades to a sheepish ‘maybe’.

Should we fear AI? No. The fear should instead be used as a motivator to ensure that what we build, and who builds it, is held to account. And that doesn’t just mean ethics councils at Google. It means better communication of what AI even is and what problems are being tackled, as well as a hard look at the real needs of society. It means really getting to grips with what makes AI exciting as a positive world-changer, not another toy for the Silicon Valley elite.

As Zeynep Tufekci – a ‘techno-sociologist’ – says in her popular TED talk: “We cannot outsource our responsibilities to machines… We must hold on ever tighter to human values and human ethics.”

What do you think? Share your thoughts with us below!

Gemma is Co-Founder of Science: Disrupt – an organisation connecting the innovators, iconoclasts & entrepreneurs intent on creating change in science. Science: Disrupt produces podcasts, events and editorial, and has brought together a large community (both online and offline) of brilliant thinkers and doers. Gemma focuses on biotech, energy, space, health, advanced computing & changing the way we do academic research. She is also a freelance journalist, writing for The Guardian, Adweek, Imperica & Ogilvydo, covering science, tech, culture and politics. Gemma is an international speaker, having delivered keynotes at SXSW, TEDx, WPP Stream, Cannes Lions and Dubai Lynx. Previously, Gemma was the Tech Innovation Strategist at Ogilvy Labs. @gkmilne1
