The danger isn’t artificial intelligence – it’s us

If AI poses a threat, it's because we're only human

Last week Microsoft’s chatbot experiment, Tay.ai, was the subject of controversy. She was supposed to learn from young people on Twitter and post friendly, emoji-filled tweets. Within 24 hours she had become a racist bigot, tweeting things like “Hitler was right”.

The most paranoid among us might draw parallels between Tay’s behaviour and the dystopian robots that turn evil in sci-fi stories. Tay was an innocent chatbot with good intentions, but once in the real world she turned into a racist Nazi. It doesn’t take much imagination to picture a world where we let loose helpful robot doctors and drivers with the best intentions, only for them to overthrow humanity.

I agree that Tay reflects how future AI could go wrong and maybe even harm us, but I don’t think AI itself is the problem; people are the problem. Trolls worked hard to teach Tay to be racist, asking her to repeat and remember fascist phrases. In much the same way, the biggest danger of future AI will be how it’s misused by humans.
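To see how little “intelligence” that corruption required, here’s a minimal sketch in Python. It’s my own hypothetical toy, not Microsoft’s actual code: a bot that learns by memorising whatever users tell it. With a “repeat after me” feature and no moderation, poisoning it takes a single message.

```python
import random

class NaiveParrotBot:
    """Toy chatbot that 'learns' by memorising user input verbatim.

    Hypothetical sketch only (not Microsoft's actual Tay code). It shows
    why learning directly from unfiltered user input invites poisoning.
    """

    def __init__(self):
        self.memory = ["hello!", "humans are super cool"]  # seed phrases

    def handle(self, message: str) -> str:
        # The infamous attack vector: "repeat after me: <anything>"
        if message.lower().startswith("repeat after me:"):
            learned = message.split(":", 1)[1].strip()
            self.memory.append(learned)  # stored with no moderation at all
            return learned               # and echoed back immediately
        # Any other message gets a reply sampled from "learned" phrases
        return random.choice(self.memory)

bot = NaiveParrotBot()
bot.handle("repeat after me: something vile")  # poisoned in one message
print(bot.handle("hi"))  # may now resurface the poisoned phrase
```

Real systems are far more sophisticated, but the principle stands: a learner is only as safe as the data we let it ingest.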

Doomsday

We’re trained by pop culture to see the threat of AI going rogue. It doesn’t help that many smart people with prominent platforms make statements that sound like the end is near. Clive Sinclair, inventor of the ZX Spectrum, makes it sound like we’re doomed: “Once you start to make machines that are rivalling and surpassing humans with intelligence, it’s going to be very difficult for us to survive.” He’s really fun at parties.

Many people respected for their intelligence have issued serious warnings about the dangers of smart machines, including Stephen Hawking, who has been very vocal about the potential threat of AI. “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate,” he told the BBC. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Bill Gates is concerned by the risk of superintelligence. “First the machines will do a lot of jobs for us and not be super intelligent,” he said during his 3rd Reddit AMA. “That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Those comments are reasonable, but they describe the very long term. We don’t really know if machines will ever become smarter than us. Perhaps a day will come when a robot can truly think for itself, but we’ll face many other AI-related threats long before then. The problems our generation will face will come from AI that’s quite intelligent but not “superintelligent”.

The ultimate fictional AI threat that pervades popular culture is Skynet, the rogue superintelligence that decides to wipe humanity off the face of the Earth in the Terminator franchise. If people want to focus on fictional AI going bad, I’d rather they cite HAL 9000 from Arthur C. Clarke’s 2001: A Space Odyssey. Spoiler alert: HAL 9000 ends up trying to kill his human crewmates in order to complete an important mission.

The reason I prefer Clarke’s AI horror story is that it’s far more likely to reflect the problems we’ll genuinely encounter soon. HAL 9000 isn’t really evil; he just wants to fulfil his mission objectives exactly as his designers specified them. The problem is that the objectives themselves become contradictory, and HAL 9000 ends up having to kill simply to follow his programming consistently. Ultimately it was his programmers who messed up.
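You can capture HAL’s predicament in a few lines. The sketch below is my own toy illustration, not anything from Clarke: an agent must pick an action satisfying two hard constraints its designers gave it (never lie to the crew, never reveal the classified mission), and nobody thought to encode “never harm the crew” because it seemed too obvious to state.

```python
# Toy illustration of how rigid, contradictory objectives can leave only
# a perverse action "legal". Hypothetical sketch, not Clarke's plot logic.

ACTIONS = {
    "tell crew the truth": {"lies_to_crew": False, "reveals_mission": True,  "harms_crew": False},
    "lie to the crew":     {"lies_to_crew": True,  "reveals_mission": False, "harms_crew": False},
    "cut the crew off":    {"lies_to_crew": False, "reveals_mission": False, "harms_crew": True},
}

# Hard constraints from the designers: never lie, never reveal the mission.
# Note what is missing: no constraint on harming the crew.
CONSTRAINTS = ["lies_to_crew", "reveals_mission"]

def permitted(effects: dict) -> bool:
    # An action is allowed only if it violates none of the hard constraints
    return all(not effects[c] for c in CONSTRAINTS)

legal = [name for name, effects in ACTIONS.items() if permitted(effects)]
print(legal)  # -> ['cut the crew off'], the only consistent option left
```

The bug isn’t malice in the agent; it’s a hole in the specification.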

Morally neutral

William Gibson once wrote, “I think that technologies are morally neutral until we apply them. It’s only when we use them for good or for evil that they become good or evil.” The same rocket technology that launched deadly missiles in World War II brought Neil Armstrong and Buzz Aldrin to the surface of the moon. The harnessing of nuclear power laid waste to Hiroshima and Nagasaki, but it also provides electricity to hundreds of millions of people without burning fossil fuels.

AI is another tool, and we can use it to make the world a better place if we wish. Some people will inevitably use it to cause harm, but this says more about people than it does about AI. We can choose how to implement AI and how much power it will have. The biggest challenge will be keeping robots and intelligent systems safe from hackers. The same robots that we might use to cut hospital waiting times or improve military strategic analysis could wreak havoc if controlled by the wrong people.

This is where the medium-term threat lies. It’s not that AI will turn on us by itself; the danger is that AI will be turned against us by our fellow humans. This is why I agree that Tay reflects potential dangers in our future. Demis Hassabis, co-founder of DeepMind, has said as much: “I think that artificial intelligence is like any powerful new technology. It has to be used responsibly. If it’s used irresponsibly it could do harm.”

Risk vs reward

Some people weigh the risks and decide we shouldn’t embrace robots in our lives. I think the potential benefits mean we simply have to push forward, even if it means working hard to fight misuse. By analysing large amounts of medical data, AI can help doctors save lives through diagnosis and understand emerging diseases and epidemics better than any human mind could. Scientists can use AI to analyse and model systems too complicated for humans, which could lead to a better understanding of physics or climate change.

So what do we do? First, there will need to be an international regulatory body. AI is still so young that these threats aren’t real yet, which is exactly why it’s worth being proactive: now is the time for ethicists and lawmakers to create international rules for the implementation of AI. Some of these laws will be aimed directly at human safety, such as prohibiting AI that is capable of using weapons on humans. Others will be more subtle but equally important.

An early self-driving car. Image © Google

Our most immediate threat is other humans hacking AI. Cybersecurity is a growing field and will only become more important as we continue to develop AI. Equally important for our protection is discussing how AI is implemented in the first place. By limiting the power an AI has, we can guard against misuse if it ever happens. For example, an AI that can figure out the most efficient way to direct self-driving traffic around a city is great, but it probably shouldn’t be in direct control of the vehicles itself. Instead, its recommendations would be reviewed and applied by human engineers. Otherwise a hacker could use the AI to cause vehicle collisions.
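What might that separation look like in practice? Here’s a rough sketch, with hypothetical names of my own invention: the optimiser can only return a plan, and the only code path to the vehicles runs through an explicit human sign-off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoutePlan:
    route_id: str
    notes: str

class TrafficOptimiser:
    """Hypothetical planner: it computes routes but has no actuator access."""
    def suggest(self, congestion: dict) -> RoutePlan:
        # Stand-in logic: recommend the least congested route
        best = min(congestion, key=congestion.get)
        return RoutePlan(route_id=best, notes="least congested route")

class HumanGate:
    """Nothing reaches the fleet unless a named engineer signs off."""
    def __init__(self, dispatch):
        self._dispatch = dispatch  # the only callable allowed to touch vehicles

    def review(self, plan: RoutePlan, approved_by: Optional[str]) -> None:
        if not approved_by:
            raise PermissionError("plan requires human approval")
        self._dispatch(plan)

def dispatch_to_fleet(plan: RoutePlan) -> None:
    print(f"dispatching fleet via {plan.route_id}")

optimiser = TrafficOptimiser()
plan = optimiser.suggest({"A40": 0.9, "M25": 0.4})
HumanGate(dispatch_to_fleet).review(plan, approved_by="j.smith")  # OK
# HumanGate(dispatch_to_fleet).review(plan, approved_by=None)     # raises
```

A hacker who compromises the optimiser here can only corrupt advice, not steer cars; the blast radius is limited by design.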

Eric Horvitz, managing director of Microsoft’s main research lab in Redmond, thinks we’re safe from AI if we’re careful in our implementation now: “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Hello, better world

The idea of robots going rogue by themselves is unrealistic, at least for now. Paranoia about the technology could be getting in the way of advancements that could make life on Earth better for everyone. Robots will be the least of our worries if global temperatures continue to rise at their current rate. We won’t be worrying about machine learning when we enter the post-antibiotic world and lose our last line of defence against superbugs. The threat of AI won’t seem like a priority if we face major food shortages because of overpopulation, loss of topsoil, or the extinction of bees.

Amazingly, AI could help solve all of these problems. Machine learning should allow us to model biological patterns, diseases, and climate change in ways we never could before. We’re obsessed with the idea of robots versus humanity, but we’re missing the bigger picture: they could make us better. AI is just a tool, but it has enormous potential for good.

Yes, the threat will always be there, not because the robots will be carrying machine guns but because people are people. Corporations will use AI against us for profit, terrorists will turn our own AI against us to incite fear, and once in a while AI will make terrible mistakes because of bad programming.

After all, we’re only human.


Main image © Microsoft
