We need to prepare for a world with robots

A panel of AI experts and ethicists urge politicians to take action

In January, the World Economic Forum claimed that we’re on the brink of a fourth industrial revolution thanks to advances in genetics, robotics, and artificial intelligence. They predicted a net loss of 5 million jobs to automation by 2020 (7 million jobs lost and 2 million gained). Administrative roles are the most at risk, and women are expected to be hit harder than men because of the job families involved. 5 million jobs by 2020. That’s just four years away. It’s time to take this seriously.

Robots in the workforce

I’m sure you can imagine the sensationalist headlines. Robots are coming for our jobs. It’s the beginning of the end for humans. Skynet becomes reality. These headlines are silly, of course; nobody really thinks we’re four years away from robots rising against humanity. But the concerns put forward by the World Economic Forum are very real. We’re already losing jobs to robots and it’s only going to get worse.

Image © Rethink Robotics

In the Chinese city of Dongguan, the Changying Precision Technology Company replaced 90% of its 650-strong workforce with robots. Now only 60 people attend to the machines, and the company hopes to cut that number further, bringing the human staff down to just 20. The move has paid off: productivity has soared and there are far fewer defective products. The robots also never ask for breaks or wages. A future where robots take over human jobs isn’t really the future at all. It’s already here, and there are no signs of things slowing down. Last year over $1 billion was spent on artificial intelligence (AI) research, more than had previously been spent in the field’s entire history.

A few days ago at the 2016 AAAS Annual Meeting, a panel of AI experts and ethicists spoke about our future with AI and how it could lead to greater inequality. They cited current developments in self-driving cars and autonomous drones as the start of a future where robots take over many of our everyday tasks. It’s hard to imagine the impact robots will have on society when we’re talking about machines that cook for us or paint houses. It’s easier to think about self-driving cars because they’re already becoming familiar to us.

Moshe Vardi, professor of computer science at Rice University, spoke of the legal debates over who is liable when an autonomous car causes an accident. He also spoke about the effect these cars will have on workers. Over 30 million Americans use vehicles at work and, according to Vardi, “we can expect the majority of these jobs will simply disappear.” The panel predicted that driving will be fully automated within 25 years.

Perhaps the most worrying statement from the meeting was that widespread use of intelligent machines could lead to job polarisation. The highest-skilled jobs will be beyond robots, at least for a while; think of brain surgeons. The lowest-skilled jobs will be too expensive to automate, because the machines would cost more than the low-wage workers they replace. The jobs most likely to be automated are therefore those in the middle. Vardi argued that this would lead to great inequality and was disappointed that no presidential candidates were talking about the issue.

Robot ethics

Yale ethicist Wendell Wallach also spoke at the AAAS meeting and addressed the more immediate legal and ethical concerns of AI. “There’s a need for more concerted action to keep technology a good servant and not let it become a dangerous monster,” he said. Wallach argued that the robotics and AI fields should be required to spend some of their research funds on studying the ethical and legal concerns rather than only on developing the technology.

Wallach also brought up autonomous weapons, voicing the opinion that they violate international law. It’s easy to laugh and make jokes about Skynet, but there is a real fear among the experts that we’re dangerously close to letting machines decide whether or not to kill humans. The underlying technology already exists, and it won’t be long before drones and other weapons can choose battlefield targets autonomously.

Bill Gates addressed the broader trend during his third Reddit AMA. When asked what technological advances we would see in the next 30 years, Gates highlighted the development of AI and robots. “Even in the next 10 years problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”

Answering another AMA question, Gates addressed the topic of AI becoming extremely intelligent. “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Bill Gates and Elon Musk aren’t alone. AI experts are keen to see progress in the field but wary of the potential threats. Even Stephen Hawking has chimed in. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all,” said Hawking. A lot of smart people are excited about how AI can improve our lives but wary of giving it too much control. The likely solution is to keep a human in the loop: a computer might make better decisions than we can, but a human carries out the task or gives the go-ahead to the other machines.

Some might argue that war isn’t a place where we should expect to find humanity anyway, but imagine a war where soldiers are robotic and disposable. Imagine a world where richer countries can send autonomous machines against the people of smaller nations. War is already pretty inhuman. It’s likely we’ll see it get worse.

Close to home

Long before AI gets smart enough to rise up against humanity, the real threat is likely to come from how we humans use it. You might think AI is a long way off, but it’s already helping to run your life. Computers analyse your habits and suggest what you should buy next and which route you should take home, and we’re becoming increasingly comfortable letting them help make our decisions.

The algorithms in our lives are a long way from Skynet or HAL 9000, but the human element could be equally disturbing. If we come to rely on computers to make our decisions for us, can we trust the programmers responsible for the algorithms? Can we trust the companies paying the programmers? Are we sure there are no business or political agendas at play?

This problem extends to the bigger issues already raised about military decisions and autonomous combat. AI runs on computers, which means it can be hacked. As long as we depend on machines, there’s always a risk that other humans will take advantage of that dependence.

Now is the time to talk about it. We probably won’t have our own C-3PO within a few years, but the AI revolution is beginning. Our roads will be fully automated before long, and we’re already losing jobs. As scary as that sounds, AI and robotics have the potential to make the world an infinitely better place. Driverless roads will likely be safer roads. Most technology is morally neutral; it’s how we use it that really matters. It makes no sense to be anti-AI or anti-robot. What makes sense is to think carefully about the implications now, before it’s too late.

Now is the time to draw up international laws on the use of robots in combat and to discuss what AI should and shouldn’t have direct control over. And if your job can be done by a moving toaster with sensors, now is the time to learn some new skills.


Main image © Jeff J Mitchell