At the International Joint Conference on Artificial Intelligence 2017, an open letter was released, signed by over one hundred top scientists and industrialists in artificial intelligence, calling for a ban on the development of autonomous, artificially intelligent weapons, often referred to as “killer robots.”
This vapid gesture is equivalent to calling for a “ban on the development of knives that can be used to murder people.” The problem is that almost any device that can be taught behavior and is allowed to function autonomously can be employed as a “killer robot.” And all industrial artificial intelligence research advances the intelligence, learning ability, and autonomy of machines.
Elon Musk might be in favor of a ban on the development of killer robots, but his company Tesla works on autonomous self-driving cars. Recent terrorist attacks have demonstrated how cars can be used as weapons. You only need to teach a car to hit people instead of avoiding them.
Mustafa Suleyman might want to stop research into killer robots, but at the same time DeepMind, the company he co-founded, is a leader in deep-learning research, which aims to let machines learn patterns and respond to them. Such pattern recognizers can easily be placed in smart missiles or weaponized robots to find viable targets autonomously.
Jerome Monceaux signed the letter while simultaneously heading Aldebaran Robotics, which develops general-purpose robots that can be taught or programmed to do anything, including wielding weapons and going on a murder spree.
And the list goes on.
The whole point of artificial-intelligence research is to allow machines to do things that humans can do, preferably more efficiently and effectively, and preferably with a high degree of autonomy. Moreover, almost all modern artificial intelligence research is based on machine learning, i.e., teaching machines to behave in a particular way rather than programming them directly. Consequently, almost any artificial intelligence research can be used to teach machines to help people, or to behave as a weapon. It follows that machines capable of operating as killer robots already exist.
Basically, the call for a ban on the development of killer robots amounts to a plea along the lines of: “Look, we are developing all this great technology which will bring fantastic benefits to humanity, but please, please, please do not use it to murder people.” It is a call for sanity on the part of governments, militaries, and terrorist organizations, hoping that they won’t use the technology for evil. And we all know that the sanity of governments, militaries, and terrorists varies.
You cannot stop the possibility of (further) developing killer robots without a worldwide halt to artificial intelligence research altogether. I do not think such a halt is what the signatories of the letter, or anyone else, really want. Nor could it be enforced, for that matter.
The best you can do is realize what artificial intelligence can be used for and then build in protections against misuse. For instance, autonomous self-driving cars should be strongly guarded against attempts to reprogram them. This is in the hands of Elon Musk and his competitors. Rather than calling for some kind of ban, they should do their jobs properly. And while I think they are trying to do a proper job, their call for a ban sounds like an attempt to place the responsibility for misuse of their technology in the hands of others.
Any technology can be misused, and usually is. That is no reason not to develop beneficial technology. The benefits of autonomous artificial intelligence can be great. Its dangers lurk in the autonomy itself: technology that allows machines to operate autonomously, making their own decisions on how to act, should be surrounded by stringent safeguards against those machines making harmful decisions. But probably the biggest danger lies not in the artificially intelligent machines themselves, but in the humans who place unwarranted trust in them to make autonomous decisions.
I applaud the fact that many influential people take the dangers of artificial intelligence research seriously. The call for a ban, however, sounds like an after-the-fact plea.