Killer robots are here already

August 25, 2017

At the International Joint Conference on Artificial Intelligence 2017, an open letter was released, signed by over one hundred top scientists and industrialists in artificial intelligence, calling for a ban on the development of autonomous, artificially intelligent weapons, often referred to as “killer robots.”

This vapid gesture is equivalent to calling for a “ban on the development of knives that can be used to murder people.” The problem is that almost any device that can be taught behavior and is allowed to function autonomously can be employed as a “killer robot.” And all industrial artificial intelligence research advances the intelligence, learning ability, and autonomy of machines.

Elon Musk might be in favor of a ban on the development of killer robots, but his Tesla company works on autonomous self-driving cars. Recent terrorist activities have demonstrated how cars can be used as weapons. You only need to teach a car to hit people instead of avoiding them.

Mustafa Suleyman might want to stop research into killer robots, but at the same time his DeepMind company is a leader in deep-learning research, which aims at allowing machines to learn patterns and respond to them. Such pattern recognizers can easily be placed in smart missiles or weaponized robots to autonomously find viable targets.

Jerome Monceaux signed the letter, while simultaneously heading Aldebaran Robotics, which develops general-purpose robots that can be taught or programmed to do anything, including using weapons and going on a murder spree.

And the list goes on.

The whole point of artificial-intelligence research is to allow machines to do things that humans can do, preferably more efficiently and effectively, and preferably with a high degree of autonomy. Moreover, almost all modern artificial intelligence research is based on machine learning, i.e., teaching machines to behave in a particular way rather than directly programming them. Consequently, almost any artificial intelligence research can be used to teach machines to help people, or to behave as a weapon. This entails that machines that have the ability to operate as killer robots already exist.

Basically, the call for a ban on the development of killer robots amounts to a plea along the lines of: “Look, we are developing all this great technology which will bring fantastic benefits to humanity but please, please, please do not use it to murder people.” It is a call for sanity on the part of governments, the military, and terrorist organizations so that they won’t use the technology for evil. And we all know that the sanity of governments, military, and terrorists varies.

You cannot stop the possibility of (further) developing killer robots without a world-wide halt on artificial intelligence research altogether. I do not think that is what any of the people who signed the letter, or anyone else, really wants. Nor do I think it could be enforced, for that matter.

The best you can do is realize what artificial intelligence can be used for and then build in protections against misuse. For instance, autonomous self-driving cars should be strongly guarded against attempts to reprogram them. This is in the hands of Elon Musk and his competitors. Rather than calling for some kind of ban, they should do their jobs properly. And while I think they are trying to do a proper job, their call for a ban sounds like an attempt to place the responsibility for misuse of their technology in the hands of others.

Any technology can be misused, and usually is. That is no reason not to develop beneficial technology. The benefits of autonomous artificial intelligence can be great. The dangers lurk in the autonomy: technology that allows machines to decide for themselves how to act should be surrounded by stringent safeguards against harmful decisions. But probably the biggest danger is not in the artificially intelligent machines themselves, but in the humans who place unwarranted trust in them to take autonomous decisions.

I applaud the fact that many influential people consider the dangers of artificial intelligence research seriously. The call for a ban, however, sounds like an after-the-fact plea.


AI storytelling

August 6, 2017

Recently I read in a newspaper a list of average predictions of AI researchers on when certain achievements in AI would be reached. There were several predictions for the coming 5 to 10 years, such as an AI winning a game of StarCraft against a human champion (2022), and composing a top-40 song (2027). Only one prediction was made a considerable number of years in the future, namely writing a New York Times bestseller (2050).

I was not surprised about the short-term predictions. These were all straightforward extrapolations of today’s research. For instance, a lot of time is invested in creating StarCraft AI, and we know that a computer already has a huge advantage over humans in its speed; it just needs to get a bit better tactically to defeat human champions. Similarly, computers already write music that is indistinguishable from what humans compose, so I can see a computer writing a top-40 hit today; the main problem I see with writing a song that lands in the top 40 is that the quality of the song is only a very small factor in determining whether it becomes a hit.

Why is writing a bestseller considered to be much more difficult than any of the other AI tasks?

Writing a novel is very different from composing music or creating a painting. When listening to music or looking at a painting, people give their own interpretation to what they hear or see, and the computer can get pretty far by simply recombining elements of music or paintings that it has been trained with. For instance, David Cope’s first attempts at letting a computer compose music amounted to chopping Bach’s sonatas into measures, storing these in a database, and then recombining them so that the next measure chosen was one that, in the original piece, followed a measure ending on the same note as the current one. This resulted in thousands of sonatas which sounded more or less like Bach sonatas. The computer did not need to understand what it was doing. In contrast, when writing the text of a novel an AI needs to understand what it is writing, otherwise the text will not make sense.
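
To give an idea of how little understanding such recombination requires, here is a minimal sketch of the approach in Python. The measure data, note names, and the exact matching rule are simplified placeholders invented for illustration; Cope’s actual system was considerably more sophisticated.

```python
import random

# Minimal sketch of measure recombination in the spirit of David Cope's
# early experiments. All data here is invented: each "measure" is reduced
# to its last note and the last note of the measure that originally
# preceded it in the source piece.
measures = [
    {"id": 1, "last_note": "G", "prev_last_note": None, "notes": "..."},
    {"id": 2, "last_note": "C", "prev_last_note": "G", "notes": "..."},
    {"id": 3, "last_note": "E", "prev_last_note": "C", "notes": "..."},
    {"id": 4, "last_note": "G", "prev_last_note": "E", "notes": "..."},
    {"id": 5, "last_note": "C", "prev_last_note": "G", "notes": "..."},
]

def recombine(measures, length=8):
    """Chain measures so that each chosen measure originally followed
    a measure ending on the note our current measure ends on."""
    piece = [random.choice(measures)]
    for _ in range(length - 1):
        current_last = piece[-1]["last_note"]
        candidates = [m for m in measures if m["prev_last_note"] == current_last]
        if not candidates:
            break  # dead end: no measure originally followed this note
        piece.append(random.choice(candidates))
    return [m["id"] for m in piece]

print(recombine(measures))
```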

You might think that no real understanding is needed to create a text, as we already see human-readable text produced by computers today. In particular, newspaper articles are often written by computers. Examples are weather reports, stock market analyses, and sports reports. However, in these cases the computer is not really producing an original text. The computer simply gets the data that need to be reported (temperatures, market fluctuations, goals scored) and translates the data according to specific rules into a text. No creativity is needed. To produce a novel, the computer must come up with an original, sensible plot that has relevance to humans, and turn that plot into a captivating text. As far as I can see, the computer cannot do that without quite deep understanding of the human condition, human emotions, human language, and the human world. And at the moment, we have no idea how we can give a computer such understanding.

Someone who has heard of “deep learning” might think that it is sufficient to train a computer with existing novels to allow it to produce a new novel. But what are you then really training the computer with? You are training it with strings of words. This might lead to a computer being able to recognize that certain strings of words are likely to be sensible sentences, but not that a string of 40,000 words is a sensible, readable novel. Examining words is not the same as examining the plot, the meaning, or the literary quality of the novel.
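
To make concrete what “training with strings of words” amounts to, here is a toy sketch of a bigram model; the corpus is a made-up placeholder. It can produce word strings that look locally plausible, but nothing in it represents a plot or a meaning.

```python
import random
from collections import defaultdict

# Toy illustration of training on "strings of words": a bigram model.
# The corpus is a placeholder; a real system would use millions of sentences.
corpus = "the boy opened the door and the dog followed the boy into the house".split()

bigrams = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1].append(w2)

def generate(start, n=12):
    """Produce a locally plausible word string; nothing here represents
    a plot, a character, or the meaning of the whole text."""
    words = [start]
    for _ in range(n):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
```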

You might think that we could solve the problem of making the computer create a bestseller by simply letting it produce random texts, and then assessing each text’s quality as a potential bestselling novel. Using an evolutionary approach, this might fairly quickly lead to a novel that scores high on bestselling quality. This approach might actually have merit, if we could give the computer an algorithm that rates a text as a bestseller. We do not have one, as even humans cannot predict whether a novel will be a bestseller.
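
The evolutionary loop itself is trivial to sketch; the missing ingredient is exactly the rating function. In the sketch below, bestseller_score is a made-up placeholder (it merely counts vowels, purely to make the loop runnable), which is the whole problem in a nutshell.

```python
import random
import string

# Sketch of the evolutionary idea. The crucial ingredient, an accurate
# bestseller_score(), does not exist; the stand-in below just rewards
# texts containing more vowels.
def bestseller_score(text):
    return sum(text.count(v) for v in "aeiou")  # placeholder fitness

def random_text(length=200):
    return "".join(random.choice(string.ascii_lowercase + " ") for _ in range(length))

def mutate(text, rate=0.02):
    chars = list(text)
    for i in range(len(chars)):
        if random.random() < rate:
            chars[i] = random.choice(string.ascii_lowercase + " ")
    return "".join(chars)

population = [random_text() for _ in range(50)]
for generation in range(100):
    population.sort(key=bestseller_score, reverse=True)
    survivors = population[:10]                                   # keep the "best" texts
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(bestseller_score(population[0]))
```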

Take, for instance, the first Harry Potter novel, which was rejected by virtually all British publishers, and only printed in a very small run by the last one because his 8-year-old daughter liked the book. Considering Rowling’s unskilled writing and weak plot construction, it is not surprising that the publishers did not see her combination of a mid-20th-century boarding school novel with a childish version of Lord of the Rings as likely to succeed. Expert humans did not assess Harry Potter to be the commercial success that it came to be. So expert humans cannot teach a computer to do it for them.

If expert humans cannot tell a computer how to rate a novel, you might still envision an approach whereby the computer determines by itself an evaluation function for bestselling quality. If you have millions of books, all labeled with their relative sales figures, plus extra data on when and where the books were a success, you may be able to train a computer to come up with an evaluation function that accurately predicts from the contents of a novel whether or not it will be a success. Perhaps that is possible. Perhaps not. Frankly, I think that if it were easy, all those publishers who rejected Harry Potter would have internalized an algorithm like that and would at least have seen some value in the book, but evidently they did not.
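
If such a dataset existed, the training setup would look, in outline, something like the hypothetical sketch below (here using scikit-learn with invented example data); whether any function learnable this way would actually predict sales is precisely what is in doubt.

```python
# Hypothetical sketch: learn a mapping from the text of a book to a sales
# figure. The books and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

books = ["once upon a time ...", "the market crashed ...", "a wizard school ..."]
relative_sales = [0.2, 0.1, 0.9]   # invented labels

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(books, relative_sales)

print(model.predict(["a boy discovers a hidden world ..."]))
```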

If a computer had a much deeper understanding of the world than any human has, it would have insights that humans cannot have. And with such insights, it might be able to predict bestselling quality. I believe that in principle it is possible for a computer to have a much deeper understanding of the world than humans have, but we are far, far away from having such a computer since, as far as I know, nobody has any idea how to give a computer what it needs to gain understanding. The conclusion that I must draw is that it probably is not impossible to get a computer to write a bestseller, but that creating a computer that can do that is not a straightforward extrapolation of the state of the art in AI. Therefore, attaching any year to it is unwarranted.

So where is the year 2050 coming from in the minds of the average AI researcher? I think it represents “About 25 years in the future? Who knows what we can do by then!”

Basically, predicting an AI achievement for 2050 is equivalent to AI researchers saying “we have no idea.”


Ethical cars

June 28, 2017

The first completely autonomous machines to invade society as a whole might very well be self-driving cars. By “completely autonomous” I mean that these cars will perform their duties without any interaction with their owners, making their own decisions. Obviously, there is great commercial value in such transportation devices. However, allowing them to take responsibility for their own actions in the real world may involve considerable risk. For how can we be assured that the decisions of these cars are in alignment with what we humans find morally acceptable?

A typical scenario that I get confronted with is a self-driving car that has to swerve to avoid hitting a dog, but that, as a consequence of swerving, hits a human. While obviously we would prefer the car to avoid hitting both dogs and humans, if there is no choice but to hit one of them, we would like the car to choose the dog. A potential solution to this scenario would be to outfit the car with ethical rules along the lines of Isaac Asimov’s three laws of robotics, e.g., with a rule that says “do not harm humans” given priority over a rule that says “do not harm dogs.”
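
In code, such prioritized rules could look something like the invented sketch below: every available action violates some rule, and the car picks the action whose worst violation is against the lowest-priority rule.

```python
# Invented illustration of prioritized, Asimov-style rules.
RULES = [  # lower index = higher priority
    "do not harm humans",
    "do not harm dogs",
]

def violations(action):
    """Hypothetical mapping from an action to the rules it would break."""
    return {
        "swerve":      {"do not harm humans"},   # swerving hits the human
        "keep course": {"do not harm dogs"},     # keeping course hits the dog
    }[action]

def choose(actions):
    # Prefer the action whose most severe violation is against the
    # lowest-priority rule.
    def worst_priority(action):
        broken = violations(action)
        return min((RULES.index(r) for r in broken), default=len(RULES))
    return max(actions, key=worst_priority)

print(choose(["swerve", "keep course"]))   # -> "keep course": hit the dog, not the human
```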

However, the specification of such rules is not a trivial matter. For instance, it is logical that a rule would state “you have to obey the laws of traffic.” This would entail that the car is not allowed to drive through a red light. But what if the car stops for a red light while a traffic warden motions it to continue driving? You may update the rule to state that an exception is made for directions given by traffic wardens. But what if there is no traffic warden, the car has stopped for a red light, and a police car sounding its siren is coming up from behind and cannot get past unless the car drives forward a bit (through the red light) to pull over to the side? You may update the rule yet again to take that situation into account, but does it then cover each and every situation in which the car is allowed to break the rule that it should stop for a red light? Probably not.
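
In code, this turns into a rule with an ever-growing list of hand-written exceptions, something like the hypothetical sketch below; every predicate is invented, and every newly discovered scenario demands yet another clause.

```python
# Invented illustration of how a simple traffic rule accumulates exceptions.
def may_cross_red_light(situation):
    if situation.get("traffic_warden_waves_on"):
        return True   # exception 1: a traffic warden overrides the light
    if situation.get("emergency_vehicle_behind") and situation.get("can_pull_aside"):
        return True   # exception 2: make way for an emergency vehicle
    # exception 3, 4, 5, ... every newly discovered scenario needs another clause
    return False

print(may_cross_red_light({"emergency_vehicle_behind": True, "can_pull_aside": True}))
```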

The fact is that human drivers every so often break the rules of traffic to avoid a problematic situation. You are trying to pass another car which is driving fairly slowly, and suddenly that car speeds up. You can still get past, but you have to drive faster than the speed limit for a few moments. So that’s what you do. Or you are at a crossing, in a deadlock with two or three other cars. One of them has to break the rules and start moving, otherwise they will all be stuck there forever.

The point is that human drivers improvise all the time. They know the traffic rules, they have been trained to recognize safe and dangerous situations, and they know how to anticipate the behavior of other drivers. And sometimes they bend or break the rules to avoid problems. A self-driving car that cannot improvise is dangerous. However, a consequence of the need for improvisation is that the car should be able to break any rule that we would want to impose on it. The only alternative would be to envision each and every situation in which the car could find itself and specify the exact behavioral rules for dealing with all those situations. Clearly, that is impossible.

So how do we get a car to behave like a responsible driver without laying down an endless list of rules? The answer is: by training it. First, we let the car drive in a highly realistic simulation, punishing it every time it causes an undesirable situation, and rewarding it when it manages to perform well. A learning structure can incorporate the lessons that the car learns, thereby bringing it ever closer to being a model driver. Once it is perfect or almost perfect in the driving simulation, it can be let loose on the road under the guidance of a human, continuing to learn. In the end, it will behave on the road as well as, and probably a lot better than, a good human driver.
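
A bare-bones sketch of such a punishment-and-reward loop is given below, in the form of tabular Q-learning against a hypothetical driving simulator (the reset/step interface and the action names are assumptions); real systems would use far richer learners, but the principle is the same.

```python
import random
from collections import defaultdict

# Bare-bones sketch of the reward/punishment loop described above, as tabular
# Q-learning. The simulator and its reset()/step() interface are hypothetical.
def train(simulator, episodes=10000, alpha=0.1, gamma=0.95, epsilon=0.1):
    q_values = defaultdict(float)            # (state, action) -> learned value
    actions = simulator.actions              # e.g. ["brake", "steer_left", ...]
    for _ in range(episodes):
        state = simulator.reset()
        done = False
        while not done:
            if random.random() < epsilon:    # occasionally try something new
                action = random.choice(actions)
            else:                            # otherwise take the best-known action
                action = max(actions, key=lambda a: q_values[(state, a)])
            next_state, reward, done = simulator.step(action)   # reward < 0 is "punishment"
            best_next = 0.0 if done else max(q_values[(next_state, a)] for a in actions)
            q_values[(state, action)] += alpha * (
                reward + gamma * best_next - q_values[(state, action)]
            )
            state = next_state
    return q_values
```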

How will such a car deal with a choice between hitting a human or a dog? It is likely that similar situations will have cropped up during the training process. Maybe not with exactly the same breed of dog and the same human as in the real situation, but as the car has been trained instead of having been given specific rules, it has the ability to generalize, and it will make the choice that is closest to what the training would have rewarded while avoiding choices that the training most likely would have punished. In other words, it will choose to hit the dog to avoid hitting the human, just as it would likely hit a cat, a moose, a badger, or a duck in order to avoid hitting a human.

It might, however, in a situation where someone pushes a mannequin into the road, hit a dog to avoid hitting the mannequin. Not because it thinks the mannequin is a human, but because the situation of hitting the mannequin more closely resembles hitting a human than the situation of hitting a dog does. If we do not want the car to make that choice, we should ensure that its training regime includes situations in which it has to deal with objects that resemble humans but are not humans. This, however, could lead to a situation in which it chooses to hit a completely motionless human to avoid hitting a dog. That’s the problem with allowing a car to make its own choices based on how it is trained: you can probably always find an exceptional situation in which it does not do what we hoped it would do. The same is true for humans, of course, and in the end the self-driving car will probably still be a much safer driver than any human.

So if one wonders how we can be sure that the ethics of a self-driving car will be acceptable to us humans, the answer is that we can only draw conclusions based on observations of how the car deals with tough situations. We will not be able to open up the car’s brain and examine some kind of ethics module to read how it will deal with situations that come up. Therefore there is no way for us to be “sure.”

We can only draw comfort from the fact that if at some point the car takes a decision that we find doubtful, we can punish it and it is likely to make a different decision when a similar situation comes up again. It will be less stubborn than the average human in that respect.


The digital overlords are here

May 6, 2016

I read a very nice statement by prof. Pedro Domingos of the University of Washington in the Dutch newspaper NRC of May 4, 2016. In answer to the question “What do you tell people who are afraid that self-learning computers are getting so smart that they will take over the world?” he said: “Computers are stupid and they already took over the world. It would be better if they were smarter.” That’s going to be my stock answer to this question from now on.


Artificially stupid ducks

June 16, 2014

“The Eugene Goostman chatbot passed the Turing test. So now we finally have real artificial intelligence.”

That is what was reported recently by many news outlets. Of course, it is horribly wrong.

A chatbot is not intelligent. A chatbot has no understanding of what it says. A chatbot simply delves into a database of previously stored sentences (usually automatically retrieved from the Internet), and loosely links them to what the person who is testing the bot is typing. It uses non sequiturs instead of actual answers, repeats a person’s statements back at him, and switches from topic to topic without rhyme or reason.
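
A minimal sketch of such a chatbot is given below: a handful of stored sentences (invented here), returned on the basis of crude word overlap with whatever the user types. There is no understanding anywhere in it.

```python
# Minimal sketch of the kind of chatbot described above: stored sentences,
# picked by crude word overlap. Nothing in here understands anything.
CANNED_SENTENCES = [
    "I love talking about the weather.",
    "My grandmother makes the best soup.",
    "Have you ever been to Odessa?",
    "That is a very interesting question.",
]

def reply(user_input):
    user_words = set(user_input.lower().split())
    def overlap(sentence):
        return len(user_words & set(sentence.lower().split()))
    best = max(CANNED_SENTENCES, key=overlap)
    # If nothing overlaps, fall back to a non sequitur.
    return best if overlap(best) > 0 else "Have you ever been to Odessa?"

print(reply("Do you like the weather today?"))
```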

The authors of Eugene Goostman gave their bot the backstory that it was a 13-year-old boy from Ukraine whose native language was not English. The fact that he was supposed to be a foreigner was introduced to make members of the jury more forgiving of the irrational answers that the bot provided. The fact that he was supposed to be 13 years old was introduced to make members of the jury more forgiving of the nonsensical switching of topics and general lack of knowledge and understanding. If that cheap trick is considered acceptable, then we have had artificial intelligence for many years now.

I mean, I have a program wherein you can type any text that you want, and it will never respond. As such, it functions nicely as a replica of an autistic person. It would also be relatively easy to create a program that resembles someone with a severe case of Tourette’s.

But even if the authors had not cooked up this backstory and had still been able to fool 10 out of 30 judges, would we then have to conclude that Eugene Goostman is ‘real’ artificial intelligence? Would Alan Turing conclude that?

The answer is “no”. The Turing Test is one of the most misrepresented tests in the history of science. It is not a litmus test for artificial intelligence. It is merely an illustration of a philosophical stance that Alan Turing took.

The issue is as follows: how can we know whether a computer is intelligent or not? When Turing was alive, this topic was hotly debated amongst computer scientists and philosophers. Some claimed that a computer can never be ‘really intelligent’, as you can examine its programs and databases and (theoretically) derive exactly how it produces its answers. The counter-argument is that you can also open up a person’s brain and (theoretically) derive exactly how that person produces his answers. So what features would you want a computer to have that would allow you to unequivocally state that it is ‘really’ intelligent?

Alan Turing’s answer was: it is not important what is inside the computer; what is important is its behavior. If a computer’s behavior is indistinguishable from that of an intelligent being, we should conclude that it is intelligent. Even if we could open up the computer, look inside, and point out some features that make us say: “You see that? That is how that intelligent behavior is generated!” that would only teach us something about how intelligence comes about, and would not invalidate the computer as an intelligent being (unless we open up the computer and see a human inside who provides all the answers, of course).

The Turing Test is only an illustration of Turing’s philosophical principle. He says that if a computer can converse so well that you cannot distinguish it from a human, then the computer converses as well as a human, and thus converses intelligently. There is no stipulation like ‘conversing for only 5 minutes’ or ‘the computer is allowed to limit the topics’ or ‘the computer should be forgiven for bad English’. Such stipulations would make no sense, because an intelligent conversation should demonstrate an understanding of the world. A chatbot that does not at least encompass a model of the world can never demonstrate an understanding. Simply reflecting sentences that you pick off the Internet might fool some uninitiated people for a while (that is not too hard, ELIZA managed to fool Joseph Weizenbaum’s secretary in 1964), but it will fool nobody for longer stretches of time.

The whole point is that Turing wanted to introduce the Duck Test for artificial intelligence: if it looks, swims, and quacks like a duck, you should conclude that it is a duck. We now know that it is not hard to fool a couple of people for 5 minutes into thinking that just maybe that computer over there is actually a human. We can do that due to the enormous speed that computers have achieved in processing data, and the huge storage capacity that modern computers have. But even though it is, by itself, not an easy task to make people think that a computer is conversing like a 13-year-old Ukrainian boy, succeeding at that task is not the same as succeeding at creating an artificial intelligence.

As written, the Turing Test is not a test of artificial intelligence. Turing’s principle, however, stands: the Duck Test is the only viable way of determining whether a computer is really intelligent. However, we should realize that the duck itself is much bigger and much more complex than Turing’s original illustration sketches.


Talkshow science

April 12, 2011

Today I was interviewed by a group of students on the future of artificial intelligence. I am not an expert on that subject by any means, but this was for a course and as I have some ideas in this area I was happy to help them out.

A large part of the interview was about Ray Kurzweil’s claims that strong artificial intelligence will be achieved within a few decades, and that humans and computers will be integrated into a new transhuman whole. Kurzweil bases this idea on Moore’s Law, which says that the processing capacity of computers doubles every 18 months. By extrapolation, Kurzweil has calculated that computers will surpass human capacities soon enough, and that we will thus see the rise of strong AI and transhuman beings.
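
The arithmetic behind such an extrapolation is simple compounding, as the sketch below shows; the starting capacity and the “human-equivalent” target are arbitrary placeholders, which is part of why I find the projection unconvincing.

```python
import math

# The arithmetic behind a Moore's-Law extrapolation: capacity doubling every
# 18 months. The starting point and the "human-equivalent" target are
# arbitrary placeholders, chosen only to show how the calculation works.
current_capacity = 1.0e15        # hypothetical operations per second today
human_equivalent = 1.0e18        # hypothetical target

doublings_needed = math.log2(human_equivalent / current_capacity)
years = doublings_needed * 1.5   # one doubling per 18 months

print(f"{doublings_needed:.1f} doublings, about {years:.0f} years")
```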

On my main website I claim that within two decades we will see computer-controlled characters in games that are indistinguishable from human-controlled characters. I specifically claim this for games, as game worlds are rather limited. In my view, strong AI that works in the real world will take centuries to achieve, if we are able to achieve it at all. Thus, I seem to be in clear disagreement with Kurzweil.

True enough, I think that Kurzweil’s ideas are science fiction, fantasy, and a whole lot of wishful thinking. It is seriously misguided to believe that strong AI will arise during our lifetimes. Let me explain this with a metaphor.

Suppose that I want you to write a great novel, and I hand you a pencil and a sheet of paper. You tell me that you cannot write a great novel with a pencil and one sheet of paper. So I hand you another pencil and a second sheet of paper. You tell me that isn’t sufficient either. I now hand you two more pencils, a pencil sharpener, and ten more sheets of paper. Still not enough. And after having gone back and forth a couple of times, I have given you a whole box of pens and pencils in a rainbow of colors, several sharpeners, a stack of sheets a meter high, whiteout, some dictionaries, an encyclopaedia, and a bag of assorted writing paraphernalia. Now you have all the hardware that you could possibly need to write a great novel. Can you now write that novel?

Of course not. The hardware is a requirement, but not the most important ingredient for writing a novel. We know that a great novel can be written, because several great novels have been written in the past. But there is no recipe for writing a great novel. Sure, some forms of novels can be written without much creativity, but these will never be truly great.

In the same vein, we know that intelligence can exist because we can observe it all around us. We also have the capability to create programs that perform some specific tasks for which a very rudimentary form of intelligence is needed. And we know that building hardware that has the capacity of storing human-like intelligence might be doable. But having the hardware is only the first step for creating intelligence. And frankly, as we do not actually understand what intelligence is and how it comes about, we have no idea what the second step should be. We do not even know which problems we have to solve to create intelligence.

Obviously, a smart man such as Kurzweil who has studied the subject area knows all this. I can only assume that he makes his overblown claims because it helps him sell books and plays well on the talk-show circuit. It brings him fame and wealth, and he will not live long enough to be proven wrong.

Naturally, Kurzweil has been criticized by many scientists. But these critics do not get much attention from the media. That is not surprising. As a skeptic, you can be sure that I will never be invited to Oprah.


Games and teaching

September 15, 2009

Last year I was invited by dr. Tomi Pasanen of Helsinki University to teach a one-week course on Artificial Intelligence for Computer Games to his third- and fourth-year students. I went to the university three weeks ago, and met with 50 computer science students, whom I taught for a week on decision making, learning, and the design of video game characters. Every morning I lectured for two hours, and every afternoon and early evening the students did practical exercises.

Actually, the whole week consisted of one big practical exercise: the students had to design team AI for a team of seven characters in a role-playing game. The goal of the team was to occupy several important spaces in a virtual environment, which would generate points for the team as long as it was able to hold them. Naturally, the team would have to fend off other teams with AI designed by their fellow students.

On the last day of the course we held a competition, in which we tried to determine the best team AI. Twenty-two teams were entered, and I saw some really impressive results. Some students had concentrated on team AI, some on individual character AI, and a few had even incorporated some opponent modelling. All in all, the strongest teams were those who had focused a lot of their effort on individual AI, which they could do because they were familiar with the game being used. However, the top teams needed more than just individual character AI; they had to incorporate strong team AI too.

What struck me was that I had 50 students who were willing to spend the last week of their holidays to enter a very intensive and sometimes quite tough course. Moreover, almost all students really spent the whole day working on the course and the practical. One of the reasons they were willing to do so was, probably, the subject matter of the course. Moreover, the competition element drove them to deliver their best performance.

Naturally, for a course on Artificial Intelligence in Games, using games in a practical is the most logical choice. However, I think that games are also an excellent medium to use in many other courses. Take programming, for instance. Usually in programming courses students have to develop quite boring programs such as simple banking systems or personnel administration systems. Why not let them develop a game? Any programming concept that exists can be found in games. But you can also think of ‘higher-level’ subjects, such as designing information systems, or human-computer interfacing, or artificial intelligence. Games can easily be used as the subject matter for those courses, too.

I know that students are already pampered quite a bit nowadays, so should we really pamper them even more by letting them work on fun stuff? I say the goal of all that pampering is to motivate them to work, and games are motivating. I saw this in Helsinki, but I have also seen it in other courses: if you give the students a game to work on, not much more stimulation is needed to get them to give their best.

There are colleges and universities that offer programs in which students are trained to become game developers. Such programs attract students who dream of becoming game developers later in life. Unfortunately, not that many game developers are needed, and what I have seen of the game industry is that it usually wants to hire computer scientists, and not so much game developers.

Of course, the good game development schools make sure that their students are also able to become something other than a game developer. Such schools use games as a medium, not as a goal. Personally, I think that can be quite a smart move. It may lead to motivated students, who have a fun time and still learn a lot.

Later in life these students will discover that learning can be fun by itself. However, when learning has to compete with going out, drinking beer, and staying up late, teachers should go the extra mile to make the learning as entertaining as possible. And as long as the contents are covered, who cares what the medium is?