Ethical cars

June 28, 2017

The first completely autonomous machines to invade society at large may well be self-driving cars. By “completely autonomous” I mean that these cars will perform their duties without any interaction with their owners, making their own decisions. Obviously, there is great commercial value in such transportation devices. However, allowing them to take responsibility for their own actions in the real world may involve considerable risk. For how can we be assured that the decisions of these cars align with what we humans find morally acceptable?

A typical scenario that I get confronted with is a self-driving car that has to swerve to avoid hitting a dog, but that, as a consequence, will hit a human. While obviously we would prefer the car to avoid hitting both the dog and the human, if there is no choice but to hit one of them, we would like the car to choose the dog. A potential solution to this scenario would be to outfit the car with ethical rules along the lines of Isaac Asimov’s three laws of robotics, e.g., a rule that says “do not harm humans” given priority over a rule that says “do not harm dogs.”
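To make the idea concrete, such a priority ordering can be sketched in a few lines of code. This is only a toy illustration; the rule names, the action descriptions, and the choose_action function are invented for this example and do not come from any actual self-driving system.

```python
# Toy sketch of Asimov-style prioritized rules (invented example, not a real control system).
# Rules are ordered from most to least important; the car picks the action
# whose most important violation sits lowest in the hierarchy.

RULES = [
    "do not harm humans",        # highest priority
    "do not harm dogs",
    "obey the laws of traffic",  # lowest priority
]

def badness(action):
    """Index of the most important rule the action violates (lower is worse)."""
    broken = [RULES.index(rule) for rule in action["violates"]]
    return min(broken) if broken else len(RULES)

def choose_action(actions):
    """Pick the action with the least severe violation."""
    return max(actions, key=badness)

swerve   = {"name": "swerve",        "violates": {"do not harm dogs"}}
straight = {"name": "keep straight", "violates": {"do not harm humans"}}
print(choose_action([swerve, straight])["name"])  # prints "swerve"
```

Even in this toy form the weakness is visible: every situation the car can encounter must map neatly onto one of the listed rules, which is exactly the problem discussed next.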

However, the specification of such rules is not a trivial matter. For instance, it is logical that a rule would state “you have to obey the laws of traffic.” This would entail that the car is not allowed to drive through a red light. But what if the car stops for a red light, while a traffic warden motions it to continue driving? You may update the rule to state that an exception is made for directions given by traffic wardens. But what if there is no traffic warden, the car has stopped for a red light, and a police car with its siren blaring is coming up from behind and cannot get past unless the car drives forward a bit (through the red light) to pull over to the side? You may refine the rule further to take that situation into account, but does it then cover each and every situation in which the car is allowed to break the rule that it should stop for a red light? Probably not.

The point is that human drivers break the rules of traffic every so often to avoid a problematic situation. You are trying to pass another car that is driving fairly slowly, and suddenly that car speeds up. You can still get past, but you have to exceed the speed limit for a few moments. So that is what you do. Or you are at a crossing, in a deadlock with two or three other cars. One of them has to break the rules and start moving, otherwise they will all be stuck there forever.

The fact is that human drivers improvise all the time. They know the traffic rules, they have been trained to recognize safe and dangerous situations, and they know how to anticipate the behavior of other drivers. And sometimes they bend or break the rules to avoid problems. A self-driving car that cannot improvise is dangerous. However, a consequence of the need for improvisation is that the car should be able to break any rule that we would want to impose on it. The only alternative would be to envision each and every situation in which the car could find itself and specify the exact behavioral rules for it to deal with all those situations. Clearly, that is impossible.

So how do we get a car to behave like a responsible driver without laying down an endless list of rules? The answer is: by training it. First, we let the car drive in a highly realistic simulation, punishing it every time it causes an undesirable situation, and rewarding it when it manages to perform well. A learning structure incorporates the lessons that the car learns, bringing it ever closer to being a model driver. Once it is perfect or almost perfect in the driving simulation, it can be let loose on the road under the guidance of a human, continuing to learn. In the end, it will behave on the road as well as, and probably a lot better than, a good human driver.
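As a rough indication of what punishing and rewarding in a simulation can look like, here is a minimal, bandit-style reinforcement-learning sketch. The states, actions, and reward numbers are made up purely for illustration; real self-driving systems are vastly more complex than this.

```python
import random

# Minimal sketch of reward-driven learning on a toy "driving" task.
# States, actions, and rewards are invented for illustration only.
STATES  = ["clear road", "obstacle ahead"]
ACTIONS = ["keep going", "brake"]
REWARD = {
    ("clear road", "keep going"): +1,       # making progress is rewarded
    ("clear road", "brake"): -1,            # needless braking is mildly punished
    ("obstacle ahead", "keep going"): -10,  # causing a collision is punished hard
    ("obstacle ahead", "brake"): +1,        # avoiding the collision is rewarded
}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned value of each choice
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

for _ in range(5000):  # many simulated situations
    state = random.choice(STATES)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                      # explore a random choice
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit what was learned
    # Punish or reward the choice and update the learned value.
    q[(state, action)] += alpha * (REWARD[(state, action)] - q[(state, action)])

for state in STATES:
    best = max(ACTIONS, key=lambda a: q[(state, a)])
    print(f"{state}: learned to '{best}'")
```

After a few thousand simulated situations, the car has learned to keep going on a clear road and to brake for an obstacle, without either behavior ever having been written down as a rule.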

How will such a car deal with a choice between hitting a human or a dog? It is likely that similar situations will have cropped up during the training process, perhaps not with exactly the same breed of dog and the same human as in the real situation. But because the car has been trained rather than given specific rules, it has the ability to generalize, and it will make the choice that is closest to what the training would have rewarded while avoiding choices that the training would most likely have punished. In other words, it will choose to hit the dog to avoid hitting the human, just as it would likely hit a cat, a moose, a badger, or a duck in order to avoid hitting a human.

It might, however, in a situation where someone pushes a mannequin into the road, hit a dog to avoid hitting the mannequin. Not because it thinks the mannequin is a human, but because the situation of hitting the mannequin more closely resembles hitting a human than the situation of hitting a dog does. If we do not want the car to make that choice, we should ensure that its training regime includes situations in which it has to deal with objects that resemble humans but are not humans. This, however, could lead to a situation in which it chooses to hit a human who is standing perfectly still to avoid hitting a dog. That is the problem with allowing a car to make its own choices based on how it is trained: you can probably always find an exceptional situation in which it does not do what we hoped it would do. The same is true for humans, of course, and in the end the self-driving car will probably still be a much safer driver than any human.

So if one wonders how we can be sure that the ethics of a self-driving car will be acceptable to us humans, the answer is that we can only draw conclusions based on observations of how the car deals with tough situations. We will not be able to open up the car’s brain and examine some kind of ethics module to read how it will deal with situations that come up. Therefore there is no way for us to be “sure.”

We can only draw comfort from the fact that if at some point the car takes a decision that we find doubtful, we can punish it and it is likely to make a different decision when a similar situation comes up again. It will be less stubborn than the average human in that respect.


The digital overlords are here

May 6, 2016

I read a very nice statement by Prof. Pedro Domingos of the University of Washington in the Dutch newspaper NRC of May 4, 2016. In answer to the question “What do you tell people who are afraid that self-learning computers are getting so smart that they will take over the world?” he said: “Computers are stupid and they already took over the world. It would be better if they were smarter.” That is going to be my stock answer to this question from now on.


Artificially stupid ducks

June 16, 2014

“The Eugene Goostman chatbot passed the Turing test. So now we finally have real artificial intelligence.”

That is what was reported recently by many news outlets. Of course, it is horribly wrong.

A chatbot is not intelligent. A chatbot has no understanding of what it says. A chatbot simply delves into a database of previously stored sentences (usually automatically retrieved from the Internet) and loosely links them to what the person testing the bot is typing. It uses non sequiturs instead of actual answers, repeats a person’s statements back at them, and switches from topic to topic without rhyme or reason.
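For illustration, the core of such a chatbot fits in a handful of lines. The patterns and canned replies below are invented; they merely show the reflect-and-deflect mechanism, not the workings of Eugene Goostman itself.

```python
import random
import re

# Minimal sketch of the pattern-matching a simple chatbot does.
# Patterns and canned replies are invented for illustration.
PATTERNS = [
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["Why do you feel {0}?"]),
    (r"\byou\b", ["We were talking about you, not me."]),
]
NON_SEQUITURS = ["Interesting. What is your favorite food?",
                 "Let's change the subject. Do you like sports?"]

def reply(user_input):
    for pattern, responses in PATTERNS:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            # Reflect the user's own words back at them.
            return random.choice(responses).format(*match.groups())
    # Nothing matched: fall back on a non sequitur.
    return random.choice(NON_SEQUITURS)

print(reply("I am worried about exams"))       # reflects the statement back
print(reply("What is the capital of France?")) # no understanding: non sequitur
```

Nothing in this mechanism involves knowing what “exams” or “France” are; it only matches surface patterns of text.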

The authors of Eugene Goostman gave their bot the backstory that it was a 13-year-old boy from Ukraine, whose native language was not English. The fact that he was supposed to be a foreigner was introduced to make members of the jury more forgiving of the irrational answers that the bot provided. The fact that he was supposed to be 13 years old was introduced to make members of the jury more forgiving of the nonsensical switching of topics and the general lack of knowledge and understanding. If that cheap trick is considered acceptable, then we have had artificial intelligence for many years now.

I mean, I have a program wherein you can type any text that you want, and it will never respond. As such, it functions nicely as a replica of an autistic person. It would also be relatively easy to create a program that resembles someone with a severe case of Tourette’s syndrome.

But even if the authors had not concocted this backstory, and were still able to fool 10 out of 30 judges, would we then have to conclude that Eugene Goostman is ‘real’ artificial intelligence? Would Alan Turing conclude that?

The answer is “no”. The Turing Test is one of the most misrepresented tests in the history of science. It is not a litmus test for artificial intelligence. It is merely an illustration of a philosophical stance that Alan Turing took.

The issue is as follows: how can we know whether a computer is intelligent or not? When Turing was alive, this topic was hotly debated amongst computer scientists and philosophers. Some claimed that a computer can never be ‘really intelligent’, as you can examine its programs and databases and (theoretically) derive exactly how it produces its answers. The counter-argument is that you can also open up a person’s brain and (theoretically) derive exactly how that person produces his answers. So what features would you want a computer to have, which allow you to unequivocally state that it is ‘really’ intelligent?

Alan Turing’s answer was: it is not important what is inside the computer; what is important is its behavior. If a computer’s behavior is indistinguishable from that of an intelligent being, we should conclude that it is intelligent. Even if we could open up the computer, look inside, and point out some features that make us say: “You see that? That is how that intelligent behavior is generated!” that would only teach us something about how intelligence comes about, and would not invalidate the computer as an intelligent being (unless we open up the computer and see a human inside who provides all the answers, of course).

The Turing Test is only an illustration of Turing’s philosophical principle. He says that if a computer can converse so well that you cannot distinguish it from a human, then the computer converses as well as a human, and thus converses intelligently. There is no stipulation like ‘conversing for only 5 minutes’ or ‘the computer is allowed to limit the topics’ or ‘the computer should be forgiven for bad English’. Such stipulations would make no sense, because an intelligent conversation should demonstrate an understanding of the world. A chatbot that does not at least encompass a model of the world can never demonstrate such an understanding. Simply reflecting sentences picked off the Internet might fool some uninitiated people for a while (that is not too hard; ELIZA managed to fool Joseph Weizenbaum’s secretary back in the 1960s), but it will fool nobody for longer stretches of time.

The whole point is that Turing wanted to introduce the Duck Test for artificial intelligence: if it looks, swims, and quacks like a duck, you should conclude that it is a duck. We now know that it is not hard to fool a couple of people for 5 minutes into thinking that just maybe that computer over there is actually a human. We can do that thanks to the enormous speed that computers have achieved in processing data, and the huge storage capacity that modern computers have. But while it is, by itself, no easy task to make people think that a computer converses like a 13-year-old Ukrainian boy, succeeding at that task is not the same as succeeding at creating an artificial intelligence.

As written, the Turing Test is not a test of artificial intelligence. Turing’s principle, however, stands: the Duck Test is the only viable way of determining whether a computer is really intelligent. However, we should realize that the duck itself is much bigger and much more complex than Turing’s original illustration sketches.


Talkshow science

April 12, 2011

Today I was interviewed by a group of students on the future of artificial intelligence. I am not an expert on that subject by any means, but this was for a course and as I have some ideas in this area I was happy to help them out.

A large part of the interview was about Ray Kurzweil’s claims that strong artificial intelligence will be achieved within a few decades, and that humans and computers will be integrated into a new transhuman whole. Kurzweil bases this idea on Moore’s Law, which says that the processing capacity of computers doubles every 18 months. By extrapolation, Kurzweil has calculated that computers will surpass human capacities soon enough, and that we will thus see the rise of strong AI and transhuman beings.
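The extrapolation behind such claims is plain doubling arithmetic. As a sketch, with figures that are purely hypothetical (both the machine capacity and the estimate of the brain’s capacity below are stand-in numbers, not measurements):

```python
# Sketch of the doubling arithmetic behind such extrapolations.
# The starting figures are invented stand-ins, used only to show the reasoning.
current_ops_per_sec = 1e13   # assumed capacity of a present-day machine
brain_ops_per_sec   = 1e16   # one of many speculative estimates of the brain
doubling_time_years = 1.5    # "doubles every 18 months"

years = 0.0
capacity = current_ops_per_sec
while capacity < brain_ops_per_sec:
    capacity *= 2
    years += doubling_time_years

print(f"Raw capacity catches up after about {years:.0f} years")
```

The arithmetic itself is trivial; the objection explained below is that raw capacity tells us nothing about how to produce intelligence with it.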

On my main website I claim that within two decades we will see computer-controlled characters in games that are indistinguishable from human-controlled characters. I specifically claim this for games, as game worlds are rather limited. In my view, strong AI that works in the real world will take centuries to achieve, if we are able to achieve it at all. Thus, I seem to be in clear disagreement with Kurzweil.

True enough, I think that Kurzweil’s ideas are science fiction, fantasy, and a whole lot of wishful thinking. It is seriously misguided to believe that strong AI will arise during our lifetimes. Let me explain this with a metaphor.

Suppose that I want you to write a great novel, and I hand you a pencil and a sheet of paper. You tell me that you cannot write a great novel with a pencil and one sheet of paper. So I hand you another pencil and a second sheet of paper. You tell me that isn’t sufficient either. I now hand you two more pencils, a pencil sharpener, and ten more sheets of paper. Still not enough. And after having gone back and forth a couple of times, I have given you a whole box of pens and pencils in a rainbow of colors, several sharpeners, a stack of sheets a meter high, whiteout, some dictionaries, an encyclopaedia, and a bag of assorted writing paraphernalia. Now you have all the hardware that you could possibly need to write a great novel. Can you now write that novel?

Of course not. The hardware is a requirement, but not the most important ingredient for writing a novel. We know that a great novel can be written, because several great novels have been written in the past. But there is no recipe for writing a great novel. Sure, some forms of novels can be written without much creativity, but these will never be truly great.

In the same vein, we know that intelligence can exist because we can observe it all around us. We also have the capability to create programs that perform some specific tasks for which a very rudimentary form of intelligence is needed. And we know that building hardware with the capacity to hold human-like intelligence might be doable. But having the hardware is only the first step for creating intelligence. And frankly, as we do not actually understand what intelligence is and how it comes about, we have no idea what the second step should be. We do not even know which problems we have to solve to create intelligence.

Obviously, a smart man such as Kurzweil, who has studied the subject area, knows all this. I can only assume that he makes his overblown claims because it helps him sell books and works well on the talkshow circuit. It brings him fame and wealth, and he will not live long enough to be proven wrong.

Naturally, Kurzweil has been criticized by many scientists. But these critics do not get much attention from the media. That is not surprising. As a skeptic, you can be sure that I will never be invited to Oprah.


Games and teaching

September 15, 2009

Last year I was invited by Dr. Tomi Pasanen of Helsinki University to teach a one-week course on Artificial Intelligence for Computer Games to his third- and fourth-year students. I went to the university three weeks ago and met with 50 computer science students, whom I taught for a week on decision making, learning, and the design of video game characters. Every morning I lectured for two hours, and every afternoon and early evening the students did practical exercises.

Actually, the whole week consisted of one big practical exercise: the students had to design team AI for a team of seven characters in a role-playing game. The goal of the team was to occupy several important locations in a virtual environment, which would generate points for the team as long as it was able to hold them. Naturally, the team would have to fend off other teams with AI designed by fellow students.

On the last day of the course we held a competition to determine the best team AI. Twenty-two teams were entered, and I saw some really impressive results. Some students had concentrated on team AI, some on individual character AI, and a few had even incorporated some opponent modelling. All in all, the strongest teams were those whose designers had focused much of their effort on individual AI, which they could do because they were familiar with the game being used. However, the top teams needed more than just individual character AI; they had to incorporate strong team AI too.

What struck me was that I had 50 students who were willing to spend the last week of their holidays on a very intensive and sometimes quite tough course. Moreover, almost all students really spent the whole day working on the lectures and the practical. One of the reasons they were willing to do so was probably the subject matter of the course. In addition, the competition element drove them to deliver their best performance.

Naturally, for a course on Artificial Intelligence in Games, using games in the practical is the most logical choice. However, I think that games are also an excellent medium to use in many other courses. Take programming, for instance. Usually in programming courses students have to develop quite boring programs such as simple banking systems or personnel administrations. Why not let them develop a game? Any programming concept that exists can be found in games. But you can also think of ‘higher-level’ subjects, such as designing information systems, human-computer interfacing, or artificial intelligence. Games can easily be used as the subject matter for those courses, too.

I know that students are already pampered quite a bit nowadays, so should we really pamper them even more by letting them work on fun stuff? I say the goal of all that pampering is to motivate them to work, and games are motivating. I saw this in Helsinki, but I have also seen it in other courses: if you give the students a game to work on, not much more stimulation is needed to get them to give their best.

There are colleges and universities that offer programs in which students are educated to become game developers. Such programs attract students who dream of becoming game developers later in life. Unfortunately, not that many game developers are needed, and what I have seen of the game industry is that companies usually want to hire computer scientists rather than game developers.

Of course, the good game development schools make sure that their students are also able to become something other than a game developer. Such schools use games as a medium, not as a goal. Personally, I think that can be quite a smart move. It may lead to motivated students, who have a fun time and still learn a lot.

Later in life these students will discover that learning can be fun by itself. However, when learning has to compete with going out, drinking beer, and staying up late, teachers should go the extra mile to make the learning as entertaining as possible. And as long as the contents are covered, who cares what the medium is?