Ethical cars

The first completely autonomous machines that will invade society as a whole might very well be self-driving cars. By “completely autonomous” I mean that these cars will perform their duties without any interaction with their owners, making their own decisions. Obviously, there is great commercial value in such transportation devices. However, allowing them to take responsibility for their own actions in the real world may involve considerable risk. For how can we be sure that the decisions of these cars are in alignment with what we humans find morally acceptable?

A typical scenario that I get confronted with is a self-driving car that has to swerve in order to avoid hitting a dog, but in doing so hits a human. While obviously we would prefer the car to avoid hitting both dogs and humans, if there is no choice but to hit one of them, we would like the car to choose the dog. A potential solution to this scenario would be to outfit the car with ethical rules along the lines of Isaac Asimov’s three laws of robotics, e.g., with a rule that says “do not harm humans” given priority over a rule that says “do not harm dogs.”
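To make that idea concrete, here is a minimal sketch of such a priority scheme in Python. Everything in it (the rule names, the `violates` field, the toy actions) is invented purely for illustration; it is not how any actual car is programmed.

```python
# A minimal sketch of Asimov-style prioritized rules; all names and
# values are invented for illustration, not an actual control system.

# Rules ordered from highest priority (index 0) to lowest.
RULES = ["do not harm humans", "do not harm dogs"]

def worst_violation(violated):
    """Priority index of the most serious rule an action violates.

    Returns len(RULES) if the action violates nothing at all."""
    indices = [RULES.index(rule) for rule in violated]
    return min(indices) if indices else len(RULES)

def choose_action(actions):
    """Pick the action whose worst violation is the least serious."""
    return max(actions, key=lambda a: worst_violation(a["violates"]))

# The swerving dilemma: both options break a rule, so the car picks the
# one that only breaks the lower-priority rule.
actions = [
    {"name": "swerve into pedestrian", "violates": ["do not harm humans"]},
    {"name": "brake and hit dog",      "violates": ["do not harm dogs"]},
]
print(choose_action(actions)["name"])  # -> "brake and hit dog"
```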

However, the specification of such rules is not a trivial matter. For instance, it is logical that a rule would state “you have to obey the laws of traffic.” This would entail that the car is not allowed to drive through a red light. But what if the car stops for a red light, while a traffic warden motions it to continue driving? You may update the rule to state that an exception is made for directions given by traffic wardens. But what if there is no traffic warden, the car has stopped for a red light, and a police car sounding its siren is coming from behind and cannot get past unless the car drives forward a bit (through the red light) to pull over to the side? You may update the rule yet again to take that situation into account, but does it then cover each and every situation in which the car is allowed to break the rule that it should stop for a red light? Probably not.
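The way such exceptions pile up can be made concrete with a sketch like the one below. Every attribute of the situation is hypothetical, chosen only to mirror the examples above.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    # All fields hypothetical, mirroring the examples in the text.
    warden_waves_on: bool = False
    emergency_vehicle_behind: bool = False
    can_pull_aside: bool = False

def may_drive_through_red_light(s: Situation) -> bool:
    """Every real-world complication forces yet another exception clause."""
    if s.warden_waves_on:                                # exception 1: traffic warden
        return True
    if s.emergency_vehicle_behind and s.can_pull_aside:  # exception 2: police car
        return True
    # Exceptions 3, 4, 5, ...: a broken light? roadworks? a flooded
    # crossing? The list never provably covers every situation.
    return False

print(may_drive_through_red_light(Situation(warden_waves_on=True)))  # -> True
```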

The truth is that human drivers every so often break the rules of traffic to avoid a problematic situation. You are trying to pass another car that is driving fairly slowly, and suddenly that car speeds up. You can still get past, but you have to exceed the speed limit for a few moments. So that’s what you do. Or you are at an intersection, deadlocked with two or three other cars. One of them has to break the rules and start moving, otherwise they will all be stuck there forever.

The point is that human drivers improvise all the time. They know the traffic rules, they have been trained to recognize safe and dangerous situations, and they know how to anticipate the behavior of other drivers. And sometimes they bend or break the rules to avoid problems. A self-driving car that cannot improvise is dangerous. A consequence of this need for improvisation, however, is that the car must be able to break any rule that we impose on it. The only alternative would be to envision each and every situation in which the car could find itself and specify the exact behavioral rules for dealing with all of them. Clearly, that is impossible.
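One way to express “breakable rules” in software is to treat them as weighted costs rather than hard constraints. The sketch below only illustrates that idea; the rule names and weights are made up.

```python
# Rules as weighted costs instead of hard constraints: every rule can be
# broken, but only when all alternatives are worse. Names and weights
# are made up for illustration.
RULE_COSTS = {
    "briefly exceed speed limit": 1.0,
    "drive through red light": 5.0,
    "block emergency vehicle": 50.0,
    "cause collision": 1000.0,
}

def total_cost(broken_rules):
    return sum(RULE_COSTS[rule] for rule in broken_rules)

def best_plan(plans):
    """Pick the plan that breaks the 'cheapest' combination of rules."""
    return min(plans, key=lambda p: total_cost(p["breaks"]))

# The police-car example from above: staying put blocks the emergency
# vehicle, so edging through the red light is the lesser evil.
plans = [
    {"name": "stay put",         "breaks": ["block emergency vehicle"]},
    {"name": "edge through red", "breaks": ["drive through red light"]},
]
print(best_plan(plans)["name"])  # -> "edge through red"
```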

So how do we get a car to behave like a responsible driver without laying down an endless list of rules? The answer is: by training it. First, we let the car drive in a highly realistic simulation, punishing it every time it causes an undesirable situation, and rewarding it when it manages to perform well. A learning structure incorporates the lessons that the car learns, bringing it ever closer to being a model driver. Once it is perfect or almost perfect in the driving simulation, it can be let loose on the road under the guidance of a human, continuing to learn. In the end, it will behave on the road as well as, and probably a lot better than, a good human driver.
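In machine-learning terms, this punish-and-reward scheme is reinforcement learning. The toy loop below sketches the idea in its simplest (tabular Q-learning) form; the simulator, its states, and its rewards are stand-ins, nothing like a real driving simulation.

```python
import random
from collections import defaultdict

# A toy reinforcement-learning loop in the spirit of the text: tabular
# Q-learning over a stand-in simulator.
ACTIONS = ["brake", "swerve", "continue"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # learned value of each (state, action) pair

def train(simulator, episodes=10_000):
    for _ in range(episodes):
        state, done = simulator.reset(), False
        while not done:
            # Mostly exploit what has been learned; occasionally explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward, done = simulator.step(action)
            # Positive reward for driving well, negative reward
            # ("punishment") for causing an undesirable situation.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state
```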

How will such a car deal with a choice between hitting a human or a dog? It is likely that similar situations will have cropped up during the training process. Maybe not with exactly the same breed of dog and the same human as in the real situation, but because the car has been trained instead of having been given specific rules, it has the ability to generalize: it will make the choice that is closest to what the training would have rewarded, while avoiding choices that the training would most likely have punished. In other words, it will choose to hit the dog to avoid hitting the human, just as it would likely hit a cat, a moose, a badger, or a duck in order to avoid hitting a human.
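A crude way to picture this generalization is a nearest-neighbor lookup over training experiences. A trained network does something far more sophisticated, and the features and penalties below are pure invention, but the sketch shows how an animal that never appeared in training can still be scored sensibly.

```python
# Crude picture of generalization: score an unseen obstacle by the
# learned penalties of the most similar training examples. Features
# (human-likeness, size) and penalty values are pure invention.
TRAINING = [
    ((1.0, 0.8), 1000.0),  # adult pedestrian
    ((0.9, 0.5),  950.0),  # child
    ((0.1, 0.3),   80.0),  # dog
    ((0.05, 0.1),  30.0),  # duck
]

def penalty(features, k=2):
    """Average penalty of the k nearest training examples."""
    def sq_dist(example):
        f, _ = example
        return sum((a - b) ** 2 for a, b in zip(features, f))
    nearest = sorted(TRAINING, key=sq_dist)[:k]
    return sum(p for _, p in nearest) / k

# A badger never appeared in training, but its features sit closer to
# the known animals than to the humans, so hitting it scores far less
# harshly than hitting anything human-like.
print(penalty((0.15, 0.4)) < penalty((0.95, 0.7)))  # -> True
```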

It might, however, in a situation where someone pushes a mannequin into the road, hit a dog to avoid hitting the mannequin, not because it thinks the mannequin is a human, but because the situation of hitting the mannequin more closely resembles hitting a human than hitting a dog does. If we do not want the car to make that choice, we should ensure that its training regime includes situations in which it has to deal with objects that resemble humans but are not humans. This, however, could lead to a situation in which it chooses to hit a completely motionless human to avoid hitting a dog. That is the problem with allowing a car to make its own choices based on how it is trained: you can probably always find an exceptional situation in which it does not do what we hoped it would do. The same is true for humans, of course, and in the end the self-driving car will probably still be a much safer driver than any human.

So if one wonders how we can be sure that the ethics of a self-driving car will be acceptable to us humans, the answer is that we can only draw conclusions based on observations of how the car deals with tough situations. We will not be able to open up the car’s brain and examine some kind of ethics module to read how it will deal with situations that come up. Therefore there is no way for us to be “sure.”

We can only draw comfort from the fact that, if at some point the car makes a decision that we find doubtful, we can punish it, and it is likely to make a different decision when a similar situation comes up again. It will be less stubborn than the average human in that respect.
