Today I was interviewed by a group of students on the future of artificial intelligence. I am not an expert on that subject by any means, but this was for a course and as I have some ideas in this area I was happy to help them out.
A large part of the interview was on Ray Kurzweil’s claims that strong artificial intelligence will be achieved within a few decades, and that humans and computers will be integrated into a new transhuman whole. Kurzweil bases this idea on Moore’s Law, which says that the processing capacity of computers doubles every 18 months. By extrapolation, Kurzweil has calculated that computers will surpass human capacities soon, and that we will thus see the rise of strong AI and transhuman beings.
On my main website I claim that within two decades we will see computer-controlled characters in games that are indistinguishable from human-controlled characters. I specifically claim this for games, as game worlds are rather limited. In my view, strong AI that works in the real world will take centuries to achieve, if we are able to achieve it at all. Thus, I seem to be in clear disagreement with Kurzweil.
True enough, I think that Kurzweil’s ideas are science fiction, fantasy, and a whole lot of wishful thinking. It is seriously misguided to believe that strong AI will arise during our lifetimes. Let me explain this with a metaphor.
Suppose that I want you to write a great novel, and I hand you a pencil and a sheet of paper. You tell me that you cannot write a great novel with a pencil and one sheet of paper. So I hand you another pencil and a second sheet of paper. You tell me that isn’t sufficient either. I now hand you two more pencils, a pencil sharpener, and ten more sheets of paper. Still not enough. And after having gone back and forth a couple of times, I have given you a whole box of pens and pencils in a rainbow of colors, several sharpeners, a stack of sheets a meter high, whiteout, some dictionaries, an encyclopaedia, and a bag of assorted writing paraphernalia. Now you have all the hardware that you could possibly need to write a great novel. Can you now write that novel?
Of course not. The hardware is a requirement, but not the most important ingredient for writing a novel. We know that a great novel can be written, because several great novels have been written in the past. But there is no recipe for writing a great novel. Sure, some forms of novels can be written without much creativity, but these will never be truly great.
In the same vein, we know that intelligence can exist because we can observe it all around us. We also have the capability to create programs that perform some specific tasks for which a very rudimentary form of intelligence is needed. And we know that building hardware with the capacity to store human-like intelligence might be doable. But having the hardware is only the first step in creating intelligence. And frankly, as we do not actually understand what intelligence is and how it comes about, we have no idea what the second step should be. We do not even know which problems we have to solve to create intelligence.
Obviously, a smart man such as Kurzweil, who has studied the subject area, knows all this. I can only assume that he makes his overblown claims because they help him sell books and work well on the talkshow circuit. They bring him fame and wealth, and he will not live long enough to be proven wrong.
Naturally, Kurzweil has been criticized by many scientists, but those critics get little attention from the media. That is not surprising. As a skeptic, you can be sure that I will never be invited to Oprah.