Very regularly, you hear self-proclaimed AI experts make statements like “within 5 years, we will have…” followed by some technology that requires semi-intelligent processing (such as self-driving cars, computer-authored novels, or brain-computer uploading). Here is how one should translate such statements:
If AI experts say:
- “within 5 years”: they mean “we already have this technology, it is just not widespread yet.”
- “within 10 years”: they mean “we do not have this technology yet, but we know how to solve the problems that still need solving.”
- “within 25 years”: they mean “we need to overcome numerous problems to get this technology, and we do not know if they are solvable at all, but we are convinced that, in theory, they should be solvable by the very smart young people currently entering the field.”
- “within 50 years”: they mean “maybe this technology will be developed and maybe not, and even if it is, it may take 50 years, or 500, or 5000, and humanity is likely to have destroyed itself before then; but making these promises gets me a lot of media attention, so I make them anyway, and 50 years is long enough that I will have passed away before anyone can tell me I was wrong.”
The moral of the story is this: most self-proclaimed AI experts love to make promises about technology that will be developed, and the less we know about how to solve the problems associated with said technology, the further in the future they place it. You should realize that if a problem has not been solved yet, it is in principle impossible to say when it will be solved, because you can only say that once the problem has already been solved.