Almost everyone knows by now that IBM Watson won a lot of money on Jeopardy, even though it never beat any human fair and square (see my previous post). But when Watson gave “Toronto” as the answer to a question in the “US Cities” category, it told us a lot more than that Watson can give an incorrect answer. It told us that Watson doesn’t even know how to make sense. Yet if my two-year-old daughter had given me the same answer to the same question in the same category, it would have made complete sense. Why? The reason is hidden in the follow-up question that was never asked: “Which country is Toronto located in?” As “smart” as Watson is, it would have easily given the correct answer: “Canada.” My daughter, on the other hand, would have asked, “Daddy, what does country mean?” Here lies Watson’s problem: both gave the incorrect answer, but Watson knew that Toronto is located in Canada and still offered it as an answer in the US Cities category, which showed that it does not make sense. My daughter, by contrast, would have done her best to give an answer that made complete sense to her.
If you are still not convinced that Watson doesn’t even know how to make sense, just think of the other mistake it made: repeating a wrong answer that Jennings had given just a second earlier. Not only does Watson fail to make sense, it doesn’t understand what other people say either. It beat humans on Jeopardy? Huh? I rest my case.
For an AI to stake any claim to intelligence, it not only has to give the right correct answers but also the right wrong answers. It must make sense with everything else it knows. In that way, the wrong answers are a lot more revealing than the right ones. Isn’t this the least we should ask of any intelligent system?
By this minimal standard, we shouldn’t even be discussing whether a system is intelligent before it can make sense. A simple enough thing to achieve, right? If so, then why hasn’t any AI system been able to consistently make sense? Maybe there is more to this than meets the eye. Think about it: as far as we know, every system that makes sense is intelligent, and every system that does not make sense is not intelligent. We instinctively know this to be true. When something does not make sense, we demand an explanation from other intelligent agents; but when we see some nonsense from today’s AIs, we just laugh.
Is it possible that intelligence is as simple as the ability to always make sense?