I don’t like AI much. Every second news article in the CS community these days seems to be about AI, and how it will “revolutionize the world”. Will it? Do we need the world revolutionized? We’ve had over a century of far-reaching changes, and frankly not all of them have worked out well (atomic energy, anyone?). Then there are things we tried to do, but ended up procrastinating on. Humans landed on the moon for the first time in 1969, and for the last time in 1972. In over 40 years we haven’t been back. Now we have forsaken the moon for a bigger prize – Mars (and beyond). Sound familiar?

Humans have a habit of believing that incredible inventions will make life better. In the 1950s it was atomic power. A 1955 article from Modern Mechanix, “Atomic Planes Are Closer Than You Think”, discussed the virtues of atomic planes, and predicted “high-payload atom-powered jet flying-boats within the next five to ten years”. It never happened, and for that we can likely be thankful. They were talking about atomic trains and cars as well. Stupid.

Do we really think AI will work out well? Likely not. We are still running programs written in languages nearing 60 years of age. Computing has not reached maturity, and we are talking about trying to replicate the human mind? We can’t even predict the weather properly. The problem with AI is that it lacks the ability to understand, much less answer, the questions we want it to answer. Sure, it can answer things like “What was the capital city of Denmark in 1692?”, but it’s the deeper questions it may never be able to answer. For example:
- “I’m thinking of vacationing in Sweden this year, what do you think?”
- “What do you think makes this game pie taste so good?”
- “Which flower smells the best?”
- “Who’s the funniest comedian?”
Answering Q1 requires the experience of having visited Sweden – not just information retrieved from things like customer reviews. Sure, an AI could tell you what the most popular tourist attractions are, when it is best to visit, and what sort of food you might eat, but it can’t tell you what it experienced… because it never experienced anything. Answering Q2 relies in part on knowing what the ingredients are and how the game pie is cooked. But it ultimately relies on a sense of taste (and smell). Computers can’t taste – salt, sweet, sour, bitter, umami – and as such they could never answer such a question. Q3 has similar problems. Yes, there have been breakthroughs in creating olfactory sensors, but an AI could never answer which flower smells the best, because there is no answer – it is a deeply subjective question.
Finally, Q4. Do computers understand humour? No. Again, it’s subjective. Some people find a comedian funny, others don’t. AIs may be good at answering questions for which there is a definitive answer, or at searching through vast amounts of data to find something – tasks humans would not do as efficiently, or as fast. But answering questions that require subjective opinion? Not likely. Besides which, humans have other non-verbal cues that help observers make decisions. You can tell if someone doesn’t like a game pie just by their body language… even if they say they do. An AI might be good at answering questions like these:
- What is the best route from Montreal to Ottawa?
- Where can I find Danish Esrom cheese in Toronto?
- Where is Waldo?
These questions can be answered because they rely on data being analyzed. Of course, even the second question might be challenging, because it’s not as easy as just finding a cheese shop. Maybe no cheese shop in Toronto carries Danish cheese, or the data on which cheeses a shop stocks may not be available. Nothing is as simple as it seems.
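The route question is exactly the kind of thing computers do well, because it reduces to a classic, well-defined algorithm: shortest path over a weighted graph. Here is a minimal sketch in Python using Dijkstra’s algorithm over a hypothetical toy road network – the city links and distances below are illustrative assumptions, not real routing data:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_km, path) for the cheapest route."""
    # Priority queue of (cost so far, current city, path taken to get here).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, city, path = heapq.heappop(queue)
        if city == goal:
            return cost, path
        if city in visited:
            continue
        visited.add(city)
        # Extend the route to every unvisited neighbouring city.
        for neighbour, km in graph.get(city, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + km, neighbour, path + [neighbour]))
    return None  # no route exists

# Toy road network (distances in km are rough placeholders).
roads = {
    "Montreal": {"Ottawa": 199, "Cornwall": 113},
    "Cornwall": {"Ottawa": 102},
    "Ottawa": {},
}

print(shortest_route(roads, "Montreal", "Ottawa"))
# The direct 199 km link beats Montreal→Cornwall→Ottawa at 215 km.
```

This is the point of contrast with the subjective questions above: “best route” has a precise definition (minimum total distance), so an algorithm can settle it; “best-smelling flower” does not.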
Maybe, just maybe, we should concentrate on doing what we currently do properly before moving on to “shinier things”.