Consider a question such as this:
What happens to a glass of water if a person lets go of it?
A human might answer this by asking a series of questions:
- Is the glass on a stable surface?
- Is the glass being held in space?
- Is the glass tempered?
- How high is the glass from the surface?
- Is there liquid in the glass, and if so, how much?
- What is the shape of the glass?
- What sort of surface sits below the glass?
From these questions, one can determine whether the glass will drop and whether it will break. If the answer to Q1 is yes, nothing will happen – the glass will simply sit there. If Q2 is yes, letting go will cause it to fall. If Q3 is yes, there is a chance the glass will survive its impact with the surface below. The severity of that impact depends on the height in Q4 – a drop of one inch will cause no problems, while a drop of four feet will hit much harder. Q5 mostly determines how large the mess will be, although the amount of liquid could also affect how the glass travels as it falls, which in turn depends on the shape in Q6. Finally, Q7 determines the type of impact: a glass that lands on grass or water will fare very differently from one that hits tile or concrete.
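The chain of reasoning above can be sketched as a simple rule-based function. Everything here – the field names, the one-inch threshold, the list of soft surfaces – is an illustrative assumption, not a claim about how such reasoning should actually be encoded; the point is how many distinct facts even this toy version needs.

```python
from dataclasses import dataclass

@dataclass
class Glass:
    # Hypothetical attributes corresponding to Q1-Q7 above.
    on_stable_surface: bool   # Q1
    held_in_space: bool       # Q2
    tempered: bool            # Q3
    height_inches: float      # Q4
    water_ounces: float       # Q5
    surface: str              # Q7, e.g. "grass", "water", "tile", "concrete"

def what_happens(g: Glass) -> str:
    # Q1: resting on a stable surface -> nothing happens.
    if g.on_stable_surface:
        return "nothing: the glass stays put"
    # Q2: if it isn't being held in space, letting go changes nothing.
    if not g.held_in_space:
        return "nothing: the glass was not being held"
    # Q4: a very short drop causes no problems (assumed one-inch cutoff).
    if g.height_inches <= 1:
        return "falls, but the impact is negligible"
    # Q7: soft surfaces absorb the impact.
    if g.surface in ("grass", "water"):
        return "falls and likely survives the soft landing"
    # Q3: tempered glass has a chance of surviving a hard landing.
    if g.tempered:
        return "falls and may survive the hard landing"
    # Q5: the liquid mainly determines the size of the mess.
    mess = "a large mess" if g.water_ounces > 0 else "shards only"
    return f"falls and breaks, leaving {mess}"
```

Even this sketch leaves out Q6 entirely – modelling how shape affects tumbling is far harder than a lookup – which underlines the gap between answering from supplied facts and generating the right questions in the first place.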
Now consider the number of questions involved in determining what happens when the glass of water is let go. A human can infer the outcome after seeking answers to these questions. But how does an AI know what happens? Most AI relies on factual information, and given all the relevant facts, it would not be hard for an AI to come up with an appropriate answer. But questions like this do not contain factual information, nor can it be assumed that the relevant facts exist anywhere. Can the AI’s architecture extrapolate from the original question to formulate questions of its own?
AI is limited by the factual information we give it. An intelligent thermostat may use information about our location, or how active we are near it, but it won’t ask us how warm or cold we feel.