# Thinking considered superfluous (part 1)

> “…even the most sophisticated computer would produce nothing of any value if someone had not programmed it, if someone did not maintain it, and if someone did not look after the machinery that provides the electricity on which it is to run.”
> Hywel Murrell, 1976

Humans think. Computers follow instructions. In 1949, Popular Science ran an article postulating that “a machine will give a definitive answer to any question that can be expressed mathematically”. So no, you couldn’t ask it whether a flower is beautiful, as it doesn’t comprehend this type of question. And you still can’t.

Sometimes the human mind perceives more information in an image than there actually is. It is almost as if missing information is coalesced with existing information. Algorithms don’t have this ability. Consider the license plate shown in Figure 1. It is clear from this 42×74 pixel image that the license plate number is 4GFH133. Deriving an automated algorithm to achieve the same feat is much more difficult.

Fig. 1. The fuzzy, gray, low-resolution license plate

From the grayscale image in Fig. 1, it is still possible for a human to derive the license plate number. A computer will find it more challenging. This is because what the computer sees is a collection of pixels with values, intricate detail that the human eye does not register. Humans use the tonal depth in the grayscale image to interpret the characters. Computers need instructions with fine granularity. But humans are also capable of interpreting the picture as a whole. A computer must find some association between individual pixels to even attempt this.
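To make this concrete, here is a minimal sketch of what the computer actually “sees”: a small grid of intensity values. The patch below is invented for illustration, not taken from the actual image in Fig. 1.

```python
# A hypothetical 4x6 patch of a grayscale plate image. To the computer,
# each pixel is just an intensity from 0 (black) to 255 (white);
# these particular numbers are invented for illustration.
patch = [
    [210, 205, 198,  61,  58, 200],
    [208,  60,  57,  59, 195, 199],
    [207,  62, 193, 190, 188, 197],
    [206, 204, 201,  55,  52, 196],
]

# The machine can report low-level facts about the numbers...
darkest = min(min(row) for row in patch)
lightest = max(max(row) for row in patch)
dark_pixels = sum(v < 128 for row in patch for v in row)

print(darkest, lightest, dark_pixels)  # prints 52 210 8
# ...but nothing in this grid says "part of the digit 4"; that
# association must be built by further algorithms.
```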

For an algorithm to work, one must first derive a grayscale image, extract the license plate, threshold the image (turn it into a binary image), and then extract the characters. For example, the binary image of Fig. 2 shows the characters as blobs in black; each blob is a connected region of pixels labelled 0. An algorithm to extract the characters must then clean them up by removing spurious pixels, and attempt pattern recognition using a priori knowledge of the shapes of characters. Suffice it to say, some characters will be misinterpreted.
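The threshold, blob-extraction, and clean-up steps above can be sketched in a few lines of Python. The toy grayscale strip, the threshold of 128, and the minimum blob size of 3 pixels are all illustrative assumptions, not values from the article’s actual plate image.

```python
# A toy grayscale strip standing in for the extracted plate region.
gray = [
    [220, 220, 220, 220, 220, 220, 220, 220],
    [220,  50,  50, 220,  40,  40,  40, 220],
    [220,  50,  50, 220, 220, 220, 220, 220],
    [220, 220, 220, 220, 220, 220,  30, 220],
    [220, 220, 220, 220, 220, 220, 220, 220],
]

# Step 1: threshold into a binary image. Dark pixels (< 128, the ink)
# become 1; the light background becomes 0.
binary = [[1 if v < 128 else 0 for v in row] for row in gray]

# Step 2: connected-component labelling via flood fill. Each
# 4-connected region of 1s is one candidate character blob.
def label_blobs(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                next_label += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and img[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = next_label
                        stack += [(cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)]
    return labels, next_label

labels, n = label_blobs(binary)

# Step 3: clean-up. Blobs smaller than the (arbitrary) cutoff are
# treated as spurious pixels and erased from the binary image.
sizes = {i: sum(row.count(i) for row in labels) for i in range(1, n + 1)}
for y, row in enumerate(labels):
    for x, lab in enumerate(row):
        if lab and sizes[lab] < 3:
            binary[y][x] = 0
```

On this toy input the labelling finds three blobs; the lone pixel at the bottom right is removed as noise, leaving the two larger blobs as character candidates for the pattern-recognition stage.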

It is challenging to translate the workings of the human mind into an algorithm. There are some things that humans do very well, better than most “artificial intelligence” algorithms; conversely, there are other functions at which humans are rather poor, and which should be given over to computers. Below is a list modified from Murrell [1]:

| Human | Machine |
| --- | --- |
| sensing stimuli | computing response at great speed |
| improvisation | precision |
| flexibility | precise repetition |
| perception of space, depth, pattern | short-term data storage |
| extrapolation and prediction | deductive reasoning |
| translation | complex simultaneous functions |
| inductive reasoning | simple yes/no decisions |
| making complex decisions | |
| homeostasis | |

One of the more useful human characteristics is flexibility. A machine works within pre-programmed constraints, whereas humans can change their roles rapidly and frequently. Of equal importance is the fact that humans act as a sensing device, acquiring stimuli from a wide range of sources simultaneously and bringing this data together to form a complete picture. Indeed, it may be difficult, if not impossible, to design an algorithm to cover the complete range of eventualities which could be covered by a human.

The complexity of thought is often not realized until attempts are made to discover how conclusions are drawn from the information at hand. A case in point is describing the scene within a photograph. A human can describe the scene in detail with minimal processing. A computer requires multiple levels of algorithms to interpret the data and attempt to draw conclusions. In certain applications, such as mail sorting, algorithms are adept enough to decipher most forms of machine and handwritten postal codes, and sort the mail accordingly, at speeds up to 60,000 letters per hour. Conversely, tasks like grading cheese are inherently tied to a human’s ability to judge pressure and olfactory skills. Can a computer determine the ripeness of a melon through smell? Or model the complexities of taste?

[1] Murrell, K.F.H., *Ergonomics: Man in his Working Environment*, Chapman & Hall, 1965.
[2] Harper, R., “Psychological and psycho-physical studies of craftsmanship in dairying”, British J. Psychol. Monogr. Suppl. 28, p. 24, 1952.