Consider the following statement:
“Every defined intellectual operation will be performed by a computer, faster, better, and more reliably than by a human being.” (Edmund C. Berkeley, 1980)
The computers of today are still high-speed morons. Some are capable of limited thought, but only because they have been instructed to learn. If we write a program to teach a computer the characteristics of a human face found in an image, and we show it a multitude of examples, essentially training it, it will likely be able to track people on a security camera feed. The more examples of faces we give it, the better it will learn. Solving a "Where's Waldo" puzzle is essentially the same problem as facial recognition, because the underlying principles are identical.

Computers are no doubt fast, but given sufficient time, humans are capable of solving most problems. In 1853, William Rutherford (1798-1871) calculated π to 440 digits. About 157 years later, computers were capable of calculating the value of π to 5,000,000,000,000 decimal digits in a mere 90 days. In reality, though, only about 39 digits are needed to make a circle the size of the observable universe accurate to the width of a hydrogen atom, so are we designing such algorithms merely for the sake of computing?
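The "training on examples" idea above can be sketched in miniature. The following is not a real face-detection pipeline; it is a toy nearest-centroid classifier in which made-up three-number feature vectors stand in for image features, purely to show why more labelled examples sharpen the learned model.

```python
import math

def centroid(samples):
    """Average each feature across the training samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(faces, non_faces):
    """'Training' here is just remembering the average of each class."""
    return centroid(faces), centroid(non_faces)

def is_face(sample, model):
    """Classify by whichever class average the sample sits closer to."""
    face_c, other_c = model
    return distance(sample, face_c) < distance(sample, other_c)

# Hypothetical training data: each extra example refines the centroids,
# which is why showing the program more faces improves its accuracy.
faces     = [[0.8, 0.6, 0.7], [0.9, 0.5, 0.8], [0.7, 0.7, 0.6]]
non_faces = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.3], [0.3, 0.2, 0.2]]

model = train(faces, non_faces)
print(is_face([0.85, 0.55, 0.75], model))  # face-like sample → True
print(is_face([0.15, 0.25, 0.20], model))  # background-like sample → False
```

Real systems replace the toy vectors with learned image features and far more sophisticated classifiers, but the principle is the one the text describes: the program generalises from the examples it is shown.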
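The "39 digits" claim admits a quick order-of-magnitude check, assuming a rough diameter of the observable universe of about 8.8 × 10²⁶ m and a hydrogen atom diameter of about 10⁻¹⁰ m (both approximate, commonly quoted figures). The exact count depends on the figures chosen, but it lands in the high thirties, consistent with the claim.

```python
import math

universe_m = 8.8e26   # observable universe diameter in metres (approx.)
hydrogen_m = 1e-10    # hydrogen atom diameter in metres (approx.)

# A circumference pi * universe_m is wrong by at most one hydrogen atom
# when pi's relative error is below hydrogen_m / universe_m, so the
# number of significant digits of pi needed is roughly:
relative_error = hydrogen_m / universe_m
digits_needed = math.ceil(-math.log10(relative_error))
print(digits_needed)
```

So trillions of digits of π serve no measurement purpose; such computations are benchmarks of hardware and algorithms rather than tools for geometry.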