Are we computing for computing's sake?

We have come a long way since the primitive systems of over half a century ago, and computing will likely evolve further, but we are at the point where it is probably necessary to re-evaluate the way we do things. Ask the hard questions. Has OO really contributed anything? Why is there not one “uber-language” to rule them all? Are Fortran and Cobol still valid today? (And yes, they are!) Why do we calculate numbers to such unfathomable depths? Are we now designing algorithms for the sake of designing algorithms? Everything seems to be getting automated – is it even necessary?

Take Euler’s number, e, which has a value of roughly 2.71828. There are websites that show it to 10,000 decimal places, and since its decimal expansion never terminates it could technically go on forever. Likely someone has calculated it into the trillions of digits. But is any of this useful? No. It’s about as useful as baking the world’s largest pizza. Pi is even better, probably because it is more commonly used. In the article “How Much Pi Do You Need?”, the author notes that NASA uses only 15-16 significant digits to keep the Space Station afloat, and that the fundamental constants of the universe require only 32. This year Google announced that one of its developer advocates, Emma Haruka Iwao, had calculated π to 31.4 trillion digits. But what’s the point? These sorts of calculations seem to have become some kind of computability race. Would it not be better to have people design algorithms for systems that could automatically clean up all the crappy plastic we throw in the oceans?
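
To make this concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not taken from the cited article). The Earth-Sun distance below is an assumed round figure; the point is simply how little a 16-digit π differs from a much longer published value at astronomical scale.

# A rough sketch comparing a 16-significant-digit pi against a longer
# published value when computing an orbit-sized circumference.
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits throughout

# pi to 50 decimal places (well-known published value) and the same value
# cut off at 16 significant digits.
PI_LONG = Decimal("3.14159265358979323846264338327950288419716939937510")
PI_SHORT = Decimal("3.141592653589793")

radius_m = Decimal("1.496e11")  # roughly the Earth-Sun distance, in metres

circumference_long = 2 * PI_LONG * radius_m
circumference_short = 2 * PI_SHORT * radius_m

error_m = abs(circumference_long - circumference_short)
print(f"difference in circumference: {error_m:.3E} metres")

The discrepancy comes out at a small fraction of a millimetre over a circumference of nearly a billion kilometres, which is why nobody navigating a spacecraft needs trillions of digits.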

Look, I get it. People love to calculate π. But it isn’t necessary. It isn’t even meaningful. The Persian astronomer and mathematician Jamshīd al-Kāshī calculated π to the equivalent of 16 decimal places in 1424. By 1596, Dutch mathematician Ludolph van Ceulen had reached 20 digits, and later 35. All without computers. So do we really need to do these things? Unless some perceive these calculations as the intellectual equivalent of base-jumping? People also use π to test the speed of their supercomputers, which is fine, but you could use Ackermann’s function for that – stress-testing is really its only practical purpose. Maybe we do this because we have reached an algorithmic plateau?
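
For anyone who has not met it, Ackermann’s function is only a few lines long, yet its values grow so explosively that even tiny inputs bring a machine to its knees, which is exactly what makes it useful as a stress test. A minimal Python sketch, for illustration only:

import sys
from functools import lru_cache

sys.setrecursionlimit(20_000)  # the recursion gets deep very quickly

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """The classic two-argument Ackermann function: total and computable,
    but not primitive recursive; its growth dwarfs any exponential."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

if __name__ == "__main__":
    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61
    print(ackermann(3, 6))  # 509
    # ackermann(4, 2) already has 19,729 decimal digits; anything with
    # m >= 4 and modest n is effectively out of reach.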


Why designing algorithms is hard

The hardest part of developing software is, of course, the algorithms. People often think that it’s possible to write a program to do just about anything – but that is just not the case. There are things humans can do that machines just can’t, and likely never will be able to do. Algorithms are hard because humans don’t think in the same black-and-white manner as machines. Applying an Instagram-style filter to an image is easy for a machine and practically impossible for the human mind to do pixel by pixel, but interpreting the aesthetics of the filtered image is not something a machine can do. Nor should it. It is too hard to frame the complexities of the human mind as a series of steps that can be translated into a program. Even weather-prediction models can be flummoxed by the fact that weather is inherently unpredictable and can change. Anything with some level of randomness in it is harder to pin down, and hence harder to write algorithms for.
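
To see the asymmetry, notice that the “mechanical” half really is just arithmetic on pixels. Here is a toy sepia-style filter in Python (my own sketch using the commonly quoted sepia coefficients, not Instagram’s actual pipeline), operating on an image held as rows of RGB tuples:

# A toy sepia filter over an image represented as rows of (R, G, B) tuples.
# The mechanical transformation is a few lines of arithmetic.

Pixel = tuple[int, int, int]

def sepia(image: list[list[Pixel]]) -> list[list[Pixel]]:
    """Apply a standard sepia-tone weighting to every pixel."""
    result: list[list[Pixel]] = []
    for row in image:
        new_row: list[Pixel] = []
        for r, g, b in row:
            # Widely used sepia coefficients; clamp to the 0-255 range.
            nr = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
            ng = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
            nb = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
            new_row.append((nr, ng, nb))
        result.append(new_row)
    return result

# A 2x2 "image" is enough to run it end to end.
tiny = [[(10, 120, 200), (255, 255, 255)],
        [(0, 0, 0), (90, 45, 30)]]
print(sepia(tiny))

Writing that loop is easy; deciding whether the filtered photo is actually worth posting is the part no loop can capture.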

We take algorithms for granted because we think that things which are inherently simple for humans should be just as easy for machines. Millions of years of evolution have provided us with eyes that process visual imagery in spectacular fashion, yet translating this into algorithmic form for a machine to mimic is almost impossible. Machines can easily store imagery and manipulate it in ways the human visual system cannot, and yet they are not able to instinctively identify objects in a generic manner, although they have made inroads. A human can identify a tree species from a distance; a machine cannot. Being able to write programs is one facet of software development, but being able to decipher the logic underpinning those programs relies on individuals who can genuinely think outside the box and push the envelope, so to speak. Look at sorting algorithms. When was the last time a really effective, new sorting algorithm was developed? Most “new” sorting algorithms are extensions of an existing algorithm, wringing out a few extra milliseconds of speed. But truly new, transformative sorting algorithms? They just haven’t appeared.
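
As a concrete example of that “extension” pattern, here is a minimal sketch (a generic textbook-style hybrid, not any particular library’s code) of the most common tweak: a merge sort that hands small slices to insertion sort, which is roughly the idea behind hybrids such as Timsort.

# A toy hybrid sort: plain merge sort that switches to insertion sort for
# small slices. This "extend an existing algorithm for a small constant-factor
# win" pattern is how most "new" sorts are built.

SMALL = 16  # threshold below which insertion sort tends to win on constants

def insertion_sort(a: list, lo: int, hi: int) -> None:
    """Sort a[lo:hi] in place; quadratic, but fast on tiny inputs."""
    for i in range(lo + 1, hi):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_sort(a: list) -> list:
    if len(a) < SMALL:
        insertion_sort(a, 0, len(a))
        return a
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    # Standard merge step.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged

print(hybrid_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0, 11, 10, 15, 13, 12, 14, 20, 17]))

Useful, certainly, but it is a refinement of two algorithms that have been around for well over half a century, not something new.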