We have come a long way since the primitive systems of over half a century ago, and computing will likely evolve further, but we have reached a point where it is necessary to re-evaluate the way we do things. Ask the hard questions. Has OO really contributed anything? Why is there no single “uber-language” to rule them all? Are Fortran and Cobol still valid today? (Yes, they are!) And why do we calculate numbers to such unfathomable depths? Are we now designing algorithms for the sake of designing algorithms? Everything seems to be being automated – is this even necessary?
Take Euler’s Number, e, which has a value of roughly 2.71828. There are websites that show it to 10,000 decimal places, but since its decimal expansion never terminates, it could technically go on forever. Likely someone has calculated it into the trillions of digits. But is any of this useful? No. It’s about as useful as baking the world’s largest pizza. Pi is an even better example, probably because it is more commonly used. In the article “How Much Pi Do You Need?“, the author cites NASA using only 15-16 significant digits to keep the Space Station afloat, and notes that the fundamental constants of the universe require only 32. This year Google announced that one of its developer advocates, Emma Haruka Iwao, had calculated π to 31.4 trillion digits. But what’s the point? These sorts of calculations seem to have become a kind of computability race. Would it not be better to have people design algorithms for systems that could automatically clean up all the crappy plastic we throw in the oceans?
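That 15-16 digit figure is no accident: it is simply the precision of a standard double-precision float. As a minimal sketch (not any of the record-setting methods), here is the series e = Σ 1/k! summed until a double can no longer absorb the next term – everyday hardware precision runs out after fewer than twenty terms.

```python
import math

# Sum the series e = 1/0! + 1/1! + 1/2! + ... until adding the next
# term no longer changes the double-precision result (~15-16
# significant digits). The loop stops on its own at that point.
e_approx, term, k = 0.0, 1.0, 0
while e_approx + term != e_approx:
    e_approx += term
    k += 1
    term /= k

print(k)                       # terms consumed before precision ran out
print(abs(e_approx - math.e))  # error at double precision
```

Under twenty terms of a schoolbook series already saturate the precision NASA needs; everything beyond that is sport, not engineering.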
Look, I get it. People love to calculate π. But it isn’t necessary. It isn’t even meaningful. Persian astronomer and mathematician Jamshīd al-Kāshī calculated π to the equivalent of 16 decimal places in 1424. By 1596, Dutch mathematician Ludolph van Ceulen had reached 20 digits, and later 35. All without computers. So do we really need to keep doing this? Unless some perceive these calculations as the intellectual equivalent of base-jumping? People also use π to test the speed of their supercomputers, which is fine, but you could use Ackermann’s function for that – that’s really its only purpose. Maybe we do this because we have reached an algorithmic plateau?