Early programs were anything but stylish. When programming languages crawled out of the primordial muck that was machine-level language, they took on the guise of more English-like entities – often constrained to starting in certain columns, such that they were effectively left-justified and indented anywhere from 7 to 10 spaces. This, added to the fact that they were written in UPPERCASE and contained masses of unstructured jumps, resulted in a less than pleasing entity. Better than machine code, but just barely.
Why was uppercase used in early languages? You can blame punch cards (see previous post on punch cards). Basically, most punch cards used the EBCD character set, which allowed 64 characters and symbols on a punch card – all uppercase, no lowercase. However, for multiword text, as is often found in language statements, UPPERCASE IS MORE DIFFICULT TO READ. For example, this piece of COBOL code:
MOVE ZEROES TO NO-OF-SENTENCES, NO-OF-WORDS, NO-OF-CHARACTERS.
Would read better as:
move zeroes to no-of-sentences, no-of-words, no-of-characters.
This is because lowercase text typically offers a greater variety of word shapes, and word shape influences how humans interpret words. This variety conveys sensory information at lower spatial frequencies, which can be used to discern some aspects of word meaning in parallel with the high spatial frequency analysis of the individual letters. UPPERCASE therefore reduces the readability of a program, e.g. TEMPERATURE versus temperature.
Ultimately it would be even better if the variable names in the example were improved as well. For some variable names, a form of camelCase is likely better than pure lowercase, probably because of the wider visual angle the variable presents. Here the identifiers have been shortened using camelCase and abbreviations:
move zeroes to numSent, numWords, numChar.
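The same contrast carries over to modern languages. A minimal Python sketch (the counter names are illustrative, echoing the COBOL example above, not taken from any real program):

```python
# Initialize sentence, word, and character counters to zero,
# using the abbreviated camelCase names from the example above.
numSent, numWords, numChar = 0, 0, 0

# The punch-card-era equivalent: all-uppercase identifiers,
# which present a uniform rectangular word shape and are
# harder to scan in multiword statements.
NUMSENT, NUMWORDS, NUMCHAR = 0, 0, 0

print(numSent, numWords, numChar)  # prints: 0 0 0
```

The values are identical either way; only the visual word shape of the identifiers differs, which is the whole point of the readability argument.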