Why did the semicolon appear in programming? The dictionary defines a semicolon as:
“indicating a pause, typically between two main clauses”
So it makes sense that it was used to separate statements in programs, especially once coding conventions moved past the “one statement per line” rule. In English the semicolon plays the role of a separator, NOT a terminator… that’s the job of the period (or question mark?, exclamation mark!). For instance:
There is mounting evidence of climate change; of course some people will never believe it.
The use of the semicolon may have started with Algol, where, as in its successor Pascal, semicolons acted as separators between statements. Fortran used the carriage return, and Cobol used the period. In the 1960s, the convention was often to write multiple statements per line. For example:
begin S1; S2; S3 end
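To make the separator role concrete, here is a minimal Pascal sketch (the program name and variables are illustrative). Note that the last statement before end takes no semicolon, because there is nothing after it to separate:

program Demo;
var
  a, b: integer;
begin
  a := 1;
  b := a + 1;
  writeln(b)   { no semicolon needed: nothing follows to separate }
end.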
This was likely an issue for programmers used to line-oriented languages such as Fortran. It would often lead to code such as:
if a > 0 then b := 0; else b := 1;
The first semicolon effectively ends the if statement, and hence the else becomes “dangling”.
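The usual fix in Pascal is simply to drop the semicolon before the else, since no separator is needed at that point:

if a > 0 then
  b := 0
else
  b := 1;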
Of course this presents some challenges to programmers used to a C-like syntax. C and Ada both use the semicolon as a terminator (a convention likely adopted from PL/I). In the case of Ada, this is taken a step further, with even end statements getting the semicolon treatment (end if;, end loop;). People often confuse the two roles. For more on the use of the semicolon in Pascal, see the post “Pascal’s Achilles Heel”.
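For contrast, here is a minimal C sketch (the values are illustrative) of the very statement that trips up Pascal. Because the semicolon terminates the assignment rather than separating it from what follows, the else still attaches to the if:

#include <stdio.h>

int main(void) {
    int a = 1;
    int b;
    /* The semicolon after b = 0 terminates that statement;
       the else still binds to the if, unlike in Pascal. */
    if (a > 0) b = 0; else b = 1;
    printf("b = %d\n", b);
    return 0;
}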