First programming language – multiple languages anyone?

So most courses which teach introductory programming teach one language. Or at least they do now. Many years ago, before the 1980s, introductory programming courses did not necessarily focus on any one particular language. In some cases they taught the craft of programming using algorithmic languages based on some Algol-like syntax. In other cases they taught concepts by looking at how they were implemented in multiple languages. That’s right, multiple languages. Introductory courses weren’t just about concentrating on the syntax of C, Java or Python. Is there any reason why one couldn’t teach the notion of decision statements and illustrate them in two, three or four languages? Or loops? Or modularity, for that matter?

The simple answer is no. We should be teaching introductory programming courses which concentrate on programming artifacts and use languages to illustrate those artifacts. Which languages? Who cares – choose a bunch: C, Ada, Fortran, Python. Just make sure they are somewhat different in their structure; don’t just choose C, C++ and Java, since they all share the same ancestry (i.e. C). Consider a simple for loop. In C:

for (i = 1; i <= 100; i = i + 1) {
    x = x + i;
}

in Ada:

for i in 1..100 loop
    x := x + i;
end loop;

in Fortran:

do i = 1, 100
    x = x + i
end do

or in Python:

for i in range(1,101):
    x = x + i

The syntax may be different, but the end result is the same.

Why Cobol is feared

In the course I teach on legacy software, Cobol is the most feared of all languages. Why? Probably because it does not look like any language students have seen before. After a couple of weeks of coding Cobol, they often run away screaming, or sit very silently in a corner. It is feared largely because it does things in different ways – and this can lead even the most confident programmer to have doubts about what they are coding. The challenge is that a Cobol program may run even though there are inadequacies in the syntax. Not grievous issues, but small, silent things. A good example is a cascading if statement. A C programmer is normally happy writing code which looks like this:

if (aNumber < 0)
    printf("the number is negative");
else if (aNumber > 0)
    printf("the number is positive");
else
    printf("the number is zero");

A novice Cobol programmer will try something of the form:

if aNumber is < 0 then
    display "the number is negative"
else if aNumber is > 0 then
    display "the number is positive"
else
    display "the number is zero"

This will work; however, it will raise a warning of the form “IF statement not terminated by END-IF”. In more complex code, this could lead to problems with the program logic. The code should instead be written with explicit terminators:

if aNumber is < 0 then
    display "the number is negative"
else
    if aNumber is > 0 then
        display "the number is positive"
    else
        display "the number is zero"
    end-if
end-if

Cobol also provides another, simpler way to write this piece of code:

evaluate true
    when aNumber < 0 display "negative"
    when aNumber > 0 display "positive"
    when aNumber = 0 display "zero"
end-evaluate

So Cobol need not be feared. It is just different.

Don’t underestimate the power of the Forth

Let’s look at an example program in Forth – the ubiquitous Greatest Common Divisor, solved using the Euclidean algorithm. To find the GCD of two numbers a and b, a loop repeatedly replaces a by b and b by a mod b while b is not zero (the order of the arguments doesn’t matter – if a < b, the first iteration simply swaps them). Here is the Forth code:

: gcd ( a b -- gcd )
begin ?dup while tuck mod repeat ;

This does not look like any regular sort of program, does it? It’s called in this fashion:

cr 6 10 gcd .

What does this code actually do?

The first line, : gcd ( a b -- gcd ), begins the definition of a new word, gcd – Forth’s equivalent of a function. The parenthesized text is a stack-effect comment: the word takes a and b from the stack and leaves their gcd.

The phrase begin ?dup while … repeat executes the loop while the top of the stack is non-zero. ?dup duplicates the top item only if it is non-zero (while then consumes the copy), so the loop exits when b reaches zero.

tuck ( a b -- b a b ) copies the top item underneath the second item on the stack. It is equivalent to swap over.

mod ( a b -- a%b ) divides the second item on the stack by the top item and leaves the remainder.

Together, tuck mod does ( a b -- b a%b ) – exactly the replacement step of the Euclidean algorithm.
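If the stack notation is hard to follow, here is a rough Python equivalent of the same loop – a sketch of the behaviour, not of how Forth actually executes:

```python
def gcd(a, b):
    """Euclidean algorithm, mirroring: begin ?dup while tuck mod repeat"""
    while b != 0:        # ?dup ... while : loop while the top of the stack is non-zero
        a, b = b, a % b  # tuck mod : ( a b -- b a%b )
    return a             # when b hits zero, a is the gcd

print(gcd(6, 10))  # prints 2, just like: cr 6 10 gcd .
```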

Having fun yet?

If you want to play with Forth, gforth is a nice little compiler.

History of languages – it starts with Fortran

In the mid-1950s the first key programming language appeared, in the guise of Fortran. Fortran was developed in 1954–57 by John W. Backus and his team at IBM. Its name is a contraction of Formula Translation. It is a language that has persisted for over 50 years, with its latest incarnation appearing in 2008 (Fortran 2008). In today’s terms the original Fortran would be considered limiting, as its only control structures were IF, DO, and GOTO statements, but at the time these were an immense breakthrough. The basic datatypes in use today were established in Fortran; these included logical variables (TRUE or FALSE), and integer, real, and double-precision numbers. One of Fortran’s constraints was its limited ability to deal with I/O. Fortran was quickly adopted by scientists for solving numerically intensive problems, and provided the basic ideas for the evolution of modern languages (not that Fortran 2008 isn’t modern, mind you!).

In the years that followed the introduction of Fortran, programming languages proliferated. This was understandable given that after some experience with a new language, deficiencies would be found that required a “new” language to correct them. Sometimes it was also easier to get additional capabilities by designing a new language rather than modifying an existing one. By 1971 there were approximately 148 different programming languages [1].

Fortran was designed by John W. Backus in 1953 at IBM, as an alternative to using assembly language on the IBM 704 mainframe. The first Fortran compiler appeared in April 1957. Why was Fortran created? In an interview with Backus in 1979, he said: “Much of my work has come from being lazy. I didn’t like writing programs, and so, when I was working on the IBM 701 writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs.”

Whilst the original language had no recursion, was programmed on punch cards, was written in uppercase, and made extensive use of gotos, newer renditions have incorporated many of the programming language features developed over the past sixty years.

[1] Sammet, J.E., “Programming Languages: History and Future”, Communications of the ACM, 1972, Vol.15, No.7, pp.601-610.

The first programming languages (aka compilers)

“Life was simple before World War II. After that, we had systems.” – Grace Hopper

In 1951 Grace Hopper developed the first English-language data-processing compiler, for the A-0 System (Algorithmic language version 0). Computers could then be programmed using written instructions – in the case of A-0, by translating symbolic mathematical code into machine code. A-0 later evolved into A-2. However, the problem with these compilers was that they still did not allow novices and non-programmers to write programs effectively – they were inherently un-user-friendly. One of her next compiler successes, FLOW-MATIC (1955), was designed to translate a language that could be used for typical business tasks like automatic billing and payroll calculation.

These infant years of programming language development introduced some of the core language concepts, such as input/output and if statements. Few of these languages survived very long; however, the breakthroughs in their designs allowed their descendants, Fortran I (1957) and Cobol (1959), to make major impacts on computing.


Programming in PL/I

Ever programmed in PL/I?

PL/I stands for Programming Language One – it was supposed to have been called NPL (for New Programming Language), but apparently the National Physical Laboratory (UK) objected, so they had to think of a different name. The first compiler appeared in 1966. In a paper written by Richard Holt in 1973 [1] – “Teaching the Fatal Disease (or) Introductory Computer Programming Using PL/I” – he poses an “Economico-Academico-IBMical Inevitability”:

PL/I is better than Fortran or Cobol because:

  1. PL/I can do what both of those can.
  2. It is usually easier to say it in PL/I.
  3. PL/I has somewhat reasonable control structures (DO-WHILE and IF-THEN-ELSE).
  4. PL/I doesn’t have a vast number of silly restrictions.
  5. If you ever thought you wanted it, it is likely to be in PL/I.

Unfortunately PL/I never did displace Fortran or Cobol. To sum it up, here’s a nice quote from Dijkstra [2]:

Using PL/1 must be like flying a plane with 7000 buttons, switches and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language —our basic tool, mind you!— already escapes our intellectual control. And if I have to describe the influence PL/1 can have on its users, the closest metaphor that comes to my mind is that of a drug. I remember from a symposium on higher level programming language a lecture given in defense of PL/1 by a man who described himself as one of its devoted users. But within a one-hour lecture in praise of PL/1, he managed to ask for the addition of about fifty new “features”, little supposing that the main source of his problems could very well be that it contained already far too many “features”. The speaker displayed all the depressing symptoms of addiction, reduced as he was to the state of mental stagnation in which he could only ask for more, more, more… When FORTRAN has been called an infantile disorder, full PL/1, with its growth characteristics of a dangerous tumor, could turn out to be a fatal disease.

And guess what – it’s still out there, being used in business and industry.

[1] Holt, R.C., “Teaching the fatal disease (or) introductory computer programming using PL/I”, Sigplan Notices, 8(5), pp.8-23 (1973).
[2] Dijkstra, E.W., “The Humble Programmer“, ACM Turing Lecture 1972

May the Forth be with you

If the software on the Death Star were written in anything, it was likely Forth. Forth is an interesting language. Firstly, the language’s mechanism is based entirely on a stack. That’s right, a STACK.

The environment is interpreted, so adding 6 and 23 together can be done in the following manner:

6 23 + . <enter> 29 ok

Forth interprets 6 and 23 as numbers and pushes them onto the stack. Both “+” and “.” are considered pre-defined words, and therefore are identified and applied. The “+” adds 6 and 23 and leaves the answer, 29, on the stack. The word “.” removes 29 from the stack and prints it to standard output.
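The stack discipline is simple enough to mimic. Here is a toy reverse-Polish evaluator in Python, handling just numbers, “+” and “.” – an illustrative sketch, not how Forth is actually implemented (real Forth has hundreds of words and compiles new definitions):

```python
def interpret(source):
    """Evaluate a whitespace-separated string of Forth-like words."""
    stack, printed = [], []
    for word in source.split():
        if word == "+":                      # pop two items, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == ".":                    # pop the top item and "print" it
            printed.append(stack.pop())
        else:                                # anything else is taken as a number
            stack.append(int(word))
    return printed

print(interpret("6 23 + ."))  # [29]
```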

In many respects, Forth is NOTHING like a traditional compiler. Its environment is interpreted, and it is more akin to an operating system in how it works.


Lego and imagination

There is no piece of Lego I covet more than the Death Star… well, maybe the Super Star Destroyer (but at over $1000, it isn’t going to happen). At 3,152 pieces and over 48″ in length, the SSD is a piece of “crazy”. The Death Star, of course, is more affordable, at $500-odd. At 3,803 pieces it’s not exactly a small model. But here’s the thing – LEGO has become more about building a specific item than about playability and thinking outside the box.

When I was a kid, the Lego I received from my relatives in Switzerland usually consisted of a box of many different parts, and sometimes there was a book included to give you ideas about what to build. Consider this letter from LEGO to parents in 1974, first posted on Reddit by user fryd_.

Letter from Lego circa 1974

Specifically, consider the phrase “It’s imagination that counts. Not skill.” LEGO has always had sets to build specific items, but years ago they also had what were called “Universal Building Sets”, with which you could build numerous different things, or explore your own creations. As cited on the example set below: “…provides a full range of creative possibilities…”. Now the closest you can get are the boxes of LEGO “Creative” bricks from their “Classic” collection (or maybe the Architecture Studio, all in white). But ultimately that’s probably not as cool as “The Lonely Mountain” set from the Hobbit collection, with a Lego Smaug™.

Universal Building Set


Real creativity may not be a goal anymore. Getting all the sets may be. I mean, who wouldn’t love to own all the Star Wars models? The problem probably lies with where to put them when you’ve built them all! Lego sets have moved away from imagination into the realm of skill – it does take skill to put together 3000+ pieces of LEGO, but it doesn’t take much imagination if you are following the instructions provided. I recently bought my (13-year-old) daughter the “Ghostbusters” Lego set (No.21108) – one of those limited-time sets. It looks cool, and intricate – at 508 pieces with a specified age of 10+. The “building instructions” are 92 pages long – which may make sense considering there are 162 different parts in the set. I imagine there are 5-year-old kids quite capable of putting this (or the Death Star) together, because frankly you don’t even need to read, just have the ability to follow visual instructions.

Some will say that you could build anything you want out of these “sets” – and yes, this is technically true. You could buy a whole bunch of sets, dump them together and build anything you want – however, the sets by themselves are limited, because of the specificity of the parts. I could take the Ghostbusters car and build, well – a car that resembles a 1959 Cadillac chassis.