Banishment of goto

Coding practices are not always as robust as they should be. In 2014, a piece of Apple’s SSL/TLS code (its own Secure Transport library, not OpenSSL) was found to contain a single spurious goto statement. This was not so much an error with goto itself, but rather a failure to (i) review the code and remove the duplicate goto, and (ii) use a compound statement.

if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;    /* duplicated line: not governed by the if above */

Despite the indentation (this isn’t Python), the second goto is not part of the if statement: whenever the first goto is not taken, the second one executes anyway, jumping to fail with err still zero, skipping the remaining verification checks, and compromising the SSL/TLS handshake. The real problem, though, is the prolific use of goto statements in Apple’s code base, long considered bad programming practice. Apple may have solved that problem by introducing Swift. So far they have not added a goto statement to the language – it has been banished. Let’s hope it stays that way. Swift is not the first to banish goto: Python doesn’t have one, and neither does Modula-2.
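For illustration, here is a minimal, self-contained C sketch of the pitfall and of how a compound statement would have contained it. The check functions are hypothetical stand-ins, not Apple’s actual code:

#include <stdio.h>

/* Hypothetical stand-ins for the verification checks (0 means success). */
static int check_hash(void)  { return 0; }
static int check_final(void) { return 0; }

/* Buggy version, mirroring the Apple snippet. */
static int verify_buggy(void)
{
    int err;
    if ((err = check_hash()) != 0)
        goto fail;
        goto fail;                    /* runs whenever the first goto is not taken */
    if ((err = check_final()) != 0)   /* never reached */
        goto fail;
fail:
    return err;   /* err is 0 when check_hash succeeded, so the caller
                     sees success although check_final never ran */
}

/* Fixed version: braces confine the error path, so a stray duplicate
   line can no longer escape the if statement. */
static int verify_fixed(void)
{
    int err;
    if ((err = check_hash()) != 0) {
        goto fail;
    }
    if ((err = check_final()) != 0)
        goto fail;
fail:
    return err;
}

int main(void)
{
    printf("buggy: %d, fixed: %d\n", verify_buggy(), verify_fixed());
    return 0;
}

Compilers have since become better at catching this class of bug: both GCC and Clang now offer -Wmisleading-indentation, which flags exactly this pattern.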

BLISS was designed without goto, introducing instead a cornucopia of exit statements in its first version, and eliminating labels as well. BLISS excluded goto for two reasons: (i) the runtime support needed by an unrestricted goto, and (ii) the authors’ feeling that the general goto, because of the “implied violation of program structure”, is a “major villain” in programs. BLISS-11 fixed this by providing an exit via a single LEAVE statement, a restricted form of forward branch.
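BLISS syntax aside, the idea behind LEAVE – a disciplined forward branch out of a block – is easy to sketch in C. C has no labeled break, so a forward goto is the closest analogue (this is an analogy, not BLISS code):

#include <stdio.h>

int main(void)
{
    int grid[3][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    int key = 5, found_i = -1, found_j = -1;

    /* A restricted forward branch out of both loops – the kind of
       structured exit that BLISS's LEAVE statement provided. */
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (grid[i][j] == key) {
                found_i = i;
                found_j = j;
                goto done;
            }
        }
    }
done:
    if (found_i >= 0)
        printf("found %d at (%d, %d)\n", key, found_i, found_j);
    return 0;
}

Later languages such as Java and Rust made this pattern a first-class construct with labeled break.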

History of languages: automatic programming

Where did modern languages come from? How did they evolve?

Many believe Fortran was the first programming language, but it did not appear out of the ether.

Heinz Rutishauser, in his manuscript Description of ALGOL 60 (Springer-Verlag, 1967), made the following statement, which sums up the early years of programming machines, before the advent of programming languages as we know them today.

In the early days of automatic computing, programming was considered some kind of art, since, indeed, special skill was required to describe the entire computation in advance in a rather queer notation. With the advent of fast computers, however, the need for writing a program for every problem very soon became a nightmare and left no room for artistic feelings.

It is very likely that the inherent increase in program complexity and size made writing programs by hand in machine code bewildering. Was there a notation somewhere between machine code and the mathematical notation used to represent what was to be done? The answer was yes, and it took the form of automatic programming, started by M.V. Wilkes (UK) and G. Hopper (USA) around 1950. These automatic programming systems were classified into three different forms:

  1. External machine code – instructions appear as a mnemonic operation symbol, and a decimal address.
  2. Assembly languages – operation symbols, use of algebraic symbols to represent operand addresses, and jump destinations.
  3. Algorithmic languages – standard mathematical notation for describing arithmetic operations, and dynamic elements for describing the flow of a computation.

The usability of each of these methods was offset by the difficulty of translating it to machine code: external machine code was simple to translate, assembly languages required the allocation of addresses, which complicated matters, and algorithmic languages were extremely complicated to translate.
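To make the three forms concrete, here is how a single assignment, a = b + c, might have looked at each level. The machine and its mnemonics are invented for illustration – a hypothetical single-address machine, not any real instruction set:

; 1. External machine code: mnemonic opcodes, literal decimal addresses
LDA 100        ; load accumulator from address 100 (where b is stored)
ADD 101        ; add the contents of address 101 (c)
STA 102        ; store the accumulator to address 102 (a)

; 2. Assembly language: algebraic symbols stand in for the addresses
LDA B
ADD C
STA A

; 3. Algorithmic language: standard mathematical notation
a = b + c

Each step up the ladder is easier to write and harder to translate, which is exactly the trade-off described above.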


History of languages: bohemian times

After the freewheeling fifties, programming languages entered a more bohemian time, with many new languages appearing and the languages of the 1950s evolving as new programming ideologies emerged. First out of the gate was ALGOL 60, which evolved from the fledgling Algol 58. It was common practice to “re-design” a language quite considerably rather than make tweaks to an existing one; as such, languages like Algol 58 often fell out of favour as people adopted newer versions. Algol was to spawn what would eventually become quite an issue in the computing community: augmentations, extensions, and derivations, leading to numerous dialects of a language. ALGOL 68 appeared just before its contemporaries C and Pascal but, due to its inherent complexity, never really took off. (ALGOL 68 was used by European defense agencies; the US, however, decided to hedge its bets with Ada.) There was some controversy over the design of a successor to Algol 60, with Wirth and Hoare going on to develop their own successor, Algol-W. Algol 68 was a major revision of Algol 60, whereas Algol-W made more subtle changes. The 1960s also saw a number of updates to both Fortran and Cobol.

Languages had also begun to diversify into differing realms. In 1964, Kenneth E. Iverson introduced APL (A Programming Language), a concise symbol-based language adept at dealing with arrays. Early versions of the language contained no control structures, and whilst Cobol may have strayed too far into “English-language” syntax, APL may have gone too far with its mathematical syntax. APL was popular with those doing computer graphics, but its use declined in the 1980s, due in part to the advent of languages such as MATLAB and GNU Octave. The 1960s also saw the second generation of programming languages evolve: those influenced by the likes of Fortran or Algol. In 1964, PL/I (Programming Language One) made its appearance, ostensibly for use in data processing, numerical computation, scientific computing, and systems programming. However, it was a more complex language than either Fortran or Cobol, from which it had evolved, and was never hugely successful. Algol also influenced the design of Simula, the first object-oriented language, which appeared in 1967. In an educational context, Logo, influenced by Lisp and famous for its “turtle graphics”, also appeared in 1967.


I code, therefore I am.

Programming languages were created to make it easier to program computers to solve problems.

What makes a programmer good at what they do? Is it their inherent level of “geekiness”? Their ability to think like a computer? Their fondness for pizza and caffeinated drinks? The need to spend countless hours playing games? The answer is none of these. A good programmer has the ability to solve problems and think “outside the box”. Programming languages are merely tools. In the same way that a woodworker uses a jack plane to flatten a board, or a plow plane to cut a groove in a board, the programmer uses a loop to make a piece of code repetitive. We use the structures within languages to fashion programs. The larger quandary for the programmer is not the language, but the ability to solve a problem and express the solution in a manner capable of being translated into a program. Failure to adequately solve a problem may lead to programs that are less than effective. This normally manifests itself as programs that lack simplicity, or have features that “cloak” the program’s failings. Some of the other characteristics of a good programmer?

The ability to experiment – Design an algorithm and try it out. Try weird things and see if they actually work. Sometimes solving a problem means thinking creatively. It’s quite possible that the problem is too hard to code, but you will never know if you never try.

The ability to be patient – Rome wasn’t built in a day, and programs aren’t either. A good programmer will take a slower path, and build a more robust program.

The ability to broaden their skills – The programmer who just knows language X, and only wants to know language X, won’t have a great repertoire of skills. A good programmer is skilled at 1-2 languages and is knowledgeable about 2-3 more. You don’t have to be a coding guru. People who are experts in something sometimes can’t see the forest for the trees, i.e. they get stuck in some minute detail and fail to see the overall picture.

The ability to walk away – Programming is not your life. If you get stuck on something, then take a break – things always become clearer when you step back from a problem. Don’t leave some algorithm in your code “because I spent a long time designing it”. If it doesn’t serve the purpose it was designed for – toss it. If it’s too complex, re-design it. Code doesn’t care, it doesn’t have feelings, and won’t come back to haunt you. Just hit the delete key and move on. Don’t have an emotional attachment to your code – it’s just code.

The ability to be okay with failure – Not every program will work the first time. Most programs will fail at some point, but a good programmer will be able to cope with the failure, and use it to his/her benefit. Learn from failure.

The good programmer will also have an interest in things other than computing. Computing isn’t the be-all and end-all. You won’t build good software if you spend all day talking to a machine, then go home and talk to another machine. Peruse some woodworking blogs, and you quickly begin to realize that a lot of these people build software for a living, and build tangible wooden things as a hobby. Some people cook as a hobby, others renovate their houses, some do photography. That diversity makes you a better software craftsperson.

Predicting the future… in the 1950s

In January 1956, The American Weekly published an article looking at the future. Many magazines in the 1940s and 1950s tried to peer into the future. So how much of it has become reality? After the Depression and the Second World War, the 1950s were a time of hope, prosperity, new products, and the move to the suburbs. Things had changed so quickly that there was hope the near future would bring even more wonderful automation into the life of the average person. Here we will look at the house of the future. One quote states: “How would you like to bake a two-layer cake in a cold oven in six minutes? Press a button and have ultrasonic, or silent, sound waves wash the dishes?” It is likely that people in the 1950s looked upon such statements with more awe than we do now. Nowadays, if somebody makes an incredible statement about some future technology, few if any are truly awed. Has technology become too ubiquitous? Let’s explore the future as seen in 1956 by looking at what became reality and what was truly science fiction.


The front cover, by artist Fred McNabb, shows the family of the future having a party.

“Friends are arriving by personal helicopter. The push-button food unit is keeping some things cool, thawing others, and popping ice cubes out. The microwave stove is making the cooking chore a matter of a few minutes. Portable TV provides the floor show and ultraviolet rays from the funnels around the lawn are drenching the blooming flowers. In the background, the chemically treated shrubs are holding their color against the winter and far in the distance are those typical signs of a future day – a monorail train and needle-pointed atom aircraft.”

  • Moving sidewalks – “will take you to and from work, shopping, even to a ball game or movie“. Reality. Moving sidewalks are real, but they tend to exist only in airports, to help people walk the long terminals. Industrial conveyors “extending for miles across the countryside“, not so much.
  • New skylines – “a towered wonderland of glass and porcelain-enameled steel, or aluminum and titanium and beryllium“. Reality. Glass and aluminum are more the norm, largely because they are cheap. They did predict that skyscrapers wouldn’t likely “get up to 100 stories” – which of course they did. A weird prediction, considering the Empire State Building is 102 stories and was built in the 1930s.
  • Personal helicopters – “step into your own helicopter and hop over to the nearest airport“. Fiction. Well, reality if you are wealthy and can afford (i) the helicopter and (ii) a place to land it. “Helibuses like local trains”? Too expensive, even if anybody wanted to do this.
  • Travel by rockets – “zip across the United States – or across the Atlantic – in an hour“. Fiction. The closest we came was the Concorde, which took somewhere between 3-4 hours to cross the Atlantic.
  • The jet age – Airplanes that “whisk some 135 passengers across the Atlantic in little more than six hours“. Reality. Surpassed from a capacity point of view, if not from a speed perspective. In some respects the golden age of air travel has passed.
  • Atom power for planes – Within 20 years, American Airlines predicted it would have “atom craft“, allowing people to “commute to London“. Fiction. Realistically, no. Not now, not ever. Not a good idea. Ships, yeah, no problem. Atomic planes? F-O-R-G-E-T it.
  • Space flight – “that long-talked-about trip to the moon“. It’s “an approaching certainty”, they concluded. Possible. Trips to the moon? Yeah, you could likely get there from a technology perspective. Nothing to do when you do get there, though. We actually might be regressing from a space exploration point of view.
  • Cars – “a telescopic eye… and radar brakes“. They thought engines would move to the rear of the car, and eventually be replaced by gas turbines, or what they termed “pinwheel jets“. Reality. There are cars like this – they are called dragsters. That’s a recipe for more people getting hurt than now. Use of a “telescopic eye“, which is essentially an infrared camera? Reality. Cars like the Audi A8 have a Night Vision Assistant that works this way.
  • Railroads – “the much publicized double-decker cars of today will be commonplace“. Reality. They became a reality in the 1950s, and these days double-decker cars are used extensively on commuter lines. “Monorails … will provide fast service between some metropolitan centers and outlying cities.” Fiction. Monorails have been used, but often only in an urban context – still a work in progress. “Atom power for locomotives“. Fiction. Like atomic planes – again, N-O-T a good idea.
  • Atom power for ships – Reality. Nuclear-powered naval ships like the USS Long Beach and USS Enterprise appeared in 1961 and were successful. Nuclear-powered submarines, OK. Merchant ships, apart from nuclear-powered icebreakers? Not so much. “Atom-powered undersea freighters“? Fiction. Easier to make ships larger.

Computer science and AI – Stop the madness

Every day we read some news article about the great strides made in designing things that make use of “artificial intelligence” (AI). For some it brings the fear of the “rise of the machines”, so aptly portrayed in the “Terminator” movies. The big question of course is: do we need artificial intelligence? The sad reality is that we can barely understand our own intelligence, so why would we want to create an artificial one? Okay, so a machine called Watson, built by IBM, is capable of winning Jeopardy! by answering questions posed in natural language – just like humans would. The crux, of course, is that Watson had access to 200 million pages of structured and unstructured content – including the ENTIRE contents of Wikipedia. So considering how fast machines are at finding data, it’s not incredibly hard to see a machine being able to decipher a question using speech recognition algorithms and find an answer. Very few humans can match the machine’s ability to find data. But is it AI? The quick answer is no.

Let’s see a machine evolve the same way humans do. Build a machine. Give it an operating system and access to sensor feeds, but nothing else. Don’t write any software for it. Turn it on. If it is true intelligence it will evolve, build its own programs, become self-aware, build its own repertoire of data, learn. Evolve, just like humans and other animals do. The truth is that this will never lead to artificial intelligence. We create AI through algorithms and data. This “AI” will get better as we write better algorithms. But will the AI be able to feel? Be creative? Perceive as we do? The answer is no. In fact, AI seems to be one of those fields we could likely do without. Will it help make a smarter, “intelligent” thermostat? Maybe. But then maybe we don’t need intelligent thermostats. Maybe we need better insulation in our houses.

All this research may lead to some breakthrough somewhere down the road, and that’s great. Problem is, the planet we call home needs solutions to its problems now. In 50-100 years, it will likely be too late.


History of languages: the freewheeling fifties

“Programming in the America of the 1950s had a vital frontier enthusiasm virtually untainted by either the scholarship or the stuffiness of academia”
John Backus

The 1950s were the pioneering days of programming languages. Much of the code written until the 1970s would be unstructured. Many of the modern language constructs were yet to be designed, and compilers were often closely linked with the particular memory requirements of a machine. The 1950s heralded the move away from programming by means of machine language to languages with a more “English-like” syntax. Here’s what John Backus had to say about the 1950s:

“Just as freewheeling Westerners developed a chauvinistic pride in their frontiersmanship and a corresponding conservatism, so many programmers of the freewheeling 1950s began to regard themselves as members of a priesthood guarding skills and mysteries far too complex for ordinary mortals”.

Of major note was the development of Fortran. Backus states that Fortran did not evolve out of some “brainstorm about the beauty of programming in mathematical notation”, but rather simple economics: programming/debugging costs > running costs. Part of the issue was dealing with floating-point calculations. In 1954, the IBM 704 was released, the first mass-produced computer with floating-point arithmetic hardware. This sped up floating-point calculations (by a factor of 10), so effort was now diverted to making the job of the programmer easier. The goal of the FORTRAN Project was “to enable the programmer to specify a numerical procedure using a concise language like that of mathematics”. At the time it was expected that such a system would reduce the coding and debugging task to less than one-fifth of what it had been [1].

By the late 1950s, programming language design was in its heyday. In 1958 a committee was formed to design a universal, machine-independent programming language. The result was ALGOL, or ALGOrithmic Language – what we now know as Algol 58. ALGOL became the blueprint for the way languages would evolve over the next three decades (and spawned some crazy languages: MAD, JOVIAL, and NELIAC). ALGOL used bracketed statement blocks and was the first language to use begin-end pairs for delimiting them. 1959 (specifically May 28/29) saw the start of work on COBOL, a language dedicated to business computing, and by 17 August 1960 the first Cobol compiler was completed. Cobol, unlike both Algol and Fortran, relied heavily on “English”-like syntax. 1958 also saw the birth of Lisp, designed by John McCarthy.

The end of the decade saw four distinct families of languages emerge: Fortran, Algol, Cobol, and Lisp. Whilst the use of Algol would eventually diminish, it likely had a greater influence on future programming languages than its contemporaries Fortran and Cobol, which would themselves evolve but influence far fewer languages. Lisp would also evolve, though in a more niche way, in the sphere of artificial intelligence.

Ironically, the language most influenced by Fortran and Cobol (and Algol) was PL/I, which would itself slowly disappear – well, not quite: IBM still makes a PL/I compiler, which likely means there is some serious code out there written in PL/I.

[1] Backus, J.W., Beeber, R.J., Best, S., Goldberg, R., Haibt, L.M., Herrick, H.L., Nelson, R.A., Sayre, D., Sheridan, P.B., Stern, H., Ziller, I., Hughes, R.A., Nutt, R., “The FORTRAN Automatic Coding System”, Proceedings of the Western Joint Computer Conference, 1957.