The Wirth Trinity – Modula-2

Pascal had become quite popular in the 1970s, in part because it was a relatively easy language to learn. It was not as fast a language as, say, Fortran, but it did incorporate the notion of structured programming, so that by the late 1970s programmers had learned to code without using the dreaded goto statement. Wirth realized that Pascal had its shortcomings [1], and considered Pascal’s I/O to be “inadequate and inflexible“. Earlier, to experiment with multiprogramming primitives, Wirth had contrived a rudimentary language called Modula [7]; Modula was never intended to be a general-purpose language like Pascal. A new language was to be designed, but instead of being called Pascal-2, it was called Modula-2.

In 1976 Wirth spent a year at the Xerox Palo Alto Research Center (PARC). Here he learned about hardware design, and on returning to Switzerland began work on what was to become Lilith, a personal workstation [3]. Wirth had some fixed constraints on the system: single-user, single processor, and all software written in a single language [1]. As to the language, Pascal was not capable, nor was Modula, at least not by themselves. The solution? Modula-2, a conglomeration of the bloodlines of Pascal, Modula, and Mesa (a language being developed at Xerox PARC). The Lilith was marketed as “The Modula Computer”, and sold for US$22,750.

Modula-2 was based on the concept of a module, which allowed for high-level abstraction and low-level facilities. The language was defined in 1978 and implemented on a PDP-11, with the first Modula-2 compiler released in 1980. One of the reasons Modula-2 didn’t succeed was the lack of a good free compiler (there is one now, but only for Windows).

Influences?

Modula-2 was influenced by Pascal, Modula, and Mesa (a Pascal offspring).

Why was it developed?

To alleviate the shortcomings of Pascal, and to design a language more apt for the time. Whereas Pascal was designed more for teaching, Modula-2 was designed as more of a systems-language.

What did it actually do for programming?

Modules. Derived from the notion of abstract data types, and incorporating information hiding, the module built on the concepts of Mesa. This allowed things like I/O to be removed from the language proper and encapsulated in a module, forming a “standard library”. The module structure isolates its contents from the surrounding program, and modules can be separated into a definition and an implementation part. All communication with other modules occurs through imported and exported identifiers. The module can also be regarded as a representation of the concept of an “abstract data type” postulated by Liskov in 1974 [10].

Language features (Modula) “dumped” from Pascal

  • Variant records.
  • Built-in I/O – move to libraries.
  • No goto statement.
  • Packing of data – Pascal allowed data in record and array structures to be packed.

Improvements over Pascal (i.e. what was added/modified)

  • Source code is case-sensitive; reserved words are in UPPERCASE.
  • Open arrays.
  • The PROCEDURE type.
  • Flexible declarations: types, variables, and procedures can be mixed together, as
    opposed to Pascal’s strict const, type, var, etc.
  • CASE has an ELSE for matching unspecified values. Also permits subranges.
  • Boolean expressions are evaluated conditionally.
  • I/O is relegated to library modules to avoid system dependencies.
  • Readability was enhanced through the use of control structure terminators:
    REPEAT-UNTIL, and END for IF/WHILE/FOR. This eliminates the begin-end block construct.
  • The FOR statement is augmented by the clause BY. Pascal’s downto clause is missing.
  • The type CARDINAL to allow for unsigned (positive) integers.
  • Two standard procedures for incrementing and decrementing: INC and DEC.
  • LONG identifiers.
  • There are better control transfers. The statements RETURN and EXIT are used to transfer control from procedures and looping structures. HALT is used to terminate a program.
  • No goto statement.
  • No syntactic ambiguities in decisions, e.g. no dangling ELSE. IF statements always require an END.
  • A new looping statement called LOOP, providing an infinite loop.
  • Standard I/O functions, dynamic storage allocation, files via library modules.

Language deficiencies

  • No standard procedures for I/O and storage allocation (some found this quite onerous).

For a list of ambiguities and insecurities, the interested reader is referred to [8], [9], and [12].

Language Genealogy

1976   Modula     Wirth
1978   Modula-2   Wirth
1988   Modula-3   Designed by DEC and Olivetti. Not widely adopted in industry.

Refs:

  1. Wirth, N., “The development of procedural programming languages – personal contributions and perspectives”, in Modular Programming Languages, JMLC 2000, LNCS, V.1897 (2000)
  2. Wirth, N., “History and Goals of Modula-2”, BYTE, pp.145-152 (Aug.1984).
  3. Ohran, R., “Lilith and Modula-2”, BYTE, pp.181-192 (Aug.1984).
  4. Paul, R.J., “An Introduction to Modula-2”, BYTE, pp.195-210 (Aug.1984).
  5. Coar, D., “Pascal, Ada, and Modula-2”, BYTE, pp.215-232 (Aug.1984).
  6. Gutknecht, J., “Tutorial on Modula-2”, BYTE, pp.157-176 (Aug.1984).
  7. Wirth, N., “Modula: A language for modular multiprogramming”, Software- Practice and Experience, 7, pp.37-65 (1977)
  8. Spector, D., “Ambiguities and insecurities in Modula-2”, ACM SIGPLAN Notices, 17(8), pp.43-51 (1982)
  9. Torbett, M.A, “More ambiguities and insecurities in Modula-2”, ACM SIGPLAN Notices, 22(5), pp.11-17 (1987)
  10. Liskov, B., Zilles, S., “Programming with abstract data types”, in ACM SIGPLAN Notices, 4, pp.50-59 (1974).
  11. Collins, S., “Comparing Modula-2 with Pascal and Ada”, Data Processing, 26(10), pp.32-34 (1984)
  12. Cornelius, B.J., “Problems with the language Modula-2”, Software-Practice and Experience, 18(6), pp.529-543 (1988)

The usability of Fortran

The following are some usability criteria I put together a few years ago to evaluate Fortran. The rankings are on a scale of 1 to 5, where 5 is the best score.

User control and freedom (4)

Fortran provides the usual series of control structures, including a select statement which allows for aggregate cases. It is unique in offering a large number of loop structures, under the guise of the do loop, including what is essentially an infinite loop, to which the user has to provide the exit condition within the loop. From a modularization perspective, it provides both functions (which return only one item), and subroutines. Arrays are easy to create, and not limited to indexing which begins at zero.

Consistency and standards (3)

As Fortran is backwards compatible, mixing constructs from different variants, e.g. Fortran 77/90/95 etc., is possible, which reduces consistency. For example, prior to F90, goto statements were still widely used, and the format statements associated with output still rely on labels. Newer dialects of Fortran allow free-format source, so the user can use as much or as little whitespace as needed. There are some issues which reduce consistency. Code invoked by a case statement must be on a separate line, which seems inconsistent with how a user would write code. Some structures, such as strings, do not behave in the same way as character arrays: a string can be input as a whole, and supports operations such as calculating its length, yet it is not possible to index an element of a string without using a substring, which may be inconsistent with how novices interpret strings. Fortran does not use a statement terminator, which means every statement must be on a separate line. Some of the operators are also inconsistent, for example not-equal (/=), the use of English words such as .and. for logical expressions, and the use of % as the member separator in its “struct”-like derived types.

Error prevention (4)

Fortran reduces the likelihood of some errors by providing control-structure terminators, which “cap” a structure, for example if-then ... end if. This reduces problems such as the dangling-else, and helps make the code clearer. Even functions and subroutines have end-terminators.

Recognition rather than reference (3.5)

Fortran makes less use of symbols than other languages; the main exception is the “power” operator, **. It is easy to look at a piece of Fortran code and understand what is happening. Even subroutine parameters can be labelled as in/out/inout, so the novice programmer has a direct understanding of what is happening to each parameter.

Efficiency of use (3.5)

Fortran is an extremely efficient language. It also contains structures which make writing a program more efficient. For example, when working with 2D arrays, Fortran allows access to a sub-array by assignment (slicing), versus the traditional approach of using a nested loop. 

Help Users Recognize, Diagnose and Recover from Mistakes (3.5)

Some of the error messages can be challenging to understand. For example, failing to use the keyword “call” in front of a subroutine invocation results in the compiler message “Unclassifiable statement”, but overall they are no more difficult than in other languages. Novices will benefit from the compiler trapping logical errors such as the misuse of = in an if statement.

Help and documentation (3.5)

There is an abundance of material available on Fortran programming, however it is usually tailored towards a specific variant, e.g. F77, F95 etc., so it may be challenging for the novice user to quickly find and evaluate a topic of interest. For example, a user searching for loop constructs may stumble onto one of the older syntaxes, which is not properly identified as such, and then incorporate this older structure – whilst the code may work, it may not be optimal.

Scalability (4)

Fortran is an extremely scalable language, in that it offers, apart from subprograms, the notion of external modules to encapsulate related functions. It also has an interface construct to link in external functions, including functions from other languages such as C.

Learnability (4)

Fortran is an extremely learnable language for novice programmers, with a medium cognitive load. It is easy to write a small program, and the user can rely more on existing knowledge than with some other languages. The language shields the user from many errors which could possibly occur, for example the use of = where == was intended, and through its end-terminators. The use of pointer references is not required for either storing input, or returning information from a function. Loops are easy to construct and versatile.

Fun with C and Pascal

When Niklaus Wirth created Pascal he did so with the purpose of creating a simple language which could be effectively used to teach the craft of programming. This was likely a response to the complex and bloated languages which had evolved in the latter half of the 1960s – PL/I and Algol-68, to name but two. None of these languages were all that conducive to teaching programming constructs. Pascal offered a rich and flexible choice of data structures and types, and a set of control structures that reflected the design of algorithms. Part of the usability of Pascal had to do with its readability. Even without any prior experience of programming, it is possible for someone to look at a Pascal program and read it with a reasonable degree of understanding. The simple program below only converts Celsius to Fahrenheit, yet it is easy to see what is happening, even to the uninitiated.

program c2f(output);
var degC, degF: real;
begin
   writeln('Celsius? ');
   read(degC);
   degF := degC * 1.8 + 32.0;
   writeln('Fahrenheit = ',degF:0:2);
end.

What most novices understand, just by looking at the code, is that this is a program. They might be able to glean the fact that degC and degF are real variables. The terms begin and end are self-evident, as are read and writeln. The equation should be quite simple to follow. Compare this against the C version:

#include <stdio.h>
int main(void)
{
   double degC, degF;
   printf("Celsius? ");
   scanf("%lf", &degC);
   degF = degC * 1.8 + 32.0;
   printf("Fahrenheit = %.2lf\n", degF);
}

There are many more unknowns in the C program. What is stdio.h? What is void? What is scanf? What is a double? Why is the & used to read in a value? What do the squiggly braces denote? The reality is that there is a reason why Pascal’s popularity soared in the 1970s – it was easy to learn. For a good period of time, possibly until the late 1980s, Pascal was the primary language for teaching programming. At some point it was supplanted by the likes of C. The rationale for the switch remains unclear, as none of these languages were designed to teach programming. Worse still are the institutions who indoctrinated their students with mind-crippling languages such as Smalltalk, C++, and Java. Even the inventor of C++, Bjarne Stroustrup, famously said, “C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off“. Pascal, on the other hand, wouldn’t allow you to shoot yourself in the foot.

I understand the need to provide an industry-based language, but there is enough time in a university curriculum to do that. I look at it this way. Humans learn to read using very basic books, the likes of the “Dick and Jane” series – good, bad, or indifferent these books taught basic English words and phrases. It’s not like we have first graders start reading with War and Peace is it? We shouldn’t do the same with programming. We should actually wind the clocks back and use Pascal, at least for computer science students (there are better options, like Python and Julia for non-CS students).

Pascal was designed to be a small language, which makes it easy to learn, and even easy to master. From there it is possible to introduce a language like C, and instead of teaching a language just for the language’s sake, explore why it is different from Pascal, its benefits and limitations, do comparative studies, etc. There is so much more to learn about languages than just the surface knowledge of syntax.

Recursion – Sudan’s Function

Apart from Ackermann’s function, there is another function which is not primitive recursive – the Sudan function. It was one of the first functions with this property to be published, appearing in 1927, derived by the Romanian mathematician Gabriel Sudan (1899-1977) [2]. In the mid 1920s, both Sudan and Wilhelm Ackermann were (PhD) students of David Hilbert at the University of Göttingen in Germany, studying the foundations of computation. Sudan was in Germany from 1922-1925, after which he returned to Romania. Both Ackermann and Sudan are credited with discovering recursive functions that are not primitive recursive.

While Ackermann’s function is well known, Sudan’s is not. At the same time that Ackermann submitted his paper for publication, Sudan independently submitted his own work. Sudan cited the ideas of Ackermann contained in a paper by Hilbert [4]. Ackermann also cited knowledge of Sudan’s work [3]. However Sudan’s paper remained relatively unknown, in part because of the obscurity of the journal he published it in. Published in 1927 [2], it was not until 1979 that his contribution was presented at a conference by Calude and colleagues [1]. Here is Sudan’s function:

S(m,n,0) = m + n
S(m,0,k) = m
S(m,n,k) = S(S(m,n-1,k),S(m,n-1,k)+n,k-1) for n≥1, k≥1

Here is the algorithm implemented in C:

int sudan(int m, int n, int k)
{
   if (k == 0)
      return m + n;
   else if (n == 0)
      return m;
   else   /* k >= 1 and n >= 1 */
      return sudan(sudan(m,n-1,k), sudan(m,n-1,k)+n, k-1);
}

The data in Table 1 shows some of the output for various values of m, and n, when k=1.

n\m     0     1     2     3     4     5     6
0       0     1     2     3     4     5     6
1       1     3     5     7     9    11    13
2       4     8    12    16    20    24    28
3      11    19    27    35    43    51    59
4      26    42    58    74    90   106   122
5      57    89   121   153   185   217   249
6     120   184   248   312   376   440   504
Table 1: Some values of S(m,n,1) for various m and n, when k=1

Ackermann cited Sudan’s work in a footnote [3, p.119]:

Original: Eine Arbeit, die mit der vorliegenden manche Berührungspunkte hat, wird von Herrn G. Sudan publiziert werden. Es handelt sich bei ihr um die Definition von Zahlen der zweiten Zahlklasse, die man in ähnlicher Weise klassifizieren kann wie die Definitionen der reellen Zahlen.

Translation: A work that has some points of contact with the present one will be published by Mr. G. Sudan. It is about the definition of numbers of the second number class, which can be classified in a similar way to the definitions of real numbers.

  1. Calude, C., Marcus, S., Tevy, I., “The first example of a recursive function which is not primitive recursive”, Historia Mathematica, 6, pp.380-384 (1979).
  2. Sudan, G., “Sur le nombre transfini ωω“, Bulletin Mathématique de la Société Roumaine des Sciences, 30, pp.11-30 (1927).
  3. Ackermann, W., “Zum Hilbertschen Aufbau der reellen Zahlen”, Mathematische Annalen, 99, pp.118-133 (1928).
  4. Hilbert, D., “Sur l’infini”, Acta Mathematica, 48, pp.91-122 (1926).

Testing Julia for Speed – 2021 update

A few years ago I tested Julia for speed using a series of algorithms. Since Julia has evolved, I figured I would revisit the scores to see how things have changed. The first was Ackermann’s function. Ackermann is calculated by means of an iterative version of the algorithm which uses a stack. This is partially because both Julia and Python have “issues” with recursion. All calculations were performed on a MacBook Pro with an M1 chip. All timings are in seconds.

Ackermann(4,1)                         2016      2021
Julia (1.5.3) (Int built-in stack)   121.79     23.08
C (gcc)                               32.77     25.51
Fortran (gfortran)                    37.51     17.2
Python3 (built-in stack)             878.13     34.7

Interestingly, the fastest speed was provided by Fortran, followed by Julia, with C in third place. All times were faster than the 2016 measurements, even Python’s. Next we retested the Bubblesort.

Bubblesort (100,000 integers)          2016       2021
Julia (1.5.3) (Int built-in stack)   184.394     12.35
C (gcc)                               34.475     28.072
Fortran (gfortran)                    35.122     25.9
Python3 (built-in stack)            3059.445   1038.44

Here Julia takes the clear lead, with C again in third place. The fact that Julia now runs in less than half the time C does is quite miraculous. Finally we retested the application of a mean filter to a grayscale image.

Mean Filter (image 2144×6640)          2016      2021
Julia (1.5.3) (Int built-in stack)    3.485      1.4
C (gcc)                               0.848      0.8
Fortran (gfortran)                    1.031      0.49
Python3 (built-in stack)            245.26       6.03

Here Fortran actually holds on as the winner, with C second and Julia in third. Now some of these improvements can obviously be attributed to advances in chip design over the intervening 5 years. With Julia, the gains seen are likely efficiency improvements to the compiler itself. What is more interesting is that while C was the winner in 2016, this is not the case now. These programs are no-frills, meaning that the coding for each was as similar as possible, without the use of fancy things like vectorization, which would likely have made Python more efficient, but makes it hard to accurately compare the languages. In the case of the mean filtering, it is not surprising that Fortran outpaces C, because Fortran allows array slicing, and in C this would require extra loops.

Why were these three algorithms chosen? Largely because they exude slowness, or heavy use of resources, which is perfect for this kind of test.

The Wirth Trinity – Pascal

Algol was likely minimally successful from the point of view of being used extensively in industry, but was used in academic environments. There were many attempts to extend its applicability, which led to Algol 68 and Algol W. However, the complexity of Algol 68 ultimately led to that variant’s demise. PL/I was an attempt to create a successor to Fortran (a “Fortran VI”) by combining features from Algol, Fortran, and COBOL, but the product was once again an extremely large language. Both Algol 68 and PL/I exemplify the “Swiss army knife” approach to language design – providing every conceivable feature. What was required was a smaller, more compact language – enter Pascal.

Wirth began the design of Pascal (named after the French mathematician Blaise Pascal) in 1968, with a compiler written entirely in Pascal, on a CDC 6000 series mainframe. The first compiler appeared in 1970. The language was tweaked in 1972, and became an ISO standard in 1982. In describing Pascal, Wirth remarked that “the guiding idea in the layout of the syntax of Pascal was simplicity, due to the recognition that structures difficult to process by compilers are also difficult to master by human readers and writers”. The highlights of Pascal [1] were:

  • Simple control structures for decisions, and repetitions.
  • Scalar data types: boolean, integer, real, char, and enumerations.
  • Ability to construct complex data structures using records, arrays, and sets.
  • Strict static typing – every constant, variable, function or parameter had a type.
  • Dynamic data structures built with the use of pointers.
  • Recursive procedures.

Influences?

Pascal was heavily influenced by Algol, and is often called Algol-like.

Why was Pascal developed?

Pascal was born out of what Wirth terms “an act of liberation”. Liberation from the prospect of using Algol or Fortran as languages to teach programming, and liberation from the design constraint imposed by committee work.

What did it actually do for programming?

Pascal was one of the first languages built from the ground up with the notion of structured programming.

  • With no commercial backing, Pascal succeeded on its own merits, and was implemented on systems ranging from Cray supercomputers to personal computers. Programmers who felt “straitjacketed” by writing programs in BASIC, flocked to Pascal.
  • It was an ideal language for teaching programming.
  • Pascal was drafted as the basis for the DOD’s Ada project.
  • It introduced records into scientific language (although Algol-W really did this).
  • It introduced a usable case statement.

Design considerations

The general idea dominating the design of Pascal was to provide a language appealing to systematic thinking, mirroring conventional mathematical notation, satisfying the needs of practical programming, and encouraging a structured approach. It should be simple, have the ability to handle non-numeric data, be suitable for teaching programming, and have the compile-time and runtime efficiency of Fortran.

Language features

  • Records, and variant records.
  • Algol-60 had blocks (local declarations + statements) and compound statements
    (statements only), whereas Pascal eliminated the block.
  • More, yet simpler control structures than Algol-60.
  • Use of a real assignment operator, :=
  • Strong type safety.
  • Case insensitive.
  • Native set operators.

Language deficiencies

  • Keeping the goto statement.
  • Syntactic ambiguities inherited from Algol – the lack of explicit closing symbols for
    nestable constructs, e.g. dangling-else.
  • Inability to support separate compilation of modules hindered the development of
    large programs.
  • The flawed case statement, which lacked an else clause.
  • Fixed size of arrays, precluded the use of general math and string libraries.
  • Fortran and Cobol programmers felt handcuffed by Pascal’s compulsory declaration of
    variables.
  • No exponentiation operator.

Criticisms of the language

  • Lack of block structures.
  • No dynamic arrays.
  • Lack of the “2nd” form of conditional (inline if).
  • Labels and the goto statement.
  • “Unnatural” unification of subranges, types and structures.
  • The difference between procedure and function is marginal.

Refs:

  1. Wirth, N., “The development of procedural programming languages – personal contributions and perspectives”, in Modular Programming Languages, JMLC 2000, LNCS, V.1897 (2000).

Myths about becoming a programmer

I see a lot of people on sites like Quora asking about becoming a programmer. The reality of course is that programming is not easy, nor is it for everyone. Here are some common myths dispelled.

Myth 1: “I need to be super good at math.”

No, not really. Being great at calculus might be helpful in certain applications where you need to solve equations to derive an algorithm and implement a solution, but most universities put too much emphasis on esoteric math skills. Mathematical knowledge is good in areas like image processing, but too often too much emphasis is placed on it at the expense of really important things like problem solving.

Myth 2: “I need a degree in computer science.”

Sure it helps, in the right context. Some of the most interesting people in computing never went to university, yet they achieved incredible things. Suppose you have two people: one went to university and got an A average, the other taught themselves programming and created an incredibly successful app which sold millions of copies. Who would I hire? The latter. Why? Because they have already proven themselves without the need for any academic hubris. That, and they have a portfolio of experience, and are self-motivated. There are many stories like this. Conversely, there are people who barely pass their courses and still get a degree. Do I want someone with a 55% average programming software for a nuclear reactor? Hardly. Oh, and remember, most of the people teaching computer science in institutions of higher learning don’t actually design software for a living.

Myth 3: “I need to be super brilliant.”

Define brilliance? The ability to get straight A’s in university? Hardly. You need to be a hard worker, and more importantly than be brilliant, you need to have a good sense of exploration, and willing to think outside the box. Clever algorithms come into existence from people who have the ability to think beyond current knowledge, into the great beyond. Brilliance comes in many forms, not just academic grades.

Myth 4: “I need to learn the best language.”

Define best? There is no best language, despite what anyone says. Every language has some inherent benefit or weakness and is geared towards slightly different things. In reality to become a good programmer you will need to learn about many different languages, and how they interact. Never have the attitude that “C is best”, or “I only code in Java”. Boring… everyone learns these languages. If you want to stand out learn the languages that others don’t, like Fortran and Ada.

Myth 5: “I’m done learning.”

Many people seem to believe that once they have a degree they are done learning. Wrong. Computing, like many disciplines continually evolves. You will need to learn new things all the time, and in fact maybe unlearn some of what you learned in university. University often doesn’t relate completely to the real world. Case in point, many years ago academia discarded teaching languages like Cobol because they thought it wasn’t relevant… news flash… it’s as relevant today as it was in 1970.

Myth 6: “Once I have mastered the syntax of a language, I can do anything.”

Mastering syntax is one thing; being able to actually implement an algorithm is another altogether. There are often many ways to implement an algorithm, some more efficient than others. You have to have an innate understanding of how a language can be used to implement an algorithm. In some cases the language may not even be the best one to implement the algorithm. For example, you can master Java syntax, but it would not be the best language to implement a real-time control system for an autonomous train.

Myth 7: “I’m good at gaming, so I’ll be a great coder.”

Likely not. Gaming and actually designing and implementing software are worlds apart. If you don’t have any interests outside of the computer, I would imagine you aren’t really able to think outside the box… and I don’t want to hear any malarkey about having great hand-eye coordination and multitasking skills… it’s hyped-up baloney.

Myth 8: “I can master language X in a few weeks.”

🤣 Nope. Nada. Not likely. You may get a hang of the syntax, but master? That’s like saying you could become a Jedi in a few weeks.

Myth 9: “I learned HTML, and it was easy.”

Yeah, HTML may be easy, but that’s because it’s not a programming language despite what people say. HTML is a language to mark up the structure of websites… and it doesn’t work well without CSS (also not a language), and things like Javascript (which is a language) to make things dynamic. Programming languages implement logic like making decisions, and repetitive actions, HTML doesn’t do that.

Myth 10: “I’m a woman, programming isn’t for me.”

Why not? Just because there are so many guys in computing? Ignore that, and follow your interests. Women were as much at the forefront of computing in its formative years as men (it’s just often conveniently forgotten). Actually, some of the best programmers in my classes are women.

Myth 11: “Programmers sit in front of a machine all day.”

Programming isn’t all about machines, and it isn’t all about coding. It is just as much about coming up with designs, and new algorithms, as implementing them. Besides, these days you can work from just about anywhere. Some people find inspiration sitting in a cabin in Iceland, or on a beach. It’s what you make of it.

Myth 12: “The more tools I use the better programmer I am.”

No. Tools are fine, but sometimes the more tools you know, the less you understand about what is happening. A good example is programmers who eschew learning low-level stuff like the command line, instead opting only for interactive development environments. They don’t understand how things work at the lowest level, and so have less of an understanding of what is going on overall.

Myth 13: “I’m a cool programmer because I code everything on the fly.”

No, you’re not. All it proves is that you likely never followed instructions. You probably indent with two spaces, or worse, use tabs. Coding on the fly is okay for trying out small things, experimenting and the like, but it’s not good for large-scale projects because it’s easy to miss things. People usually code on the fly because they think they are cool. Big mistake. I see it when people try to translate code, and then wonder why they get in a mess… it’s usually because they have no clue what they are doing.

C++ – The Cobol of the 90s

A quote from The Unix-Haters Handbook, (1994, p.203-204).

… C++ misses the point of what being object-oriented was all about. Instead of simplifying things, C++ sets a new world record for complexity. Like Unix, C++ was never designed, it mutated as one goofy mistake after another became obvious. It’s just one big mess of afterthoughts. There is no grammar specifying the language, so you can’t even tell when a given line of code is legitimate or not. … Comparing C++ to Cobol is unfair to Cobol, which actually was a marvellous feat of engineering, given the technology of its day.

When you could read about the whole Internet in a book

When I was in university in the 1980s, our view of the internet was just a simple band of networks in sporadic countries, linked by undersea cables that were probably used more for other communications than the Internet. There was no web. There was email, USENET news, and far distant sites you could ftp in to, for various reasons, mostly to download freeware and shareware for MS-DOS. Sites like SIMTEL-20.

In 1990 O’Reilly & Associates launched a book, “!%@:: A Directory of Electronic Mail Addressing and Networks“, which basically outlined every network in the world and how they were connected. Places like BITNet, CDNnet, USENet, and Internet. Yes, Internet had its own entry, as “US Research and University Projects TCP/IP Network”. Its description mentions that it began in 1982 when a series of networks like ARPANET, and MILNET were interconnected, and by 1990 connected to over 40 countries. The book talked about issues like addressing for emails, and architecture, and which other networks each network was connected to. At the end of the book was a list of second and third level domains by organization.

Every network had its own page.

You could email someone if you knew their email address. You could read the USENet news groups to keep up with what was happening in the world in any number of weird and wonderful newsgroups. It was the place to go to post a technical problem, or just some question in the myriad of stupid newsgroups. It was the only real connection to the outside world. We spent time downloading shareware from SIMTEL-20, then housed at the White Sands Missile Range with ARPANet access. It may have been one of the busiest sites on the early Internet. Sometimes the submarine cables would be down, and the “net” would effectively shut down.

Why C and C++ are dangerous languages (for novice programmers)

The problem with languages such as C and C++ is that they are inherently dangerous for those who don’t know how to use them properly. It’s sort of like allowing a novice forestry worker to use a chainsaw without any protective gear. Dangerous no matter how well they think they know how to use it. Chainsaws are amongst the most dangerous of tools, just as C and C++ are amongst the most dangerous of languages. Why? Because both languages allow things to happen that the novice may not anticipate. Take as an example the following C++ program, which processes an integer array of 10 elements. The problem lies in the fact that the loop processes far more elements than actually exist in the array. The result? No crash, no warning, nothing, except that the values it outputs are erroneous.

#include <iostream>
using namespace std;

int main() {
   int x[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
   int i, y;
   i = 10;
   // deliberately walks from x[10] down to x[-9], outside the bounds of the array
   while (i > -10) {
      y = x[i] * 67;
      cout << i << "->" << y << endl;
      i = i - 1;
   }
   return 0;
}

What happens when x[10] or x[-1] is accessed? Something crazy. The element x[10] exists outside the purview of the array, and the results obtained are both unpredictable and may change every time the program is run. Accessing x[10] means the program reads some adjacent memory address and retrieves some garbage value. The program assumes the value obtained is a number, multiplies this random number by 67, and ultimately prints out the value without so much as a hint that there is a problem. Ellis and Stroustrup even point this out in their classic book on C++ [1]: “… the C array concept is weak and beyond repair.” Of course, arrays are C at its utmost worst (not that arrays in C are really arrays). Case in point is this snippet of code from The Unix-Haters Handbook (p.192). It suggests that provided with the code:

char *str = "bugy";

Then the following equivalences are true:

0[str] == 'b'
*(str+1) == 'u'
*(2+str) == 'g'
str[3] == 'y'

Which is just madness for the novice programmer. Arrays in C are simply treated like pointers (you can debate this all you like, but a spade is a spade), and are not at all transparent. Sure, the novice programmer can simply use str[x], but at some point they are going to encounter the other forms and wonder why. This, coupled with C’s lack of bounds checking, makes arrays inherently problematic.

This can of course be catastrophic if a bad piece of code finds its way into software expected to perform some real-time function. Surprisingly, it’s not the only problem; there are many. For the novice programmer, C and C++ are poor choices as introductory languages. Here are some of the simpler issues.

  • Knowledge of C/C++ requires far too much knowledge of the internal workings of memory, e.g. stacks versus heaps, pointers, etc. Learning to program should focus on how language constructs are used to implement an algorithm, not on low-level programming.
  • C and C++ are very permissive about what they allow to compile. Problems like out-of-bounds array accesses are never flagged, and this failure to indicate an error can lead to a level of complacency amongst novice programmers.
  • There is no inherent benefit to teaching OO to novice programmers. OO is an advanced methodology, and not a panacea for all programs.
  • Math is never simple. If you have two int variables, each assigned a value of 2,000,000,000, the value returned when they are added together is not 4 billion. The code below yields -294967296 when executed (on a typical system with a 32-bit int). Do the same in Julia, and you get 4000000000. These languages require far too much knowledge about appropriate datatypes.
int x, y, z;
x = 2000000000;
y = 2000000000;
z = x + y;   // signed 32-bit overflow: wraps around to -294967296

So why are these languages, and their descendants, used so often in introductory courses? Probably because somebody thought it was a good idea. In academia, likely because of the pointers, which help weed out novice programmers who don’t have a good handle on memory straight out of the gate. There is no doubt that C is a powerful language, but it should be introduced at an appropriate stage, not as an introductory language. C was designed as a language to be used, to build Unix and beyond (it’s no coincidence that in earlier books on Unix, C appeared in the Unix Programming section). C++, on the other hand, should likely be left alone. It basically just overlays OO concepts onto C.

Mody put it best in his criticism of C in 1991. He said he was…

“appalled at the monstrous messes that computer scientists can produce under the name of ‘improvements’. It is to efforts such as C++ that I here refer. These artefacts are filled with frills and features but lack coherence, simplicity, understandability and implementability. If computer scientists could see that art is at the root of the best science, such ugly creatures could never take birth.”

  • Mody, R.P., “C in Education and Software Engineering”, ACM SIGCSE Bulletin, 23(3), pp.45-56 (1991)
  1. Ellis, M.A. and Stroustrup, B., The Annotated C++ Reference Manual, Addison-Wesley (1990)