When designing a website, test *all* the browsers

People tend to forget that websites have to be tested on multiple browsers, even websites built on platforms such as Shopify. Here are two versions of the Rocky Lake Birchworks site, one in Safari, the other in Firefox. The problem is that in the Safari version, the pull-down menu showing the sizes of birch syrup is hidden from view, although you can still click the area and the drop-down list shows up… and there are no pictures, although they do appear after a couple of minutes. Why? Possibly there is something wrong with the images, or with the way the code is set up.

The Safari version
The Firefox version

Proper browser compatibility can only be verified through cross-browser testing of websites. Don’t assume that everyone uses the same browser you do.

Why does no one write language critiques anymore?

Years ago, when a new programming language such as Pascal appeared, people in the community provided some form of critical review (often in the form of a journal article). Where have those critiques gone? Why does nobody write formal critiques anymore? Is it because nobody wants to make waves? I guess the Internet has intervened in some respects, but the more “formal” review is sadly missing.

For example, in 1973 Habermann wrote some critical comments on Pascal [1]. He listed things that were not incorporated into Pascal, ways in which Pascal caused problems similar to other languages, and the major inadequacies of the language. In 1975 Lecarme and Desjardins [2] refuted some of Habermann’s criticisms. Among the things Habermann flagged:

  • There was no block structure – Habermann seemed to prefer the clean block structure of Algol-60.
  • There were no dynamic arrays – an inconvenience.
  • There was a lack of shorthand inline conditional expressions.

Lecarme and Desjardins suggested that it is easy to continue adding constructs indefinitely, and that the creation of endless lists of constructs is not a viable way of designing languages. They suggested that PL/I was a good example of taking this to the extreme, and that “…even its most irreclaimable addicts and most enthusiastic eulogists always seem to find more constructs to incorporate in it.” [2]. They reminded the reader that Pascal is a simple and concise language. Habermann also went after Pascal’s inadequacies, things like the “artificial unification” of subranges, types and structures, and procedures and functions not being “different enough”.

There are always different approaches to designing languages, and Habermann’s opinions might reflect the fact that he was a proponent of Algol-60, while Lecarme and Desjardins were users of Pascal. Both had opinions, and both bolstered their arguments with examples, and with references to other languages where things worked better or worse. Welsh and colleagues [3] wrote an article outlining ambiguities (equivalence of types, scope rules, set constructors) and insecurities (variant records, functions and procedures as parameters, range violations, uninitialized variables, dangling references), which provides a further perspective.

Pascal was likely over-critiqued, but other languages did not escape. In 1988 Pohl and Edelson wrote a paper outlining C’s shortcomings [4]. It is hard to find such reviews today, and it’s sad to think that we don’t value this sort of discourse anymore. People sometimes only want to talk about the virtues of languages (and things in general) without commenting on their vices. Maybe it’s not even possible anymore given the bloatedness of some languages.

  1. Habermann, A.N., “Critical comments on the programming language Pascal”, Acta Informatica, 3, pp.47-57 (1973).
  2. Lecarme, O., Desjardins, P., “More comments on the programming language Pascal”, Acta Informatica, 4, pp.231-243 (1975).
  3. Welsh, J., Sneeringer, W.J., Hoare, C.A.R., “Ambiguities and insecurities in Pascal”, Software – Practice and Experience, 7, pp.685-696 (1977).
  4. Pohl, I., Edelson, D., “A to Z: C language shortcomings”, Computer Languages, 13(2), pp.51-64 (1988).

The caveats of Julia for image processing (ii) – the image structure

Once Images and other image-processing packages are loaded, inputting a colour image is relatively easy. What is somewhat perplexing is how the image data is returned and stored in Julia. From this perspective I am thinking of the novice doing image processing.

The image milano.png

Ideally, they would be able to access the “raw” data, stored in a simple multi-dimensional array. A grayscale image is just a 2D array, nothing fancy, easy to manipulate. The problem with img is that it isn’t a simple structure: it is stored as a 2D array of pixels, where each pixel is a struct. Maybe that’s how it should be?

> using Images
> img = load("milano.png")
> summary(img)
"756×756 Array{RGB{N0f8},2} with eltype RGB{Normed{UInt8,8}}"

N0f8 is a special type that Julia uses to map channel values from the range 0–255 to the range 0–1 (Normed{UInt8,8} is an 8-bit type representing 256 values from 0.0 to 1.0). This means that an RGB triplet such as (197,56,255) is stored as (0.77,0.22,1.0). Each component of an RGB image is 8-bit, i.e. has values from 0–255; turning them into real numbers is odd, and imprecise I would imagine. You can see what the type does by running the following code on single values and arrays:

> reinterpret(N0f8, UInt8(197))
> reinterpret(N0f8, [UInt8(197), UInt8(56), UInt8(255)])
3-element reinterpret(N0f8, ::Array{UInt8,1}):

Sorry, I don’t like using floating-point numbers to represent pixels. Call me old-fashioned, but I know how imprecise non-integers are. I get it, for some image processing it’s impossible to avoid floating-point calculations. There is supposedly some logic behind the use of N0f8, it’s just not very intuitive, in fact it’s quite the opposite. This may be because things have been over-thought. The work-around? Images does provide functions to get at the raw data: channelview() and colorview(). The first splits out the channels of an image, the second combines them back together. Here we split the image img, and then extract the individual slices.

img2 = channelview(img)
imgR = img2[1,:,:];
imgG = img2[2,:,:];
imgB = img2[3,:,:];

This provides a 3×756×756 reinterpretation of the N0f8 representation (img2), and three slices representing the Red, Green, and Blue components of the image. But the values are still N0f8. To change them to integers, you have to do a further bit of magic, which will create an Int64 representation. Note: don’t be tempted to change it to Int8 (-128..127), or UInt8 (0..255) (which is annoyingly displayed in hex).

img3 = trunc.(Int,img2.*255)

Converting to other colourspaces is not hard, but the process does return an N0f8 image by default. The second line of code below converts the N0f8 image to floating-point.

imgG = Gray.(img)
imgGf = Float64.(imgG)

Other conversions are less taxing. For example, converting to YCbCr or HSV returns a 2D array of pixels, each of which is still a struct, but at least the components are normal floating-point values.

imgYCbCr = YCbCr.(img)
imgHSV = HSV.(img)

Look, I understand why it seemed like a good idea to store an RGB image as a 2D array of structs, but manipulating the individual R, G, and B components is easier if they are a 3D array. The storage is compact, with a single element taking up 1 byte, which is not surprising given the underlying storage of UInt8 is 1 byte. It might also help if Julia didn’t display unsigned integers in hexadecimal. An unsigned 8-bit number just goes from 0 to 255, why change that?

Zero-based indexing considered harmful

Apart from goto statements, there aren’t many things that cultivate such heated discussion as the type of array indexing a language has. This article originally had the title “Array indexing: 0, 1 or arbitrary?”, but I thought I might spice it up by changing it to its current rendition. The article isn’t about hitting on 0-based indexing, though. Typically languages take one of three options, allowing for 0-based, 1-based, or arbitrarily based array indices. Languages such as Python, C, C++, and Java have 0-based indexing, Julia and Matlab are in the 1-based index club, and a whole slew of languages like Fortran and Ada prefer to let the programmer make the decision, in some cases even allowing negative indices. Is any one of them better than the others?

Some people can’t conceive of there being anything other than 0-based indices. For C it somewhat makes sense, as 0-based indices were an artifact of the fact that the language uses pointers to store things, supposedly. More specifically, the expression array[x] in C refers to a memory location x elements away from the origin element, so the index is used as an offset; the first element is therefore array[0], denoting the fact that it is 0 elements away. But it likely wasn’t C that started the trend. C inherited its array semantics from its predecessor B (which called them vectors), which in turn inherited them from BCPL. Notably, BCPL supported 0-based indexing, and was itself derived from CPL (1963), which seemed to use arbitrary-based indexing [1]. Interestingly, in B [2], a declaration v[10] allocated a vector v of 11 consecutive words, v[0]..v[10], with the extra word being a pointer to the first element of the vector.

The beast of two indexes

The truth of the matter is that before people started thinking about pointers and memory management, before the existence of Unix and C, many people were using either one-based or arbitrary-based indexing. That was before someone decided that array indexes should start at zero. It is therefore BCPL, created in 1967 by Martin Richards, which seems to be the source of pure 0-based indexing. That BCPL was a typeless language operating close to the machine makes this not all that surprising. Basically, BCPL variables represented pointers to one or more consecutive words of memory [3]. So if b is a pointer, then b+1 points to the word after the one b points to, and hence b+0 points to the same word as b.

This makes sense when you consider how an array is laid out: 100 32-bit (4-byte) integers are stored as 100 words at memory addresses 1000, 1004, 1008, …, 1396. The address of the element with index b can then be calculated as 1000+(b×4). So 0-based indexing completely makes sense from the perspective of memory inside the machine; languages that use 1-based or arbitrary indexing just use some mechanism to offset the index. BCPL was compiled on an IBM 7094, one of the first machines with a time-sharing OS. Offset calculations for arrays were often done by the compiler, and hence 0-based indexing likely evolved to reduce the amount of compile time required. This article by Mike Hoye outlines things a little more, but basically IBM’s use of computer time at MIT to calculate yacht handicapping meant that compiled programs had to be efficient, possibly contributing to the use of 0-based indexing.

Around the mid-1960s three essential approaches to indexing existed.

  • Zero-based indexing
  • One-based indexing – used some CPU instruction, or extra word to manage the offset.
  • Arbitrary indexing – range from X to Y, where X and Y can be positive or negative.

In reality, no one method of indexing is better than the others. There are situations where each method shines, and others where it fails. In many languages a multidimensional array is stored sequentially. For example, a 10×10 array, V[10,10], would be stored sequentially in memory from position v[0] to position v[99]. For languages that are row-major, v[0] to v[9] represent the first row, v[10] to v[19] the second row, etc. For column-major languages, v[0] to v[9] represent the first column, etc. So finding element (6,4) in the array using 0-based indexing means looking for element V[5,3]. Its position can be easily calculated using the formula 10r+c (where r=row index, c=column index), so 10×5+3 = v[53]. 0-based indexing works nicely because if the element required is V[0,8], the memory position is simply v[8]. The same formula does not work for 1-based indexing, where V[1,9] would give memory position v[19] instead of the correct v[8]; the formula has to change to 10(r-1)+(c-1).

There is a reason why some languages choose 1-based indexes: from a scientific viewpoint, it may just be more logical. 1-based indexing in languages like Matlab and Julia likely follows the convention of matrix notation. In languages using arbitrary-based indices, it may be the desire to give the programmer the ability to choose. Some problems lend themselves to arbitrary indices. Take for example the Eight Queens Problem. In Niklaus Wirth’s classic take on the problem [4] he uses four arrays, two of which use arbitrary indices, b[2..16] and c[-7..7]. Why? Because, as he explains, the choice of indices was based on how the indices of b and c would be calculated. For the ⬋ diagonals of b, all the elements of each diagonal have the same coordinate sum, in the range 2..16. For the ⬊ diagonals of c, the coordinate differences are constant, in the range -7..7. Having a language, Pascal, which allowed this made for a natural implementation of the algorithm.

Consider the following piece of code written in C, which calculates the histogram of an image (8-bit, i.e. 256 values from 0–255):

int img[1000][1000], hist[256] = {0};   /* hist must start at zero */
for (int x=0; x<1000; x=x+1)
   for (int y=0; y<1000; y=y+1)
      hist[img[x][y]] = hist[img[x][y]] + 1;

Some argue doing this in a language such as Fortran is harder, but it isn’t really. Here’s the Fortran equivalent, which, because the language allows arbitrary indices, means the image can be 1-based and the histogram 0-based.

integer, dimension(1000,1000) :: img
integer, dimension(0:255) :: hist
integer :: x, y
hist = 0
do x = 1,1000
   do y = 1,1000
      hist(img(x,y)) = hist(img(x,y)) + 1
   end do
end do

Supporters of 0-based indexing often say it’s more natural – in terms of memory addresses, yes; in terms of human factors, not so much. If an egg carton is empty, and somebody is asked “How many eggs are in the carton?”, most likely the answer will be “none”, although some people may answer 0. If there are 12 eggs in a carton, the first egg is not egg zero, because that makes no logical sense – the eggs are not numbered 0-11, they are numbered 1-12. Zero implies emptiness, nothing, nil, zilch, nix. Zero represents whether or not there are eggs in the carton; to have it represent the first element of an array makes no logical sense from a human-factors viewpoint. If there is one egg in the carton, it is not the 0th egg, it is the first. If there are no eggs in the carton, only the carton exists (this quickly turns into a discussion on the philosophy of zero).

Yes, there are instances where 0 exists in our lives. 24-hour clocks and stopwatches start from 00:00:00, yet don’t stay there very long, and if something takes 0 seconds it didn’t really occur. Some people say rulers start at zero, but that argument is silly, because 0 is the edge of the ruler, not really a measurement – a line of no length is not a line. Some people even put forth the argument of centuries – the 1800s being called the 19th century – but that is perfectly logical: the years 1-100 were the first century, 101-200 the second century, and so on. Zero doesn’t even have a place (there is no year 0). There are lots of these examples of numeric things in our lives, but none really relate to indexing, they just relate to zero.

There are many people who are vehemently opposed to the idea of there being anything but zero-based indexing, and who go to great lengths to justify their cause. They are often the same people who believe there is only one true language (whatever that may be). The reality is, who really cares what type of array indexing is used? Use what your language offers, or go with a language that leaves the index base up to you.

  • 0-based indexing – C, C++, Common Lisp, Haskell, Java, Oberon-2, Perl, Python, Ruby, Swift
  • 1-based indexing – AWK, Cobol, Fortran (def), Julia (def), Lua, Matlab, Algol-60, Algol-68
  • arbitrary indexing – Ada, Pascal, Fortran, Julia, Modula-2

Languages and indexing (def = the default setting)

  1. Barron, D.W., Buxton, J.N., Hartley, D.F., Nixon, E., Strachey, C., “The main features of CPL”, The Computer Journal, 6(2) pp.134-143 (1963)
  2. Kernighan, B.W., “A Tutorial Introduction to the Language B”, Bell Laboratories (1973).
  3. Richards, M., “The BCPL Reference Manual”, MIT, Memorandum-M-352 (1967).
  4. Wirth, N., Algorithms + Data Structures = Programs, Prentice-Hall (1976).

Coding Cobol: A bubblesort

How easy is it to do a Bubblesort in Cobol? Not that challenging. Below is a Cobol program which prompts the user for the name of a file containing real numbers, and sorts them. The first piece of code below sets up the environment, and the information for the input file. The filename given by the user will be stored in fname-inp.

identification division.
program-id. stats.

environment division.
input-output section.
file-control.
select input-file assign to dynamic fname-inp
       organization is line sequential.

data division.
file section.
fd input-file.
01 sample-input     pic x(80).

The next section is the working-storage, which includes a bunch of variables for calculations, the input-value used to store the data as it is read from file, and the array x, where the data is ultimately stored.

working-storage section.
77 n          pic 9999 value 0.
77 feof       pic A(1).
77 temp       pic s9(14)v9(4) usage is computational-3.
77 fname-inp  pic x(30).
77 i          pic 9999.
77 j          pic 9999.
77 jp1        pic 9999.
77 jp2        pic 9999.

01 array-area.
   02 x pic s9(14)v9(4) usage is computational-3
      occurs 1000 times.

01 input-value.
   02 in-x   pic s9(14)v9(4).
   02 filler pic x(62).

The first part of the actual workings of the program, prompts the user for the filename, and reads the data into the array x.

procedure division.
   display "Input filename? "
   accept fname-inp.
   open input input-file.

   perform input-loop until feof='Y'
   perform bubblesort.
   perform print-nums.
   perform finish.

input-loop.
   read input-file into input-value
      at end move 'Y' to feof
      not at end
         add 1 to n
         move in-x to x(n)
   end-read.

The final portion of code includes three “subprograms” (paragraphs) to perform the Bubblesort, print the array, and finish the program. The bubblesort paragraph uses a normal Bubblesort algorithm; the perform statements take the place of Bubblesort’s usual for loops.

bubblesort.
   perform varying i from 1 by 1 until i is greater than n
      compute jp1 = n - i
      perform varying j from 1 by 1 until j is greater than jp1
         compute jp2 = j + 1
         if (x(j) > x(jp2))
            move x(j) to temp
            move x(jp2) to x(j)
            move temp to x(jp2)
         end-if
      end-perform
   end-perform.

print-nums.
   move 1 to i.
   perform until i > n
      display i "->" x(i)
      add 1 to i
   end-perform.

finish.
   close input-file.
   stop run.

The only real difference from other languages is that instead of writing x(j+1), a variable, jp2, has to hold the value of j+1. If an attempt is made to use x(j+1), a compiler error of the form “error: ‘x’ requires one subscript” will occur.

The caveats of Julia for image processing (i) – overview

I’ve been using Julia for about four years now. Early on I really liked it; now I’m not so sure, and it’s not because I think it’s any less a language than it was before. When I started out using it, it was the pre-V1.0 days. I thought that it would stabilize somewhat when it reached V1.0, at least for a little while. I recently installed it on my new Apple MacBook Pro with the M1 chip. It (version 1.5) installed easily, no qualms whatsoever. Now I’m mostly interested in using it for image processing, and a couple of years ago I started work on an image processing library… then I stopped because of the “unstable” environment with respect to new versions. The version hopping hasn’t really abated. I see V1.6 is in beta, and V1.7 is in development. Why, why oh why? Maybe stick with one version for a year, then release another. With every new release I have to wonder if my code still works, or if some small change will break it. If I code stuff in Fortran 95, at least I know what I’m getting.

One of the reasons I chose Julia over the likes of C for image processing is the number of built-in functions to do things like statistics (yes, Java and C++ have libraries too, but I dislike those languages). I tried Python for a while, but it’s toooooo slow. I also like the ability to write plain code (e.g. two nested for loops), and not have to vectorize things just to make them run faster (here’s looking at you again, Python). The ability to slice arrays is awesome as well. What I am not enjoying so much is the inherent complexity of the language; there is almost too much crammed into it. Some of this complexity is added by people who implement add-on packages. They usually add a bunch of OO stuff, don’t always make things clear, and the documentation is often less than useful. Now, packages are great: they help people get on with their work and not have to write things from scratch, and Julia has a cool built-in package manager which makes installing them *awesomely* easy. The help functionality of Julia is also cool. But there are some things about packages which are annoying.

Take for example the JuliaImages package. I installed it, and started using it to rewrite a Retinex algorithm from Matlab to Julia. I got it working, but it took some effort. Here are some of my observations. Firstly, there are a lot of packages under the umbrella of JuliaImages. Installing the basic package Images.jl only gets you the core; if you want additional functionality you have to add it. So in the end you can end up adding a whole lot of packages (thankfully Julia’s package manager makes them easy to manage). One real bugbear is I/O. The documentation specifically says that an I/O backend is required to load and save images, because that functionality is not integral to JuliaImages. Images includes a front-end, FileIO.jl, which chooses the appropriate back-end depending on the file format. This allows support for an incredible diversity of file formats, but it seems kind of clunky.

The image automatically displayed using ImageInTerminal

Another issue is image display. There are a number of packages that support image display; one of the most interesting is ImageInTerminal, which supports image display in the terminal (the displayed images are downscaled to fit the size of the current active terminal session). But image display requires another series of packages… it just seems like too much sometimes. The other issues have to do with comprehension. The documentation for Julia in general can sometimes lack simplicity. For the novice Julia user, trying to wade through technical documentation can be challenging, as examples are often vague or non-existent. There is also the problem of the horrible error messages that can be generated when scripts are run. For the experienced programmer they are not an issue, but that is because they understand how an error can be traced.

Coding Cobol: Checking a file exists

In Cobol, when a program tries to load a file that doesn’t exist, the program generally exits with a message of the form:

libcob: error: file does not exist (status = 35) for file ifile ('student_txt' =>

This is kind-of messy. So how do we check for the existence of a file? Using the built-in routine “CBL_CHECK_FILE_EXIST“. It requires two parameters: the file name, and a structure to hold information about the file. The program below checks whether a file exists. If the file doesn’t exist, an error message is output and the program terminates. If the file does exist, it prints out the name, size, and year created. The actual checking is done by the paragraph check-file-exists. The structure file-info is required, and holds the information extracted from the file.

identification division.
program-id. fileio.

environment division.
input-output section.
file-control.
select ifile assign to dynamic ws-fname.

data division.
file section.
fd ifile
    record contains 88 characters.
01 student-info.
   05  student-name occurs 4 times.
       10  stdnt-name  pic x(15).
       10  stdnt-idno  pic x(7).

working-storage section.
01 i                   pic  9.
77 ws-fname            pic x(30).
01 file-info.
   05 file-size        pic x(8) comp-x.
   05 file-date.
      10 f-day         pic x comp-x.
      10 f-month       pic x comp-x.
      10 f-year        pic xx comp-x.
   05 file-time.
      10 f-hours       pic x comp-x.
      10 f-minutes     pic x comp-x.
      10 f-seconds     pic x comp-x.
      10 f-hundredths  pic x comp-x.

procedure division.

   display "Filename? ".
   accept ws-fname.
   perform check-file-exists.
   stop run.

check-file-exists.
   call "CBL_CHECK_FILE_EXIST" using ws-fname file-info.
   if return-code not equal zero then
      display "Error: File " ws-fname(1:20) " does not exist"
   else
      display ws-fname(1:20) "-> size: " file-size ", year: " f-year
   end-if.

A thought on COBOL

Teaching someone the rules of a language and expecting that person to write a good program is like teaching someone to type and expecting great literature. It applies to compiler writers as well as applications programmers. Pascal compilers are written in Pascal, C compilers are written in C, but COBOL compilers are never written in COBOL. As a result, COBOL compilers are technically correct but lack a “feel” for the language.

Robert Wagner, “COBOL Pride and Prejudice”, Computer Language (1984)

The Swiss Army Knife Syndrome and ALGOL 68

Language design has always been an interesting thing. Some languages were designed for a singular purpose; others were designed to be everything to everyone. It is the latter we are interested in here: they take what is considered by some to be the “Swiss Army Knife Syndrome” (SAKS) approach to language design, adding as many features as possible. This is somewhat of a misnomer, because the more traditional Swiss Army knives only have a few features, and only those of relevance. Regardless, SAKS often results in languages that for one reason or another become somewhat unwieldy.

Niklaus Wirth postulated in 1985 that “The Swiss Army knife idea has its merits, but if driven to excess, the knife becomes a millstone.” [1]. He was alluding to both his language ALGOL-W and PL/I, complex languages unable to be properly implemented on the inadequate computers of the time.

The first language to succumb to SAKS was likely ALGOL-68, a block-structured procedural language designed by committee (a *large* committee). One of the inherent problems with this approach to language design is that feature creep can occur in order to placate some committee members’ wants and needs. In ALGOL-68 even the simple things were unsimplified. A good example is modes. A mode is just a data type, and ALGOL-68 provided a bunch of them: real, int, compl, bool, char, bits (a packed vector of bool), and bytes (a packed vector of char). On top of that ALGOL-68 provided modifiers, so instead of double, one would write long long real (seem familiar?). There were also string and sema (semaphore), and a series of declaration symbols: long, short, loc (allocate variable space on the local stack), flex (declare an array to be flexible), and heap (allocate variable space from the heap). See how it suffers from SAKS?

Tanenbaum wrote in 1976 that “In its short lifetime, ALGOL 68 has acquired something of an international reputation for being obscure.” [3]. ALGOL-68 excelled at making everything inherently more complex, sometimes in the context of simple things like readability. In the comparison below of an ALGOL-60 and ALGOL-68 procedure, which is more readable? Round brackets in ALGOL-68 could replace the begin and end of ALGOL-60 (because using parentheses for functions isn’t enough, let’s make it more confusing!).

ALGOL 60:  boolean procedure f(x,y,z); value y;
                 real x,y; integer array z;
           begin ... end;

ALGOL 68:  proc f = (ref real x, real y, ref[]int z)bool:
           ( ... );

ALGOL-68 was designed just before Pascal, and both were in some way intended to be successors to ALGOL-60. Pascal was heavily used throughout the 1970s and 80s; ALGOL-68 was not. Partially this is because Pascal was a much simpler language. The orthogonal design [see note at bottom] adopted for ALGOL-68 was conceptually correct and very attractive, but resulted in complexity, both in the language and in trying to implement it.

ALGOL-68 was considerably more difficult to learn than Pascal, which was partly due to its weird terminology. For example, it used “mode” for data type, and “multiple value” as the term for array. Tanenbaum [3] provides an excellent tutorial on ALGOL-68, showing its benefits (and limitations). Here is an extract of an example he shows for arrays:

begin int n,m;
   [-n:n] int hamlet;
   [1:m,1:n] real macbeth;
   [1:10,1:10,1:10] bool othello;
   [1:10][1:5,1:5] int king lear;

This seems uncomplicated, except for the nomenclature, where a 2D array such as macbeth is called a “row row of real”, and a 3D one like othello is called a “row row row of boolean”. The array king lear is a 10×[5×5] array of integers.

The document that defined the language is very elaborate and precise, but it is also much more challenging to understand. One author [4] described the report as “… one of the most unreadable documents which has ever been printed.” Dijkstra may have had the harshest opinion, commenting on the ALGOL-68 draft report MR93 [5]: “On account of the draft report my faith in WG.2.1 is very low. The draft report is thick and difficult, in fact too thick and too difficult to inspire much confidence.” At the end of the day, ALGOL-68 was supplanted by other, simpler languages. To get a feel for ALGOL-68 programming, you can download the ALGOL 68 Genie compiler. Below is a program to calculate the factorial, written for Genie. A brief overview of the language can be found at ALGOL 68 – A Retrospective.

PROC factorial = (INT n) LONG LONG INT:
BEGIN
    LONG LONG INT f := 1;
    FOR i FROM 2 TO n DO
        f := f * LENG LENG i
    OD;
    f
END

P.S. There are many reasons why languages like ALGOL-68 floundered. One core thing that set it apart from early Pascal and C was that it was implemented differently on different systems, which meant programs were not always 100% compatible.

NOTE: Orthogonality is an interesting design concept, mainly because few people really understand it. In a purely orthogonal design, operations do not have side effects; each action changes just one thing without affecting others, and features can be combined freely. For example, a C function cannot return an array, making C non-orthogonal. ALGOL-68 functions could return anything, as can those of languages like Julia.

  1. Wirth, N., “From programming language design to computer construction”, CACM, 28(2), pp. 160-164 (1985)
  2. Tanenbaum, A.S., “A comparison of PASCAL and ALGOL 68”, The Computer Journal, 21(4), pp.316-323 (1978)
  3. Tanenbaum, A.S., “A tutorial on ALGOL 68”, Computing Surveys, 8(2), pp.155-190 (1976)
  4. Valentine, S.H., “Comparative notes on ALGOL 68 and PL/I”, The Computer Journal, 17, pp.325-331 (1974)
  5. Dijkstra, E.W., “To the EDITOR ALGOL 68” (1968) (discussion of MR93, the Draft Report on the Algorithmic Language ALGOL 68).