Cognitive load and arrays

Arrays always seem challenging for novice programmers because there aren’t many parallels in real life. The closest we come are eggs in a carton (2×6), or wine bottles in a case (3×4), and these are two-dimensional arrays – one-dimensional arrays seem more elusive. Pez in a dispenser act more like a stack. So how does one convey the idea of a container which holds only one type of item? The closest one gets to an illustrative example is something like a divided storage box, but even then it illustrates a container with a lid, which arrays don’t have. Arrays are essentially containers without walls, or lids, or physical constraints (leaving memory constraints aside).

So while arrays are challenging to conceptualize, they can also be challenging to implement. I have talked previously about the use of 0-based indexing. Whilst it might have seemed natural to Dijkstra, the average person learning to program does not understand why a container has portions that are marked as “division 0”. First let’s consider arrays in C. Here is an integer array with 100 elements in C:

int a[100];

The problem with this declaration is that nothing identifies the concept of an array; the only clue is the square brackets. There is also no indication that indices start at zero (in C they always do). C likely imposes the highest cognitive load here because the syntax offers the novice no intuitive information. Consider, as an alternative, how other programming languages specify arrays.

Pascal    a : array[1..100] of integer;
Ada       a : array (1..100) of integer;
Fortran   integer, dimension(1:100) :: a
Julia     a = Array{Int64}(undef, 100)

In all cases, the type of the array is clearly specified. In the case of Pascal, Julia, and Ada, the term array is used to indicate that it is a container. In all cases the range (or at least the size) of the array is clearly indicated, and this usually also indicates the indices which can be used. In the case of Fortran, there is no explicit use of the term array; however, it does use the attribute dimension to specify the range of values. Nor are all languages limited to starting at index 0, as C is. Pascal, for instance, allows both positive and negative indices to be used, allowing the language to conform to an algorithm, rather than forcing the algorithm to conform to the language.
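
As a quick sketch of what that flexibility buys (the program and variable names here are hypothetical), Pascal will happily accept an array whose indices run from -10 to 10, so the index can mirror the quantity being modelled rather than being shifted to fit the language:

program temperatures;
var
   count : array[-10..10] of integer;   { one bin per degree, below and above zero }
   t : integer;
begin
   for t := -10 to 10 do
      count[t] := 0;                    { the index is the temperature itself }
end.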

In all the cases above it is easier for the novice programmer to understand that a is an array. Specifying the range of indices for the array is also easier for the novice. Take for example the Ada array. Creating a loop to process this array, giving each element a value, is simple:

for i in 1..100 loop
   a(i) := i;
end loop;

This is due in part to the loop being quite easy to understand. Consider the same loop in C:

for (i=0; i<100; i=i+1)
   a[i] = i;

This loop is just not as intuitive for the novice programmer, partially because it uses the indices 0 to 99. What is likely to happen is that the novice programmer might write a loop in one of the following forms:

for (i=0; i=100; i=i+1)
for (i=0; i==100; i=i+1)
for (i=0; i!=100; i=i+1)

The first will result in an infinite loop, the second in no iterations of the loop, and the last one will work, but is rather unintuitive. Here there is a lot of scope for the novice to make errors, whereas it is much harder to go wrong with the Ada loop. Consider now arrays in Python, often touted as a very easy language to learn. Firstly, as Python has no native arrays (only lists), an array must be created using NumPy. Here is the same specification as for the languages above, an integer array with 100 values.

a = numpy.arange(0, 100, dtype=numpy.int64)

First, there is nothing here which intuitively screams “array”. Next, and probably most problematic for the novice programmer, the upper bound of arange is exclusive: to obtain an array of 100 elements holding the values 0 to 99, one must write arange(0, 100), whereas writing arange(0, 99) produces an array with only 99 elements.

Of course there are also issues with how arrays are used, and how they are structured. The use of square brackets, [ ], may be more intuitive than plain parentheses, so languages like Pascal, Julia, and C are likely better than Fortran and Ada here, because it is easy to confuse array parentheses with those used in function calls or expressions. When one extends arrays to multiple dimensions, how this is specified also contributes to cognitive load. C does this using separate brackets, e.g. a[i][j], which is less intuitive than Fortran’s integrated index, a(i,j). Finally, there are issues with what can be done with arrays. In some languages, such as Fortran, setting the entire array to a particular value is easy. For example, the following Fortran code creates a 20×20 integer array (theArray), sets the whole array to the value 10, and then sets the central 8×8 region to the value 5:

integer, dimension(20,20) :: theArray
theArray = 10
theArray(7:14,7:14) = 5

This is achieved through whole-array operations and array slicing, which are convenient features of many programming languages. In C, unless you want to set the whole array to zero, there is no easy way of performing this task, and array slicing is not supported. This means the code in C might look something like this:

int i, j, theArray[20][20];
for (i=0; i<20; i=i+1)
   for (j=0; j<20; j=j+1)
      theArray[i][j] = 10;
for (i=6; i<=13; i=i+1)
   for (j=6; j<=13; j=j+1)
      theArray[i][j] = 5;

Which one of Fortran or C is easier for the novice programmer to understand and implement?

In conclusion, it seems that languages such as C and Python increase the cognitive load for novice programmers, partially because 0-based indexing is not intuitive, and partially because of how easy it is to make errors when specifying the loops that manipulate an array. Other languages provide a better basis for understanding arrays. This, together with added features such as the array slicing offered by Fortran and Julia, means that the novice programmer can concentrate more on the problem-solving aspects of their algorithm, and less on the syntax of the language.


My “history of food” course in winter

In the winter semester, I will be teaching a UNIV*1200 first-year seminar course titled “Life on a Plate”, which is a history of food. Here is the course description:

Do you like to eat? Are you interested in learning more about the food you eat? Everyday life is punctuated by the innate need to eat. What and how we eat frames our history, and forms an important part of our identity. Nothing is more essential to human life than food; without it we cannot survive. But if it were just pure sustenance and nutrition, this could be easily achieved. Food is more than that; in many societies food is an integral part of everyday life. This course will explore the world of food, from paleolithic diets to molecular gastronomy. It will draw upon historical and geographical contexts, and will allow students to consider how food is produced and consumed, how what we eat has changed over time, and how we have come to eat what we eat.

Why is a computer scientist teaching a seminar on the history of food? Why not? I have been cooking for the better part of 40 years, and I embrace food and cooking above most other things. Partially it is because food is the one thing that binds all humans together, and even binds the human, animal, and insect worlds. We must all eat. This will be an experiential course that will investigate the impact food has had upon us since the dawn of our time on the planet.


Pascal’s syntactic ambiguity and C’s ineptitude

There are two ways of designing a programming language: by committee, and individually. The most successful of the one-person languages is likely C, although Python is up there too, as is Pascal, from a sheer tenacity perspective. The true mark of a language designer is the capability to critique their own language, and to understand where its limitations lie. Pascal is an exceptional example of this. If there was one feature of Pascal’s syntax that was its biggest ambiguity, it was the structure of the if-else statement. Niklaus Wirth adopted the if-else statement from Algol 60, including the dangling-else problem which had long been identified as an issue. Wirth called the inheritance of these ambiguities a “deadly sin”, and specifically cited the lack of “explicit closing symbols for nestable constructs”. He was referring, of course, to this:

IF b0 THEN IF b1 THEN S0 ELSE S1

Which could be interpreted in one of these two forms:

IF b0 THEN [IF b1 THEN S0 ELSE S1]
IF b0 THEN [IF b1 THEN S0] ELSE S1

Wirth goes on to say that “…one should not commit a mistake simply because everybody else does, particularly if there exists a known, elegant solution to eliminate the mistake.” Wirth took what he learnt in Pascal, mistakes and all, and designed Pascal’s successor, Modula-2. Modula-2 fixed the dangling-else problem by adding explicit structure terminators:

IF oceanRise > 1 THEN
   WriteString("flooding will occur");
END;

Other languages, most notably C, still maintain the sort of grammar in which the dangling else is present. It will likely never change. The presence of such ambiguities does present a problem for the novice programmer who does not know how to navigate around them. Yet C could have solved this by requiring curly braces around sub-statements… which is exactly what Swift did. No more syntactic exceptions for a single line of code. It’s more of a pity that all of Java, C#, JavaScript, Objective-C, and C++ inherited this monstrosity – something identified in the 1960s, yet never completely dealt with (except in the likes of Fortran, Ada, Julia…).
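
Until that happens, the only way for the novice to navigate the ambiguity is to make the grouping explicit themselves. Here is a minimal Pascal sketch (the conditions and program name are hypothetical) showing both pairings written unambiguously with begin and end:

program dangling;
var
   raining, cold : boolean;
begin
   raining := true;
   cold := false;

   { by default the else pairs with the nearest if }
   if raining then
      if cold then
         writeln('sleet')
      else
         writeln('rain');

   { to pair the else with the outer if, the grouping must be made explicit }
   if raining then
   begin
      if cold then
         writeln('sleet')
   end
   else
      writeln('dry');
end.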

Pascal – a language for novice programmers

The programming language Pascal was designed in 1969 in the spirit of Algol 60, with a compact syntax representing the paradigm of structured programming. But not everyone liked Pascal. Brian Kernighan famously wrote an article in 1981, “Why Pascal is Not My Favorite Programming Language”, citing Pascal as “just plain not suitable for serious programming”. But then it never really was designed for that. Pascal was a language designed to teach people programming, and for that purpose it was highly successful, even if commercially the language never really evolved. Why was it so successful? Partially this can be attributed to the fact that most courses that taught programming from the 1970s through to the 1990s focused on teaching a construct, for example making decisions, and using Pascal to illustrate the concept. The focus was more on the craft of programming than on the language. In many respects this evolved out of the need for a simple language. The alternatives at the time were BASIC, which was hardly a usable language for beginners (regardless of the fact that it was supposed to be); FORTRAN, which was antediluvian; or ALGOL 68, which was pure madness. Pascal also represented the new way of doing things – the structured way. Fortran was not structured, nor was BASIC, and although Algol 68 was, it was just too complex a language for novice programmers (and maybe experts alike).

One of the first things one learns in programming, after a quick introduction to variables, is compound statements – how to group activities together. In Pascal this was achieved using the begin and end keywords. begin and end were to Pascal what parentheses are to mathematical expressions, and frankly the words begin and end are readily comprehensible. When C replaced these English words with the squiggly brackets, { and }, the usability of the language suffered because there was no context for them. The fact that { implied begin, and } implied end, had no meaning to the novice programmer. Why not just use brackets? Largely because begin and end are easier to understand and follow.

Similarly, it is easier to understand what an integer is versus an int, or what a real is versus a float or a double. For the novice programmer there are other niceties in Pascal, including its control structures, even though Kernighan once said “The control flow deficiencies of Pascal are minor but numerous – the death of a thousand cuts, rather than a single blow to a vital spot.” Loops are easy to understand:

for i := 1 to 100 do
   sum := sum + i;

This is arguably simpler than what is offered by C-like languages, where the for loop is somewhat deconstructed. For example:

for (i=1; i<=100; i=i+1)
   sum = sum + i;

For the novice programmer, this form of loop forces them to focus far too much on the structure of the loop, rather than on the code the loop is controlling. There is no doubt that the C loop is powerful, but the novice does not need power. In the C loop it is easy for the novice to make mistakes in any one of the constituents of the loop, a common form being the “off-by-one” error. The same cannot be said of the Pascal loop, where i simply runs from 1 to 100. Pascal also offers the while-do and repeat-until loops. Their functionality is similar to C’s, but Pascal distinctly delineates between a loop that tests its condition before the loop body and one that tests it after (as opposed to while and do-while in C, which always cause confusion for novice programmers).
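
A minimal sketch (with hypothetical names) makes the distinction visible: the while-do loop tests its condition before the body ever runs, while the repeat-until loop always runs its body at least once:

program loops;
var
   n : integer;
begin
   { while-do: the condition is tested before the body }
   n := 0;
   while n < 5 do
   begin
      writeln(n);
      n := n + 1
   end;

   { repeat-until: the condition is tested after the body }
   n := 0;
   repeat
      writeln(n);
      n := n + 1
   until n = 5;
end.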

Not everything in Pascal helps the novice. The largest problem, especially from the perspective of cognitive load, is the semicolon, which is used in Pascal as a statement separator (rather than as a terminator, as in C). Consider this code, written the way a C programmer naturally would, with a semicolon ending every statement:

if a < 100 then begin
   x := a * a;
   writeln(x);
end;

Strictly speaking, the semicolon after the writeln statement is redundant: because the semicolon separates statements rather than terminating them, it simply introduces an empty statement before the end. Compilers such as fpc accept this, but the separator rule means the novice cannot just end every line with a semicolon and forget about it.
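
Where the separator rule genuinely bites is in front of an else. The following sketch (a hypothetical fragment, shown only to illustrate the error) will not compile, because the semicolon terminates the if statement and leaves the else with nothing to attach to:

if a < 100 then
   writeln('small');   { this semicolon ends the if statement... }
else
   writeln('large');   { ...so the compiler rejects the else }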

The purpose of a programming language is to provide a set of rudimentary instructions that allow the programmer to create a program to perform any task. Pascal, while not perfect, performs this task quite admirably. It’s a pity we gave up on it as an instructional language so soon.

Can an algorithm find the pattern?

Consider this picture from Creative Computing Magazine (January 1978). Look closely and you will see a pattern. It nicely represents the human visual system’s ability to find patterns in simple drawings. How easy is it for an algorithm to find the pattern? Likely not that easy, without substantial training devoted to this one particular problem. Some things aren’t reproducible by machines – and maybe it should stay that way. Otherwise why do we even exist?


Why technology may not save us

While the world changes under a maelstrom of climate change, we begin to ponder what life will be like on the “future earth”. The earth will survive – it has gone through more traumatic changes in the past, although it may not look the same anymore. It’s humans and their infinite search for a “better life” that may suffer. Look, humans aren’t the smartest of creatures. We build cities on fault lines, or on floodplains (note the word flood?), we are still astonished by forest fires (most of which are lit by humans), and we have filled the oceans with plastic. A “better life” may have existed long before the world became what it is today, when people were more content with what they had. Every day there is some new report of technology that will help save the planet – most are just ideas though, much like the moving sidewalks or atomic trains of the 1950s. We could build technology to clean up the plastic in the oceans, but instead we think more about colonizing Mars than fixing the place we live on (likely so that Mars can be strip-mined, because frankly there isn’t exactly any flora or fauna to worry about disturbing there – that we know of). I doubt cleaning up plastic is a hard task, but there has to be a will to do it. We could build smaller, more energy-efficient houses, and build more efficient rapid transit. That’s not rocket science – but everyone has to do their part.

When we had less technology, we had fewer problems.


Why most institutions of higher learning are antediluvian

From the perspective of education, universities are the epitome of old-fashioned. We still do lectures the same way they were done 50 years ago. Yes, there is technology involved, and some in-class activities, but the general notion of how we teach students has not changed. The only thing that has vastly changed is the size of classes. The problem is that this form of learning may work in small classes, say fewer than 20 students, but large classes don’t work so well, and they never have. Why? Because for hundreds of years before, instruction was very much hands-on and individualized. One did an apprenticeship in some trade and learned all there was to learn over a period of time (anywhere from 5 to 12 years). What we have created in society are mills, similar to what happened in the industrial revolution when the work performed in craft industries based in the home transitioned into huge mills. Things were made more efficient, and cheaper, but the end result was perhaps not as good a product.

Universities have gone the same way during what we could term the “educational revolution”, from the late 1970s to now, where the emphasis in western society has been placed on higher education above all else. We have also moved from a simple 3-year degree to one where students can spend upwards of five years in an institution of higher learning. What are they learning? To be honest I don’t know. Can one learn anything in a class of 300, 500, 1000? Can one learn anything much from reading textbooks? I doubt it. Much of what I learned I taught myself, programming included. I mean it’s no wonder people find programming boring, because the programs often used to illustrate concepts in computer science are boring – but to be completely fair, it’s impossible to develop an app in a semester, even two.

But, I digress. The real problem is that there is very little experiential learning, and certainly nothing of the sort when it comes to most university experiences. I guess that’s what co-op is for, or maybe a semester abroad. One could learn many things from books, because some books are good (some textbooks are rubbish of course; not *all* books are meaningful). But you can’t learn things like design completely from books, and solving problems is often more associated with experience, and “thinking outside the box”, than anything else (higher education is often more about thinking “inside the box”). There are of course university educators willing to take the leap of faith and invest in experiential learning. The problem is cost, and likely an unwillingness of institutions of higher learning to actually change to any great extent. There are *some* institutions that seem to understand the message. One of those is Quest University Canada, with out-of-the-box thinking like the “block plan”, where students take only one class at a time for a set period, learning is experiential, class sizes are small, and only one degree is offered. Yes, some will argue that these things are not achievable in a public institution – but why not? Maybe if we shed this mantra of lecture-based instruction, we could produce a better learning environment.

Decaying apps

We don’t think much about software decay, or when we do, we think about old software written in COBOL, running on mainframes. But the sad thing is that decaying software is all around us. In fact most of us carry such software around with us every day – in the guise of mobile apps. Before the panorama feature came to iOS, there were a number of panorama-stitching apps, including AutoStitch. Now the built-in panorama is great, but it only allows a panorama one photo high, whereas AutoStitch allowed at least two photos in height. The problem is that AutoStitch still works on pre-iOS 11 devices, but will not run in the 64-bit-only world of iOS 11. How many apps have you had to delete because of the following message?

Developers may just stop updating the app, or maybe they get bought out by a bigger company which stops supporting it. For whatever reason, you are left with decaying software, viable only while the current environment remains static. Upgrade the OS, or the hardware, and it might be curtains for the app.