A simpler life with less technology

I wonder if it is not time for a simpler life, one with less technology. There are parts of technology I like, for instance being able to blog, and digital cameras. But honestly our lives are too wrapped up in technology. I blog because I like to write, not because of the technology. In another time I would just have kept a journal. Were we happier when not everything in the world was thrown at us every day? Is ignorance bliss? Maybe.

Everything we do tends to become more complicated over time; it seems that is the way of progress. Cars worked quite fine without dozens of CPUs inside them, and they were much easier to fix, but for some reason we continue to add more and more things. Ironically this makes them complicated to the point where the average person can no longer fix them. It’s the same with everything else: “smart” microwaves with 1001 features, intelligent fridges. We don’t need any of them. Why? Because past a certain point there isn’t much added benefit to more technology.

Companies always think that people want more tech, but what people want is more organic design. By organic, I mean built in such a manner that using the device is trivial, with only the features required for it to do its job. Washing machines are a great example. Ever wonder why machines in laundromats are so simple to use? Because nobody wants complicated machines. Washing machines at home are often super complicated with 1001 features, most of which will never be used. Nobody needs 100 different wash cycles. But these machines are designed by engineers who likely have never actually used a washing machine, and have little or no understanding of the “Keep It Simple Stupid” principle of design. I’m reminded of this when I watch a video of someone who is able to rebuild a tractor that has sat in the forest for 20 years. Mechanical devices can be fixed because they are simple and rely on very few electronics. The same cannot be said of electronics-laden devices. It’s the same reason I can pick up a 1960s-era SLR and still use it.

Beyond a certain point, there is little or no benefit to adding more technology to a system. Sometimes it just ends up adding layers of confusion. For example, smartphones are not really that smart. Every incarnation adds very little in the way of improvements, but leaves more orphaned apps in its path. About the only things that really get better are the battery technology (marginally) and the cameras. Sure, the computational photography used by some smartphones is pretty neat, especially in situations like low light, but the overall image quality is still not as good as that of a good digital camera (nor will it likely ever be).

The world has too much technology in it, and there are many times one could just shut the world out and be happier I imagine.

Memories of Fortran II

What if you want the array to hang around? Add the save attribute to the array. For example, consider the following subroutine, alloc_arr(), where save is used to maintain the array arr.

subroutine alloc_arr(n)
   integer, intent(in) :: n
   integer, dimension(:), allocatable, save :: arr
   integer :: i
   if (.not. allocated(arr)) then
      allocate(arr(n))
      do i=1,n
         arr(i) = i
      end do
   else
      do i=1,n
         arr(i) = arr(i) + 2
      end do
   end if
   print *, arr
end subroutine alloc_arr

Here a check is made to see whether the array arr has already been allocated; if it has not, it is allocated and assigned the values 1..n, where n is the size of the array to create. If the subroutine is called again and arr already exists, the values of its elements are incremented by 2. For example, if alloc_arr() is called three times with n=6, the output looks like this:

           1           2           3           4           5           6
           3           4           5           6           7           8
           5           6           7           8           9          10

The magical save attribute can also be applied to other variables, for example to store the state of a counter. Here is another program which counts the number of calls of a recursive function.

recursive function factorial(n) result(r)
   integer, intent(in) :: n
   integer :: r
   integer, save :: nr = 0

   if (n == 1) then
      print*, "No. calls =", nr
      r = 1
   else
      nr = nr + 1
      r = n * factorial(n-1)
   end if

end function factorial

Here there is one saved variable, nr, which counts the number of function calls. The counter nr increments each time there is a recursive call (the first call to factorial() is not recursive). The variable nr is maintained even after the function terminates.
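
For comparison, a rough sketch of the same idea in Python: a mutable default argument persists between calls, playing much the same role as save (the _state dictionary is a name invented for this sketch):

```python
def factorial(n, _state={"calls": 0}):
    # The mutable default _state survives across calls, much like a
    # Fortran variable with the save attribute.
    if n == 1:
        print("No. calls =", _state["calls"])
        return 1
    _state["calls"] += 1
    return n * factorial(n - 1)
```

Calling factorial(5) prints "No. calls = 4", since four of the five calls are recursive, and the counter keeps accumulating across subsequent top-level calls.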

Do or do not…

Yesterday someone informed me that it looked like someone in one of my courses had posted job requests on Upwork.com for two of the assignments. I know it’s not from any other course anywhere because the course is unique, as are the assignments. Unfortunately someone thought the easy way out would be to pay someone to do their assignments for them, something known as contract cheating. Lucky for them it’s not traceable, so they got away with it. This time.

But probably in the future they won’t be as lucky. You see, while someone can fake the ability to code in higher education, it’s not so easy when they start working somewhere and their abilities don’t match their resume. If they can’t program simple assignments, they’re going to get hammered in industry. They probably shouldn’t even get a degree in computer science, but somehow they will.

Some people will argue that there may have been a good reason why they had someone else write their code. I don’t buy that argument, because instead of asking for help the person prefers to cheat. They are ultimately letting themselves, and their classmates down. Someone in the future might have to work with this person in a group project, not realizing that they might take whatever shortcuts needed to progress (likely at the expense of their peers).

Part of the problem is that cheating seems to be an integral part of computer science. This could be alleviated by making programming assignments a small part of the assessment, say 20%, and having the rest as exams. Not ideal, because part of the point of most CS courses is seeing that the student has the ability to actually produce something tangible. We could also require students to perform end-of-year oral exams, where an examiner poses questions to a student in spoken form. Alternatively we could stop sugarcoating academic misconduct by making harsher penalties – by assigning an FWP – “Fail with prejudice”. Or we could change the culture to make academic dishonesty a practice which is ostracized within the computing community.

Look, computer science isn’t easy. Solving problems and programming takes effort; it’s not just about regurgitation of information. It’s not for the faint of heart, and no one can expect to do well in the field if they aren’t passionate about it, or at least try.

Timing a Python program

How does one time a Python program? There are a couple of ways. The first is by using the function time() from the time library. Basically there are three steps to using it:

  1. Store the starting time before the first line of the program executes.
  2. Store the ending time after the last line of the program executes.
  3. Calculate the difference between ending time and starting time, which will describe the running time of the program. Output is in seconds.

The library contains a number of different functions. For example time() returns a floating point number, so to avoid precision loss it might be better to use time_ns(), which returns the time as an integer number of nanoseconds since the epoch (1 second = 1,000,000,000 nanoseconds). Here is a simple program to time a recursive Fibonacci function:

import time

def fibonacciR(n):
    if n == 1 or n == 2:
        return 1
    return fibonacciR(n-1) + fibonacciR(n-2)

n = int(input("no. Fib? "))
stm = time.time_ns()
fibonacciR(n)
etm = time.time_ns()
eltm = (etm-stm)/1000000000
print("Runtime = ",eltm," sec")

Running the program produces something like:

python3 time.py
no. Fib? 39
Runtime =  8.28619  sec

Another function which could be used is process_time_ns(), which returns, in nanoseconds, the sum of the system and user CPU times, not including time elapsed during sleep.
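
As a quick sketch, here is how process_time_ns() might be used to time a CPU-bound loop (the work() function is just a stand-in task):

```python
import time

def work():
    # A stand-in CPU-bound task: sum the first million squares.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

start = time.process_time_ns()
work()
end = time.process_time_ns()
print("CPU time =", (end - start) / 1e9, "sec")
```

Because process_time_ns() measures CPU time rather than wall-clock time, a time.sleep() inside work() would not be counted.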

A few words with Niklaus Wirth

NW: “Indeed, the woes of Software Engineering are not due to lack of tools, or proper management, but largely due to lack of sufficient technical competence. A good designer must rely on experience, on precise, logic thinking; and on pedantic exactness. No magic will do. In the light of all this it is particularly sad that in many informatics curricula, programming in the large is badly neglected. Design has become a non-topic. As a result, software engineering has become the El Dorado for hackers. The more chaotic a program looks, the smaller the danger that someone will take the trouble of inspecting and debunking it.”

Carlo Pescio interview with Niklaus Wirth, “A Few Words with Niklaus Wirth”, Software Development, 5(6) (1997).

Memories of Fortran

Memory management is a horrible thing. It’s one of the things novice programmers hate most about languages like C and C++. In C, poor memory management eventually leads to “memory leaks”, where you create memory that you lose track of somewhere along the way. I mean it’s not hard to do. Of course it’s also possible to use up all the memory because someone forgot to deallocate it after it has been used. Other languages like Java have “garbage collectors” that periodically clean things up (likely they should be re-termed “memory reuse managers”). C++ uses constructors and destructors (sounds like a transformer of sorts).

Fortran handles things a little differently, i.e. it automatically deallocates allocatable memory that goes out of scope. For example, here is a piece of code that creates a “dynamic” array, x, using allocate(), which is created in heap memory.

program mem2
   integer, dimension(:), allocatable :: x
   integer :: i, n
   write(*,*) "How large an array? "
   read(*,*) n
   allocate(x(n))
   do i = 1,n
      x(i) = i*i
   end do
end program mem2

Notice though that x is not deallocated. In C similar code would be a disaster, but not in Fortran. Why? Because the Fortran standard requires that when an allocatable array goes out of scope it is deallocated. You can use deallocate to explicitly manage the lifetime of a piece of memory. If used in a function, the array is allocated as requested and then deallocated again as soon as the function is over. Note that x can be used like a normal array once allocated. In order to initialize an array, the keyword source can be used. For example:

allocate(x(n), source=0)

Fortran 2003 also allows array arguments to a subprogram to be allocatable. For example, the subroutine print_arr() shown below prints out a 1D array, and could be used in the above example. The parameter arr is an allocatable array. To make sure it has actually been allocated, the function allocated() can be used.

subroutine print_arr(arr)
   integer, dimension(:), allocatable, intent(in) :: arr
   integer :: i
   if (allocated(arr)) then
      do i = 1,size(arr)
         write(*,'(i3,x)',advance="no") arr(i)
      end do
   end if
end subroutine print_arr

Allocations can also be moved between different arrays with the move_alloc() subroutine. The subroutine shown below, resize(), resizes an allocatable array to a new length, n. If the array arr is already allocated, its contents are moved to tmp using move_alloc(), which deallocates arr in the process. Then arr is allocated to size n, and if tmp is allocated, the relevant number of elements are transferred back to arr.

subroutine resize(arr, n)
   integer, dimension(:), allocatable, intent(inout) :: arr
   integer, intent(in) :: n
   integer, dimension(:), allocatable :: tmp
   integer :: arsz,nsz

   if (allocated(arr)) then
      arsz = size(arr,1)
      call move_alloc(arr,tmp)
   end if

   allocate(arr(n))

   if (allocated(tmp)) then
      nsz = min(arsz,n)
      arr(:nsz) = tmp(:nsz)
   end if
end subroutine resize

A learnable language can be judged by the quality of its error messages

For the novice programmer, one of the features of a language that helps make it learnable is the quality of its error messages. If a novice programmer cannot decipher what an error actually is, how are they ever expected to fix it? To illustrate this, let’s look at how a series of programming languages handle simple errors. First let’s look at Fortran.

program err
   integer :: x, y
   read(*,*) x
   if (mod(x,2)) then
      print *, "even"
      print *, "odd"
end program err

When this is compiled it produces the following error message:


    8 | end program err
      |   1
Error: Expecting END IF statement at (1)
f951: Error: Unexpected end of file in 'err.f03'

This indicates that instead of the code cited on line 8, the compiler was expecting an “end if” statement. In general, Fortran provides quite good error messages. Here is a similar program in C.

#include <stdio.h>
int main(void){
   int x;
   scanf("%d", &x);
   if (x % 2 == 0)
   return 0;

When compiled it produces the following error message:

err.c: In function 'main':
err.c:8:20: error: expected ';' before 'return'
    return 0;

This too nicely locates the problem (and C error messages have improved greatly over time; at one point it would just have been a “syntax error”). There are still cases where low-level details bubble up, for example accidentally using = instead of == in the if statement – does the novice programmer understand what an lvalue is?

err.c: In function 'main':
err.c:5:14: error: lvalue required as left operand of assignment
    if (x % 2 = 0)

There are also things that are hard for the novice programmer to understand. For example, omitting the & in the scanf(), which only produces a warning if -Wall is used when compiling:

err.c: In function 'main':
err.c:4:12: warning: format '%d' expects argument of type 'int *', but argument 2 has type 'int' [-Wformat=]
    scanf("%d", x);
           ~^   ~
err.c:4:4: warning: 'x' is used uninitialized in this function [-Wuninitialized]
    scanf("%d", x);

Now let’s try a similar program in Julia.

x = parse(Int,chomp(readline()))
if (x % 2 = 0)

Now the error here is the use of = instead of ==. Here is the corresponding error message:

ERROR: LoadError: syntax: "2" is not a valid function argument name around err.jl:2
 [1] top-level scope at errs/err.jl:2
 [2] include(::Function, ::Module, ::String) at ./Base.jl:380
 [3] include(::Module, ::String) at ./Base.jl:368
 [4] exec_options(::Base.JLOptions) at ./client.jl:296
 [5] _start() at ./client.jl:506
in expression starting at err.jl:2

Error messages like this are part of what makes Julia less learnable for the novice programmer. Python does a better job of it:

x = int(input("value?"))
if x % 2 = 0:

Which produces the following error message, which is good, but could be better:

  File "err.py", line 2
    if x % 2 = 0:
SyntaxError: invalid syntax

The problem is that ultimately programming languages are designed by people with years of programming experience, and many of them fail to perceive the language from the perspective of the novice programmer. Poor error message usability will ultimately push some people away from a particular language. It is no different from washing machine interfaces designed by engineers who have never used a washing machine in their lives.

The art of googling

As I have mentioned before, one of the biggest problems we face today is the inability of people to problem solve. Solving problems in part involves exploring potential solutions, and doing research. When I was in university this meant combing the library stacks for textbooks, or perhaps the odd journal article. The “net” existed but was limited to email, news groups, and ftp servers. Now we have a huge repository of information (not always useful, but for programming there is no better source) – the Internet – and to search it we have Google (well there are others, but are they truly worthy?).

The problem is that some students don’t seem to understand how to really use Google. Sometimes it’s just a simple question, or an error message from a compiler, that could have been easily searched for. It’s not that I mind finding the answer, it’s more so that the person could not be bothered even trying to look it up for themselves. Sometimes it’s simple stuff like how to install a compiler, other times there is a compiler error in some code they have written and they can’t be bothered googling the error message (or possibly checking the code to find the missing “[“).

Sometimes I don’t blame people for asking, mainly because they probably have not been taught to google properly. For example in old Fortran there is a function called ALOG(). One might presume that it calculates an anti-log, but in reality it is an archaic form of log(). But it is easy to google to get the right context (it’s like the first thing in the search results).

As another example, consider the problem of implementing Quicksort in Cobol. Now Cobol does not support recursion, so any implementation will have to use an explicit stack. A quick google with the phrase “Quicksort cobol” will present a recent (2019) blog post, Quicksort in COBOL on z/OS, and a link to a paper in a German journal from 1989, “Implementation of Quicksort in COBOL” by Knut Hildebrand. From this it is possible to google the author and find the German title of the paper – “Implementierung des Quicksort in COBOL”. The German title provides more scope, and allows us to search again using the search string “Implementierung quicksort cobol”, and changing the search language to German. This provides a version described on a French blog. An alternative might be to search for “nonrecursive quicksort” or “iterative quicksort”, which typically produce algorithms which use a stack. Or perhaps you don’t like stacks, and search for “quicksort no stack”, which provides a link to a paper from 1986, “Quicksort without a stack“.
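
The nonrecursive, stack-based approach those searches turn up can be sketched in a few lines of Python (a minimal illustration, not the COBOL from the papers above; the function name is my own):

```python
def quicksort_iterative(a):
    # Sort list a in place using an explicit stack of (lo, hi) index
    # pairs instead of recursion.
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        # Lomuto partition: everything < pivot ends up left of index i.
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Push the two sub-ranges instead of recursing on them.
        stack.append((lo, i - 1))
        stack.append((i + 1, hi))
    return a
```

The explicit stack does exactly the bookkeeping the call stack would do in a recursive version, which is why the same pattern works in a language like Cobol that lacks recursion.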

What I mean here is that there are many options, and one has to look beyond the first page produced by Google, and learn to perhaps submit better queries. Google in many respects is one of the better methods of providing some of the answers when trying to decipher the solution to a problem. It’s by no means perfect – the Internet does not have all the answers. Sometimes you have to dig deeper, for example using another information repository such as archive.org.

Fortran – the greatest thing (for arrays) since bread came sliced

Many people likely find the idea of programming in Fortran offensive. It’s from a bygone era. Modern languages are cooler. But are they? I have written many a blog post on why Fortran is a better language for novice programmers. Better than C anyway. I know some people will disagree, but frankly unless you have spent 20 years trying to teach C as an introductory programming language, you have no real context. Even for certain tasks, Fortran may be a better choice, and by this I mean crunching numbers.

Fortran has an easy-to-use array syntax, and that’s not surprising considering it was developed within a culture of high-performance computing – crunching numbers. Nobody will really ever build an OS, or indeed a compiler, using Fortran, for its forte is numbers (and truly, C was designed as a systems language). This is apparent in the fact that most algorithms implemented in Fortran aren’t much slower than those implemented in C, and in some cases they are actually faster.

C sucks at dealing with arrays, at least in a less-than-complicated way. For example when you teach someone about how to implement an algorithm to process images, you don’t really want to have to dive deeply into the processes of memory management. The good thing about Fortran is that the system handles all the dirty work associated with memory allocation, and so the focus can be on the algorithm. Let’s face it, Fortran introduced the original array syntax. The intuitive use of arrays in Fortran is one of the reasons that it is still heavily used in the physics community. Consider the following array operations in Fortran:

integer, dimension(1000,1000) :: a,b,c
integer :: s
a = b
a = 1.73*b
c = a * b
c = matmul(a,b)
s = sum(c)

Here’s what each line of code does:

  1. Array b is copied to array a.
  2. Array b is multiplied by the scalar 1.73, and the result copied to array a.
  3. The element-by-element multiplication of arrays a and b (assuming they are the same size).
  4. The matrix multiplication of arrays a and b.
  5. Sum the values in array c, and assign the value calculated to s.

Almost all of the intrinsic math functions in Fortran can take arrays as input. To perform many of these tasks in C requires cycling through all the elements of the array using a for loop.
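
For comparison, a rough equivalent of the array operations above in Python requires NumPy (a sketch, with sizes shrunk to 100×100 for illustration):

```python
import numpy as np

b = np.arange(100 * 100).reshape(100, 100)
a = b.copy()                 # copy b to a
a = (1.73 * b).astype(int)   # multiply by a scalar (truncated back to int)
c = a * b                    # element-by-element multiplication
c = a @ b                    # matrix multiplication
s = c.sum()                  # sum of all the elements
```

The operations map one-to-one, but only because NumPy bolts Fortran-style whole-array semantics onto Python; plain Python lists offer none of this.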

In addition, Fortran arrays can be indexed using any starting value, making the language conform to the algorithm, rather than the other way around (and if you really like arrays that start at zero, you can use them). For anyone looking for a good example of an algorithm that uses negative array indices, look no further than Niklaus Wirth’s classic 8-Queens problem [1]. For a multidimensional array in C, one has to use the syntax x[i][j], whereas in Fortran it is simply x(i,j). Here are some other cool features of arrays:

integer, dimension(1000) :: x, z
integer, dimension(40) :: y
integer :: i
x = (/ (i, i = 1,1000) /)
y = x(1:1000:25)
z = 4
z(300:339) = y

Here’s what each line of code does:

  1. The array x is initialized using an implicit do loop, called an array constructor.
  2. The array y is created from every 25th element in array x.
  3. The array z is initialized to the value 4 in every element.
  4. The array y is copied into array z, in elements 300 to 339.
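
The same slicing tricks have rough NumPy equivalents (a sketch; note that Python’s 0-based indexing shifts everything by one):

```python
import numpy as np

x = np.arange(1, 1001)   # like the Fortran array constructor
y = x[0:1000:25]         # every 25th element, 40 values in all
z = np.full(1000, 4)     # every element initialized to 4
z[299:339] = y           # copy y into elements 300..339 (1-based)
```

The off-by-one adjustment in the last line is exactly the kind of bookkeeping Fortran’s flexible index bases let you avoid.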

It is also possible to selectively process an array using the where-elsewhere construct. For example, in the code below only the values of the array x that are not 0 are processed by the log() function; all zero values are set to 0.1 in the output array logx.

real, dimension(10) :: x, logx
integer :: i
x = (/ (i, i = 1,10) /)
x(5) = 0.0
where (x .ne. 0.0)
   logx = log(x)
elsewhere
   logx = 0.1
end where
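
For comparison, NumPy’s where() gives a similar effect in Python, though log(0) must be guarded explicitly (a sketch):

```python
import numpy as np

x = np.arange(1, 11, dtype=float)
x[4] = 0.0  # the Fortran x(5), in 0-based indexing
# Take log only where x is nonzero; substitute 0.1 elsewhere.
# The inner where() feeds log() a harmless 1.0 at the zero positions
# so no log-of-zero warning is raised.
logx = np.where(x != 0.0, np.log(np.where(x != 0.0, x, 1.0)), 0.1)
```

Fortran’s where block evaluates log() only on the masked elements; NumPy evaluates both branches first, which is why the guard is needed.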

Calling Fortran old is like calling C old. Yes, there are new languages like Julia which handle arrays nicely too (Python does not even have real arrays – you need to add NumPy), but Julia has become a somewhat bloated language with no backwards compatibility (and it still suffers from compiler messages that *really* suck).

Fortran excels at processing arrays, just as C excels at being a systems language, Ada excels at being a real-time language, and Cobol excels at being a data processing language. Each does its own thing, and there is no need for one to supplant another. There will always be new languages, but time has shown that sometimes sticking with a known entity for a particular task is the best approach.

  1. Wirth, N., Algorithms + Data Structures = Programs, Prentice-Hall (1976).


I am made by my times
I am a creation of now
Shaken with the cracks and crevices
I’m not giving up easy
I will not fold
I don’t have much
But what I have is gold

Blue, R.E.M.