Beyond the edge…

Beyond the edge of the world there’s a space where emptiness and substance neatly overlap, where past and future form a continuous, endless loop. And, hovering about, there are signs no one has ever read, chords no one has ever heard.

Haruki Murakami

Life? It’s all around us.

People tend to worry about the rise of machines, and perhaps we should, because they likely will be smarter than us. The synths in Star Trek: Picard are a testament to what we will eventually achieve, someday (although watch the series, and they don't all exactly seem "cognizant" – more zombie-like). Not any day soon though. We will no doubt one day be able to create human-like synthetic beings; whether they will be truly cognizant is another thing altogether. For synths to be considered "lifeforms", as opposed to the robots that work in factories doing defined tasks, they would require consciousness.

Consciousness is described as having a unique sense of self, coupled with an awareness of what is going on around you. Another word sometimes used is sentience, which is the capacity to feel, perceive or experience subjectively. There isn't much difference between the two, although I'm sure the mere thought provokes vivid discussions amongst philosophers. Of course there are plenty of sentient beings on our own planet – animals. Read The Secret Lives of Cows, and you will gain a more thorough understanding. Cows are intelligent, as are dolphins. Even ants are smart (although theirs is more of a collective, "borg"-type way of thinking). If they weren't smart, we wouldn't be studying them to design better algorithms. So we have intelligent life around us; we are just too ignorant to understand it, or accept it.

Adding sentience to a human-like synth is another thing altogether. Evolved AI systems use "deep learning" to solve problems quickly, enabling AI to teach itself to win strategy games, or to identify things. But doing all this relies on a programmer designing an algorithm and providing data for it to learn from. Sentience would mean a synth could make those initial choices itself, essentially doing its own thing. That isn't exactly easy. Sure, it could write its own code, but designing an algorithm for something new? Thinking outside the box? Maybe someday. It is of course possible that such lifeforms already exist in the wide spaces of the universe. Maybe there is even sentient life in non-organic forms. We just don't know.

There is also the possibility that we ourselves are synths – the result of some cosmic experiment, evolved over megaannums.

To infinity and beyond

What is infinity? If you were to walk inside Apple's HQ in California, you would be walking what is essentially an infinite loop – it goes on and on, forever. Or at least the walk could appear infinite; the paths in the circular building certainly are not. How long could a human walk in that infinite loop? Likely not to infinity. It is hard for humans to perceive the idea of infinity, except to say that it is something that goes on forever. Is the universe infinite? Does the cosmos go on forever? Well, we don't really know, as the farthest reaches of deep space that we can see don't have edges. Then of course there is the whole "Big Bang" thing – if the universe was created during the Big Bang, what was there before? Is our entire universe contained in some cosmic shoebox? Is there some enlightened being that collects universes? We may never know.


The other thing that seems infinite is numbers. We understand they can go on, and on, and on. Then in programming we have infinite loops. These, like walks in circular buildings, are not exactly infinite. In theory, one could have a loop that runs forever, presuming the machine it's running on would last forever. But who does that? No one willingly lets a loop run forever, and technically it is quite bad form to code a loop as infinite. Recursion can also become infinite, but this is tempered by the fact that the use of resources will eventually cause *something* to crash (either the program, or the system). Sometimes an infinite loop happens by sheer accident, usually because the loop has been set up improperly.

Here is a quintessential (explicit) infinite loop in C:

#include <stdio.h>
int main(void)
{
   while (1)
      printf("0");
   return 0;
}

This would *theoretically* go on forever, because it's not consuming any memory resources. All it does is continuously print a "0" to the screen, not exactly rocket science. However it *does* eat up system resources – on my Mac, it ate up 30% of CPU time. Not exactly great. The Fortran version below chokes 70% of the CPU; Fortran actually provides a generic loop structure (a bare do) that defaults to an infinite loop.

program infinity
   integer :: i
   i = 0
   do
      write (*,*) i
   end do
end program infinity

In my days as an undergrad, people would use these on the Unix systems to drive up the load, and bump the faculty playing Larn and Rogue off the system. Many infinite loops are caused by problems associated with numeric calculations, particularly those involving floating-point numbers. Here's a case in point in Fortran:

program infinity
   real :: r, s
   r = 0.01
   s = 1.0
   do while (r /= s)
      write (*,*) r
      r = r + 0.01
   end do
end program infinity

This will go on forever (or at least until the limits of real are reached). Why? It seems pretty clear-cut: run the loop until the value of r equals that of s. If r starts at 0.01 and needs to make it to 1.0, that means 100 iterations. But as discussed in previous posts on the subject, 0.01 cannot be stored exactly in binary – the first output is already 0.00999999, so you know things aren't going to proceed nicely. By the time r should equal 1.0, it actually equals 0.999999344; the next value is 1.00999939, so r bypasses 1.0 altogether and the loop keeps running. The problem can be fixed by (i) replacing /= with <=, or even better (ii) controlling the loop with integers, i.e. r=1, s=100, r=r+1.
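
Here is the integer-counter fix sketched in C – the same idea carries straight over to Fortran: control the loop with an exact integer, and derive the (inexact) real value from it.

#include <stdio.h>

int main(void)
{
   /* the integer counter i is exact, so the loop runs exactly 100 times */
   for (int i = 1; i <= 100; i++) {
      double r = i * 0.01;   /* r still drifts, but no longer controls the loop */
      printf("%f\n", r);
   }
   return 0;
}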

In reality, this type of infinity is the only one we can control. The greater questions of how far the cosmos extends, and who is out there may or may not ever be answered. There have to be some mysteries right?


Recursion (versus iteration)

In order to represent an algorithm recursively (as opposed to iteratively), three questions must be posed:

  1. Can the algorithm be written recursively?
  2. Is recursion the most organic approach, i.e. will the recursive representation be easy to write, understand and test?
  3. Does the recursive representation require an "acceptable" amount of memory and time to execute?

Some problems lend themselves naturally to a recursive solution – Factorial and Fibonacci are the classic examples. Recursion is an appropriate approach if a problem can easily be divided into similar sub-problems, each of which can be solved in the same manner; divide-and-conquer algorithms are ideal candidates. Solving a sub-problem contributes towards a solution of the original problem.
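
Factorial shows the pattern in its simplest form – here is a minimal sketch in C:

/* n! defined recursively: n! = n * (n-1)!, with 0! = 1! = 1 */
long fact(long n)
{
   if (n <= 1)               /* base case stops the recursion */
      return 1;
   return n * fact(n - 1);   /* recursive case: a smaller sub-problem */
}

Each call reduces the problem by one, and the base case stops the descent.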

A good example is a car-parking algorithm. Given a parallel car park 100 feet long, how many cars can one park in a random fashion? A car is 10 feet in length, and can be parked with no space between cars (let's assume some sort of awesome self-parking car). When the first car is randomly parked, one now has two sub-problems: parking cars in the space behind the parked car, and parking cars in the space in front of it. These two sub-problems are illustrated below. In each of the two sub-problems a car can be parked, creating two new sub-problems, etc.

[Figure: the first parked car, and the two sub-problems it creates]
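
The recursive structure falls straight out of that description. Here is a rough simulation in C – just a sketch, assuming each car's position is chosen uniformly at random within the available space:

#include <stdio.h>
#include <stdlib.h>

#define CAR 10.0   /* car length in feet */

/* recursively park cars in the strip [lo, hi); returns the number parked */
int park(double lo, double hi)
{
   double space = hi - lo;
   if (space < CAR)   /* strip too short for even one car */
      return 0;
   /* choose a random position for this car's rear bumper */
   double pos = lo + ((double)rand() / RAND_MAX) * (space - CAR);
   /* one car parked, plus whatever fits in the two sub-strips */
   return 1 + park(lo, pos) + park(pos + CAR, hi);
}

int main(void)
{
   srand(42);   /* fixed seed, so the run is repeatable */
   printf("parked %d cars in a 100 ft strip\n", park(0.0, 100.0));
   return 0;
}

Each parked car splits its strip in two, and the recursion bottoms out when a strip is shorter than one car length.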

Some problems require values be stored during their solution. Examples are algorithms like Eight Queens or maze searching. These are good problems for recursion because values can be saved in a recursive routine's local variables, and retrieved when sub-problems return. For example, a maze-search algorithm might save the current position, and try each of the four directions in a clockwise manner. In the example below, the algorithm tries to find a path through the maze. In the first maze, the algorithm looks N (then E, then S), and finds a path N. In the second maze, the algorithm looks W (then N, then E), and finds no paths, except the one which was just taken, signifying a dead end. The algorithm then moves back a position (maze 3), and tries the next direction, E (nothing), S (path exists), etc.

[Figure: three snapshots of the maze search, showing a dead end and backtracking]
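
In code, the backtracking search described above might look something like this C sketch (the grid, entry point, and exit condition here are all hypothetical):

#include <stdio.h>

#define ROWS 5
#define COLS 5

/* a small hypothetical maze: '.' is open, '#' is a wall */
char maze[ROWS][COLS + 1] = {
   "#.###",
   "#.#.#",
   "#...#",
   "###.#",
   "###.#"
};

/* return 1 if a path from (r,c) to the bottom row exists */
int solve(int r, int c)
{
   if (r < 0 || r >= ROWS || c < 0 || c >= COLS)
      return 0;
   if (maze[r][c] != '.')      /* wall, or already visited */
      return 0;
   maze[r][c] = '*';           /* save the current position */
   if (r == ROWS - 1)          /* reached the exit row */
      return 1;
   /* try N, E, S, W in turn; failure of all four means a dead end */
   if (solve(r-1, c) || solve(r, c+1) || solve(r+1, c) || solve(r, c-1))
      return 1;
   maze[r][c] = ' ';           /* dead end: un-mark and back up */
   return 0;
}

int main(void)
{
   if (solve(0, 1))            /* enter at the opening in the top row */
      for (int i = 0; i < ROWS; i++)
         printf("%s\n", maze[i]);
   return 0;
}

The current position lives in the local variables r and c of each activation, so backing out of a dead end is nothing more than returning from a call.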

Such a process could easily be simulated by an iterative algorithm, but it would require a stack of sorts to maintain positions. Some algorithms are easier to represent in a recursive manner, but may be more difficult to understand. A case in point is the Towers of Hanoi (TOH), where the iterative solution is quite long, and the recursive solution is simple, yet actually harder for someone to comprehend. Here’s a recursive version of TOH in C:

#include <stdio.h>

/* move N disks from peg 'from' to peg 'to', via the spare peg 'using' */
void toh(int N, int from, int to, int using)
{
    if (N > 0) {
        toh(N-1, from, using, to);           /* clear the N-1 smaller disks */
        printf("move %d -> %d\n", from, to); /* move the largest disk */
        toh(N-1, using, to, from);           /* restack the smaller disks */
    }
}
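
Appending a small driver to the function above makes the behaviour concrete – for three disks (pegs numbered 1 to 3) it prints the expected seven moves:

int main(void)
{
    toh(3, 1, 3, 2);   /* three disks: from peg 1 to peg 3, via peg 2 */
    return 0;
}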

One thing to seriously consider when looking at a recursive algorithm is resources. Every time a recursive function is called, an activation record is created, and memory is allocated. An algorithm which makes numerous recursive calls will expend time in the overhead associated with each call, and memory – sometimes it may attempt to use more memory than is available. An iterative algorithm will typically prove more efficient than its recursive counterpart. A case in point is Fibonacci, which seems like an optimal problem for a recursive solution, and it is – to a point. For small values, the recursive Fibonacci consumes time and memory in a fashion similar to its iterative brethren. Ramp up the number of Fibonacci numbers you want to calculate, and the recursive algorithm bogs down (as shown in a previous post).


Unix and C living in symbiosis

Unix was initially built using assembler on a PDP-7; C was created in part to provide it with a systems language, and Unix was rewritten in C a short time after. For the first ten years of Unix and C, they lived in a somewhat symbiotic existence. Improvements to C allowed for enhancements to the OS and its utilities, perhaps in areas such as efficiency, while an evolving OS required a robust programming language. Even functions we take for granted, like getchar(), which can be used to create things like file-copy programs, have very humble beginnings. Here is an unbuffered version of getchar(), which reads standard input one character at a time:

#include <stdio.h>    /* for EOF */
#include <unistd.h>   /* for read() */

/* unbuffered getchar(): fetch one byte at a time from standard input */
int getcharZ(void)
{
   char c;
   return (read(0, &c, 1) == 1) ? (unsigned char) c : EOF;
}

This can be put in a program to create a bare-bones copy utility – essentially a stripped-down cat, which copies standard input to standard output:

int main(void)
{
   int c;   /* int, not char, so EOF (-1) can be distinguished from real data */

   c = getcharZ();
   while (c != EOF) {
      putchar(c);
      c = getcharZ();
   }
   return 0;
}

Everything in Unix in some manner or another is based on C.


A cat by any other name?

One of the first Unix utilities I learned about was cat. cat is the simplest of all printing commands. Here is a sample:

$ cat tobe
To be, or not to be, 
$ cat question
that is the question.
$ cat tobe question
To be, or not to be, that is the question.

Notice that when two files are given as arguments, the output is concatenated together, hence the name "cat". Many of these Unix commands work in exactly the same way as they did 50 years ago. Talk about legacy software – these utilities really have not changed much since that time.

Ken Thompson and Dennis Ritchie began writing Unix in 1969 on a PDP-7. This was before C of course, so the early Unix software was written in assembler (you can find assembler versions of Unix commands here). The first version of cat was therefore written by Ken Thompson in assembler. The next version of cat was also written in assembler, when they upgraded to a PDP-11. It was not until 1973 that much of Unix was rewritten in C; however, cat did not appear in C until later. Since then it has evolved, but mostly in the context of its code, as its functionality has remained roughly the same.

The original cat was very basic. From the 1971 man page: "cat reads each file in sequence and writes it on the standard output stream." Nothing more, nothing less. There were no flags to do fancy things. In its bare-bones form, written in C, cat may look something like this:

#include <unistd.h>
#define size 512

int main(void)
{
   char buf[size];
   int n;

   while ((n = read(0, buf, sizeof buf)) > 0)
      write(1, buf, n);
   return 0;
}

Note that this is run as "cat < file". If the file size is not a multiple of size, then read will return a smaller number of bytes, which are then written by write, and the next call to read will return 0. Both read and write are low-level I/O.
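
Handling filename arguments, as in the "cat tobe question" example earlier, only takes a little more code. Here is a sketch – the copy() helper is just for illustration, and error reporting is omitted for brevity:

#include <fcntl.h>
#include <unistd.h>

#define SIZE 512

/* copy everything on descriptor fd to standard output */
static void copy(int fd)
{
   char buf[SIZE];
   int n;

   while ((n = read(fd, buf, sizeof buf)) > 0)
      write(1, buf, n);
}

int main(int argc, char *argv[])
{
   if (argc == 1)
      copy(0);   /* no arguments: read standard input */
   else
      for (int i = 1; i < argc; i++) {
         int fd = open(argv[i], O_RDONLY);
         if (fd >= 0) {
            copy(fd);
            close(fd);
         }
      }
   return 0;
}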

By the 7th edition of Unix, they had added a single optional argument, −u. Pike and Kernighan, in their paper "Program design in the UNIX environment", suggest this was a "less desirable" change, which soon spawned more embellishments to cat, e.g. in the Berkeley distribution of Unix, or System V. These options allowed for numbering the output lines (−n), or printing non-visible characters (−v). Pike and Kernighan suggest that "none of these options are appropriate additions to cat". Their comments get to the kernel of Unix design:

A UNIX program should do one thing well, and leave unrelated tasks to other programs. cat’s job is to collect the data in files. Programs that collect data shouldn’t change the data; cat therefore shouldn’t transform its input.

Basically do one thing, and do it right.

For more on the history of cat and its code base, check here for "The Source History of Cat".


Why calculating Fibonacci using binary recursion is just plain moronic

The Fibonacci sequence is a series of numbers of the form:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34,…

It is derived such that each number is the sum of the two preceding ones, starting from F(0)=0 and F(1)=1. Or in equation form: F(n) = F(n-1) + F(n-2).

The original formula for the Fibonacci series lends itself to a natural, if somewhat naïve, example of binary recursion, which is probably the most notorious implementation found in the literature. The algorithm works by returning n if n is 0 or 1, and the sum of f(n-1) and f(n-2) otherwise. Represented as a recursive function in C, this looks like:

int fib_binary(int n) {
   if (n == 0)
      return 0;
   else if (n == 1)
      return 1;
   else
      return fib_binary(n-1) + fib_binary(n-2);
}

This algorithm certainly generates the correct answer, but what is its computational cost, i.e. how long does it take? If n is less than two, the function halts almost immediately. However for larger values of n, there are two recursive calls, so the number of calls satisfies essentially the same recurrence as Fibonacci itself. This implies that the running time grows exponentially, roughly as f(n) = 2^(0.694n) (the exponent being log₂φ ≈ 0.694), which for n = 200 is about 2^139 operations.

To understand how this is possible, consider the Fibonacci call tree for calculating the 5th Fibonacci number, which just happens to be 5. Notice how many times F₁ is called?

[Figure: the call tree for the binary-recursive calculation of F(5)]

The biggest problem with using binary recursion to calculate the Fibonacci numbers is the time spent re-calculating already calculated Fibonacci numbers. For example, when calculating f(40) using recursion, f(39) is calculated once,  f(35) is calculated 8 times, f(1) is calculated 102,334,155 times, and  f(0) is calculated 63,245,986 times. All up there is a total of 331,160,281 function calls. The calculation of f(40) actually takes an average of 900 milliseconds (on my MacBook Pro with a 2.9 GHz Intel Core i5).

This is an interesting analysis, yet one rarely made in textbooks, and few textbooks discuss alternatives to binary recursion for Fibonacci. Indeed, the use of Fibonacci numbers to illustrate binary recursion is a good example of when not to use recursion.
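
For contrast, here is a minimal iterative version in C, which runs in linear time and constant space:

#include <stdio.h>

/* iterative Fibonacci: O(n) time, O(1) space */
long long fib_iter(int n)
{
   long long a = 0, b = 1;   /* F(0) and F(1) */
   for (int i = 0; i < n; i++) {
      long long t = a + b;   /* slide the window forward one term */
      a = b;
      b = t;
   }
   return a;
}

int main(void)
{
   printf("%lld\n", fib_iter(40));   /* prints 102334155 */
   return 0;
}

One loop and two variables, and f(40) is computed in a few dozen operations instead of 331 million function calls.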

For anyone interested in Fibonacci numbers, there is a whole journal available online, The Fibonacci Quarterly.

Coding Cobol: Recursion

You should never, ever write recursive Cobol programs… but recursion is now possible in most Cobol compilers. The following code shows a Cobol program to calculate Fibonacci numbers. A recursive program has two parts: a main program and a "recursive" subprogram (both in the same file). Here is the main program:

identification division.
program-id. fibrecurse.

data division.
working-storage section.
01 n   pic x(4) comp-x.
01 fib pic x(4) comp-x.
01 m   pic x(4) comp-x.

local-storage section.

procedure division.
   display "Nth Fibonacci number? "
   accept n
   call "fibonacci" using by value n returning fib
   display n "th fibonacci = " fib
   goback.
end program fibrecurse.

All the main program does is obtain the user input for the nth Fibonacci number to calculate, and call fibonacci, the recursive “subprogram”. Now onto that subprogram.

identification division.
program-id. fibonacci is recursive.

data division.
local-storage section.
01 result1 usage is binary-long.
01 result2 usage is binary-long.

linkage section.
01 num usage is binary-long.

procedure division using by value num.

   if num equal zero
      move 0 to return-code
      goback
   end-if
   if num = 1
      move 1 to return-code
      goback
   end-if

   compute num = num - 1
   call "fibonacci" using by value num returning result1
   compute num = num - 1
   call "fibonacci" using by value num returning result2
   compute return-code = result1 + result2
   goback.

end program fibonacci.

The linkage section in the "subprogram" associates the variable num with the argument n passed (by value) from the main program.


Unix – The most powerful OS?

Unix was the first OS I learned (mid 1980s), unless you count TOPS-20 (which ran on the DECSYSTEM-20, a PDP-10 family machine), which was awful. Later there was DOS, which also paled in comparison. Unix allowed you to learn and do things other systems just didn't. It does some powerful things, and you quickly learned how to program in a cautionary manner so as to not break things. It didn't take long to learn that if you deleted a file it would be gone – forever.

Consider grep, which searches files for patterns, and exists in numerous versions – grep, egrep, fgrep, zgrep, etc. egrep interprets regular expressions the same way as grep, but adds an "or" operator and parentheses to group expressions. So grep can be used to do some crazy pattern searching. Consider the following example, which finds all words of six or more letters that have their letters in alphabetical order. First, the file monotonic:

^a?b?c?d?e?f?g?h?i?j?k?l?m?n?o?p?q?r?s?t?u?v?w?x?y?z?$

Each letter in the pattern is optional (that's the ?), but letters can only match in alphabetical order. Now process the dictionary using egrep, with grep '......' keeping only words of six or more letters:

egrep -f monotonic /usr/share/dict/words | grep '......'

Here is the output:

abdest     acknow     adipsy     agnosy     almost
befist     behint     beknow     bijoux     biopsy
chintz     dehors     dehort     deinos     dimpsy
egilops    ghosty

There are also a lot of utilities which can be used to create cool things. Imagine creating a script to count the distribution of words in a text file. Here is an awk script called wordfreq:

awk ' { for (i=1; i<=NF; i++) num[$i]++ }
END   { for (word in num) print word, num[word] }
' $*

The first for loop looks at each word in the input line, incrementing the element of array num subscripted by the word. After the file has been read, the second for loop prints, in arbitrary order, the words and their counts. This script can be used in the following manner:

./wordfreq emily.txt | sort | column

This runs the awk script on the text file emily.txt, pipes that into sort, and then pipes that result into column, which prints the output in columns. Here is the result:

[Figure: word-frequency output for emily.txt, sorted into columns]

If you want to learn Unix from the bottom up, get a copy of "The UNIX Programming Environment" by Kernighan and Pike (1983)… cheap used copies can be found on Abebooks.

* The file emily.txt contains the poem “It sifts from leaden sieves” by Emily Dickinson.

Programming made easy – how to learn the basics of any language

In the 1970s and 80s there were a bunch of programming books that taught programming by introducing language concepts and illustrating them in 3-5 different languages. Usually the languages were old school: Fortran, Algol, BASIC, maybe Cobol. The idea is that once you understand a structure, it should be easy to replicate it in any language, so you effectively learn a number of languages at once. It may be hard to envision this because we are so tied to learning one language at a time, but it is possible, and over time it becomes easier to remember the differences between the syntax of differing languages. The obvious starting point is data, because any discussion of programming languages should start with data.

An integer is a data type associated with whole numbers, e.g. 5, 13, 99, 19763758. A number with a decimal point in it, i.e. not a whole number, is called a real or floating-point number. Once you know those two things, you know the basics of data – in any language. To use data in a program you have to create a container to hold it in memory; this is most often called a variable. In most programming languages a variable has a name (the identifier) and an associated data type. For example, in Fortran, creating an integer variable named thor is as simple as:

integer :: thor

The variable thor is now an integer variable, which incidentally has no value yet, because we haven't assigned it one. How is this done in other languages? Here is the Ada version:

thor : integer;

Or maybe the C version (which shortens the datatype name to int):

int thor;

Now values can be assigned to them:

Fortran →  thor = 27
Ada     →  thor := 27;
C       →  thor = 27;

The syntax is a little different in each language, but the concept is the same – the variable thor is given the value 27 to store.
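
Putting declaration and assignment together gives a complete, if trivial, C program:

#include <stdio.h>

int main(void)
{
   int thor;    /* declare an integer variable named thor */
   thor = 27;   /* store the value 27 in it */
   printf("%d\n", thor);
   return 0;
}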

In more “dynamic” languages such as Python and Julia, variables don’t need to be pre-declared. A datatype is associated with a name of a variable when a value is assigned to it. For example (in Python or Julia):

thor = 27

The variable gets the value 27, and is deemed an "integer". If the variable is given a new value, say 64.7, it is deemed a "float". So in one fell swoop we have learned how to create integer variables in five languages – the syntax is trivial once you understand the concept that underpins it. Next we'll look at some basic programming.