Fibonacci and pineapples

The Fibonacci sequence is found in the strangest of places. Take pineapples, for example. The number of spiral rows of fruitlets (eyes) in pineapples was studied as early as 1933 in an article by Linford [1] published in Pineapple Quarterly, although no reference was made to Fibonacci numbers. In a follow-up study, Onderdonk [2] found in 1970 that the majority of pineapples had 8-13-21 rows of fruitlets, with a few smaller ones at 5-8-13. He suggested that a pineapple with more fruitlets for a given size would have a finer texture, and hoped that a pineapple with 13-21-34 rows could be found, but he never encountered one exhibiting such a pattern. In general, pineapples have three series of spirals, derived from the roughly hexagonal pattern of their fruitlets, or scales.

Here is an example of the hexagonal scale patterns found on a pineapple.

Further reading:

  1. Linford, M.B., “Fruit quality studies II. Eye number and eye weight”, Pineapple Quarterly, 3, pp.185-188 (1933)
  2. Onderdonk, P.B., “Pineapples and Fibonacci numbers”, The Fibonacci Quarterly, 8(5), pp.507-508 (1970)

The new norm in post-secondary education?

Here we are almost mid-way through 2020, and education is going through somewhat of a renaissance. Not one that was planned per se, because to be honest, very little in post-secondary education is really planned; most things are reacted to. It is likely the same at institutions across the globe. The age of remote learning is upon us, and it may change the way education is delivered. Institutions of learning have been doing things the same way for decades, ever since the number of students going to university increased in the 1970s (the proportion of people who went on to university was around 5% in the 1960s, rising to 15% in the 1970s, and to 40-odd percent today). Nothing much has changed since, except that there is likely more continuous assessment, and class sizes have ballooned.

Enter a pandemic, and a re-think of how courses are delivered, in a reactionary sort of way. All of a sudden people, some of whom mocked distance education, are forced to teach remotely and think of new ways of engaging students. And it’s hard. Lecturing is much easier: walk into a classroom, spout a bunch of material, answer a few questions, leave. Maybe the instructor posts their notes, maybe they don’t. With remote learning the instructor has to provide much more in the way of cohesive material, perhaps even video material. Creating cohesive material takes time and effort, and running an online course is much more than just lecturing to a class.

There are inherent downsides. Some students will likely fall through the cracks, but anyone who thinks this doesn’t happen in a normal classroom is kidding themselves. Teaching a 20-student seminar is real teaching; teaching 300 students in an auditorium? Hardly. A percentage of students equate online with easier, and wake up one morning to realize this is not always the case. Remote learning is just different.

Many, I’m sure, are hoping things will go back to the norm, and quickly. But they likely won’t. The right decision is to continue remote learning into the fall semester, or risk a second wave of discontinuous learning. There are opportunities here as well: an opportunity to create a learning environment that is much more than large auditoriums, perhaps an opportunity to create a learning community. Maybe we could even help nurture a learning environment for students, both domestic and international, who aren’t able to attend a residential university. The world has changed, and maybe, just maybe, institutions of higher learning should take this as a sign that they too should change.

Why novice programmers end up hating programming

I recently experienced an introductory programming course through the eyes of a first-year student at another institution. The course was in Java and based around object-oriented concepts, so much so that the course’s title included the words “object oriented”. It honestly did not seem like a course I would recommend any novice programmer undertake, and it reinforced my view of why a lot of students taking introductory programming classes end up hating programming (and I mean non-computer-science students). I say this having taught first-year programming for 15 years to a mixed class of CS, science, and engineering students (in C no less).

Only one phrase is needed – programming complexity. The class in question seems to want to do everything under the sun, except actually teach the fundamental concepts associated with programming. Then it commits two of what I think are the greatest mortal sins – adding OO, and using Java (oh, and hardware too). Let’s start with OO. For people who have never programmed before, OO is not something that should be taught. There, I said it. I know a good number of institutions advocate “OO-first”, and the arguments for and against it are almost as extensive as those relating to the choice of introductory programming language. But here’s the thing: OO is just a method of encapsulating the solution to a problem. If a novice programmer does not understand the fundamentals of problem solving, algorithm design, basic programming constructs and testing, then what’s the point of OO? Problems can be solved much more easily without it, *especially* in an introductory course. Why would anyone want to throw the likes of classes, objects, inheritance, abstraction, message passing, encapsulation, information hiding and polymorphism (to name but a few) at students who have *never* programmed before?

It is these far-flung concepts, and often mediocre, poorly chosen examples, that lead many students to hate programming. When you introduce the basics of programming, OO just isn’t warranted. I know there are a lot of “objects first” advocates out there. The problem is that students get so absorbed in learning concepts related to OO that they never learn the fundamentals of programming. Then there is Java. Java is good for some things I guess, but it is not a language I would advocate as an introductory language, or frankly to build any large system in. Why? There are better languages out there. Would I use Java to build an avionics system, or perform parallel computations? Certainly not.

But for novice programmers, a poor choice of language can have long-lasting effects (although few long-term studies have been done on this, because computer-science pedagogy receives very little research funding). Java adds complexity to introductory programming, in the same vein that C is not an optimal introductory programming language. Here’s an example of “Hello world!” in Java:

public class helloworld {
   public static void main(String[] args) {
      System.out.println("Hello world!");
   }
}

A lot of effort to print out two words. Oh, but it gets better. In order to make the program do something, it has to first be compiled, using javac to produce a .class file, which can then be executed using java.

javac helloworld.java
java helloworld

And herein lies the problem. Novice programmers spend so much of their time dealing with the intricacies of the programming language that they don’t actually learn about the concepts underpinning programming. What does this code mean? Well, let’s see how it could be explained:

  1. The first line of code indicates the name of the class, which is helloworld (long pause to explain what a class is). The class uses the access specifier public, which indicates that the class is accessible to classes from other packages.
  2. The second line of code indicates the name of one method in helloworld, the main method. The main method is the starting point of a Java program. A long time could be spent explaining what each part of this statement means.
  3. The third line of code uses System.out.println() to print the text enclosed in quotation marks to the screen.

Yikes! Likely a lot of “don’t worry about this, we’ll cover it later”, largely because of the presence of OO. In this case even C is a better language (partially because it is not OO, and hence less verbose). Here’s the same algorithm in C.

#include <stdio.h>
int main(void) {
   printf("Hello world!\n");
   return 0;
}

But even here there are nonsensical terms (for the novice programmer) like void, and even “return 0”. Printing “Hello world!” shouldn’t be this hard, and in some languages it isn’t. The overhead of languages like Java just adds to the cognitive load of learning to program. Here is the Python version:

print("Hello world!")

Here is the Julia version:

println("Hello world!") 

Here is the Pascal version:

program Hello;
begin
   writeln ('Hello world!');
end.

All are easy to understand, simple, and don’t involve OO. It is the complexity of languages, and the abstract concepts taught, that makes many novice programmers queasy. A bad initial experience can lead students to avoid programming like the plague, which is especially problematic if they are working in fields that could benefit from the knowledge base programming provides.

The bottom line is that some methods of teaching programming in post-secondary institutions are not exactly optimal. Languages may be chosen because of faculty likes and dislikes, not because of what is optimal for a learning environment. C was never meant to be an instructional language, and certainly not for non-CS students. Fortran and Cobol were discarded from teaching because they were “old”, yet both are still widely used today. And then there is the instructor who teaches a Java class in the guise of “OO” – students never learn OO, they learn the OO syntax of Java. There are many reasons why novice programmers end up disliking programming, but language choice and complexity of instruction do tend to sit atop the stack.

Are arrays in C pointers?

Arrays in C can get confusing, and it is largely to do with pointers. An array is just a block of sequential data. Consider the C program below, which creates an array of 4 integers, i.e. a contiguous block of memory large enough to hold four int values is allocated.

#include <stdio.h>
int main(void){
   int i, x[4];
   for (i=0; i<4; i=i+1)
      printf("&x[%d] = %p\n", i, &x[i]);
   printf("Address of array x is %p\n", x);
   return 0;
}

Here is the output:

&x[0] = 0x7fff522989e0
&x[1] = 0x7fff522989e4
&x[2] = 0x7fff522989e8
&x[3] = 0x7fff522989ec
Address of array x is 0x7fff522989e0

The code prints out the address of each element of the array x, which in most circumstances is stored on the stack. You can see the difference of 4 bytes between consecutive elements of x. Also note that the address printed for the array name is the same as that of the first element, i.e. x is equivalent to &x[0]. This is because, in most expressions, the array name x decays to a pointer to the first element of the array. This also means that x[0] is equivalent to *x.
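
A quick throwaway program (just a sketch) confirms both equivalences:

#include <stdio.h>
int main(void){
   int x[4] = {10, 20, 30, 40};
   /* the array name decays to a pointer to its first element */
   if (x == &x[0])
      printf("x and &x[0] are the same address\n");
   /* so dereferencing x gives the first element */
   printf("*x = %d, x[0] = %d\n", *x, x[0]);
   return 0;
}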

But pointers and arrays are not the same thing. Arrays are not pointers. Consider the following declarations:

char a[] = "Mull";
char *b = "Skye";

The first declaration allocates space for 5 characters (including the terminating null character), and associates that space with the variable “a”. The second declaration requests a piece of memory which holds a pointer, known by the name “b”, which points to a char, or to a contiguous array of chars. Here’s what this looks like visually:

In many respects pointer arithmetic and array indexing are equivalent. For example:

printf("%c %c\n", a[1], b[1]);
printf("%c %c\n", *(a+1), *(b+1));

The indexing works on both arrays and pointers. Similarly, pointer arithmetic works on both arrays and pointers.

How these work under the hood is different. If a[1] is used in an expression, it works by starting at the location of a, adding 1, and fetching the character stored there, “u”. If b[1] is used in an expression, it starts by fetching the pointer value stored in b, adding 1 to that pointer, and returning the character pointed to, in this case “k”.

From a practical viewpoint, arrays are automatically allocated space, which cannot be resized or relocated. A pointer, on the other hand, must be explicitly assigned some allocated space (e.g. via malloc), but can be reassigned. Arrays are usually allocated on the stack, whereas a pointer variable itself lives on the stack while the memory it references is allocated elsewhere, usually the heap.
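
A short sketch makes some of these practical differences visible: sizeof reports the whole array for a, but only the size of a pointer for b, and only b can be re-aimed at something else.

#include <stdio.h>
int main(void){
   char a[] = "Mull";
   char *b = "Skye";
   /* sizeof sees the whole array, but only the pointer itself */
   printf("sizeof(a) = %zu\n", sizeof(a));   /* 5 (4 characters plus the null) */
   printf("sizeof(b) = %zu\n", sizeof(b));   /* size of a pointer, e.g. 8 */
   /* a pointer can be re-aimed, an array name cannot */
   b = a;
   printf("b now points to %s\n", b);
   return 0;
}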

Time travelling with the post

Sometimes it is the simple things we just can’t get right with software. One of the beneficial things about ordering online these days is being able to track a package. The problem is that this software is often quite buggy. Not in the crash sense, but rather in the information it provides. In many cases the package arrives before the web app even realizes it’s on the truck. That’s just annoying. Then there is the “pick-up” status, which tells you it hasn’t been picked up yet. I once had a package sit on this status for 3-4 days, then it spent 3 weeks in Saskatoon. It took nearly 4 weeks to get a package from Saskatchewan to Toronto. Of course there is one bug that really irks me, and that is the “time-travel” bug. This is when you wait all day for a package to arrive, only to have the “In transit” status change at midnight, while the “Expected delivery” date remains the same. An example is shown below for something I’m waiting on now.

It’s May 14th, and through some sort of magic, the package will be delivered yesterday. Now if only I could go back in time to pick it up. How easy would it be to just update the “Expected delivery” date? Not exactly rocket science from a code perspective:

if (inTransit > expectedDelivery)
   expectedDelivery := inTransit

Just saying, testing might be a good thing.

PS. The package finally arrived nearly 4 weeks after being sent, from less than 10km away.

Why technology is a thin facade

Many people think that technology will be the saviour of humankind. It is likely not the first time that we have perceived shiny new things this way. Steam engines were likely seen the same way, as were industrial agriculture, plastics, and atomic energy. They were all supposed to lead us to better things. Of course “better” is a relative term. You can often have improvements in lifestyle, if you ignore the “side-effects” of those improvements. Electric cars are a good idea, if you ignore the fact that they are full of batteries which will have to be “recycled” at the end of their lifespan. Plastic is an indestructible building material, if you ignore the fact that it takes 500-1000 years for plastic to decompose. Humans seem to have a short memory, and tend to ignore those things. Remember that wonder-product asbestos? It was used for lining refrigerators, roof tiles, floor tiles and insulation.

There are of course many things that technology helps with. The internet is kind-of useful. It allows me to do “digital foraging”, i.e. it lets me buy food online and have it delivered during the great isolation. I can buy almost anything online from anywhere in the world, which is kind-of neat. I almost don’t really need to leave my home. But the downside is that technology is always new, and old technology becomes “landfill”. How many iterations of digital cameras sit in my cupboard? Many times you can’t buy batteries for them anymore, so they are defunct, mere icons of technology past. At least analog cameras can still be used. There are pros and cons to everything.

It takes something like a pandemic to realize that much of this modern technological world is somewhat of a facade. It reminds me of backlot film sets, where everything looks genuine from the street, but behind the scenes the building facades are just held up with scaffolding. Everything works because it relies on a specific way of doing things; take that away and society starts to fall like a house of cards.

The reality is, for all our reliance on technology, we have forsaken our basic needs – food, water, shelter. You can’t eat an iPhone. Yes, Netflix and its brethren do provide some interesting entertainment, but if it weren’t there, there would be other things: books, hobbies, cooking. Technology is good to a point, but beyond that we have to rely on our own ingenuity, and the community around us. Digital foraging is exceptional, but it requires that a person exists on the other end of the technology, be it a store, or a farm. There is a place for technology in our lives, but we must endeavour to use it sparingly and look deeper into our existence.

Explaining pointers – the basics

Some variables are created statically (on the stack in languages like C), and others are created dynamically. These dynamic variables are generally created in the heap. The generation of a dynamic variable introduces a pointer value, which is essentially the storage address of the newly allocated variable. For the purposes of a simple explanation, we will use Pascal to illustrate the concept of pointers. The compiler is FPC (Free Pascal Compiler).

Here is a simple example:

1   program pointer;
2   var iptr : ^integer;
3   begin
4      new(iptr);
5      iptr^ := 7;
6      writeln('the value of iptr is ', iptr^);
7      dispose(iptr); 
8   end.

In line 2, we create an integer pointer, iptr. A pointer may be to any of the existing data types, even user-defined data types. In Pascal, the caret ^ before a type name in a declaration indicates a pointer to that type, while after a pointer variable (as in iptr^) it dereferences the pointer. In line 4, we dynamically allocate space for iptr on the heap using the procedure new(); the starting address of that space is placed in iptr. The statement on line 5 stores the integer 7 in the memory location pointed to by iptr. On line 7, the memory associated with the integer pointer iptr is released using dispose(). Once a pointer has been disposed of, it should not be used again. If a pointer isn’t being used yet, it should be assigned the value nil. Here is a visual depiction of what is happening:

Another way of looking at this is through a memory diagram that shows where each of these items is in relation to the stack and heap memory segments.
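
For readers who know a little C, roughly the same program can be sketched with malloc() and free() standing in for new() and dispose():

#include <stdio.h>
#include <stdlib.h>
int main(void){
   int *iptr;
   iptr = malloc(sizeof(int));   /* allocate space on the heap, as new() does */
   *iptr = 7;                    /* equivalent to iptr^ := 7 */
   printf("the value of iptr is %d\n", *iptr);
   free(iptr);                   /* equivalent to dispose(iptr) */
   return 0;
}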

Another way of using pointers is to associate them with other variables. Here is an example, which associates the pointer variable iptr with an integer variable n.

1   program pointer;
2   var iptr : ^integer;
3          n : integer;
4   begin
5      n := 124;
6      iptr := @n;
7      writeln('the value of iptr is ', iptr^);
8      iptr^ := 24;
9      writeln('the value of n is ', n);
10  end.

On line 6, we assign the address of the variable n to the pointer variable iptr using the address operator (@). The pointer can then be used to access and manipulate the data item. When this program runs, the output is:

the value of iptr is 124
the value of n is 24

The pointer variable iptr is linked to n, and as such has access to its value, 124. So on line 8, when the value pointed to by iptr is set to 24, this actually modifies the value of n. How a pointer holds the address of a variable can be represented visually in the following manner:

In this example, the normal variable, n,  has the address of 0XBA75, and holds the value 24. The pointer variable, iptr, has its own address, 0XBA81, but stores 0XBA75 which is the address of variable n.
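
For comparison, the same idea written in C uses the address-of operator & rather than @ (again, just a sketch mirroring the Pascal program above):

#include <stdio.h>
int main(void){
   int n = 124;
   int *iptr;
   iptr = &n;                    /* iptr now holds the address of n */
   printf("the value of iptr is %d\n", *iptr);
   *iptr = 24;                   /* modifies n through the pointer */
   printf("the value of n is %d\n", n);
   return 0;
}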

Now, for those wondering about the choice of Pascal to explain pointers, and asking why: inherently, because Pascal pointers are simple. They are type safe (a pointer to one data type can only be assigned to a pointer of the same data type), they can never be assigned to non-pointer variables, and pointer arithmetic is not permitted. Note that newer versions of Pascal, such as Free Pascal, have allowed some of the C pointer nonsense to creep in: they allow pointers to be treated much like arrays, and support pointer arithmetic.

Recursive patterns – the Cantor set

The Cantor set is an infinite set defined by German mathematician Georg Cantor (1845-1918) in 1883 [1] (note that a version of the set was described earlier by Irish mathematician Henry John Stephen Smith in 1875 [2]). The basic Cantor set is an infinite set of points in the unit interval [0,1]. The design may have been influenced by Egyptian architecture, as Cantor had a keen interest in Egypt. Consider, for example, the figure below, which shows the biological recursion of the lotus flower atop the columns of the Temple of Dendur.

The Cantor set found on Egyptian columns.

The rules for his infinite set are:

  1. Start with a simple line.
  2. Erase the middle of the line.
  3. Repeat step 2 with the remaining lines, over, and over, and over.

This of course offers an exceptional introduction to the notion of recursion. The process can continue to the point where the line segment becomes less than 1 unit in length, at which point, from a practical viewpoint, the recursion stops. From a theoretical viewpoint, it can continue forever.

The first step in creating a recursive algorithm for the Cantor set is deciding how the algorithm will function. We know from the rules of the infinite set that the idea is to remove the middle ⅓ of the line, however an easier way to achieve this is to “split” the line by drawing only the left and right thirds. This means that if we initiate the algorithm using a line starting at position (x,y), with a length z, i.e. line(x, y, x+z, y), then the next iteration of the algorithm would draw two “sub-lines”, of the form (written in Processing):

void cantor(float x, float y, float z)
{
   line(x, y, x+z/3, y);
   line(x+z*2/3, y, x+z, y);
}

Now a non-recursive algorithm could be derived using these two calls to line(), however this involves some tiresome math for every iteration of the loop. Not exactly ideal. The two lines above do, however, form the basis of the recursion. Instead of using them to draw individual lines, they can be used to make the recursive calls. Let’s convert the function above to a recursive one. It only requires a few tweaks: each line segment is described by its starting point (x,y) and its length z, and each recursive call works on a segment one third the length of its parent.

void cantor(float x, float y, float z)
{
   cantor(x, y, z/3);
   cantor(x+z*2/3, y, z/3);
}

Of course if this were run, it would do nothing (except chew up processor resources). We have to add back a call to line() to actually draw something, and increment the value of y (otherwise the lines in the recursive calls will overwrite each other).

void cantor(float x, float y, float z)
{
   line(x, y, x+z, y);
   y = y + 20;
   cantor(x, y, z/3);
   cantor(x+z*2/3, y, z/3);
}

Now there is one final thing to do: add an end condition, so the recursion can terminate. We do this by adding an if statement that stops the recursion once the length of a line segment is less than one.

void cantor(float x, float y, float z)
{
   if (z >= 1){
      line(x, y, x+z, y);
      y = y + 20;
      cantor(x, y, z/3);
      cantor(x+z*2/3, y, z/3);
   }
}

Here is an example of this running in Processing, with the following initial call:

cantor(10, 20, width-20);
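
If you don’t have Processing handy, a rough C sketch that prints the interval endpoints at each level (instead of drawing lines) shows the same recursion at work; the value 460 simply stands in for width-20.

#include <stdio.h>

/* print the interval covered by each line segment instead of drawing it */
void cantor(float x, float y, float z)
{
   if (z >= 1){
      printf("level %d: [%7.2f, %7.2f]\n", (int)(y/20), x, x+z);
      y = y + 20;
      cantor(x, y, z/3);
      cantor(x+z*2/3, y, z/3);
   }
}

int main(void)
{
   cantor(10, 20, 460);   /* 460 stands in for width-20 */
   return 0;
}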

Further reading:

  1. Cantor, G., “Über unendliche, lineare Punktmannigfaltigkeiten” (On infinite, linear point-manifolds (sets)), Mathematische Annalen, 21, pp.545-591 (1883). (Long, in German, no pictures!)
  2. Smith, H.J.S., “On the integration of discontinuous functions”, Proceedings of the London Mathematical Society, 6, pp.140-153 (1874/75)

What is elegant code?

Code can be beautiful, so how does this differ from elegant code? Elegant code uses cleverness to accomplish something in much less code than most people would think possible, but in a way that’s readable and obvious. In some respects, the recursive form of Towers of Hanoi is elegant. It uses the cleverness of recursion to produce a program able to solve the problem in much less code than the non-recursive version. Is the solution obvious? Yes, to a point. If you don’t understand the intricacies of recursion, this version of ToH will be difficult to comprehend.
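
For reference, here is a minimal recursive Towers of Hanoi sketch in C (the peg labels and the function name hanoi are just illustrative):

#include <stdio.h>

/* move n discs from peg 'from' to peg 'to', using 'via' as the spare peg */
void hanoi(int n, char from, char to, char via)
{
   if (n == 1){
      printf("move disc 1 from %c to %c\n", from, to);
      return;
   }
   hanoi(n-1, from, via, to);
   printf("move disc %d from %c to %c\n", n, from, to);
   hanoi(n-1, via, to, from);
}

int main(void)
{
   hanoi(3, 'A', 'C', 'B');
   return 0;
}

A handful of lines of logic, and it works for any number of discs; a typical non-recursive version takes considerably more code.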

Here is another example, the Fast Inverse Square Root code found in the game Quake. All it does is calculate 1/√x. You’re thinking, why not just calculate the square root and divide in the regular way? Well, division and square root calculations are expensive (or at least they used to be), so looking for a better way of performing this calculation, in a program which may use it millions of times a second, may be important – nobody likes slow games. Here is the code for the function invSqrt() (obviously in C), derived from the original code:

1  float invSqrt(float x){
2     int i;
3     float x2, y;
4     x2 = x * 0.5F;
5     i = *(int*)&x;
6     i = 0x5f3759df - (i >> 1); 
7     y = *(float*)&i;
8     y = y*(1.5F - x2*y*y);
9     return y;
10 }

It calculates the inverse of the square root using only multiplication and bit-shift operations. Fast? Yes. Elegant? Yes, but again only to a point (you have to have a working knowledge of bit-shift operations and floating-point representation). To the lay-person it does seem like some form of bizarre Harry Potter magic. It isn’t exactly readable.

But how does it work? Basically it takes the real number, and negates and halves its exponent to obtain an approximation of the inverse square root. It then performs one round of Newton’s method to refine the estimate. Let’s take a slightly closer look at what is going on. In C, a float is stored as a 32-bit number: 1 bit for the sign, 8 bits for the exponent and 23 bits for the mantissa.

  1. Line 5 of the function reinterprets the bits of the 32-bit float as a 32-bit integer, stored in the variable i.
  2. In Line 6, the 32-bit integer is shifted right by 1. This divides the integer representation by 2; since the integer representation of a float behaves roughly like a scaled logarithm of its value, halving it corresponds roughly to taking a square root.
  3. In the same line, the bit-shifted number is subtracted from 0x5f3759df, a so-called “magic number” (see, there *is* magic involved), which, roughly speaking, supplies the negation (for the *inverse* square root) along with a bias correction.
  4. In Line 7, the integer is reinterpreted as a float, y, whose value is an initial guess at the inverse square root of x.
  5. Lastly, in Line 8, one iteration of Newton-Raphson is performed to refine the estimate (a quick test of the result follows this list).
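
To see how close the approximation gets, a quick throwaway test against the library sqrt() can be sketched as follows; it repeats the function from above, and assumes a 32-bit int and the same type punning as the original.

#include <stdio.h>
#include <math.h>

/* invSqrt() as given above, with int assumed to be 32 bits */
float invSqrt(float x){
   int i;
   float x2, y;
   x2 = x * 0.5F;
   i = *(int*)&x;
   i = 0x5f3759df - (i >> 1);
   y = *(float*)&i;
   y = y*(1.5F - x2*y*y);
   return y;
}

int main(void)
{
   float x;
   /* compare the approximation against the library routine (link with -lm) */
   for (x = 0.5F; x <= 4.0F; x = x + 0.5F)
      printf("x = %4.2f  invSqrt(x) = %.6f  1/sqrt(x) = %.6f\n",
             x, invSqrt(x), 1.0/sqrt(x));
   return 0;
}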

Does it work? Yes. Is it truly elegant? That is in the eye of the beholder.

Does brace style matter?

One of the problems with code styling in C-derived languages is brace style. Languages such as Fortran and Ada avoid this problem by using control-structure terminators in the form of keywords like end. For example, here is a nested loop containing an if statement in Fortran.

integer :: i, j
integer, dimension(100,100) :: image
do i=1,100
   do j=1,100
      if (i == j) then
         image(i,j) = 1
      end if
   end do
end do

The styling here is fairly consistent, where the end aligns with the start of the control structure. Consistency is good. In C, however, the “block” structure associated with a control structure is delimited using curly braces, { and }. Here is where the inconsistencies begin. One of the most common (and likely the oldest) styles is K&R brace style, made popular by Kernighan and Ritchie in their book “The C Programming Language”. Here is K&R style applied to the above code, rewritten in C:

int i, j, image[100][100];
for (i=0; i<100; i=i+1){
   for (j=0; j<100; j=j+1){
      if (i == j){
         image[i][j] = 1;
      }
   }
}

There are some benefits to this method, mostly the fact that it uses less screen space, and that the closing brace lines up with the start of the control structure. The main limitation is that the opening and closing braces don’t line up visually, so matching them by eye is more of a challenge. The second most commonly used style is the Exdented Brace style (also known as Allman style), which moves the opening brace onto a line of its own.

int i, j, image[100][100];
for (i=0; i<100; i=i+1)
{
   for (j=0; j<100; j=j+1)
   {
      if (i == j)
      {
         image[i][j] = 1;
      }
   }
}

This format provides more of an uncluttered visual appearance, with the braces in alignment. The downside is that it takes up more vertical space, which pushes the code out more; this is sometimes wasteful when the code blocks are small. The last style is the indented brace style (also known as the Whitesmiths style). This is more of an outlier, and honestly not seen that often. It’s quite an oddity, but here is an example:

int i, j, image[100][100];
for (i=0; i<100; i=i+1)
   {
   for (j=0; j<100; j=j+1)
      {
      if (i == j)
         {
         image[i][j] = 1;
         }
      }
   }

It is obviously somewhat visually confusing. Which style is best? Choose K&R or the Exdented Brace style, and use it consistently.