Computing phrases and meanings

Clearly – It can be shown with half a day of coding.

Details omitted – The programmer couldn’t figure out what the code does.

It is not difficult – It is very difficult.

It is easily seen that – It likely doesn’t work.

Ingenious code – Code the programmer could not understand.

Interesting – Boring.

Guru – Any programmer smarter than most programmers.

Obviously – It can be coded in about 1000 lines.

Similarly – We could code it, but are too lazy.

Well-known – A program whose source cannot be located.

Recursion – the various types

What are the various types of recursion, and how can they be classified?

Linear recursion

The simplest form of recursion, and likely the most commonly used, is linear recursion. In linear recursion, each recursive case contains a single self-recursive call: the function simply calls itself until it reaches a termination condition, i.e. a base case. A classic example of linear recursion is, of course, the factorial.

int factorial(int n){
   if (n == 0)
      return 1;                      /* base case */
   else
      return n * factorial(n-1);     /* single self-recursive call */
}

Tail recursion

This is a specialized form of linear recursion where the recursive call is the last operation the function performs. All of the work is done at the time of the call, and nothing remains to be done when the call returns. This type of recursion is usually more efficient, because an intelligent compiler can automatically convert it into a loop and avoid nested function calls. The factorial can also be formulated using tail recursion, carrying the running product in a second parameter p (the function is called as factorial(n, 1)).

int factorial(int n, int p){
   if (n == 0 || n == 1)
      return p;                      /* the product accumulates in p */
   else
      return factorial(n-1, n*p);    /* last action: nothing happens after the call */
}
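For illustration, here is a sketch of the loop that a tail-call-optimizing compiler would effectively reduce the function above to (a hypothetical equivalent, not any particular compiler's actual output):

int factorial_loop(int n, int p){
   /* the recursive call becomes a parameter update and a jump */
   while (n != 0 && n != 1){
      p = n * p;
      n = n - 1;
   }
   return p;   /* factorial_loop(5, 1) returns 120 */
}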

Head recursion

Essentially the opposite of tail recursion, head recursion occurs when the recursive call is made first, before any other processing in the function; the work is performed after the call returns. Below is a head-recursive version of the factorial.

int factorial(int n){
   if (n <= 1)
      return 1;
   int rest = factorial(n-1);   /* the recursive call comes first */
   return n * rest;             /* processing happens after the call returns */
}

Mutual recursion

Also known as indirect recursion, this is a form of recursion in which two or more functions call each other in a circular fashion. In practice, mutual recursion is not that common; it is used in more complex settings such as recursive descent parsers. Here is a simple example that uses mutual recursion to determine whether a number is odd or even.

int even(int n);   /* forward declaration: odd() calls even() before it is defined */

int odd(int n){
   if (n == 0)
      return 0;
   else
      return even(n-1);
}

int even(int n){
   if (n == 0)
      return 1;
   else
      return odd(n-1);
}

Binary recursion

A form of recursion in which at least one of the recursive cases contains two self-recursive calls. This is very useful for processing certain types of data structures, such as trees. The classic example of binary recursion is the binary-recursive version of Fibonacci.

int fibonacci(int n){
   if (n == 1 || n == 2)
      return 1;                                 /* base cases (assumes n >= 1) */
   else
      return fibonacci(n-1) + fibonacci(n-2);   /* two self-recursive calls */
}

Exponential recursion

At least one of the recursive cases contains multiple self-recursive calls (this is sometimes called multiple or tree recursion). Binary recursion is the sub-class of exponential recursion where the number of calls is two.
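As a sketch of what this looks like beyond two calls, consider the tribonacci sequence (a hypothetical example of my own, not from the text above), where the recursive case makes three self-recursive calls:

int tribonacci(int n){
   if (n == 0)
      return 0;
   else if (n == 1 || n == 2)
      return 1;
   else   /* three self-recursive calls: exponential (tree) recursion */
      return tribonacci(n-1) + tribonacci(n-2) + tribonacci(n-3);
}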

Nested recursion

Also known as embedded recursion, this is a form of recursion in which at least one of the recursive cases contains a self-recursive call that has a recursive call as one of its parameters. This basically means “recursion inside recursion”. A perfect example of nested recursion is Ackermann’s function.
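Here is a minimal C version of Ackermann’s function; note the recursive call nested inside the second argument of the outer call:

int ackermann(int m, int n){
   if (m == 0)
      return n + 1;
   else if (n == 0)
      return ackermann(m-1, 1);
   else   /* recursion inside recursion */
      return ackermann(m-1, ackermann(m, n-1));
}

Even for tiny inputs the function grows explosively, which is part of what makes it such a popular stress test for recursion.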

Failure is okay

Too many people fret about failure. They are inherently afraid of it. That’s partially because so many young people have never really experienced failure. When was the last time a child failed an assignment in elementary school? It rarely seems to happen anymore. Not that you want it to happen, but if it does, it likely shouldn’t be rewarded the way it is now.

Failure is always considered to be a bad word, a horrendous experience, but it doesn’t have to be. When some things fail, it just means that they didn’t work, and another approach has to be taken to solve the problem. Everyone will fail at something, sometime. It’s inevitable, because there is no such thing as a perfect person. Both LEGO and Apple are successful companies, but that doesn’t mean they didn’t have failures. You can bake a cake, and it can fail to rise properly, but that doesn’t mean you should give up baking.

A failure is just a bump in the road. The road still goes on; it’s just a matter of finding a new way to continue. Sometimes the problems that need to be solved aren’t hard to solve, and in other cases they can take years. But failures rarely define us as individuals.

Nature’s algorithms are better

We think our algorithms are innovative, interesting, effective. But in reality they aren’t. The coolest algorithms are found in nature, from the way plants grow, to how ants live. Take flowers for example. What sort of algorithms does nature use to determine the size, shape and colour of a flower? Is it based solely on the need for pollination, or is there some sort of reason for flowers to be as aesthetically pleasing as they are? Do bees care about beauty? Flowers didn’t exactly evolve over the millennia purely for humans to admire their beauty (but we have pushed it along the way with cross-pollination). Why are some flowers so simple, and others so complex?

Why are flowers so intricate and beautiful?

Even vegetables are complex entities. Most modern vegetables have evolved by means of cross-breeding over thousands of years, but some are still marvels of nature’s algorithms – for example the green fractal cauliflower (or broccoli) known as Romanesco. This inflorescence supposedly originated in the Lazio region of Italy in the 15th century – I would imagine by means of spontaneous mutation (similar to how orange cauliflower was found in Canada in 1970), because there is no way humans could have created the intricate fractal pattern in this plant (although its pattern could have been enhanced by selective breeding).

The beautiful green fractal cauliflower that Rey eats at Maz Kanata’s place in The Force Awakens

Nature continues to create interesting biological algorithms. If we thought more about the natural world, rather than the electronic world, we would gain an inherently better understanding of life in general.

It turns out that bees use a combination of colour and patterns to distinguish between different flowers.

The algorithms of antiquity: Egyptian multiplication

The Rhind Mathematical Papyrus is sometimes called the “Ahmes Papyrus” in honour of the scribe who compiled it. The papyrus is from the Egyptian Middle Kingdom and dates to around 1650 BC. It was purchased at a market in Luxor, Egypt, by the Scottish antiquarian Alexander Henry Rhind in 1858, and placed in the British Museum in 1864 by his estate – hence its name. It is a papyrus scroll, approximately 32 cm wide and 200 cm long, and was found in a tomb in Thebes. It may be the most valuable source of information we have about Egyptian mathematics. The hieratic script on the papyrus was deciphered in 1842.

A portion of the Rhind Papyrus

The Rhind papyrus is filled with a series of problems, and one of the more interesting algorithms it records is how the Egyptians multiplied numbers. First, there is no 0: the Egyptian number system used symbols to represent numbers – for example a vertical stroke, |, to represent 1, and ∩ to represent 10. So, for example, 32 would be represented by ∩∩∩||. For multiplication the Egyptians used a system of doubling and halving. For example, to multiply 32 by 15:

Halve first number    Double second number
        32                     15
        16                     30
         8                     60
         4                    120
         2                    240
         1                    480

32×15 = 480

If it’s not possible to halve the first number evenly, you subtract 1 before halving, and the current value of the second number is carried into a leftover column, to be added at the end. For example, to multiply 33 by 15:

Halve first number    Double second number    Leftover from odd first number
    33-1 = 32                  15                          15
        16                     30
         8                     60
         4                    120
         2                    240
         1                    480                         480
       SUM                                                495

33×15 = 495

What about a more complicated multiplication? Say 35 by 17:

Halve first number            Double second number    Leftover from odd first number
35 = odd, 35-1 = 34                    17                          17
34/2 = 17 = odd, 17-1 = 16             34                          34
16/2 = 8                               68
8/2 = 4                               136
4/2 = 2                               272
2/2 = 1                               544                         544
SUM                                                               595

35×17 = 595
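The halve-and-double procedure translates directly into a short C function – a minimal sketch (the function name is mine):

unsigned egyptian_multiply(unsigned a, unsigned b){
   unsigned sum = 0;
   while (a > 0){
      if (a % 2 == 1)   /* odd first number: record the leftover */
         sum += b;
      a = a / 2;        /* halve the first number */
      b = b * 2;        /* double the second number */
   }
   return sum;
}

For example, egyptian_multiply(35, 17) accumulates the leftovers 17, 34 and 544, returning 595, in agreement with the table above.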

The papyrus also contains information on Egyptian fractions, including how to multiply them.

Further Reading:

  1. Mathematics in Egyptian Papyri
  2. Chace, A.B., The Rhind Mathematical Papyrus (Vol.1) (1927)
  3. Newman, J.R., “The Rhind Papyrus”, Scientific American, 187(2), pp.24-27 (Aug.1952)

The algorithms of antiquity: Euclid’s GCD

Euclid of Alexandria (c. 325–265 BC) was a Hellenistic mathematician, considered by many to be the father of geometry and one of the greatest mathematicians of antiquity. It is believed Euclid lived in Alexandria during the reign of Ptolemy I – after Plato and before Archimedes. Euclid is best known for his 13-book series Elements, though in the computing community he may be better known for his algorithm to derive the greatest common divisor of two numbers.

Raphael’s impression of Euclid, in The School of Athens (1509–1511)

In Elements (Book VII, Propositions 1 and 2), Euclid described the algorithm used to determine the “greatest common divisor” (GCD) of two integers – one of the oldest known algorithms. Proposition 2 reads:

“To find the greatest common measure of two given numbers not relatively prime.”

Here is the basic technique.

Let a and b be the two quantities, the smaller of which is b.
Let a be divided by b, with a remainder c.
Let b be divided by c, with a remainder d.
Let c be divided by d, with no remainder.

Then d is the GCD of a and b. In general, each divisor is divided by its remainder until a division leaves no remainder; the last divisor is the GCD.

Let’s try that with a simple example, the GCD of 60 and 24:

GCD(60,24) = 60 / 24, rem = 12
GCD(24,12) = 24 / 12, rem = 0
GCD = 12

Or a more complex example, the GCD of 1220 and 516:

GCD(1220,516) = 1220 / 516, rem = 188
GCD(516,188)  = 516 / 188, rem = 140
GCD(188,140)  = 188 / 140, rem = 48
GCD(140,48)   = 140 / 48, rem = 44
GCD(48,44)    = 48 / 44, rem = 4
GCD(44,4)     = 44 / 4, rem = 0
GCD = 4
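In modern code Euclid’s procedure is only a few lines; here is a minimal C sketch using the remainder operator:

int gcd(int a, int b){
   while (b != 0){
      int r = a % b;   /* the remainder of the division */
      a = b;           /* the divisor becomes the dividend */
      b = r;           /* the remainder becomes the divisor */
   }
   return a;           /* the last non-zero divisor is the GCD */
}

Calling gcd(1220, 516) walks through the same remainders as above (188, 140, 48, 44, 4) and returns 4.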

Some suggest that the algorithm was not actually discovered by Euclid but was inherited from an earlier mathematician (his Elements was essentially a compilation of existing mathematical work). The first GCD program appeared in 1960.

Further Reading:

  1. Forgotten algorithms: Euclidean Algorithm – the first appearance of GCD in code.

Kids learning to code might be a fruitless venture

In a recent article in The Atlantic, So Much for ‘Learn to Code’, author Kelli María Korducki postulates that in the age of AI, computer science may no longer be the safe major it once was. In all likelihood, this sentiment is true. What was once considered a “golden” degree is no longer. AI will undoubtedly reduce the need for CS majors, or at least cut off the bottom tier of students – the ones who got a degree but didn’t really do well in any of their classes. The great migration into computer science degrees may come to an end, if not straight away, then in the next decade. The reality is that there is not the same future for mediocre CS students as there once was. To get somewhere with a CS degree in the future you will have to show some propensity towards a “think outside the box” skill-set rather than a “coder” mentality, because the machines will be able to do the coding – not the high-level work, mind you, but the low-level grunt work.

Where does that leave us with the whole concept of getting kids to learn coding? It may now be a somewhat fruitless venture. The level of coding supposedly taught in elementary schools will not really amount to anything. If we concentrated more on problem solving, we might actually have a valuable pedagogical tool, but learning to code on its own is not viable. We should have done it sooner – we had the last 20 years to make a lasting impression – and the timing of these new elementary school initiatives isn’t exactly great. What will elementary students be able to do with a bit of coding knowledge? Not much – they certainly aren’t going to write apps for the iPhone. At least reading, writing and arithmetic provide foundational skills. Will learning to code make students better prepared for a world full of computers? Unlikely, because the basic skills they have learned have already been supplanted by AI. We should concentrate on problem-solving skills that would allow students to flourish in any environment they find themselves in, rather than pigeon-holing them into “coding”.

The idea that kids need to learn to code is wrong. Kids would actually be more successful if they understood how to do math properly, had more language skills, or even a broader sense of the world around them. Do we really want to raise children who are so engrossed in technology that they don’t perceive the world around them? Who lose the ability to be story-tellers because they perceive everything in a binary world? Some liken coding to the new literacy, but it doesn’t come anywhere close. Being able to read and write opens up a world of possibilities. Learning to code just doesn’t – learning to problem solve does, but few people ever really concentrate on that aspect of things. University CS programs don’t even give it much credence.

Coding for kids is often presented in a very “clean” manner – problems with solutions. If you can master the syntax of a programming language, then problems can be solved more quickly. This may be true of a given series of problems that already have solutions, but it’s a lot messier when it comes to problems without solutions. Software is a mixture of creativity and a determination to derive a solution. Most of these simple problems can now likely be solved by AI, because so many solutions already exist. You can teach someone to write a program to generate the Fibonacci sequence, but does it really broaden their mind? There is very little in syntax that invokes curiosity. Computers are binary; there is no maybe – very little possibility to think outside the box.

At the end of the day, we should be embracing the everyday things that could teach our kids problem-solving skills. The simple act of building a gingerbread house teaches cooking, ingredients, chemistry, engineering, and mathematics – yet we don’t consider this a fruitful pedagogical idea. Or perhaps we could develop problem-solving tasks using Lego? There are many ways we could improve students’ problem-solving skills, but I’m afraid coding likely isn’t the best choice.