Fractal Brownian motion using random midpoint displacement

What is Brownian motion? Brownian motion is the random motion of particles suspended in a fluid (a liquid or a gas), resulting from their collisions with the fast-moving molecules of the fluid. The motion is named after the Scottish botanist Robert Brown, who first described it in 1827 while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. The concept is as simple as a particle making random jumps and tracing out a trail. Regular Brownian motion occurs when all the jumps taken by the particle are independent. In fractional (or fractal) Brownian motion (FBm), the steps taken by the particle are correlated in time. FBm is used by artists and scientists to model complex shapes that arise in nature, as well as artificial phenomena. Examples include the dispersion of ink in water, stock prices, clouds, plants, the rugged shapes of mountains, riverbeds, and fractal landscapes.

One of the simplest methods for generating a fractal profile is “midpoint displacement”. The algorithm was first described by Fournier et al. [1] as a recursive subdivision algorithm for generating an approximation of FBm.

Fournier and colleagues [1] included a couple of Pascal programs to calculate parametric and nonparametric subdivision (they are not explained as well as they could be in the paper). The Processing code below is based loosely on those programs, and includes two subprograms, fractal() and subdivide(). The parent subprogram fractal() has a number of parameters: (x1,y1) and (x2,y2) describe the two endpoints of the line. The remaining two parameters, h and scale, help define the scaling of the subdivisions.

  • h is a parameter which determines the “fractal dimension” of the output sequence, and essentially describes the degree of geometric irregularity. It takes values between 0 and 1: as h approaches 0, irregularity is high; as h approaches 1, the curve becomes smoother.
  • scale is the roughness parameter of the curve: 0 produces a straight line, and values greater than 0 imply increasing roughness.
// fractal() sets up the initial displacement and starts the recursion
void fractal(float x1, float x2, float y1, float y2, float h, float scale){
  float std, ratio;
  ratio = pow(2.0,-h);   // each level scales the displacement by 2^-h
  std = scale * ratio;   // initial standard deviation of the displacement
  subdivide(x1,x2,y1,y2,std,ratio);
}

The subprogram subdivide() performs the recursive subdivisions. If the difference between the x-coordinates is greater than a certain value, epsilon (here epsilon = 2), then the line is subdivided. This is achieved by calculating the midpoints of the x (xmid) and y (ymid) coordinates. The ymid position is then displaced by std multiplied by a Gaussian random number with zero mean and unit variance. subdivide() is then called recursively on the intervals to the left and right of the midpoint.

// epsilon (=2) is assumed to be a global in the sketch
void subdivide(float x1, float x2, float y1, float y2, float std, float ratio){
  float xmid, ymid;
  if (x2-x1 > epsilon){
    xmid = 0.5 * (x1 + x2);
    ymid = 0.5 * (y1 + y2) + std * randomGaussian();  // displace the midpoint
    std = std * ratio;  // shrink the displacement for the next level
    subdivide(x1,xmid,y1,ymid,std,ratio);
    subdivide(xmid,x2,ymid,y2,std,ratio);
  }
  else
    line(x1,y1,x2,y2);  // interval is small enough: draw the segment
}
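
For readers who don’t use Processing, here is a rough equivalent sketched in C. It is a minimal sketch, not a faithful port: it prints the endpoints of each final segment instead of drawing them, the EPSILON constant and main() driver are my own additions, and a simple Box-Muller helper stands in for Processing’s randomGaussian().

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define EPSILON 2.0   /* subdivision cutoff, as in the Processing version */

/* Gaussian random number (zero mean, unit variance) via Box-Muller */
double gaussian(void) {
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

void subdivide(double x1, double x2, double y1, double y2,
               double std, double ratio) {
    if (x2 - x1 > EPSILON) {
        double xmid = 0.5 * (x1 + x2);
        double ymid = 0.5 * (y1 + y2) + std * gaussian();
        std = std * ratio;  /* shrink the displacement each level */
        subdivide(x1, xmid, y1, ymid, std, ratio);
        subdivide(xmid, x2, ymid, y2, std, ratio);
    } else {
        printf("%.2f %.2f -> %.2f %.2f\n", x1, y1, x2, y2);  /* "draw" */
    }
}

void fractal(double x1, double x2, double y1, double y2,
             double h, double scale) {
    double ratio = pow(2.0, -h);
    subdivide(x1, x2, y1, y2, scale * ratio, ratio);
}

int main(void) {
    /* a horizontal line from (0,200) to (400,200), h=0.35, scale=10 */
    fractal(0.0, 400.0, 200.0, 200.0, 0.35, 10.0);
    return 0;
}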

Below are some examples, showing various values of h and scale.

  • h=0.35, scale=10
  • h=0.85, scale=10
  • h=0.35, scale=50
  • h=0.85, scale=50

Not surprisingly, fractalizing a line is a recursive process. A generalized algorithm is described below. Starting with a line, the midpoint of the line is displaced up or down by a random amount. This is repeated for each of the two segments produced. The result is essentially a random walk that connects two points, controlled by a few parameters. The basic algorithm of the midpoint displacement method is:

  1. Maintain an interval with endpoints (x0,y0) and (x1,y1).
  2. Divide the interval in half.
  3. Set xmid = (x0 + x1)/2 and ymid = (y0 + y1)/2.
  4. Choose a displacement at random from a Gaussian distribution, and add it to ymid.
  5. Apply the algorithm recursively to the left and right intervals.
An example of how midpoint displacement works.

Refs:

  1. Fournier, A., Fussell, D., Carpenter, L., “Computer rendering of stochastic models”, CACM, 25, pp.371-384 (1982).

Plankalkül – the first language (ii)

In 1976 Donald E. Knuth and Luis Trabb Pardo produced a technical report titled “The Early Development of Programming Languages” [1]. In it they surveyed the evolution of high-level programming languages from 1945 to 1958, describing each language’s principal features and comparing them using a particular algorithm which they called the “TPK algorithm”. Below is a reproduction of the program implemented in Plankalkül, with relevant components of the program highlighted. It provides one of the best (and simplest) illustrations of a Plankalkül program [1].

TPK Algorithm

Now for a brief explanation. Line 1 introduces the data type A2, which is basically an ordered pair comprised of an integer and a floating-point component. The lines in ② and ③ comprise the function f(t), while the lines in ④-⑦ comprise the main program TPK. In reality the program is only 7 lines in length, with each operation spanning several lines. Operations are performed on “Resultatwerte” (output variables), “Variablen” (input variables), and “Zwischenwerte” (intermediary variables).

Procedure P1

Lines 2-4 indicate that P1 is a procedure that takes V0 (of type A∆1, i.e. floating-point) as input and produces R0 of the same type. Lines 5-7 perform the function’s calculation.

Main Program P2

At the start of the main program P2, lines 8-10 map V0 (of type 11×A∆1) into a result R0 (of type 11×A2). This basically maps a vector of floating-point values to a vector of ordered pairs. The W2(11) on line 11 specifies a for loop that iterates from n-1 down to 0. The notation R10(x) on line 11 denotes the result R0 of applying P1 to x. Lines 15-18 basically mean:

if Z0 > 400 then R0[10-i] = (i,+∞)

Lines 19-22 are similarly defined:

if Z0 <= 400 then R0[10-i] = (i,Z0)

There was no else statement, hence the bar over “Z0 > 400”, indicating negation.

Knuth and Pardo provide a base algorithm for TPK implemented in Algol-60. I have reproduced the algorithm in Pascal for comparison to the Plankalkül program.

program tpk;

uses Math;

var i : integer;
    y : real;
    a : array[0..10] of real;

function f(t: real): real;
begin
   f := sqrt(abs(t)) + 5 * power(t,3);
end;

begin
   for i:= 0 to 10 do
      read(a[i]);
   for i := 10 downto 0 do
   begin
      y := f(a[i]);
      if (y > 400) then write(i:4,'   TOO LARGE')
      else write(i:4,'   ',y:0:2);
      writeln
   end;
end.

Refs:

  1. Knuth, D.E., Trabb Pardo, L., “The Early Development of Programming Languages”, STAN-CS-76-562, Computer Science Department, Stanford University (1976).

Plankalkül – the first language (i)

We often think of Fortran as the first “real” programming language, which may be an artifact of IBM’s control of the programming language world in the 1950s and 60s. But there are other languages that are often forgotten. One of these is Plankalkül, developed by Konrad Zuse. Plankalkül is likely the first programming language ever designed.

By early 1945 Zuse, living in the small Alpine village of Hinterstein, began work on a “universal” language which could be used to express computations. He named this language Plankalkül, which means “program calculus”. Although the work was done in 1945, it was not published until 1972. It was a comprehensive language, an extension of Hilbert’s Aussagenkalkül (propositional calculus) and Prädikatenkalkül (predicate calculus). In a 1981 interview [2], Konrad Zuse said “I saw it is not good to leave development of the computer only to mathematicians”. When asked why, he responded:

“The mathematicians make the world seem much too theoretical. For instance, in 1945, when I was in a small village after the end of the war in the Alps, I had nothing to do – surely the only thing was to survive. [It was then] that I had time to make my theoretical developments. By that I mean the first programming language for computers. This was especially organized for practice. And 10 years later, we had a big series of languages – very complicated. Even today, they are very complicated.”

Backus [1] described Plankalkül as “perhaps the most sophisticated programming language that was to emerge up to the mid 1960s. It was a real dilly. A very powerful language with data types and all kinds of things.”

Plankalkül was an incredibly complete language. The simplest data type was a single bit, which was used to build types for both integer and real numbers. In addition, Plankalkül also included both arrays and records. Interestingly, Plankalkül didn’t include a goto statement, but instead included an iterative structure similar to a for statement. It included a statement, Fin, which used a superscript to specify a jump out of a given number of nested iteration loops, or to the beginning of a new cycle of iteration. Plankalkül included a selection statement, but sans the else clause. Here are the highlights of Plankalkül:

  • Introduced the assignment operation, V1 + V2 ⇒ R1.
  • Programming “plans”, and sub-programs (procedures).
  • Programs are non-recursive functions.
  • Conditional statement, i.e. basic if.
  • Iteration, i.e. a while-like construct.
  • No recursion or goto statement.
  • One elementary datatype – the bit.
  • Data types: boolean, integer, real, complex.
  • Data structures: arrays, records, hierarchical structures, list of pairs.
  • Type of a variable is specified when used.
  • Operations of predicate logic and Boolean algebra.
  • Complicated expressions with parentheses and nesting.

The only thing that likely would have made Plankalkül hard to implement was the fact that each statement consisted of 3-4 lines of code. The first line resembles the actual equation, and is similar to what one would find in a contemporary language. The line labelled “V” is used to identify subscripts for the variables named on the top line. The “K” line is used to denote components of a variable. The bottom line of each group is labelled “A” or “S”, and is used to specify the type of each variable, e.g. 2 = type A2, 9 = integer. Something like 1.n implies an integer of n bits. Here is an example of adding 1 to the variable Z1 [6]:

  | Z + 1 ⇒ Z 
V | 1       1
S | 1.n 1.n 1.n

The operator ⇒ is used to denote assignment, while ⟶ with a ⋅ beneath it represented a conditional, or “if”. The “W” operator is used for iteration. Here is another example, illustrating A7 = 5 * B6 (the subscripts appear on the V line):

  | 5 * B ⇒ A 
V |     6   7
S |     1.n 1.n
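
For comparison, the two statements above might read as follows in a C-like notation (the declarations and array sizes here are purely illustrative; Plankalkül subscripts are rendered as array indices):

int z1;          /* Z1: an n-bit integer in Plankalkül */
int a[8], b[8];  /* the subscripted variables A and B */

void examples(void) {
    z1 = z1 + 1;      /* Z1 + 1 => Z1 */
    a[7] = 5 * b[6];  /* 5 * B6 => A7 */
}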

Zuse’s work contained programs of far greater complexity than anything which appeared prior to 1945. It included programs to sort number arrays, test the connectivity of a graph, perform integer and floating-point operations, and perform syntax analysis of logic formulas. There are also 49 pages dedicated to algorithms for playing chess. Sadly, after 1949 Zuse had little time to dedicate to the applications of Plankalkül, as he was too busy with his fledgling computer company. How would programming language design have evolved had Zuse’s work been widely known in 1945, instead of 1972?

An evaluation of Plankalkül can be found in [3] for the interested reader. Zuse’s original handwritten manuscript [4] detailing Plankalkül can be found online at the Deutsches Museum.

  1. Backus, J., “Programming in America in the 1950s – Some personal impressions”, A History of Computing in the Twentieth Century, Metropolis, N., Howlett, J., Rota G-C., (eds), pp.125-135 (1980)
  2. Schultz, B., Elmauer, E., “Konrad Zuse – An Interview with the Inventor of the World’s First Fully Functional Programmable Digital Computer”, in The History of Computing, Zientara, M. (ed.) pp.35-42 (1981)
  3. Bruines, B., “Plankalkül”, Bachelor’s Thesis, Radboud University (2010)
  4. Zuse, K., Theorie der angewandten Logistik, NL-207-0233 (1945/46)
  5. Giloi, W.K., “Konrad Zuse’s Plankalkül: The first high-level, “non von Neumann” programming language”, IEEE Annals of the History of Computing, 19(2), pp.17-24 (1997)
  6. Bauer, F.L., Wössner, H., “The ‘Plankalkül’ of Konrad Zuse: A forerunner of today’s programming languages”, CACM, 15(7), pp.678-685 (1972)

Recursion and Matryoshka dolls

One way of illustrating recursion is by using visual or tactile experiences. An example of a tactile experience is the use of Russian nesting dolls, or Matryoshka dolls: a set of wooden dolls of decreasing size placed one inside the other. The dolls were created in the 1890s by the toy workshop Children’s Education in Abramtsevo, founded by the patron of the arts Savva Mamontov.

They can be used both in the context of deriving a recursive function to count the number of nested dolls, and for illustrating the notion of a stack as it relates to recursion and unwinding. In its simplest form, a tactile recursive algorithm can be derived to find the smallest doll. If a doll can be opened, then the doll inside can either be opened or not. If it can’t be opened, it is the smallest doll. If it can be opened, the process repeats. The algorithm would look something like this:

  1. Open the doll.
  2. Can the doll inside be opened?
    • Yes. Go to Step 1.
    • No. It is the smallest doll.
Counting Matryoshka dolls

This seems somewhat simplistic, because the input to a function that does the counting is effectively the value it outputs, but it does provide a visual medium which can be used to construct the function, and removing the dolls is a recursive type of activity (although it is inherently iterative as well).

int countMatryoshka(int n){
   if (n == 1)
      return 1;                          // smallest doll: the base case
   else
      return 1 + countMatryoshka(n-1);   // count this doll, then the rest
}

Russian nesting dolls offer an interesting analogy to the stack. Starting with all the dolls separated, the bottom portion of the dolls represents a function activation, whilst the top portion represents the completed function. The first call places the bottom half of the largest doll, the next call places the bottom half of the second largest doll inside the largest doll, and so on until the limit of the dolls has been reached – the top of the stack, represented by the smallest doll. At this point the top portion is joined to the bottom portion of each doll, “completing” each function in turn.

Illustrating the stack.
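
A minimal sketch in C of this winding and unwinding (the print messages and the depth of four dolls are purely illustrative): the message printed on the way in mirrors placing a bottom half, and the message printed after each recursive call returns mirrors joining the matching top half.

#include <stdio.h>

/* Count n nested dolls, printing the stack as it winds and unwinds */
int countMatryoshka(int n) {
    printf("open doll %d (push activation)\n", n);   /* bottom half placed */
    if (n == 1) {
        printf("smallest doll (top of the stack)\n");
        return 1;
    }
    int count = 1 + countMatryoshka(n - 1);
    printf("close doll %d (function completes)\n", n);  /* top half joined */
    return count;
}

int main(void) {
    printf("total: %d dolls\n", countMatryoshka(4));
    return 0;
}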

However, some believe the Russian nesting doll analogy is somewhat flawed, because it seems to equate simpler with smaller. A recursive subroutine is described in terms of a simpler version of itself, but there is nothing to stop the progeny of a recursive procedure from being larger, e.g. the Ackermann function.

What’s wrong with CS degrees?

There are some fundamental issues with CS degrees, more so than with most other degrees offered in universities. Why? Because computer science is highly aligned with the computing industry, and most people who get a CS degree end up working in that industry. As such it is easy to poke holes in CS educational models, because industry often does not get the kind of graduate it expects. So why are CS degrees often lacking?

  • Too much theory. Theory is nice in an abstract sense, but for a career in computer science, some of the concepts taught are just too obscure.
  • The use of rare academic languages. There are a lot of institutions which teach practical languages like C and Java. Some languages, however, just aren’t practical, or are uncommon – languages like OCaml and ML. Sure, some people in industry may use them, but they are not the norm. Some people like to use obscure languages to satisfy their own tech cravings.
  • The lack of knowledge of languages. Many institutions have opted for what some might term a mono-culture – using one language throughout the curriculum, with little exposure to other languages. Sure, learning C or Java for 3-5 years makes you more proficient, but it also tends to ingrain an “X can do everything” mentality. There is also a lack of exposure to older languages like Fortran and Cobol, which are still highly relevant and bring forward a lot of industry-related issues like reengineering.
  • Many CS professors are academics. Sure they are, that’s a feature, right? Wrong. There are seemingly very few professors who have any industry-related experience. Being taught software engineering… ever wonder if the prof was actually involved in any industry-based, large-scale software project?
  • Some courses are too esoteric. Sure, it’s interesting to learn about red-black trees and B-trees, but do they have any relevance in real life? How many people in industry actually code their own data structures, versus using libraries? Compilers may have been an interesting course once, but in reality few people build them from scratch anymore. How relevant are these subjects? Many such courses appear to be pet interests of professors.
  • Academia breeds conceit. People in academia are taught very confined ways of doing things. In large part this is because there is very little in the way of out-of-the-box thinking. People tend to believe that if they have language X, it will fix any problem. They may also believe that all software should be built using OO. Heaven forbid.
  • Modern skills are ignored. It takes time to change a curriculum, and the curriculum has got to want to change. It takes a long time to add something like app design, and by the time it is added, things have often changed. That’s the problem when you’re dealing with an evolving discipline.
  • Lack of interdisciplinary knowledge. Some degrees don’t really have a lot of diversity when it comes to knowledge outside of CS. I’m not talking math or STEM, but rather business, fine arts, humanities, and social sciences. CS does not exist in isolation, so only doing STEM-based subjects leads to a somewhat biased view of the world. Many CS students could learn a lot from taking a writing class or two.
  • Academic use-by date. There are academics who have never taught a programming class, or an undergrad class. Most barely have any industry experience, and teach skills from a textbook. How can somebody who has never built a large-scale OO-based project know how well it will work?
  • Academia is not the real world. Most people with CS degrees end up working in CS, and quickly realize that the real world is very different from what they learned in academia. I have seen this with students who have done a number of coop terms – by the time they reach their final year they have a job lined up, and have checked out of courses, doing enough to get a good grade but not overextending themselves. It’s probably the reason why many good students aren’t interested in grad school. I once had a student tell me they learned more at the Apple Developer’s Conference in one week than they had in many years of education.

Quora – adventures in stupidland

I started answering questions on Quora recently. I don’t exactly know why. Some questions, regardless of the topic, are just stupid. It makes me wonder if it’s a bot writing them… or are people really *that* stupid? Either that, or they don’t know how to use Google (or any other search engine) to actually search for answers. Sometimes I feel like answering “Have you Googled that?”. Recently there was a question, “I wrote a program and it won’t compile, what do I do?”. I felt like replying that they should just forget about programming. I mean, if you write a program and get to the stage of compiling it, you must have an inkling of what is going on, right? Of course many people’s answers were quite practical, walking them through the steps. My response was likely a little harsher.

When you write a program, you must have some idea of what is going on. You must understand the basics of the language you are coding in, and how things work. If you just copy-and-pasted the code from elsewhere, I can imagine why things go wrong. It doesn’t take much for a simple typographic quote (”) to be misinterpreted in the code, and it won’t compile. It will look right, but won’t be right. When the code is compiled it will produce error messages, which are not always great but do give some indication of what is wrong. You can always Google the syntax error as well. Learning to program is a process of trial and error. You write a small program, you compile it, and continue fixing the errors until it compiles. Does it produce the right results? No – then fix the logic errors. Practice, practice, practice.

Here are some of the other questions.

  • How do I write a C program to add two floating point numbers?
  • How can I convert a for loop to a while loop?
  • How do I convert code to pseudo code?
  • How can I change the name of a variable in a loop in Python?
  • How do I program in C?
  • What is temp in C programming?

Shockingly, people actually answer these sorts of questions with the full code. I don’t know why; I guess they are trying to help. The problem is that some of these questions could easily be answered by working through a programming guide, or a coding-related website; it’s not like the internet isn’t full of them. I’m not suggesting a novice programmer read Kernighan and Ritchie’s “The C Programming Language”, but there are excellent programming tutorials on tutorialspoint.

Then there are the questions that you just have to shake your head at:

  • Why can’t I find any real photo of a compiler? Is it too small?
  • How can I write a C program that deletes itself?
  • What is the most complex C program ever written?

The longer the internet exists, the more it seems to fill up with stupidity. When I started out in computing we had the Usenet newsgroups, and people asked questions, but they rarely dared to ask simple questions that could have been looked up in a reference manual.

Peasant multiplication algorithm (ii) : Recursion

The iterative algorithm can also be converted to a recursive one. Below is a Fortran implementation of a recursive version of the algorithm, shown as the function times(m,n). Here m is the value that is halved, while n is doubled.

recursive function times(m,n) result(r)
   integer, intent(in) :: m,n
   integer :: r

   if (m .eq. 1) then
      r = n
   elseif (m > 1 .and. mod(m,2) .eq. 0) then
      r = times((m/2),(n+n))
   elseif (m > 1 .and. mod(m,2) .eq. 1) then
      r = n + times((m/2),(n+n))
   end if

end function times

The base case for the recursion is the first branch of the if statement, reached when repeated halving reduces m to 1; the recursion then terminates and backtracks. The other two branches deal with the cases where m is greater than 1, and either even or odd. When m is even, n is skipped, and the next recursive call is invoked. When m is odd, the value of n plus the value of the next recursive call is returned. For example, if times(15,37) is called, the recursive trace looks like:

times(15,37) = 37 + times(7,74)
times(7,74)  = 74 + times(3,148)
times(3,148) = 148 + times(1,296)
times(1,296) = 296

times(15,37) = 37 + 74 + 148 + 296 = 555

Here is another example for times(17,4):

times(17,4) = 4 + times(8,8)
times(8,8)  = times(4,16)
times(4,16) = times(2,32)
times(2,32) = times(1,64)
times(1,64) = 64

times(17,4) = 4 + 64 = 68
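
For comparison, here is a rough C equivalent of the Fortran function. This is a minimal sketch assuming m is at least 1; C’s truncating integer division plays the role of Fortran’s m/2.

/* Recursive peasant multiplication, mirroring times(m,n) */
int times(int m, int n) {
    if (m == 1)
        return n;                        /* base case: halving reached 1 */
    else if (m % 2 == 0)
        return times(m / 2, n + n);      /* m even: skip n */
    else
        return n + times(m / 2, n + n);  /* m odd: accumulate n */
}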


Peasant multiplication algorithm (i)

This algorithm is a method of multiplying two numbers without the use of a multiplication table. It goes by many different names: Egyptian multiplication, Ethiopian multiplication, Russian peasant multiplication, or simply peasant multiplication. It is most commonly explained in terms of Russian peasant multiplication [1,2,3,4,5,6], although it stems from ancient Egyptian arithmetic [9,10,11]. The peasant multiplication algorithm (PMA) uses very simple operations: multiplying by 2, dividing by 2, and addition. It is sometimes called the doubling and halving technique. It has been documented since at least the early 20th century, when it was used in elementary schools [7,8].

Consider the example of multiplying 147 by 17. Each of these numbers is placed in a separate column; one is chosen to be halved, the other to be doubled. The algorithm proceeds as shown in Table 1. Each number in the halving column is successively divided by 2 until a value of 1 is attained. Any remainders which occur when an odd number is halved are discarded; for example, 17/2 = 8 remainder 1. The numbers in the doubling column are simultaneously doubled.

multiplier (halved)    multiplicand (doubled)
        17                      147
         8                      294
         4                      588
         2                     1176
         1                     2352

Table 1

The product can now be calculated. First, all lines with an even value in the halving column are discarded. Then the remaining numbers in the doubling column are added together to produce the result. The answer is 2352 + 147 = 2499.

Given this equation: multiplier × multiplicand = product, the basic algorithm is:

  1. Set product to 0.
  2. If multiplier is odd, add multiplicand to product.
  3. Halve multiplier (i.e., divide it by 2) and double multiplicand (i.e., multiply it by 2). Note that integer division is used, so remainders are discarded.
  4. Repeat steps 2 and 3 while multiplier is at least 1 (note that step 2 is still performed when multiplier equals 1).
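
Steps 1-4 translate almost directly into code. Here is a minimal sketch in C (the function name and the assumption of a positive multiplier are mine):

/* Peasant multiplication, following steps 1-4 above */
int peasant(int multiplier, int multiplicand) {
    int product = 0;                      /* step 1 */
    while (multiplier >= 1) {
        if (multiplier % 2 == 1)          /* step 2: multiplier is odd */
            product = product + multiplicand;
        multiplier = multiplier / 2;      /* step 3: halve (truncating)... */
        multiplicand = multiplicand * 2;  /* ...and double */
    }
    return product;                       /* e.g. peasant(17,147) == 2499 */
}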

Below is a Fortran implementation of the multiplication algorithm which uses two arrays to store the lists, printing out each line in a table-type form.

program peasantMult

   implicit none
   integer :: m, n, p, i, j
   integer, dimension(100) :: a, b

   write(*,*) "Peasant Multiplication"
   write(*,*) "Enter two numbers to multiply: "
   read(*,*) m, n

   a(1) = m
   b(1) = n
   ! If m is even, this first row will be omitted from the sum
   if (mod(a(1),2) == 0) then
      write(*,*) a(1),' ** ', b(1), '-> omit'
   else
      write(*,*) a(1), b(1)
   end if
   i = 2
   do
      ! Perform halving (a) and doubling (b)
      a(i) = a(i-1)/2
      b(i) = 2*b(i-1)
      ! If the halved integer is even, omit it
      if (mod(a(i),2) == 0) then
         write(*,*) a(i),' ** ', b(i), '-> omit'
      else
         write(*,*) a(i), b(i)
      end if
      ! Terminate once the halved value reaches 1 (or 0, if m was 1)
      if (a(i) <= 1) then
         exit
      else
         i = i + 1
      end if
   end do
   ! Sum all values of b that have a corresponding
   ! value of a which is odd
   p = 0
   do j = 1,i
      if (mod(a(j),2) /= 0) then
         p = p + b(j)
      end if
   end do
   write(*,*) p, ' is the product'

end program peasantMult

Here is a sample run of the program:

 Peasant Multiplication
 Enter two numbers to multiply:
137 7534
         137        7534
          68  **        15068 -> omit
          34  **        30136 -> omit
          17       60272
           8  **       120544 -> omit
           4  **       241088 -> omit
           2  **       482176 -> omit
           1      964352
     1032158  is the product

Why Russian peasant multiplication? Supposedly in the 19th century peasants in some remote region of Russia were discovered multiplying numbers using this interesting method. Whether or not the story is true is uncertain, although it is conceivable that this method of multiplication was used in Russia, perhaps especially among the peasant class (the 1857 census identified peasants as composing the majority of the population, 81%).

Refs:

  1. Spickerman, W.R., “A note on the Russian peasant multiplication algorithm”, School Science and Mathematics, 71(7), pp.628-629 (1971).
  2. Gimmestad, B.J., “The Russian peasant multiplication algorithm: A generalization”, The Mathematical Gazette, 75(472), pp.169-171 (1991).
  3. Bowden, J., “The Russian peasant method of multiplication”, The Mathematics Teacher, 5(1), pp.4-8 (1912).
  4. Boykin, W.E., “The Russian-peasant algorithm: rediscovery and extension”, The Arithmetic Teacher, 20(1), pp.29-32 (1973).
  5. Seppala-Holtzman, D.N., “Ancient Egyptians and Russian peasants foretell the digital age”, Mathematics Teacher, 100(9), pp.632 (2007).
  6. Reardin, Jr., C.R., “Understanding the Russian peasant”, The Arithmetic Teacher, 20(1), pp.33-35 (1973)
  7. Lukens, H.T., “Preparation of an exercise on historical methods in arithmetic”, Francis W. Parker School Year Book, 2, pp.30-33 (1913)
  8. Karpinski, L.C., “To multiply without multiplying”, The Journal of Education, 80(15), p.411 (1914)
  9. Haines, M., “Concepts to enhance the study of multiplication”, The Arithmetic Teacher, 10(2), pp.95-97 (1963)
  10. Delaney, A.A., “An exercise in ancient Egyptian arithmetic”, The Arithmetic Teacher, 10(4), p.216 (1963)
  11. Stern, C., Stern, M.B., “Comments on ancient Egyptian multiplication”, The Arithmetic Teacher, 11(4), pp.254-257 (1964)

A recursive Bubblesort

The sorting algorithm known as Bubblesort is arguably one of the most infamous sorting algorithms, partially because it is simple, and partially because it is one of the worst-performing sorting algorithms. It works by repeatedly passing through an array, comparing adjacent items and swapping them if they are in the wrong order. Bubblesort has a worst-case complexity of O(n²). This means that if there were 100 numbers to be sorted, roughly 100² = 10,000 comparisons and swaps would be needed, so if each took 1/1000 of a second the entire sort would take about 10 seconds. Below is the code for a classic Bubblesort implemented in Fortran, together with the small swap helper both versions rely on.

subroutine bubblesort(n,A)
   integer, intent(in) :: n
   integer, dimension(n), intent(inout) :: A
   integer :: i,j

   do i = 1,n
      do j = 1,n-i
         if (a(j) > a(j+1)) then
            call swap(a(j), a(j+1))
         end if
      end do
   end do

end subroutine bubblesort

! Exchange two integers (used by both versions)
subroutine swap(x,y)
   integer, intent(inout) :: x,y
   integer :: t
   t = x
   x = y
   y = t
end subroutine swap

The way Bubblesort works, at the end of one pass through the array the largest element is guaranteed to be at the right. Therefore Bubblesort seems an appropriate candidate for recursion, because each pass through the array is essentially a sub-problem in sorting. Following the first pass, we are left with the problem of sorting the n-1 elements in positions 1 through n-1 (in the 1-based indexing used by Fortran). Below is the recursive version of Bubblesort implemented in Fortran.

recursive subroutine rbubblesort(n,A)
   integer, intent(in) :: n
   integer, dimension(n), intent(inout) :: A
   integer :: i

   ! One pass: bubble the largest element to position n
   do i = 1,n-1
      if (a(i) > a(i+1)) then
         call swap(a(i), a(i+1))
      end if
   end do
   if (n > 2) then
      call rbubblesort(n-1,A)
   end if

end subroutine rbubblesort

Are computer science degrees archaic?

The Vancouver Sun ran an article yesterday about the state of technology workers in Canada, i.e. there aren’t enough of them. Part of the problem? It takes people far too long to get a degree and out into the workplace. A traditional degree is four years, with coop degrees adding another year. Five years! In a technology field, five years is too long to spend in an institution. One could also question the value of some of the things learned in school. If you don’t have a coop job, or related summer work experience, or have never created an app, one could argue that a degree doesn’t make you very employable, and that’s the problem. It is possible that the field of computer science is at a crossroads: offer alternatives to the traditional path of education, or have industry do it. If industry were to offer an apprenticeship-type model of employing and educating people, there is no doubt they could take a bite out of academia… and even without a degree those people will have successful careers. Then some company may partner with a university to offer experiential degrees.

One quote from the article stands out to me:

I quite frankly don’t care if they went to school or where they went to school.

Greg Smith, CEO of Thinkific Labs Inc

It may sum up the opinion of many in the computing industry. If someone is a self-learner and has built a portfolio of accomplishments, they obviously have enough gumption for a career in CS. Companies want to hire people who can produce results, not necessarily those who have spent five years sheltered in academia (this does not include coop students). Company-based experiential learning may also help reverse the trend of CS being dominated by white males. Look on any campus, and CS is likely one of the few programs that lacks real diversity, both in race and gender. Shockingly few universities have really spent any time trying to make CS more equitable. Universities are often blindsided by the allure of expanding graduate programs, often to teach those from other disciplines ancillary computing skills. Unfortunately this often comes at the expense of undergraduate programs, which fill up regardless of who is in them. Universities haven’t done a good enough job, or evolved to consider new ways of educating people.

Are traditional computer science degrees doomed? Likely. Why? Because computing is one of the few industries where there just aren’t enough people to fill all the jobs, a situation that will eventually be rectified by companies. Don’t get me wrong, there will always be a place for academia, but in the future it likely won’t be the driver in providing tech workers. There is still time to change course, to pivot to a new model of education: perhaps one dominated by coop or experiential learning interspersed with seminars, or a return to 3-year degrees targeted at specific sub-domains of computing. The reality is that you don’t need a 5-year degree to be successful in computing.