# Timing a Fortran function

How to time a Fortran function? Here’s a Fortran program which times an execution of Ackermann’s function. It uses two calls to Fortran’s cpu_time() subroutine, and the result is in seconds.

Easy right? (Way easier than C).

```
program runtime
implicit none

integer :: m, n, ans
real :: startT, endT, execTime

write(*,*) 'Recursive Ackermann in Fortran'
write(*,*) 'Enter First Value (m) and Second Value (n)'
read(*,*) m, n

call cpu_time(startT)
ans = ackermann(m,n)
call cpu_time(endT)
execTime = (endT - startT)

write(*,*) 'The value is: ', ans
write(*,*) 'Elapsed time in seconds: ', execTime

contains

recursive function ackermann(m,n) result(a)

integer, intent(in) :: m, n
integer :: a

if (m==0) then
a = n + 1
else if (n==0) then
a = ackermann(m-1, 1)
else
a = ackermann(m-1, ackermann(m, n-1))
end if

end function ackermann

end program runtime
```

Here’s a sample output:

```
Recursive Ackermann in Fortran
Enter First Value (m) and Second Value (n)
3 2
The value is: 29
Elapsed time in seconds: 8.00006092E-06
```

Here’s another run:

```
Recursive Ackermann in Fortran
Enter First Value (m) and Second Value (n)
4 1
The value is: 65533
Elapsed time in seconds: 20.9853764
```
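For comparison, the same experiment is just as easy to set up in other languages; here is a rough Python sketch using time.perf_counter() (input handling omitted, values hard-coded for illustration):

```python
import time

def ackermann(m, n):
    # Classic two-argument Ackermann function
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

start = time.perf_counter()
ans = ackermann(3, 2)
elapsed = time.perf_counter() - start

print("The value is:", ans)
print("Elapsed time in seconds:", elapsed)
```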

# Julia and nested functions

A while back I discussed nested functions as they relate to Pascal. C never really took to having nested functions… but Julia does. Why are they important? They let me modularize code while at the same time hiding its implementation. Most people don’t know much about nested functions because they largely disappeared with C, and have only made a reappearance in languages like Julia.

There are two main reasons for nesting functions. Firstly, they allow for functional decomposition without over-polluting namespaces: a single outwardly visible function can be defined that implements some algorithm, but relies on internally defined functions to decompose the algorithm into smaller parts. Secondly, they simplify parameter passing. A nested function has access to all the parameters and all of the variables in the scope of the wrapper function, so the calling function does not have to explicitly pass a heap of local state into the nested function. Makes sense right?

And it’s easy to do in Julia. Consider a pair of functions factorial() and fact(), where fact() is nested within factorial() (this is the same structure that I discussed previously using Pascal).

```
function factorial(x)

function fact()
if (n == 0)
return 1
else
n = n - 1
return (n+1) * fact()
end
end

n = x
return fact()
end

f = factorial(5)
println("factorial = ", f)
```
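For comparison, Python supports the same nesting, though the `nonlocal` keyword is needed so the inner function can rebind a variable of the enclosing scope. A sketch of the same structure:

```python
def factorial(x):
    n = x

    def fact():
        nonlocal n              # rebind n in the enclosing factorial() scope
        if n == 0:
            return 1
        n = n - 1
        return (n + 1) * fact()

    return fact()

print("factorial =", factorial(5))
```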

# Dynamic strings in Fortran

So dynamic strings in Fortran are a little different. Don’t forget, there are character strings and character arrays. They are different, and so have to be treated differently when it comes to dynamic memory, and dynamic strings can be tricky. Issue #1: dynamic strings cannot be read directly using read, so a buffer is needed.

```
character (len=256) :: buffer
character (len=:), allocatable :: str
```

The first declaration creates a buffer string with a max of 256 characters. The second declaration creates an allocatable dynamic string. Now we can read an ASCII string into buffer in the following manner:

`read(*,'(A)') buffer`

Now we can assign this to the dynamic string, with the system automatically reserving memory internally. Two string handling functions are used to process the string: adjustl() left-justifies the text, removing leading spaces, and trim() removes trailing spaces.

`str = trim(adjustl(buffer))`

Now to add to that string is a case of using the Fortran concatenation operator: //. Concatenation is performed by placing one string after another. Here’s an example:

`str = str // " " // trim(adjustl(buffer))`

This also adds a space between the strings. Note that deferred-length strings were introduced in Fortran 2003, so compile using gfortran -std=f2003 (or later).
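For readers more familiar with dynamic strings elsewhere, the same buffer-trim-concatenate flow can be sketched in Python (the buffer contents here are illustrative, not part of the Fortran program):

```python
# Simulate a fixed-width buffer, as read(*,'(A)') would fill it
buffer = "  hello" + " " * 17
s = buffer.strip()              # plays the role of trim(adjustl(buffer))

buffer = "  world   "
s = s + " " + buffer.strip()    # plays the role of str // " " // trim(adjustl(buffer))
print(s)
```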

# How does one time a function in Ada?

Below is a simple program which tests the run time of a recursive Fibonacci function. It uses the Ada package Ada.Calendar, which is able to return time stamps. The variables startTime and endTime are capable of holding a Time value. A call to Clock reads the machine’s internal clock and stores the current time of day in the time variable. The difference is stored in a variable of type Duration, expressed in seconds.

Note also the put_line statements which make use of the Image attribute – X’Image(Y) is an Ada attribute where X is any discrete type and Y is an instance of that type. This attribute returns a string representation of the value passed as input.

And if you want to time another language check out the Wikipedia site on system timing.

```
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
with Ada.Calendar; use Ada.Calendar;

procedure testFibonacci is

function fibonacci(n : in integer) return integer is
begin
if n = 0 then
return 0;
elsif n = 1 then
return 1;
else
return fibonacci(n-1) + fibonacci(n-2);
end if;
end fibonacci;

n : integer;
startTime, endTime : Time;
milliS : Duration;
result : integer;

begin
put_line("Fibonacci number: ");
get(n);
startTime := Clock;
result := fibonacci(n);
endTime := Clock;
milliS := (endTime - startTime) * 1000;

put_line("Result: " & Integer'Image(result));
put_line("Runtime = " & Duration'Image(milliS) & " milliseconds.");

end testFibonacci;
```

# Tales of cryptic Cobol: strings

One of the more interesting things about Cobol is the way it deals with strings. Consider the following declaration in Cobol:

```
01 input-area.
02 line1 pic x occurs 80 times.
```

If somewhere in the code, there is a motion to read a file:

```
read input-file into input-area
at end move zero to eof-switch
```

This will read a line of the file into input-area, which is technically a string. However you can’t access it element-wise; for that you need its sub-variable line1. And you can’t use line1 in functions that want to evaluate the whole string; for that you need input-area. Maybe the best of both worlds? This means that given the following declarations:

```
77 countTrailspc   pic 99.
77 linelength      pic 99.
```

We can build a paragraph that counts the length of a string, after stripping off trailing spaces. Something of the form:

```
line-length.
move 0 to countTrailspc.
inspect function reverse(input-area) tallying countTrailspc for leading space.
subtract countTrailspc from length of input-area giving linelength.
```

At the end the variable linelength will contain the length of the line.
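The same trailing-space count is a one-liner in most modern languages; here is a Python sketch of what the paragraph above computes:

```python
# An 80-column record, padded with trailing spaces as Cobol stores it
input_area = "HELLO WORLD".ljust(80)

# Leading spaces of the reversed string = trailing spaces of the original
count_trail_spc = len(input_area) - len(input_area.rstrip(" "))
line_length = len(input_area) - count_trail_spc
print(line_length)
```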

# Plotting image histograms with Julia

So, I have spent the morning working on some histogram plotting routines in Julia – useful if you want to visualize an image histogram, or save it to a file for later use. Now Julia does not come with any native plotting libraries, but there are the likes of Gadfly and PyPlot. For today’s activity I chose PyPlot, which effectively uses the Julia PyCall package to call Python’s matplotlib directly from Julia (it’s not all a bed of roses though – you do have to make some syntax changes).

Firstly I wanted a function to derive a simple histogram, in the form of a curve. The function plotHist() takes a (grayscale) image histogram vector as input, and plots the visual histogram to the file histogram.png. This function can also optionally display the plot in a window.

```
function plotHist(hst, show=false)

PyPlot.plot(hst)
PyPlot.xlabel("Intensities")
PyPlot.xlim([0, 256])
PyPlot.yticks([])
if (show)
PyPlot.show()
end
PyPlot.savefig("histogram.png")
end
```

Here is an example of the output produced:

Next, a function plotRGBhist() to plot all three components of a colour RGB image in one histogram. This function takes a colour image as input, splits off each of the components, finds the individual histograms (using getIMGhist()), and then plots them by overlaying them, saving the resulting picture of the histogram in histogramRGB.png.
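The helper getIMGhist() itself isn’t listed in this post; roughly, such a routine just counts how many pixels fall into each of the 256 intensity bins. As an illustration of the idea (the name and behaviour here are my assumptions, written in Python/NumPy rather than Julia):

```python
import numpy as np

def get_img_hist(img):
    """Count how many pixels fall into each of the 256 intensity bins."""
    hist = np.zeros(256, dtype=int)
    for v in img.ravel():
        hist[v] += 1
    return hist

img = np.array([[0, 255], [255, 128]])   # a tiny fake grayscale image
hist = get_img_hist(img)
```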

```
function plotRGBhist(img)

# Convert image to vector
dx, dy, dz = size(img)
imgR = img[:,:,1]
imgG = img[:,:,2]
imgB = img[:,:,3]

hstR = getIMGhist(imgR)
hstG = getIMGhist(imgG)
hstB = getIMGhist(imgB)

PyPlot.plot(hstR, color="red", linewidth=2.0, label="Red")
PyPlot.plot(hstG, color="green", linewidth=2.0, label="Green")
PyPlot.plot(hstB, color="blue", linewidth=2.0, label="Blue")

PyPlot.xlabel("Intensities")
PyPlot.xlim([0, 256])
PyPlot.yticks([])
PyPlot.legend(fontsize=10)
PyPlot.savefig("histogramRGB.png")

end
```

Here is a sample output:

Lastly, a function which takes an alternative approach to plotting the histogram of a grayscale image. In the function plotIMGhist(), a grayscale image is input, and the resulting histogram is in the form of a solid histogram, using vertical bars.

```
function plotIMGhist(img)

# Convert image to vector
dx, dy = size(img)
imgV = vec(reshape(img,1,dx*dy))

common_params = Dict( :bins => 256, :range => (0, 255),
:normed => false, :color => "black")
PyPlot.yticks([])
PyPlot.xlabel("Intensities")
PyPlot.plt[:hist](imgV; common_params... );
PyPlot.savefig("histogramIMG.png")
end
```

The image is first converted to a vector, and then passed to the matplotlib function hist(). Note that due to conflicts with Julia’s built-in function hist(), in order to use matplotlib.pyplot.hist, one has to use PyPlot.plt[:hist]. Also notice the use of the Dict expression for the parameters to hist.

Here is a sample:

# The usability of washing machine labels

Sometimes when we travel, there are labels on appliances that seem both familiar and weird to us. As is the case with this washing machine that has instructions in both Danish and Swedish. The first weird word is likely “Slut”, which is generically Scandinavian for stop, or end. In this case, the Danish “Stopp” is more recognizable. Conversely, there is also a setting for Jeans (Swedish), that is labelled “Cowboytøj” in Danish, which literally translates to “cowboy clothes”. Not that weird considering when they were introduced in Denmark in the 1950s they were known as “cowboybukser”, or cowboy trousers. There are of course words that are easy to decipher: Silke (silk), Automatic, and Express. Could we create a series of universally recognizable symbols for use instead of words?

# UPPERCASE to lowercase with Cobol (oh yeah!)

What you say, why? Why not! It’s a good example of how easy it is to do this sort of processing in Cobol. This program prompts for the name of an ASCII file (in uppercase) to be converted, and the name of the converted file. It then checks to see if the file exists, and if not, exits the program. If the uppercase file does exist, it converts all UPPERCASE characters found to lowercase, line-by-line, using inspect, and then writes them to the output file.

```
*> Program to convert a program from UPPERCASE to lowercase

identification division.
program-id. conv2lower.

environment division.
input-output section.
file-control.
select input-file assign to dynamic fname-uc
organization is line sequential
file status is upperFS.
select output-file assign to dynamic fname-lc
organization is line sequential
file status is lowerFS.

data division.
file section.
fd input-file.
01 sample-input pic x(80).
fd output-file.
01 output-line pic x(80).

working-storage section.
77 fname-uc      pic x(30).
77 fname-lc      pic x(30).
77 eof-switch    pic 9 value 1.
77 upperFS       pic xx.
77 lowerFS       pic xx.

01 input-area.
02 line1      pic x occurs 80 times.
01 output-area.
02 out-line   pic x(80).

procedure division.
display "Input uppercase filename? ".
accept fname-uc.

open input input-file.
if upperFS not = "00" then
if upperFS = "35" then
stop run
end-if
end-if.

display "Output lowercase filename? ".
accept fname-lc.

open output output-file.

read input-file into input-area
at end move zero to eof-switch
end-read.

perform proc-body
until eof-switch is equal to zero.

close input-file.
close output-file.

stop run.

proc-body.
inspect input-area converting "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
to "abcdefghijklmnopqrstuvwxyz".

move input-area to out-line.
write output-line from output-area.

read input-file into input-area
at end move zero to eof-switch
end-read.
```
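For comparison, in a language with built-in string methods the same conversion that inspect … converting performs is a one-liner; a Python sketch:

```python
import string

# Equivalent of: inspect input-area converting "A...Z" to "a...z"
table = str.maketrans(string.ascii_uppercase, string.ascii_lowercase)

line = "HELLO WORLD 123"
converted = line.translate(table)
print(converted)
```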

# Binary images: dark objects on a light background?

So typically one would think that when creating a binary image through some process, for example thresholding, the object should be represented by white pixels, and the background by black pixels. But as it happens, we more often consider the object as being black on a white background. Why is this? Is it because we are used to reading black text on a white page? Is there some scientific explanation?

Most studies have shown that dark characters on a light background are superior to light characters on a dark background (when the refresh rate is fairly high). For example, Bauer and Cavonius (1980) found that participants were 26% more accurate in reading text when they read it with dark characters on a light background. Could the same be true for binary images? From the algorithm’s perspective it doesn’t matter, because an algorithm doesn’t need to visualize the image. From a human perspective, however, dark objects look better on a light background.

Due to certain asymmetries in the human visual processing system, dark text on lighter background (“negative contrast”) also offers higher contrast sensitivity than light on dark (“positive contrast”). The disruptive tendency for white letters to spread out or “bleed” over a black background is called irradiation.

Reference: Bauer, D., & Cavonius, C. R. (1980). Improving the legibility of visual display units through contrast reversal. In E. Grandjean & E. Vigliani (Eds.), Ergonomic Aspects of Visual Display Terminals (pp. 137-142). London: Taylor & Francis.

# How long should an identifier be?

One thing I still see a lot of in student programs is long identifiers. I mean really long. Early programs in languages such as Fortran often used simple 1-2 character identifiers. This was likely the case because memory was limited, so large descriptive names would take up more space. After this, many programming languages limited the length of identifiers to something like 32 characters. Well technically they didn’t limit them, but only the first 32 characters were significant (i.e. the rest were ignored). The past 20-30 years have seen a steady increase in the allowable length of identifiers.

Now many languages allow identifiers of unlimited length. That’s a problem. Consider creating an identifier for a variable that stores the “number of sentences” counted in a text analysis program. Here are some options:

```
number_of_sentences
numberofsentences
NumberOfSentences
nosentences
no_sentences
numSentences
numSntncs
nSentence
nSent
...
```

The possibilities seem endless don’t they?

While longer identifiers offer better comprehension, they also can become so long that they impede the readability of a program, and impact human memory resources. Imagine some extreme identifier such as  total_number_of_characters_in_a_word.

Part of creating a readable program involves the proper choice of identifiers, and their length does play a role in readability. Excess identifier length can impede both reading speed and program comprehension. Which of the following pieces of code to calculate compound interest has better readability?

```
A = P*(1+(r/n))^(n*t)

amount = principalAmount * (1+(annualInterestRate/compoundedTimesPerYear))^(compoundedTimesPerYear*numberOfYears)

amount = prinAmnt * (1 + (intRate/timesPY))^(timesPY*nYears)
```

So identifier length has to be a balance between being informative and being readable.

A study published by New et al. [1] looked at the effect of word length on visual word recognition (and there are a *lot* of published works on word length and what they term inhibitory trends). In their study they found average reaction times were lowest when words had a length of 7 characters, and the graph of reaction time for words of 3-13 characters had a U shape. Performance flagged for both uber-short and long words. Why? One of their conclusions is that words with a length of 6-9 characters have the highest chance of being processed with a single “fixation” (where your eye comes to rest when you read). Shorter words are skipped, and longer words require more than one “fixation”.

What does this mean? Choose your identifiers carefully.

[1] New, B., Ferrand, L., Pallier, C., Brysbaert, M., “Reexamining the word length effect in visual word recognition: New evidence from the English Lexicon Project”, Psychonomic Bulletin & Review, 13(1), pp.45-52 (2006).