# A survey for computer science students

Dr. Judi McCuaig invites you to participate in a research survey about the workload of undergraduate students at the University of Guelph. The results from this survey will help researchers understand the responsibilities and time commitments faced by students. That understanding can be used to inform the design of courses and programs to help ensure students can be successful.

If you choose to complete the survey, you will be asked to answer some questions about your experiences with the workload of undergraduate students. You can skip any questions you would rather not answer. Please be assured that your identity will be kept completely confidential. The anonymized results of this survey may be used in proposals for research funding and in research publications.

The study should take you 10 minutes or less to complete. Your participation in this research is voluntary and you have the right to withdraw at any point during the study, for any reason, and without any prejudice. If you would like to contact the Principal Investigator to discuss this research, please e-mail Dr. Judi McCuaig <judi@uoguelph.ca>.

The survey can be found here: https://uoguelph.eu.qualtrics.com/jfe/form/SV_736asdXcy9h1xA1


# The problem with Pi (ii)

To print out more precise numbers, we need another approach. One such approach was offered by Rabinowitz and Wagon, who described a "spigot" algorithm for π [1]. This algorithm produces successive digits of π one at a time, using only bounded integer arithmetic.

Here is the Pascal program from the original paper:

```
program pi_spigot;
const
  n = 1000;
  len = 10*n div 3;
var
  i, j, k, q, x, nines, predigit : integer;
  a : array[1..len] of longint;
begin
  for j := 1 to len do a[j] := 2;
  nines := 0;
  predigit := 0;
  for j := 1 to n do
  begin
    q := 0;
    for i := len downto 1 do
    begin
      x := 10*a[i] + q*i;
      a[i] := x mod (2*i-1);
      q := x div (2*i-1);
    end;
    a[1] := q mod 10;
    q := q div 10;
    if q = 9 then
      nines := nines + 1
    else if q = 10 then
    begin
      write(predigit+1);
      for k := 1 to nines do write(0);
      predigit := 0;
      nines := 0;
    end
    else
    begin
      write(predigit);
      predigit := q;
      if nines <> 0 then
      begin
        for k := 1 to nines do write(9);
        nines := 0;
      end
    end
  end;
  writeln(predigit);
end.
```

The reader is left to browse the original paper for algorithm details. When running the code (I used the Free Pascal Compiler), it gives the following output:

03141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923459099207118856806516745194252854340964271825877594867237227818052636476240001253966769588037865014608282022810293731532093125719608182508805545474738567199841858637460939653266515205973230200470327367748014549418069976867613243271975438847522856914469354541235658607836752109748167637744042444357591344102954987267348532155345226074874607261272294756681440117171146164262329464280245194165531036682264959997624747949649719425809626723943536390912536760246664166117527686634968595119533790744923019073345333321686533168484290724097901531077274052175927055788110268222469183375946146833893295768948223454910618813165693638675020776147309271039650546608291414446931319129743860775232869818000001373465107828774083538328579350606183962012466

Only the first 265 digits are correct. This is probably due to overflow, the integer datatypes being too small to hold the intermediate values. Converting the program to C, it works as it should:

```
#include <stdio.h>
#include <stdlib.h>

void spigot(void);

int main(void) {
    spigot();
    return 0;
}

void spigot(void) {
    int i, j, k, q, x;
    int len, nines = 0, predigit = 0;
    int N = 1000;

    len = (10 * N / 3) + 1;
    int a[len];

    // Initialize a to (2,2,2,...,2)
    for (i = 0; i < len; i = i + 1)
        a[i] = 2;

    // Repeat N times, producing one digit per pass
    for (j = 1; j <= N; j = j + 1) {
        q = 0;
        for (i = len; i > 0; i = i - 1) {
            x = 10 * a[i-1] + q * i;
            a[i-1] = x % (2*i - 1);
            q = x / (2*i - 1);
        }
        a[0] = q % 10;
        q = q / 10;
        if (q == 9)
            nines = nines + 1;
        else if (q == 10) {
            printf("%d", predigit + 1);
            for (k = 0; k < nines; k = k + 1)
                printf("%d", 0);
            predigit = 0;
            nines = 0;
        }
        else {
            printf("%d", predigit);
            predigit = q;
            if (nines != 0) {
                for (k = 0; k < nines; k = k + 1)
                    printf("%d", 9);
                nines = 0;
            }
        }
    }
    printf("%d\n", predigit);
}
```

[1] Rabinowitz, S., Wagon, S., "A spigot algorithm for the digits of π", American Mathematical Monthly, 102(3), pp.195–203 (1995).

# Parallelism in Julia (ii) – image processing

How can we extend parallel programming to something more practical? What about some image processing, where the operations on each pixel are independent? For example, applying some form of LUT (look-up table) to each pixel, or even performing some neighbourhood filtering. Let's consider the task of median filtering of a grayscale image. Here is a serial function to perform the task:

```
function serial_median(img)
    dx, dy = size(img)
    imgOut = zeros(dx, dy)
    for i = 2:dx-1, j = 2:dy-1
        block = img[i-1:i+1, j-1:j+1]
        newPixel = median(block)
        imgOut[i,j] = trunc(UInt8, newPixel)
    end
    return imgOut
end
```

Nothing inherently special about this. Here is the parallel version which uses parallel loops:

```
function parallel_median(img)
    dx, dy = size(img)
    imgOut = zeros(dx, dy)
    @parallel for i = 2:dx-1
        for j = 2:dy-1
            block = img[i-1:i+1, j-1:j+1]
            newPixel = median(block)
            imgOut[i,j] = trunc(UInt8, newPixel)
        end
    end
    return imgOut
end
```

This effectively parallelizes the work over the rows of the image.

Now let's test these algorithms. First let's apply them to a small 1007×788 image (793,516 pixels). Using the @time macro, it is possible to time each of the functions.

Parallel:  1.223862 seconds (2.15 M allocations: 101.490 MB, 5.29% gc time)
Serial:  0.262374 seconds (3.17 M allocations: 223.417 MB, 6.80% gc time)

In this first case the parallel version actually took longer than the serial version. Why? Largely because the dataset was too small for the parallelization to pay off: the overhead of distributing the work outweighs the savings.

Now let's try the algorithms on a larger dataset, an image that is 6640×2144 (14,236,160 pixels). Now the serial version is almost 4 times slower than the parallel version.

Parallel:  1.557181 seconds (2.15 M allocations: 203.976 MB)
Serial:  5.999110 seconds (56.88 M allocations: 3.920 GB, 6.64% gc time)

A further example parallelizes both loops in the nested loop:

```
function parallelNmedian(img)
    dx, dy = size(img)
    imgOut = zeros(dx, dy)
    @parallel for i = 2:dx-1
        @parallel for j = 2:dy-1
            block = img[i-1:i+1, j-1:j+1]
            newPixel = median(block)
            imgOut[i,j] = trunc(UInt8, newPixel)
        end
    end
    return imgOut
end
```

It leads to a slight decrease in runtime for the parallel algorithm, at 1.49 seconds.

# The problem with Pi (i)

Well, it's not specifically a problem with π. But if we want to calculate π to a bunch of decimal places and print them out, most programming languages make it challenging. That is partially due to the limitations of data types. Take for example the recursive relation of the French mathematician Viète (1540-1603).
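The relation in question is Viète's infinite product, built by repeatedly nesting square roots (each factor rₙ/2 comes from rₙ = √(2 + rₙ₋₁), starting with r₁ = √2):

2/π = (√2)/2 · (√(2+√2))/2 · (√(2+√(2+√2)))/2 · …

So π can be approximated as 2 divided by the product of the first n factors, which is exactly what the code below computes.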

It’s easy to use this in a recursive function to calculate π to a large number of decimal places.

If we look at a C program to calculate this using a double, we end up with a result of the form:

```
Calc π   = 3.1415926535897940041763832
Actual π = 3.1415926535897932384626433832795
```

Not ideal, but it is calculated to the precision limit of a double.

First let’s look at the iterative version in C:

```
#include <stdio.h>
#include <math.h>

int main(void)
{
    double prod, r;
    int i;
    prod = 1.0;
    r = 0.0;

    for (i = 1; i <= 10; i = i + 1) {
        r = sqrt(2.0 + r);
        prod = prod * (0.5 * r);
    }

    printf("pi is approximately %.25lf\n", 2.0 / prod);

    return 0;
}
```

It’s easy to use this in a recursive function to calculate π as well:

```
#include <stdio.h>
#include <math.h>

double recursivePI(int n, double r)
{
    double ra;

    if (n == 1)
        return sqrt(2.0 + r) * 0.5;
    else {
        ra = sqrt(2.0 + r);
        return (ra * 0.5) * recursivePI(n-1, ra);
    }
}

int main(void)
{
    double prod, r;
    r = 0.0;

    prod = recursivePI(50, r);
    printf("pi is approximately %.25lf\n", 2.0 / prod);

    return 0;
}
```

# Building a visualization of a sunflower spiral (ii)

So, Processing seems to be the most relevant language to code the sunflower spiral in. The calculation uses the Golden Angle (137.5°), which can be calculated in radians as follows:

`angle = PI * (3.0 - sqrt(5.0))`

Here is the code in Processing:

```
public void setup() {
  size(600, 600);
}

public void draw() {
  background(255);
  translate(width*0.5, height*0.5);
  stroke(0);
  float angle = PI*(3.0-sqrt(5.0)); // 137.5 degrees, in radians
  float r, theta, x, y;
  int n = 250;

  for (int i = 1; i < n; i = i+1) {
    r = i;
    theta = i * angle;
    // Convert polar to cartesian
    x = r * cos(theta);
    y = r * sin(theta);
    ellipse(x, y, 5, 5);
  }
  noLoop();
}
```

This code draws 250 points in the spiral, performing the calculations in polar coordinates and then converting them before drawing each point with the ellipse() function as a 5×5 circle. Instead of calculating the radius as r = c√i, we chose to use r = i, as this provides a better visualization (technically an Archimedean spiral, r(k) = ck, with c = 1). This just provides a representation of what a sunflower spiral would look like.

The thing I especially like about Processing is that by adding this line of code at the end (after noLoop()), I can save the visualization to file without any hassle:

```
saveFrame("sunflower250.png");
```

Here is the spiral:

If we modify the radius calculation to r = 5*sqrt(i), we get a much tighter configuration, and something which better represents a sunflower. The result is the canonical Vogel spiral.

Changing the configuration to n=500 points gives:

It is of course easy to experiment with different spirals, just by changing the equation for r. For example, r = pow(i, 0.8) gives a form of parabolic spiral.

# Building a visualization of a sunflower spiral (i)

Sometimes you want to build a picture to illustrate something, but not necessarily draw it by hand. The answer is to write an algorithm to do it, and code it in some visualization-friendly language… and there aren't that many, really. But Processing works quite well. Imagine we wanted to draw a sunflower spiral. What is a sunflower spiral, you may ask? Okay, some background first.

Sunflower heads are interesting because of Fibonacci sequences, which appear quite commonly in nature (usually as two consecutive Fibonacci numbers), especially in plant life. Examples include the arrangement of leaves on a stem, the fruitlets of a pineapple, the scales on a pine cone, artichoke flowers, fern fronds, and plant seed heads. This is an aspect of plant form known as phyllotaxis, the botanical study of the arrangement of phylla (leaves, petals, seeds, etc.) on plants.

The seeds on a seed head are often distributed in two distinct sets of spirals which radiate from the centre of the head to the outermost edge in clockwise and counterclockwise directions. The spirals are logarithmic in character, and the numbers exhibited by the double set of spirals are intimately bound up with Fibonacci numbers. It appears that the reason for this formation is to allow the seed head to pack the maximum number of seeds into the given area. The most perfect example of phyllotaxis is afforded by the common sunflower (Helianthus annuus L.) [1]. A sunflower head approximately 3-5 inches across the disk will have exactly 34 long and 55 short curves. A larger head 5-6 inches in diameter will show 55 long curves crossing 89 shorter ones.

Fig 1: Sunflower head showing seeds [2]

Consider the image shown in Fig. 1, depicting a sunflower head. The shallower spiral of this sunflower repeats 34 times, whilst the steeper spiral repeats 55 times.

So to draw this we have to produce a spiral, which is sometimes called a Fibonacci spiral. A very simple model of the patterns in a sunflower seed head was developed by Helmut Vogel in 1979 [3]. He calculated the position of seed k in the head using polar coordinates (r, θ):

r(k) = c√k

θ(k) = kδ

where δ is the golden angle, 137.5°, and c is a constant scaling factor (some algorithms replace c√k with just k). As the value of k increases, the position rotates through the golden angle, and the radius increases as the square root of k. This allows us to draw a sunflower head. The spiral is essentially identical to the orbit of a "nonrelativistic charged particle in a cyclotron" [3].

[1] Sutton, C., "Sunflower spirals obey laws of mathematics", New Scientist, 18 April 1992, p.16.

[2] Church, A.H., On the Relation of Phyllotaxis to Mechanical Laws, Williams and Norgate, London, Plate VII (1904).

[3] Vogel, H., “A better way to construct the sunflower head”, Mathematical Biosciences 44, pp.179–189 (1979).

# Using the “Processing” language

Today I stumbled onto a language called Processing. It seems to be an OO language similar to Java in construct (but with a simplified syntax), coupled with an IDE, built for media art and visual design. It appeared in 2001, so it has been around for a while, just flying under the radar I imagine. I downloaded the OSX IDE and it ran without fuss, which was nice. It took a couple of hours to get the hang of how it works, but there seems to be a good amount of functionality. I found Processing because I was looking for a language to implement visual recursion algorithms, such as Hilbert curves. Because I do like Julia, I first tried to install Luxor, a Julia package that does vector drawing, but the install was *horrible*, and it still didn't run (this is always my fear with packages: they become so overwhelming that I dread installing them). I've never liked Java, but I don't mind the language structure of Processing. Below is a program to generate a Hilbert curve.

```
float turtleangle = 0;
float cx;
float cy;
float length;

void setup() {
  size(400, 400);
  cx = width/2;
  cy = height/2;
}

void draw() {
  length = 10;
  background(255);
  stroke(0);
  hilbert(4, 90);
  noLoop();
}

void hilbert(int level, float angle) {
  if (level == 0)
    return;
  rotation(angle);
  hilbert(level-1, -angle);
  forward(length);
  rotation(-angle);
  hilbert(level-1, angle);
  forward(length);
  hilbert(level-1, angle);
  rotation(-angle);
  forward(length);
  hilbert(level-1, -angle);
  rotation(angle);
}

void forward(float amount) {
  float newX = cx + cos(radians(turtleangle)) * amount;
  float newY = cy + sin(radians(turtleangle)) * amount;

  line(cx, cy, newX, newY);
  fill(0);
  cx = newX;
  cy = newY;
}

void rotation(float degrees) {
  turtleangle = turtleangle + degrees;
}
```

The system works by calling the standard setup() function (which sets up the drawing board, in this case a 400×400 board) and then draw(). The function hilbert() generates the curve, using forward() to move a given distance from the current point and rotation() to turn by a given angle. The only real problem I had was realizing that draw() needs a call to noLoop(); otherwise it keeps running in a loop. Here is the output:

Overall, the experience of programming in Processing was quite good. The only drawback is that while turtle graphics makes this sort of drawing super easy, Processing requires you to write your own functions for the movement operations. Next I'm going to try and build something a little more complicated: a sunflower spiral.

P.S. I don't really like the name. I *get* where they were going, but the term processing is far too generic to use as a programming language name (it also makes things hard to Google).

# TTC digital transfers – Is there something wrong with their Presto algorithm?

So I travel quite a bit using my Presto card, on a number of different transit systems, so I have a good understanding of both the usability of the card reader devices and how things work. There is no doubt that the TTC Presto system is inherently more complex than many of the others, because there are trains, streetcars and buses to take into consideration. And transfers – and here is where I think things don't work so well (and possibly why the TTC is losing money).

Now I used all three methods of transport on this journey, and the printout below from my Presto account shows the transactions. Before I continue with the discussion, here are two pertinent pieces of information:

1. The TTC website says you may not take “stop-overs”, i.e. “you may not spend time shopping or doing anything before you get on the next vehicle”.
2. The TTC board approved 2-hour transfers last year, but they have not been implemented.

Now let’s analyze this:

1. It says I got on at Finch Station at 12.59pm, and paid \$3. But I got on the No.11 bus near where I live, 12 km from Finch Station. (A problem, but not the focus here.)
2. At 1.22pm I transferred in Spadina Station to a streetcar (No.510). Transfer = \$0 (all good).
3. About 15 minutes later I got off the streetcar near King st.
4. Then at 2.06pm it says I transferred at location 15649 (King St West at Portland St East Side). Transfer = \$0.

If you get on the 510 streetcar at Spadina, it goes nowhere near stop No.15649 (they are 400 m apart), so there is no way you can transfer between them.

So the problem here is that I spent nearly 30 minutes doing something, then got back on a streetcar that was not on a linear path from the journey at (2). That also contradicts the "you may not spend time doing anything" mantra. So the TTC loses a fare. Now at 3.28pm I got on at King Station and paid the \$3 fare. Then at about 3.44pm I got off at Davisville Station, got on again (at the first stop across the road from the station) at 4.44pm, and the same thing happened… an hour later (this time on a linear journey), I got a transfer, not a \$3 fare.

End result? I should have paid \$12 in fares, but only paid \$6. This would likely not have happened with paper transfers verified by the vehicle driver. This has happened numerous times, so I have to question whether they are already running some sort of 1-hour transfer, *or* whether the transfer algorithm is so complex that it doesn't work as intended. Did anyone physically test the digital transfers, and I mean with the kinds of journeys that might break the system?

From the perspective of transfers, an ideal system would work based on positional knowledge, much like the distance-based system used on GO – you tap when you get on, you tap when you get off, and it intrinsically knows where you are and how much to charge you. So if you transfer from GO to MiWay it can make the connection (a time-based transfer discount). However the TTC's use of Presto does not require a tap-off, so how does it know where or when you get off a vehicle? It doesn't, and the cards have no locational system, so knowing where you are is impossible.

Having tap-on/tap-off makes the system function easily. Implementing a 2-hour transfer is possible if the 2 hours are measured from tap-on time, and honestly it might be easier to run than the current system, which relies on linear-path transfers.

# Parallelism in Julia (i) – parallel loops

One really nice thing about Julia is its ability to do parallelism. Intrinsically, i.e. no packages needed. For someone wishing to learn a bit of parallel coding, it’s an ideal environment. Julia provides a multiprocessing environment based on message passing to allow programs to run on multiple processors in shared or distributed memory.

One of the easiest methods is parallelizing loops. Note that although these loops *look* like normal loops, they behave completely differently. The most interesting part is that iterations do not happen in any specific order, and writes to variables or arrays are not globally visible, since iterations run on different processes. Some of these limitations can be overcome with the use of shared arrays. Parallel loops can use any number of worker processes. Here is a simple case, where we count the number of times a coin toss comes up tails:

```
ntails = @parallel (+) for i = 1:2000000
    rand(Bool)
end
```

This specifies a parallel loop where iterations of the loop are assigned to multiple processes. The partial results are combined using a specified reduction operator, in this case (+). The reduction operator can be omitted if the logic does not need one. This is of course a trivial example, but it shows how a parallel loop can be used.

But parallel does not always mean faster. Here Julia nicely provides the @time macro, which executes an expression and prints the time it takes, the number of allocations, and the total number of bytes allocated (before returning the value of the expression). Parallel loops can also be resource hungry, i.e. there is no free lunch. For example, let's look at a simple function which sums the value 1, n times. Here is the serial version:

```
function serialSum(n)
    sum = 0
    for i = 1:n
        sum = sum + 1
    end
    return sum
end
```

Nothing inherently challenging about that piece of code. Here is the code parallelized by using a parallel loop:

```
function parallelSum(n)
    sum = @parallel (+) for i = 1:n
        1
    end
    return sum
end
```

The variable sum is assigned the combined result of all the partial sums. The @parallel macro makes the loop a parallel loop, and the (+) means the results of the parallel runs are summed.

What happens when we run the code with n = 1,000,000?

```
parallel 0.288343 seconds (231.73 k allocations: 10.216 MB)
serial   0.003774 seconds (1.37 k allocations: 64.265 KB)
```

Even with a billion iterations, the parallel algorithm runs in 0.287 seconds, and the serial one in 0.004 seconds. So in this case the use of a parallel loop makes no sense. Apart from time, it also uses far more allocations, and an incredible number of bytes more. It's also possible to use a parallel nested loop. Here is the same concept extended to the realm of nested loops:

```
function serialSum(n)
    sum = 0
    for i = 1:n, j = 1:n
        sum = sum + 1
    end
    return sum
end
```

And here we have parallelized both loops in the nested loop:

```
function parallelSum(n)
    sum = @parallel (+) for i = 1:n
        @parallel (+) for j = 1:n
            1
        end
    end
    return sum
end
```

What happens when they run with n = 1,000,000?

```
parallel 12.503481 seconds (47.27 M allocations: 2.440 GB, 2.52% gc time)
serial   0.012687 seconds (2.58 k allocations: 116.727 KB)
```

Not exactly inspiring. Notice that the parallel version now has something called gc time associated with it, a whole 2.52%. What is this? Julia is a language which performs garbage collection: it keeps track of current allocations and frees memory that isn't needed anymore. It doesn't do this at predictable times, though.

Can we use this to make image processing faster? Check the next blog post.

# Famous coding blunders

What are the biggest blunders in programming lore?

### Mars Climate Orbiter

The Mars Climate Orbiter (MCO) was launched on December 11, 1998 atop a Delta II launch vehicle. In September 1999 it was to achieve an elliptical orbit around Mars, where it would skim through the upper atmosphere for several weeks to move into a low circular orbit. On September 23, 1999 the MCO was lost when it entered the Martian atmosphere on a lower than expected trajectory.

The Mars Climate Orbiter was destroyed by a navigational error which caused the spacecraft to miss its intended 140–150 km altitude above Mars during orbit insertion, instead entering the Martian atmosphere at about 57 km. 85 km was the minimum survivable altitude. The spacecraft would have been destroyed by atmospheric stresses and friction at this low altitude.

The small-forces ΔV values used in orbit determination solutions were low by a factor of 4.45 because the data was delivered in pound-seconds (lbf·s) instead of newton-seconds (N·s). One team of engineers working on the orbiter had used metric units while another team had used English units; they had failed to convert English measures of rocket thrust to newtons, the metric unit of force: 1 pound-force = 4.45 newtons.

The resulting loss = US\$327.6 million

### USS Yorktown

The USS Yorktown (DDG-48/CG-48) was a Ticonderoga-class cruiser in the United States Navy from 1984 to 2004. From 1996 the USS Yorktown was used as the test bed for the Navy's Smart Ship program. The ship was equipped with a network of 27 dual 200 MHz Pentium Pro based machines running Windows NT 4.0, communicating over fiber-optic cable with a Pentium Pro based server. This network was responsible for running the integrated control center on the bridge, monitoring condition assessment, damage control, machinery control and fuel control, monitoring the engines, and navigating the ship. The system was estimated to save \$2.8 million per year by reducing the ship's complement by 10%.

On September 21, 1997, while on maneuvers off the coast of Cape Charles, Virginia, a crew member entered a zero into a data field of a database, which caused a divide-by-zero exception in the ship's Remote Data Base Manager. This brought down all the machines on the network, causing the ship's propulsion system to fail. As a result the USS Yorktown was dead in the water for just under three hours, and had to be towed back to Norfolk naval base.

The resulting loss = US\$?

### Mars Polar Lander

The Mars Polar Lander (MPL) was part of the Mars Surveyor '98 program, in combination with the MCO. Mars Polar Lander and the attached Deep Space 2 probes were launched on a Delta 7425. After an 11-month hyperbolic transfer cruise, the Mars Polar Lander reached Mars on 3 December 1999. The most likely cause of the failure of the mission was a software error that mistakenly identified the vibration caused by the deployment of the lander's legs as being caused by the vehicle touching down on the Martian surface, resulting in the vehicle's descent engines being cut off while it was still 40 meters above the surface, rather than on touchdown as planned.

The resulting loss = US\$110 million

### Milstar

Milstar is a United States government satellite communications system that provides secure, jam resistant, worldwide communications to meet wartime requirements for United States military users. The first Milstar satellite was launched 7 February 1994 aboard a Titan IV expendable launch vehicle. The launch of the third satellite on 30 April 1999 failed to place the satellite in geosynchronous orbit. The Titan IV was equipped with a Centaur Upper Stage which carries its own guidance, navigation and control systems, which measure position and velocity throughout the flight. It determines the desired orientation of the vehicle in terms of pitch, roll and yaw axis vectors. It then issues commands to orient the vehicle in the proper attitude and position using the Reaction Control System.

The roll rate filter constant entered in the Inertial Measurement System was incorrect: the value should have been entered as –1.992476, but was entered as –0.1992476. The incorrect constant zeroed any roll rate data, resulting in the loss of roll axis control, which then caused loss of yaw and pitch control. The vehicle began experiencing instability about the roll axis during the first burn. That instability was greatly magnified during Centaur's second main engine burn, coupling each time into yaw and pitch, and resulting in uncontrolled vehicle tumbling.

The Centaur attempted to compensate for those attitude errors by using its Reaction Control System, which ultimately depleted available propellant during the transfer orbit coast phase. The third engine burn terminated early due to the tumbling vehicle motion.  As a result of the anomalous events, the Milstar satellite was placed in a low elliptical final orbit, as opposed to the intended geosynchronous orbit.

The resulting loss = US\$800 million (satellite), and US\$433 million (launcher).

### Patriot System

The Patriot missile defense system used during the Gulf War was also rendered ineffective due to roundoff error [1]. The system used an integer timing register which was incremented at intervals of 0.1 s, and the integers were converted to decimal numbers by multiplying by the binary approximation of 0.1. In binary, 0.1 is the non-terminating fraction

0.1₁₀ = 0.0001100110011001100110011…₂

so chopping it to the system's 24-bit register introduced an error of about 9.5 × 10⁻⁸ per tick. As a result, after 100 hours (3.6 × 10⁶ ticks), an error of

9.5 × 10⁻⁸ × 3.6 × 10⁶ ≈ 0.34 s

had accumulated. This error caused the Patriot system to continuously recycle itself instead of properly targeting.

The resulting loss = US\$?

### Sleipner A

Condeep (abbr. concrete deep water structure) refers to a make of gravity base structure for oil platforms developed and fabricated by Norwegian Contractors in Stavanger, Norway. A Condeep usually consists of a base of concrete oil storage tanks from which one, three or four concrete shafts rise.

The Sleipner A platform produces oil and gas in the North Sea and is supported on the seabed at a water depth of 82 m. It is a Condeep type platform with a concrete gravity base structure consisting of 24 cells and with a total base area of 16000 m². Four cells are elongated to shafts supporting the platform deck. The first concrete base structure for Sleipner A sprang a leak and sank during a controlled ballasting operation in preparation for deck mating in Gandsfjorden outside Stavanger, Norway on 23 August 1991.

The post-accident investigation traced the error to an inaccurate finite element approximation of the linear elastic model of the tricell (using the popular finite element program NASTRAN). The shear stresses were underestimated by 47%, leading to an insufficient design; in particular, certain concrete walls were not thick enough. A more careful finite element analysis, made after the accident, predicted that failure would occur with this design at a depth of 62 m, which matches well with the actual occurrence at 65 m.

The resulting loss = US\$700 million, a seismic event registering 3.0 on the Richter scale, and nothing left but a pile of debris at 220 m depth.

[1] Skeel, R., "Roundoff error and the Patriot missile", SIAM News, 25(4), p.11 (1992).