Is legacy computing infrastructure to blame?

Recent outages at Southwest Airlines and Delta foreshadow a future of dealing with legacy systems. Southwest’s issues were blamed on a faulty router, and Delta’s on a (supposed) power outage, so neither was directly related to their legacy systems. Hardware fails; it is not infallible. I mean, my PVR fails routinely – we’re lucky if we get three years out of one before it starts to malfunction and eventually dies. But it’s hard to blame Delta’s woes on a power outage alone, because one would question why the airline did not have a back-up generator that seamlessly takes over if power fails. Or maybe a redundant secondary site?

Most likely some form of human error caused the problem. I mean, it has happened before. In 1997, a crew member on the U.S.S. Yorktown entered a 0 into a database field, which caused a divide-by-zero error in the ship’s Remote Data Base Manager; the resulting failure cascaded through the network and brought down the ship’s propulsion system – no power. Simple, yet effective. The question might be one of how fragmented the legacy systems are. How complex have they been allowed to grow? Keep adding new features to an existing system and over the years it tends to become somewhat brittle. Complex, ill-tested code is often brittle, meaning the software may appear reliable but will fail badly when presented with unusual data. Brittleness in software can be caused by algorithms that do not work for the full range of input data.

How do we prevent brittle software? Better testing, including worst-case scenarios. Better design of additions to legacy systems. And if the system becomes too complex, maybe it’s time to start thinking about rebuilding it from the ground up.
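As a toy illustration of that kind of brittleness (hypothetical code, nothing to do with the Yorktown’s actual software): a routine that works fine for typical data, but crashes on a single unvalidated zero – and a defensive version that checks its input instead of assuming it.

```python
def average_rate(total, count):
    # Brittle: correct for typical input, but crashes when count is 0
    return total / count

def average_rate_safe(total, count):
    # Defensive: validate the full range of input data before dividing
    if count == 0:
        raise ValueError("count must be non-zero")
    return total / count

print(average_rate(10, 2))  # 5.0
# average_rate(10, 0) raises ZeroDivisionError -- the unusual input
# that testing against worst-case scenarios would have caught
```

The defensive version fails loudly with a meaningful error instead of taking the whole caller down with it.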

 

On hiatus…

Well, it’s summer now, and I am on vacation until September… so the blog will be on hiatus. I may post the occasional thing, but I will be working on a series of blog postings on image processing for the fall/winter. I’m also working on a new blog, codebootcamp, which will be aimed at people wanting to learn how to program from the basics. The examples will focus on programming in Julia.

So sit back and enjoy the summer. I have renovations to finish on my house, and two other blogs to write for: workingbyhand and despitethesnow, so I won’t stop blogging – I’ll just be concentrating more on other things.

Have a great summer, and I’ll see you all in the fall!

Mike

Are computers morons?

Consider the following statement:

“Every defined intellectual operation will be performed by a computer, faster, better, and more reliably than by a human being.” (Edmund C. Berkeley, 1980)

The computers of today are still high-speed morons. Some are capable of limited thought, but only because they have been instructed to learn. If we write a program to teach a computer the characteristics of a human face found in an image, and we show it a multitude of examples, essentially training it, it will likely be able to track people on a security camera feed. The more examples of faces we give it, the better it will learn. It is essentially a “Where’s Waldo” of facial recognition, because the principles are the same. Computers are no doubt fast, but given sufficient time, humans are capable of solving most problems. In 1853, William Rutherford (1798-1871) calculated π to 440 digits. Some 157 years later, computers were capable of calculating the value of π to 5,000,000,000,000 decimal digits in a mere 90 days. In reality though, only 39 digits are needed to make a circle the size of the observable universe accurate to the size of a hydrogen atom, so are we designing such algorithms for the mere sake of computing?
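That last claim is easy to sanity-check with a back-of-envelope calculation. The sizes below are rough, assumed values (an observable universe about 8.8×10^26 m across, a hydrogen atom about 10^-10 m), so this only confirms the order of magnitude:

```python
import math

universe_diameter_m = 8.8e26   # assumed diameter of the observable universe
hydrogen_diameter_m = 1.0e-10  # assumed size of a hydrogen atom

# Truncating pi after d decimal digits perturbs the circumference by at
# most diameter * 10**(-d); we want that error smaller than an atom.
d = math.ceil(math.log10(universe_diameter_m / hydrogen_diameter_m))
print(d)  # 37 -- the same order as the quoted 39
```

A few dozen digits really is all a physical measurement could ever use.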

Memory in C – an example

So what actually happens when memory is used in a program? The executable file that a C compiler creates (say the generic a.out) contains the following information:

  • a.out magic number
  • other a.out contents
  • size needed for the BSS (Block Started by Symbol) segment, which holds uninitialized global and static variables
  • data segment (initialized global and static variables)
  • text segment (executable instructions)

Local variables don’t go into a.out; they are created at runtime and stored on the stack. When an executable, such as a.out, is run, the parts of the program are put into memory. Memory space for items such as local variables and parameter passing in function calls is created in a stack segment, often called the runtime stack, and heap space is allocated for dynamically allocated memory. The stack provides a storage area for local variables, stores the “housekeeping” information associated with function calls (known as a stack frame), and works as a scratch-pad area for temporary storage.

On many systems the stack grows automatically as more space is needed. When all the space in the stack has been used up, a stack overflow occurs. Consider the following examples of variables declared in a function:

char str[]="Do or do not, there is no try!";
char *s="I am the master!";
int x;

In the stack region of memory, the following data is stored:

Name   Type                       Value
str    array of char, size=31     Do or do not, there is no try!\0
s      pointer (to char)          00E9
x      int                        (uninitialized)

The string literal itself is typically stored in a read-only data region of memory (not the stack, and not the heap):

Type             Address   Value
string literal   00E9      I am the master!\0

In the case of str, its size is implicitly set to 31 (30 characters enclosed in double quotes, plus the \0 terminating character). str has memory reserved for it in the stack, and the characters of the literal are copied into it. The pointer variable s is declared to hold the address (00E9) of the first character of the string. That string lives outside the stack, in read-only memory, which is why modifying it through s is undefined behavior.

 

The biggest problem with Python?

Python is a nice language whose biggest drawback may be its lack of speed. But there is a larger problem, and it has to do with the lack of native arrays.

So technically, arrays are lists in Python. Same deal, different term. Kind of.

Python does not have a built-in array datatype – a Python list is an array of pointers to Python objects, while a Numpy array is an array of uniform values. Also, 2D “arrays” – nested lists – in Python simply don’t work well. For example, creating a list with 100 elements, each set to zero, involves:

x = []
for i in range(0,100):
    x.append(0)

To create a 2D list, you put a list in each list item – similar to the concept of an array-of-arrays:

img = []
for i in range(0,100):
    x = []
    for j in range(0,100):
        x.append(0)
    img.append(x)

Now an element can be accessed using img[i][j]. This is just sheer nastiness, and the main reason to use Numpy arrays. The other reason is efficiency. Because there are no intrinsic arrays in Python, it is more challenging to deal with data that relies on arrays.
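For comparison, here is the same construction done with Numpy (assuming the numpy package is installed):

```python
import numpy as np

# 1D: 100 elements set to zero, in one call
x = np.zeros(100)

# 2D: a 100x100 "image" of zeros, indexed as img[i, j]
img = np.zeros((100, 100))

print(x.shape)    # (100,)
print(img.shape)  # (100, 100)
```

One call replaces each of the loops above, the values are stored contiguously rather than as pointers to objects, and indexing uses the natural img[i, j] form.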


Image Processing toolbox for Julia

I have been working diligently to build a toolbox for image processing in Julia. This toolbox is built from the ground up, meaning there are no dependencies on other libraries (with the exception of Gadfly, to visualize histograms). At the present time there are over 70 functions in the following areas, covering both grayscale and colour images:

  1. Image I/O: text image files, and PGM images
  2. Binarization: Local and global thresholding algorithms
  3. Colour spaces: Conversion from RGB to YIQ, HSV, YCbCr, and CIELab (and back)
  4. Segmentation: General segmentation algorithms, e.g. histogram back projection
  5. Edge processing: Edge enhancement and detection algorithms
  6. Morphology: A vast repertoire of functions for morphological analysis
  7. Spatial transformation: Various geometrical algorithms, e.g. rotation, flipping
  8. Image sharpening: various unsharp masking filters
  9. Noise suppression: a series of varied filters to perform noise suppression
  10. Histogram functions: generate and manipulate histograms, e.g. histogram equalization
  11. Noise generation: Functions to generate noise in images
  12. Skin: Skin segmentation algorithms

I hope to publish this toolkit in the fall, after some more testing and tweaking of the code.

Translating code to Julia

Translating code from another language to Julia isn’t hard, especially if that language is Matlab or even Python. I have been translating code from Python, and the biggest issues I have found are with arrays indexed from 0, versus Julia’s 1, and some of Python’s more eclectic ways of expressing things. Similarly with C.

Consider the case of a point detector, a simple filter to find discontinuities in an image. It basically applies a mask to all the pixels in the image, and the result is an image where the edges have been enhanced. Here is what the mask looks like:

[Image: the 3×3 point detector mask]

Here is an example of a piece of C code and its equivalent in Julia. First, the code in C:

void PointDetector(struct image pI, struct image *pO)
{
  int x, y, i, j, sum;
  int mask[3][3] = {{-1,-1,-1}, {-1,8,-1}, {-1,-1,-1}};
 
  // Allocate space for the new image
  pO->pixel = malloc(pI.nrows*sizeof(int *));
  for (i=0; i<pI.nrows; i=i+1)
    pO->pixel[i] = (int *)malloc(pI.ncols*sizeof(int)); 
  pO->nrows = pI.nrows; 
  pO->ncols = pI.ncols; 

  // Copy the pixels from the original image (edge pixels)
  for (x=0; x<pI.nrows; x++)
    for (y=0; y<pI.ncols; y++)
      pO->pixel[x][y] = pI.pixel[x][y]; 
 
  // Filter the image using the mask
  for (x=1; x<pI.nrows-1; x++)
    for (y=1; y<pI.ncols-1; y++){
      sum = 0;
      for (i=-1;i<=1;i++)
        for (j=-1;j<=1;j++)
          sum = sum + pI.pixel[x+i][y+j] * mask[i+1][j+1];
      if (sum > 255)
        sum = 255;
      else if (sum < 0)
        sum = 0;
      pO->pixel[x][y] = sum;
    }
}

Due to the potentially large size of an image, a dynamic array is used to store the image, contained within a struct that also holds values for the number of rows and columns in the image.

struct image
{
    int nrows;
    int ncols;
    int **pixel;
};

Now consider the function translated into Julia:

function pointDetector(img)

    dx,dy = size(img)
    mask = [-1 -1 -1; -1 8 -1; -1 -1 -1]
    imgP = copy(img)

    for i=2:dx-1, j=2:dy-1
        block = img[i-1:i+1,j-1:j+1]
        conv = block .* mask
        sumb = sum(conv)
        if sumb > 255
            sumb = 255
        elseif sumb < 0
            sumb = 0
        end
        imgP[i,j] = sumb
    end
    return imgP
end

The Julia function is much simpler. This stems partially from the simpler nature of the structure used to store the image (a simple array, with transparent storage), and partially from the lack of additional loops to deal with simple things like copying the image. The C function uses 7 for loops; the Julia function uses one (technically a nested loop). Copying the image is simpler, and does not require a structure that incorporates the dimensions of the array (again, transparent). Extraction of the “block” is achieved through array slicing, and convolution of the block with the mask is done by element-wise multiplication. Finally, the sum uses the built-in function sum(). There is also no need to deal with dynamic arrays. I don’t have anything against dynamic arrays in C, but they do make code look more “complex”, and they are a pain to debug. Also notice that depending on whether the struct image is passed by “value” (pI) or “reference” (pO), the code uses different notation – “.” or “->” respectively (and this is just annoying).

So this translation was easy – not to say they all will be, but it was a good experience, with much less overhead.

And if you are wondering what the function actually does? Here is an example.

[Image: point detector example]

The image on the left is the original, the one on the right is after processing with the point detector algorithm.

 

Debunking TV technology: Reflections in the cornea

The same old TV adage: take a low-resolution image, and extract a wonderfully enhanced image from a reflection in someone’s cornea. Easy to debunk, right? Not so. It is possible to get an image from someone’s cornea, but not from a great distance – and therein lies the debunk.

The process of extracting an image from a reflection in a person’s eye is known as corneal imaging. CSI: NY used it in one of its episodes (S1: “Night, Mother”). The team uses footage from a CCTV camera, and extracts an image from the reflection in the woman’s cornea. First the image is enlarged – “Magnification times 100, for starters.” Yeah okay – seriously?

Here’s the image, and the supposed enlargement. Besides the fact that the face is partially in shadow, it would be impossible to extract the image of the eye as shown due to the angle of the face.

[Image: the CSI frame and its supposed enlargement]

Consider a 3264×2448 pixel image. From that we extract a sub-image of the eye, 120 × 104 pixels (enlarged here, otherwise it would be too small). Now enlarge that 100 times = 12,000 × 10,400 pixels ≈ 125MP. The image has something in it, possibly some people, but it’s impossible to create information that wasn’t there in the first place. The image was taken at a distance of about 4-5 feet.
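The arithmetic above is worth making explicit – magnification multiplies pixels, not information:

```python
# Sub-image of the eye, and the show's claimed 100x magnification
w, h = 120, 104
scale = 100

mp = (w * scale) * (h * scale) / 1e6
print(mp)  # 124.8 megapixels

# But the actual information never exceeds the original sub-image:
print(w * h)  # 12480 pixels -- all the data there ever was
```

A 124.8-megapixel enlargement built from 12,480 real pixels is 10,000 copies of each original pixel, nothing more.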

debunkCornea1

Garbage in – garbage out.

There are people doing work in this area, like the CAVE group at Columbia University. So it is possible to extract an image from the reflection in someone’s cornea; the only question remaining is how close you have to be to acquire the image.

To determine this, let’s do a couple of experiments. In the first experiment, I took an image of my right eye using my iPhone 5, with its 1.2MP “FaceTime” camera. I then magnified the image two times. It is possible to see a reflection in the cornea, but the image is flipped, and there is not much detail. Processing this image further would not likely result in anything better. Note how close the image was taken, as well.

debunkCornea2

The second experiment involved a 12MP digital SLR, which resulted in an image not much bigger than that from the iPhone, in part due to focusing constraints on the camera. At two times magnification, it is possible to make out certain objects in the image. So it might be possible to tell that there is a person in the image, but identifying them through such an image is laughable, especially from a low-resolution security camera anywhere from 4-8 feet away.

debunkCornea3

The house of the future… in 1956

This is a follow-on to an article I wrote earlier this year.

The article mentions houses coming in prefabricated sections, something which is becoming a growing trend. However, it goes on to say that “inside walls probably will be movable”, and room sizes will be modified using a push-button control. Clearly this never eventuated, but there is a growing trend to design adaptable homes, some with walls that can be moved. There is also mention of a plastic house being developed by MIT – the Monsanto House, which was made of fibreglass and resided at Disneyland from 1957 to 1967.

[Image: the Monsanto House, 1957]

Interior climate control? We have that – I mean, it isn’t perfect, even though certain “intelligent” thermostats claim to make life more comfortable for us. Solar heating systems for water have become common, but more so in regions that aren’t that cold in the winter. The article states that “smaller homes will combine living room, dining area and kitchen area in one space” – what we now call open concept, and certainly a reality, although the idea of smaller homes was a prediction that was way off. In 1950 the average size of a new single-family home in the US was 983 sq. ft. By 1990 it had ballooned to 2,080 sq. ft., and by 2010 to somewhere near 2,400 sq. ft.

[Image: the house of the future, as illustrated in the 1956 article]

What about technology inside the home?

  • “3-D color TV wall panel” – REALITY. What became a reality with flat-screen displays is now going further with ultra-thin technology: LG has shown the potential of peel-able TVs as thin as a DVD.
  • “phono-vision device” – REALITY. Although the technology has morphed into devices such as “iPads”, which can be used anywhere. Dedicated home systems? Sooooo passé. 3-D TVs? That technology came, and went.
  • “microwave ovens” – REALITY. The technology made its residential debut in 1955. It only took 60 years to make it actually work the way we want it to (the Breville Quick Touch actually melts chocolate properly). Microwave stoves, as alluded to in the picture above? Not likely.
  • “frozen food units” – FICTION. Thank heavens. Whilst we do have frozen food (unfortunately), the idea of specialized units that store these meals never eventuated. Oh, well, maybe they did – it’s called the FREEZER. Although thankfully freezers don’t have a “thawing” or a “heating” chamber to deliver “piping hot” meals. Can you imagine a hybrid freezer/microwave?
  • “wall-mounted dishwasher” – FICTION. Why would anyone want this wall-mounted? It was supposed to “scrape dishes” and “flush away garbage” – dishwashers have of course evolved, but most of us are lucky if we get clean dishes.
  • “ultrasonic laundry” – REALITY. The idea of using ultrasound to wash clothes has finally come to fruition. There are larger systems, but one of the more interesting ones is Dolfi, a portable ultrasonic washing system due to be released this August. However, the article mentions that the system of the future will also dry and iron the clothes. Combination washer-dryer systems exist, but one that irons the clothes as well?
  • “push button refrigerator” – KIND-OF. Our refrigerators do “pop out ice cubes, crushed ice, and ice water”, but I don’t think any carbonate water, or thaw and dispense frozen food.
  • “air blanket” – FICTION. A blanket of air that keeps you warm in winter, and cool in summer? Well, climate control does that, and in summer we do run the Dyson fan at night, but a blanket of air? Seems weird.