# Thresholding algorithms: Niblack (local)

The localized thresholding algorithm attributed to Wayne Niblack appeared in his 1986 book on image processing [1]. The threshold is varied across the image as a function of the local intensity mean and standard deviation.

#### Algorithm

The threshold at each pixel is calculated as the sum of the local mean and the local standard deviation scaled by a constant, k. The following formula is applied to the `n×n` neighbourhood centred on each pixel to derive a local threshold.

```
t = mN + (k × stdN)
```

Parameters:

• mN = mean of the neighbourhood
• stdN = standard deviation of the neighbourhood
• k = constant value (default = -0.2)

The default values are based on the experiments performed by Niblack [1]. The value of `n` depends on the content of the particular image being binarized, i.e. it is chosen empirically.
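As a quick sanity check, the formula can be evaluated by hand on a small neighbourhood. The sketch below (Python, purely for illustration; the 3×3 block is hypothetical) computes the Niblack threshold for the centre pixel:

```python
from statistics import mean, stdev

# Hypothetical 3x3 neighbourhood of 8-bit grayscale values
block = [90, 95, 100, 105, 110, 115, 120, 125, 130]

k = -0.2              # Niblack's default constant
mN = mean(block)      # local mean
stdN = stdev(block)   # local (sample) standard deviation

t = mN + k * stdN     # Niblack threshold for the centre pixel
print(mN, round(t))
```

With k negative, the threshold sits just below the local mean, so only pixels appreciably darker than their surroundings are pushed to black.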

#### Implementation (in Julia)

Here is the algorithm implemented in Julia. Note that because this algorithm processes the neighbourhood surrounding each pixel, the image should be padded (preferably with zeros) before the algorithm is applied, so that edge pixels can be processed; the implementation below instead simply leaves a border of unprocessed pixels. It works on 8-bit grayscale images.

```
using Statistics

function niblackTH(img, n=5, k=-0.2)

    dx, dy = size(img)
    imgN = copy(img)

    # Calculate the radius of the neighbourhood
    w = div(n - 1, 2)

    # Process the image (a border of width w is left unprocessed)
    for i = w+1:dx-w, j = w+1:dy-w
        # Extract the neighbourhood area
        block = img[i-w:i+w, j-w:j+w]

        # Calculate the mean and standard deviation of the neighbourhood
        wBmn = mean(block)
        wBstd = std(block)

        # Calculate the threshold value
        wBTH = wBmn + k * wBstd

        # Threshold the pixel
        if img[i,j] < wBTH
            imgN[i,j] = 0
        else
            imgN[i,j] = 255
        end
    end

    return imgN
end
```
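The padding mentioned above is not performed by the function itself. A hypothetical helper for it might look as follows (sketched in Python for illustration; in Julia the same effect can be had by copying the image into a larger array of zeros):

```python
def zero_pad(img, w):
    """Surround a 2D list-of-lists image with a border of w zeros."""
    dx = len(img)
    dy = len(img[0])
    padded = [[0] * (dy + 2 * w) for _ in range(dx + 2 * w)]
    for i in range(dx):
        for j in range(dy):
            padded[i + w][j + w] = img[i][j]
    return padded

# A 2x2 image padded with radius 1 becomes 4x4
print(zero_pad([[1, 2], [3, 4]], 1))
```

After padding with the radius `w = div(n-1,2)`, the thresholding loop can visit every pixel of the original image, including those on the border.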

#### Testing

The algorithm is illustrated using the two images of text shown below. For both images, the default values are used, and the size of the neighbourhood is `n=15`.

The results are not good. In Example 1 the background produces a lot of noise, which is largely because of discontinuities in the grayscale values, even though it looks quite smooth. A similar situation occurs in Example 2, with the thresholding clearly picking up the background texture attributable to the paper.

#### Caveats

The algorithm tends to produce a lot of noise, particularly when the image background has a light texture.

#### REFS:

1. Niblack, W., An Introduction to Digital Image Processing, Prentice Hall pp.115-116 (1986)

# If UFOs exist, what do they code in?

Of late we have heard a lot about the potential existence of UFOs, and although we don’t truly know if aliens exist, there have been a number of reports of “unidentified aerial phenomena” that defy explanation. In all likelihood other life forms do exist somewhere in the universe, intelligent or not, depending on what your definition of intelligence is – I think there are many intelligent life forms on earth apart from humans, but likely few feel the same way. Some scientists estimate there could be 36 active communicating intelligent civilizations in the Milky Way galaxy. Of course, that’s not *really* a lot in a place that has 100-400 billion stars, at least that many planets, and is approximately 105,700 light years in diameter. But if there are hundreds of billions of galaxies in the universe, that does add up to quite a few. Of course we have no idea what the aliens look like, how they live, how they evolved, or even what they think. Humans tend to make assumptions about everything.

If we work off what we do know from reports from US Navy fighter pilots, radar operators and other witnesses, these objects are capable of accelerating from 0 to 5,800 km/hr in under a minute. If they are off-world visitors, then in all likelihood they have a better understanding of space than we do, and maybe even have technology like the oft-lauded warp engines of Star Trek. Of course we also presume they have technology like ours, which may not be the case. We have to build things, but it is highly possible that they have mastered things like programmable matter.

They may in fact not code at all. They may have other means of making things function. They may have biological, DNA-type programming systems, or perhaps plant-based computers. They may not have computers at all, because they can perform the processing themselves, integrating themselves into their bioships (like Moya from Farscape). What we can likely surmise is that they won’t code in any language we use. In fact, once they figure out what our software looks like they may not even bother with us.

# Thresholding algorithms: Phansalkar (local)

The localized thresholding algorithm of Phansalkar and colleagues [1] is very successful in separating background and foreground, particularly in low-contrast grayscale images. It employs a combination of the local mean, local standard deviation and parameters to constrain the calculation in certain conditions.

#### Algorithm

The algorithm is an extension of Sauvola’s algorithm [2], which is itself a modification of Niblack’s algorithm [3]. The following formula is applied to the `n×n` neighbourhood centred on each pixel to derive a local threshold.

```
t = mN * (1 + p * exp(-q * mN) + k * ((stdN / R) - 1))
```

Parameters:

• mN = mean of the neighbourhood
• stdN = standard deviation of the neighbourhood
• p = magnitude to which the exponential term affects the threshold – for low values of p (0<=p<=1) the method behaves like Sauvola’s. If p is too high (p>5), too many background pixels may be classified as foreground (default=3)
• q = decay rate of the exponential term – for larger mean values the term vanishes and the algorithm behaves like Sauvola’s (default=10)
• k = parameter values in range 0.2..0.5 (default = 0.25)
• R = dynamic range of standard deviation (default=0.5)

The default values are based on the experiments performed by Phansalkar et al. [1]. The value of `n` depends on the content of the particular image being binarized.
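To get a feel for how the terms interact, the formula can be evaluated on a hypothetical neighbourhood (Python, purely for illustration). Note that with R = 0.5 the defaults were designed for intensities normalized to [0, 1]; the implementation below instead rescales R to the 8-bit range.

```python
from math import exp
from statistics import mean, stdev

# Hypothetical neighbourhood, intensities normalized to [0, 1]
block = [0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70]

p, q, k, R = 3, 10, 0.25, 0.5   # default parameter values

mN = mean(block)
stdN = stdev(block)

# Phansalkar threshold for the centre pixel
t = mN * (1 + p * exp(-q * mN) + k * ((stdN / R) - 1))
```

The exponential term only matters in dark regions: with mN = 0.5 it contributes only about 0.02, but it grows quickly as mN falls, which is what lets the method pick up faint foreground.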

#### Implementation (in Julia)

Here is the algorithm implemented in Julia. Note that because this algorithm processes the neighbourhood surrounding each pixel, the image should be padded (preferably with zeros) before the algorithm is applied, so that edge pixels can be processed; the implementation below instead simply leaves a border of unprocessed pixels. It works on 8-bit grayscale images.

```
using Statistics

function phansalkarTH(img, n=5, p=3, q=10, k=0.25, R=0.5)

    dx, dy = size(img)
    imgN = copy(img)

    # Calculate the radius of the neighbourhood
    w = div(n - 1, 2)

    # Scale the dynamic range of standard deviation to 8-bit intensities
    R = R * 256

    # Process the image (a border of width w is left unprocessed)
    for i = w+1:dx-w, j = w+1:dy-w
        # Extract the neighbourhood area
        block = img[i-w:i+w, j-w:j+w]

        # Calculate the mean and standard deviation of the block
        mnB = mean(block)
        sdB = std(block)

        # Calculate the threshold value (Eq.8)
        ph = p * exp(-q * mnB)
        wBTH = round(Int16, mnB * (1 + ph + k * ((sdB / R) - 1)))

        # Threshold the pixel
        if img[i,j] < wBTH
            imgN[i,j] = 0
        else
            imgN[i,j] = 255
        end
    end

    return imgN
end
```

#### Example Use

The example below applies a 15×15 neighbourhood (radius = 7).

```
imgB = phansalkarTH(img, 15)
```

#### Testing

The algorithm is illustrated using the two images of text shown below. For both images, the default values are used, and the size of the neighbourhood is `n=15`.

The results are quite good. Even the faintest (handwritten) text in Example 1 has been extracted, whereas the handwriting in Example 2 has been extracted despite the discoloured paper in the background.

#### Caveats

The algorithm has multiple parameters, especially the size of the local neighbourhood, which have to be optimized to produce good results (although in all truthfulness, the majority can be left at their defaults).

#### REFS:

1. Phansalkar, N., More, S., Sabale, A., Joshi, M. “Adaptive local thresholding for detection of nuclei in diversely stained cytology images”, in IEEE Conf. Communications and Signal Processing (2011)
2. Sauvola, J., Pietikainen, M., “Adaptive document image binarization”, Pattern Recognition, 33, pp.225-236 (2000)
3. Niblack, W., An Introduction to Digital Image Processing, Prentice Hall pp.115-116 (1986)

# Image processing algorithms in Julia

A few years ago I was working on an image processing toolbox for Julia. I wrote the code for quite a few algorithms, but then other things got in the way, Julia moved on to V1.x, and things had to be modified. I procrastinated, somehow lost a bit of interest in computer vision in general, and let the project languish. In the coming months I will be publishing these algorithms on the blog, and a list of them can be found on the Image Processing page. I will start with the classic image thresholding algorithms. Many of these algorithms are classic image processing fare – nothing fancy here, but they do provide a good understanding of how to process images in various ways. Most operate on grayscale images, with the odd exception.

The algorithms are implemented in Julia (V1.5+), and come with code and any associated information, including some examples of how they function on real images. The implementations are very simplistic and assume that the images being processed are in the form of Int16 arrays (the images are generally stored as text images, i.e. a matrix of integer values). Some may of course question why not use UInt8 – the answer being that in Julia UInt8 values are displayed in hexadecimal, and I dislike working in hex (it’s extremely annoying). The code provided does not use any fancy processing like the vectorization found in Python. Images are processed using nested loops, and the code is self-explanatory enough for most people to follow. References to equations refer to the equations in the publication from which the algorithm emanates. I don’t use any fancy OO structures, and I don’t store images as floating-point arrays. At this stage the algorithms are implemented in Julia, because Julia provides a good backbone of existing things like statistical functions. They are relatively easy to translate to other languages.

Use the algorithms as you wish, just provide a link. At the end of each category of algorithms, I will create a module containing all the functions.

# What is the Shuttle sort?

A different form of exchange sort appeared in 1963 in the guise of ALGORITHM 175 – Shuttle Sort [1]. As is the case with most of these algorithms, there is very little in the way of description – basically just the algorithm (reproduced below).

```
procedure shuttle sort(m, Temporary, N);
value m; integer m; array N[1:m];
begin integer i,j;
for i := 1 step 1 until m-1 do
begin
for j := i step -1 until 1 do
begin
if N[j] ≤ N[j+1] then go to Test;
Exchange: Temporary := N[j]; N[j] := N[j+1];
N[j+1] := Temporary; end of j loop;
Test: end of i loop
end shuttle sort
```

While bubblesort typically bubbles the largest value up during each pass of the algorithm, Shuttlesort does the opposite, shuttling the lowest value down. Bubblesort starts with the whole array, whereas shuttlesort first compares the first two elements, then the first three, and so on. Below is the algorithm translated to Fortran:

```
subroutine shuttlesort(m, N)
    implicit none
    integer, intent(in) :: m
    integer, dimension(m), intent(inout) :: N
    integer :: i, j, temp

    do i = 1, m-1
        do j = i, 1, -1
            ! Stop the pass once the pair is in order (the "go to Test")
            if (N(j) <= N(j+1)) exit
            ! Exchange the out-of-order pair
            temp = N(j)
            N(j) = N(j+1)
            N(j+1) = temp
        end do
    end do
end subroutine shuttlesort
```
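For readers without a Fortran compiler handy, the same logic transcribes directly to Python (a sketch for illustration, not part of the original article):

```python
def shuttle_sort(a):
    """Sort list a in place by shuttling each new element down."""
    for i in range(len(a) - 1):
        # Compare the new element against the sorted prefix,
        # shuttling it down while pairs are out of order
        for j in range(i, -1, -1):
            if a[j] <= a[j + 1]:
                break                      # in place: stop the pass
            a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(shuttle_sort([65, 54, 43, 32, 21]))  # [21, 32, 43, 54, 65]
```

The early exit out of the inner loop corresponds to the `go to Test` in the ALGOL original.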

Here is an example of the algorithm at work. Each line of the trace shows the state of the list after an exchange; during each pass the lowest remaining value can be seen shuttling down toward its place.

```
Original list:
65  54  43  32  21
Pass: 1
54  65  43  32  21
Pass: 2
54  43  65  32  21
43  54  65  32  21
Pass: 3
43  54  32  65  21
43  32  54  65  21
32  43  54  65  21
Pass: 4
32  43  54  21  65
32  43  21  54  65
32  21  43  54  65
21  32  43  54  65
```

Shortly after, Schubert [2] verified that the algorithm worked, implementing it in BALGOL. A comment (literally) was later published noting errors in the algorithm, in the form of a rearrangement of the order in which the comparisons are carried out [3]. This may only be because of Shaw and Trimble’s brief description of it as “exchanging out-of-order number pairs“. However, given the lack of description, it is hard to know what Shaw and Trimble really meant.

1. Shaw, C.J., Trimble, T.N., “Algorithm 175 Shuttle Sort”, CACM, 6(6), pp.312-313 (1963)
2. Schubert, G.R., “Certification of Algorithm 175 shuttle sort”, CACM, 6(10), p.619 (1963)
3. Juelich, O., “Remark on Algorithm 175 Shuttle Sort”, CACM, 6(12), p.739 (1963)

# The origins of Bubble sort

The effect of the algorithm Bubblesort is to “bubble” the largest elements in a list of `n` elements to the end of the list and the smallest elements to the start. The first pass “bubbles” the largest value in the array into element `a[n]`. In the second pass `a[n]` is not examined, so the second-largest value is placed in `a[n-1]`. It takes `n-1` passes to put all the elements in order.

Below is an implementation of Bubblesort in Fortran.

```
subroutine bubblesort(n, A)
    implicit none
    integer, intent(in) :: n
    integer, dimension(n), intent(inout) :: A
    integer :: i, j, temp

    do i = n, 2, -1
        do j = 2, i
            ! Exchange the pair if out of order
            if (A(j-1) > A(j)) then
                temp = A(j-1)
                A(j-1) = A(j)
                A(j) = temp
            end if
        end do
    end do
end subroutine bubblesort
```
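As a cross-check, here is a direct Python transcription of the Fortran version (again a sketch, purely for illustration):

```python
def bubble_sort(a):
    """Sort list a in place; each pass bubbles the largest
    remaining element up to the end of the examined prefix."""
    for i in range(len(a) - 1, 0, -1):
        for j in range(1, i + 1):
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
    return a

print(bubble_sort([65, 54, 43, 32, 21]))  # [21, 32, 43, 54, 65]
```

Each outer pass shortens the examined prefix by one, since the largest remaining element has already bubbled into place.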

The bubble sort probably had its origins in a paper published in 1956 by Friend entitled “Sorting on Electronic Computer Systems”, which uses the term “sorting by exchange” [1] – a term that aptly describes this category of sorting algorithms. A very early book on programming, which appeared in 1959, describes the idea of sorting by exchanging, basically outlining the concept of Bubblesort, and interestingly also suggests improvements such as making alternate passes in opposite directions [5]. A similar term was used in a 1962 article [4].

There is reference to a “bubble sort” in a technical paper in 1959, comparing a binary sort to an “inefficient bubble sort method” [6], although it is unclear from where the term “bubble” appeared. The term bubble sort does not really appear in wider publication until Iverson’s 1962 book A Programming Language [2].

By the time Knuth described it in 1973, Bubblesort had become the generic term for sorting in which large elements “bubble up” to their proper positions [3]. This was in contrast to “sinking sort”, i.e. straight insertion. Knuth of course had very little positive to say about the algorithm [3]:

“In short, the bubble sort seems to have nothing to recommend it, except a catchy name and the fact that it leads to some interesting theoretical problems.”

1. Friend, E. H., “Sorting on electronic computer systems”, Journal of the ACM, 3(4), pp.134–168 (1956)
2. Iverson, K. E., A Programming Language, John Wiley & Sons (1962)
3. Knuth, D.E., The Art of Computer Programming, Volume 3 Sorting and Searching (1973)
4. Bose, R.C., Nelson, R.J., “A sorting problem”, Journal of the ACM, 9(2), pp.282-296 (1962)
5. McCracken, D., Weiss, H., Lee, T. Programming Business Computers. John Wiley (1959)
6. Simiu, E., Filliben, J.J., “Statistical analysis of extreme winds“, National Bureau of Standards (NBS), Technical Note 868 (June, 1975)

# Why new algorithms are hard to develop

Any course on algorithms will undoubtedly explore sorting algorithms – they form one of the fundamental things that computers do. There is the uber-inefficient Bubblesort, the recursive Quicksort, the tree-based Heapsort, or Radixsort, which doesn’t use comparisons. But in the past few decades there has been very little in the way of innovation in sorting. Why is this? It could be because sorting algorithms have matured when it comes to their defining structures, in the same way that few new control structures have been added to programming languages. It could also be a lack of interest in defining new algorithms. That’s not to say that “new” algorithms haven’t appeared, but they are often in the guise of extensions to existing algorithms, e.g. Shellsort was just an extension of Insertion sort, and shaker sort is just a bidirectional Bubblesort. Or perhaps they are hybrids, e.g. Timsort, implemented by Tim Peters in 2002, is a hybrid of mergesort and insertion sort. Sometimes “new” algorithms are just old algorithms with added efficiencies, or ported to a new technology, e.g. parallelism. In reality, once algorithms get to a certain point it may be impossible to make them more efficient. It’s like trying to run 100m in under 9 seconds – it’s almost impossible.

Like so many things, algorithms are constrained by humans’ ability to derive new ideas. Sometimes there are no new ways of perceiving a problem, at least not within the constraints of how we see the world. If there were unlimited solutions, we would have fewer issues in the world today. Sometimes a tweak to an algorithm does little in the way of increasing speed or producing a better outcome. There was a glut of algorithms in the early years of computing because it was a new field, and computer scientists looked for new, more efficient algorithms based on the constraints of the hardware.

Coming up with a new core algorithm is difficult. Sometimes it involves more than algorithmic thinking; it requires looking beyond computer science to other disciplines for inspiration. For example, Ant Colony Optimization (ACO), used for such things as vehicle routing, was introduced by Marco Dorigo in 1992, and was inspired by studying the behaviour of real ants searching for their food source. Sometimes we are constrained by our own abilities. For example, the human visual system is a marvel of efficiency, and with it we are able to interpret the world around us, but it is hard to create an artificial system anywhere near as good at segmenting and identifying features in an image. Innovation also suffers from the “lowest hanging fruit” problem. At the beginning of a new development it is easy to create and extend ideas, but as incremental changes are made there comes a point where further changes become more challenging – just as it becomes challenging to reach the fruit at the top of the tree. The iPhone made great strides in its early years, but there is very little in the way of true innovation in a modern iPhone (apart from the cameras). Similarly, when was the last truly innovative app created?

It was easy to come up with algorithms in the early years of computing because so few existed (apart from existing mathematical algorithms). When images started to be automatically processed in the 1960s, it was straightforward to apply the mathematical mean to the task of reducing noise, because its 1D cousin, the “moving average filter”, was already being used in signal processing. The median filter was just a statistical extension used to remove “impulse” noise from images. In the intervening years extensions like weighted median filtering were introduced, among others, but few have produced results much better than the original algorithms (this may be because these types of algorithms are ultimately based on statistics). In reality the algorithms don’t really need to be much better, because image data is much better than it was in the 1960s – to the point where if a “new” algorithm’s claim to fame is recovering an image with 90% noise, one wonders what sort of crappy technology acquired the image in the first place.

# Recursive patterns – lightning bolts

Another naturally occurring fractal pattern is a lightning bolt. Lightning of course does not travel in a straight line, but rather follows a chaotic, jagged path. The pathway of the lightning bolt is a jagged fractal in 3D space, but it can be simulated in 2D using the midpoint displacement method. Below is the Processing code to produce a basic lightning strike.

```
float detail = 0.5;

// Simulates lightning bolts using midpoint displacement and recursion
// A larger value of detail gives fewer, longer line segments

void setup() {
  size(300, 500);
  background(255);
  stroke(0);
  noLoop();
}

void draw() {
  drawLightning(50, 50, 200, 400, 150);
}

void drawLightning(float x1, float y1, float x2, float y2, float disp)
{
  float midX, midY;
  if (disp < detail)
    line(x1, y1, x2, y2);
  else {
    // Displace the midpoint by a random amount proportional to disp
    midX = (x1+x2)/2 + (random(0.0, 1.0) - 0.5) * disp;
    midY = (y1+y2)/2 + (random(0.0, 1.0) - 0.5) * disp;
    drawLightning(x1, y1, midX, midY, disp/2.0);
    drawLightning(x2, y2, midX, midY, disp/2.0);
  }
}
```
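The same midpoint-displacement recursion can be explored without a graphics library by collecting the generated line segments instead of drawing them (a Python sketch, for illustration):

```python
import random

DETAIL = 0.5   # below this displacement, emit a straight segment

def lightning(x1, y1, x2, y2, disp, segments):
    """Recursively displace the midpoint, halving disp each level,
    and collect the final straight segments."""
    if disp < DETAIL:
        segments.append(((x1, y1), (x2, y2)))
    else:
        mid_x = (x1 + x2) / 2 + (random.random() - 0.5) * disp
        mid_y = (y1 + y2) / 2 + (random.random() - 0.5) * disp
        lightning(x1, y1, mid_x, mid_y, disp / 2, segments)
        lightning(mid_x, mid_y, x2, y2, disp / 2, segments)
    return segments

segs = lightning(50, 50, 200, 400, 150, [])
print(len(segs))   # 512
```

Because the displacement starts at 150 and halves at each level, the recursion always bottoms out at depth 9, yielding 2^9 = 512 segments regardless of the random jitter.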

Below are some examples. Each starts at position (50,50) in the window, and ends at position (200,400).

# Governments making software “in house” is just a bad idea

With many provinces going the route of vaccine passports (which are an extremely good idea), each seems to be taking its own approach. Of course the best approach would have been a federally developed vaccine passport app, but apparently that’s in the “too hard” basket. Quebec of course did it the right way and had its app developed by a commercial company, Akinox, who do this sort of thing for a living. It apparently took them 4 months, which seems reasonable. Ontario on the other hand is making its app “in house“, in 6 weeks. Get ready for the gong show. The work is being done by the Ontario Digital Service, but anyone with an inkling of knowledge about the way governments run knows that this is just not a good idea. Really. From a look at their website, the ODS seems to be good at creating digital policies on things like accessibility and the like, but their posts on Medium say nothing about app development. You can’t just “pop” out an app; you need people experienced in developing them.

First of all, apps for this already exist, so why reinvent the wheel? Ontario could have adopted Quebec’s app and made appropriate changes… I mean, most of the work has been done already. I might be less skeptical if the Ontario government (and I mean government as an entity, not tied to any one political party) had a good track record in software development, but it doesn’t. I mean, not even software giants always get it right. The biggest disaster has been eHealth Ontario. Work started in 2002, and by 2016 it had cost C\$8 billion and still wasn’t finished. To this day I suspect few patients really know how the system works. Look, honestly, governments shouldn’t be in the software business. Do they have the resources to properly test the app? Are they ready to update and maintain the app? Likely not.

There have been successes of course, the Canada COVID Alert app being a good example. It was developed by Health Canada; Innovation, Science and Economic Development Canada; the Canadian Digital Service; and the Ontario Digital Service. It builds upon an exposure notification solution developed by Shopify volunteers in coordination with the non-profit Linux Foundation Public Health. But it was developed by volunteers from Shopify, and tested for security-related issues by BlackBerry – so not really “in house”. A vaccine passport app isn’t as easy either, because it generally requires not only a user-side app, but also an app for people who operate businesses or run activities and who must verify the vaccination status of customers or participants.

Honestly, we live in a place that is often touted as Silicon Valley North, so let’s leave the app development to people who have a real clue about it, and leave the government to make policy, because maybe that’s what they’re best at?