# What’s wrong with image processing?

The field of image processing has been around for over 50 years now. It has benefited many fields, including remote sensing, medical imaging, and machine vision, to name but a few. Many good algorithms have appeared over the years – good simple algorithms. Yet the field seems to have devolved slightly into one where good algorithms are overshadowed by a vast cornucopia of algorithms which barely make any contribution to the field, or are tested in such a manner that one wonders what benefit they bring. Sometimes the title and the content just don’t gel. Take for instance a paper I recently looked at whose title was something along the lines of “image sharpening via histogram equalization”. Now image sharpening is supposed to improve the acuity of an image – and histogram equalization works on improving the contrast. Not to say that they are mutually exclusive, but many increases in image contrast will perceptually increase sharpness.

One of the interesting aspects of this paper was the conjecture that a new histogram equalization algorithm can be produced using an image A, and its negative, Ai, such that:

```
Ai = negative(A)
A_he = histogramEqualization(A)
Ai_he = histogramEqualization(Ai)
Aii = negative(Ai_he)

Av = imageAverage(A_he, Aii)
```

The claim was that the image created by averaging A_he and Aii would differ significantly from A_he alone, and produce a better enhancement result. The reality is that there is no real perceivable difference between A_he and Av.
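For anyone who wants to try the construction for themselves, here is a minimal NumPy sketch (my own reading of the steps above, assuming an 8-bit grayscale image; `histogram_equalization` is the plain global algorithm, not necessarily the implementation the paper used):

```python
import numpy as np

def histogram_equalization(img):
    # Classic global HE for an 8-bit grayscale image: remap each
    # intensity level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def negative(img):
    # Photographic negative of an 8-bit image.
    return (255 - img).astype(np.uint8)

def negative_averaged_he(A):
    # The paper's construction: average the equalized image with the
    # negative of the equalized negative.
    A_he = histogram_equalization(A)
    Aii = negative(histogram_equalization(negative(A)))
    return ((A_he.astype(np.uint16) + Aii) // 2).astype(np.uint8)
```

Averaging in `uint16` before dividing avoids the 8-bit overflow you would otherwise get when the two operands sum past 255.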

The problem with image processing is that people are creating algorithms that in reality are kind-of crappy, and don’t work any better (sometimes worse) than existing algorithms. There is a reason Otsu’s thresholding algorithm is so commonly used – because it does a good generic job at thresholding. It isn’t perfect, but this is image processing, where there are infinitely many different images that could be processed.
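Part of Otsu’s appeal is how little there is to it. A sketch in NumPy (8-bit grayscale assumed; it returns the level that maximizes between-class variance, with pixels above that level treated as one class):

```python
import numpy as np

def otsu_threshold(img):
    # Otsu's method: pick the threshold maximizing the
    # between-class variance of the two resulting classes.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = p.cumsum()                    # cumulative class-0 probability
    mu = (p * np.arange(256)).cumsum()    # cumulative class-0 mean mass
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # 0/0 at the histogram extremes
    return int(np.argmax(sigma_b))
```

Ten lines of actual work, no parameters to tune – which is exactly why it has survived for decades as the default generic thresholder.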

Let’s consider contrast enhancement. Histogram equalization is the classic, simple algorithm used for “enhancing” the contrast of an image. Extensions abound, starting with BBHE (brightness-preserving bi-histogram equalization)…

• BBFHE: Block-based Binomial Filtering Histogram Equalization
• DSIHE: Dualistic Sub Image Histogram Equalization
• MMBEBHE: Minimum Mean Brightness Error Bi-histogram Equalization
• RMSHE: Recursive Mean-Separate Histogram Equalization
• RSIHE: Recursive Sub-Image Histogram Equalization
• RSWHE: Recursively Separated and Weighted Histogram Equalization
• DHE: Dynamic Histogram Equalization
• CLAHE: Contrast-Limited Adaptive Histogram Equalization
• BPDHE: Brightness Preserving Dynamic Histogram Equalization
• BPDFHE: Brightness Preserving Dynamic Fuzzy Histogram Equalization
• BHEPL-D: Bi-Histogram Equalization Median Plateau Limit
• BBPHE: Background Brightness Preserving Histogram Equalization
• ESIHE: Exposure-based Sub Image Histogram Equalization
• R-ESIHE: Recursive Exposure-based Sub Image Histogram Equalization
• MMSICHE: Median-Mean based Sub-Image Clipped Histogram Equalization
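To give a flavour of what these variants actually do, here is a sketch of the simplest of them, BBHE: split the histogram at the mean intensity and equalize each half within its own sub-range, so the output brightness stays near the input brightness. This is my own minimal reading of the algorithm, 8-bit grayscale assumed:

```python
import numpy as np

def bbhe(img):
    # BBHE sketch: equalize the sub-histogram below the mean into
    # [0, m] and the one above it into [m+1, 255], independently.
    m = int(img.mean())
    lut = np.arange(256, dtype=np.uint8)  # identity fallback

    def equalize_range(lo, hi):
        # Build the lookup table for intensity levels lo..hi only.
        mask = (img >= lo) & (img <= hi)
        hist = np.bincount(img[mask], minlength=256)[lo:hi + 1]
        cdf = hist.cumsum().astype(np.float64)
        if cdf[-1] > 0:  # skip empty sub-histograms
            lut[lo:hi + 1] = lo + np.round((hi - lo) * cdf / cdf[-1]).astype(np.uint8)

    equalize_range(0, m)
    if m < 255:
        equalize_range(m + 1, 255)
    return lut[img]
```

Because neither half can map across the mean, a mostly-dark image stays mostly dark after BBHE – the “brightness preserving” part of the name – whereas plain histogram equalization would happily drag it toward mid-grey.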

Get the picture?

Okay, some of these algorithms probably work quite well – BBHE, for example. Others? Well, their results can often be somewhat “sketchy”. And all of them take for granted one point – an image is more than a distribution of intensity values in a histogram. There are features in the image, and unless they are distinct, they may not be enhanced well. There is no panacea for enhancing images. Sometimes I think it’s just a competition to come up with the weirdest acronym for an algorithm. More shocking is that these algorithms get published, considering their lackluster testing (or the difficulty others would have reconstructing the algorithm).