Image sharpening – image content and filter types

Whether a sharpening filter helps is really contingent upon the content of an image. Increasing the size of a filter may have some impact, or it may have no perceptible impact whatsoever. Consider the following photograph.

The image (which is 1500×2000 pixels – downsampled from a 12MP image) contains a lot of fine detail, from the store’s signage, to small objects in the window, text throughout the image, and even the lines on the pavement. So sharpening would have an impact on the visual acuity of this image. Here is the image sharpened using the “Unsharp Mask” filter in ImageJ (radius=10, MW=0.3). You can see the image has been sharpened, as much by the increase in contrast as by anything else.

Here is a close-up of two regions:

Pre-filtering (left) vs. post-sharpening (right)

Now consider this image of a landscape:

The impact of sharpening will be reduced in most of the image, and will really only manifest itself in the very thin linear structures, such as the trees. Sharpening tends to work best on features of interest that already have some contrast with their surrounding area. Features that are too thin can sometimes become distorted. Indeed, sometimes large photographs do not need any sharpening, because the human eye is able to interpret the details in the photograph on its own, and increasing sharpness may just distort them. Again, this is one of the reasons image processing relies so heavily on aesthetic appeal.

Here is the image sharpened using the same parameters as the previous example:

There is a small change in contrast, most noticeable in the linear structures, such as the birch trees. Again, the filter uses contrast to improve acuity. (Note that if the filter were small, say with a radius of 3 pixels, the result would be minimal.) Here is a close-up of two regions.

Pre-filtering (left) vs. post-sharpening (right)

Note that the type of filter also impacts the quality of the sharpening. Compare the above results with those of the ImageJ “Sharpen” filter, which uses a kernel of the form:

ImageJ “Sharpen” filter
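For reference, here is a minimal sketch in Python (using NumPy/SciPy rather than ImageJ itself) of applying a fixed 3×3 kernel of this form – the centre-12, neighbours −1 kernel that ImageJ’s documentation lists for its “Sharpen” command, normalized by the kernel sum of 4:

import numpy as np
from scipy.ndimage import convolve

# 3x3 sharpening kernel: centre 12, neighbours -1, normalized by its
# sum (4) so that flat regions keep their mean intensity.
KERNEL = np.array([[-1, -1, -1],
                   [-1, 12, -1],
                   [-1, -1, -1]], dtype=float) / 4.0

def sharpen(image):
    """Convolve a grayscale uint8 image with the fixed sharpening kernel."""
    result = convolve(image.astype(float), KERNEL, mode='reflect')
    return np.clip(result, 0, 255).astype(np.uint8)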

Notice that the “Sharpen” filter produces more detail, but at the expense of possibly overshooting some regions in the image, and making it appear grainy. There is such a thing as too much sharpening.

Original vs. ImageJ “Unsharp Masking” filter vs. ImageJ “Sharpen” filter

So in conclusion, the aesthetic appeal of an image which has been sharpened is a combination of the type of filter used, the strength/size of the filter, and the content of the image.


Some of the issues with taking holiday pictures

As I have mentioned before, some of the problems associated with digital photographs manifest themselves at the acquisition stage of the process, i.e. the human part. This is impossible to avoid, primarily because the human visual system perceives things quite differently from a camera. They are not the same, nor will they ever be. This is partially because the human visual system compensates for things the camera cannot. One good example is shadows. It is challenging for a photo with a hard shadow in it to appear aesthetically pleasing, because a digital camera will likely make the shadow appear quite harsh. Try to compensate for it, and the lighter areas appear washed out. Your eyes will generally compensate for the shadows. Here are some of the issues with taking holiday snaps.

  • It is not easy to take photographs of scenes with hard shadows.
  • The AUTO setting does not guarantee a good photograph, and neither does M (manual). Shooting in P (program) mode probably gives the most flexibility.
  • Shooting photographs from a moving object, e.g. a train, requires the use of S (shutter priority). You may not get good results from a mobile device, because they are not designed for that.
  • Using a flash for landscapes is useless.
  • Photographing landscapes is not a trivial task.
  • Mobile devices are not that great for holiday pictures (even iPhones).
  • Huge lenses are a waste of money (and heavy).
  • A super sunny day may not always provide for the best photographs.

If you get a travel photograph wrong, it’s very hard to fix it, short of going back to where you took the shot.

Mach bands and the perception of images

Photographs, and the results obtained through image processing, are at the mercy of the human visual system. A machine cannot interpret how visually appealing an image is, because aesthetic perception is different for everyone. Image sharpening takes advantage of one of the tricks of our visual system. Human eyes see what are termed “Mach bands” at the edges of sharp transitions, which affect how we perceive images. This optical illusion was first explained by Austrian physicist and philosopher Ernst Mach (1838–1916) in 1865. Mach discovered how our eyes leverage contrast to compensate for their inability to resolve fine detail. Consider the image below containing ten squares of differing levels of gray.

Notice how the gray squares appear to scallop, with a lighter band on the left, and a darker band on the right of each square? This is an optical illusion – in fact the gray squares are all uniform in intensity. To compensate for the eye’s limited ability to resolve detail, incoming light is processed in such a manner that the contrast between two different tones is exaggerated. This gives the perception of more detail. The dark and light bands seen on either side of each gradation are the Mach bands. Here is an example of what human eyes see:
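A test strip like this is easy to generate yourself. Here is a minimal sketch in Python (assuming NumPy and matplotlib are available) that builds ten uniform gray bands – every pixel within a band has exactly the same value, yet the scalloping still appears:

import numpy as np
import matplotlib.pyplot as plt

# Ten uniform gray bands, light to dark, each 100 pixels wide.
# Every band is perfectly flat, yet the eye sees a lighter fringe on
# one side of each boundary and a darker fringe on the other.
levels = np.linspace(230, 25, 10)
strip = np.repeat(levels, 100)          # one row: 10 bands x 100 px
image = np.tile(strip, (300, 1))        # stack into a 300-row image

plt.imshow(image, cmap='gray', vmin=0, vmax=255)
plt.axis('off')
plt.show()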

What does this have to do with image sharpening? The human brain perceives exaggerated intensity changes near edges – so image sharpening uses this notion to introduce faux Mach bands by amplifying intensity edges. Consider as an example the following image, which basically shows two mountainsides, one behind the other. Without looking too closely you can see the Mach bands.

Taking a profile perpendicular to the mountainsides provides an indication of the intensity values along the profile, and shows the edges.


The profile shows three plateaus, and two cliffs (the cliffs are ignored by the human eye). The first plateau is the foreground mountainside, the middle plateau is the mountainside behind that, and the uppermost plateau is some cloud cover. Now we apply an unsharp masking filter (radius=10, mask weight=0.4) to sharpen the image.

Notice how the UM filter has the effect of adding a Mach band to each of the cliff regions.
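The same overshoot is easy to see in one dimension. Below is a minimal Python sketch (the plateau values are illustrative, not measured from the photograph) that applies the same subtract-and-rescale unsharp masking to a synthetic three-plateau profile:

import numpy as np
from scipy.ndimage import gaussian_filter1d

# Three plateaus and two cliffs, loosely mimicking the mountain profile.
profile = np.concatenate([np.full(100, 60.0),    # foreground mountainside
                          np.full(100, 120.0),   # mountainside behind it
                          np.full(100, 200.0)])  # cloud cover

weight = 0.4
blurred = gaussian_filter1d(profile, sigma=10)
sharpened = (profile - weight * blurred) / (1.0 - weight)

# The sharpened profile overshoots on the bright side of each cliff and
# undershoots on the dark side: artificial Mach bands.
print(profile.max(), sharpened.max())   # the sharpened maximum exceeds 200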


Apple and design

Check out this post on why Apple is Really Bad at Design. Apple has made some nice products over the years, but its hold on design has kind of waned recently. What’s cool about the latest iPhones? Not much. At the end of the day, all mobile devices are just that – mobile devices. They haven’t evolved that much in recent years. Cameras get more complex (dual cameras, optical zoom), and the aesthetics of the phones change – but other than that there is nothing fancy. I certainly wouldn’t use an iPhone as my sole travel camera. Have we come to the point of design blah in the world of digital products?


Image sharpening in colour – how to avoid colour shifts

It is unavoidable – processing colour images with some types of algorithms may cause subtle changes in the colour of an image which affect its aesthetic value. We have seen this with certain unsharp masking parameters in ImageJ. How do we avoid this? One way is to create a more complicated algorithm, but the reality is that without knowing exactly what object is at each pixel, this is impossible. Another way, which is far more convenient, is to use a separable colour space. RGB is not separable. The red, green and blue components must work together to form an image; modify one of them, and it will have an effect on the colour as a whole. However if we use a colour space such as HSV (Hue-Saturation-Value), also known as HSB (Hue-Saturation-Brightness), or CIELab, we can largely avoid colour shifts. This is because these colour spaces separate luminance from colour information, so image sharpening can be performed on the luminance layer only – something known as luminance sharpening.

Luminance, brightness, or intensity can be thought of as the “structural” information in the image. For example, first we convert an image from RGB to HSB, then process only the brightness layer of the HSB image, and finally convert back to RGB.
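Here is a minimal sketch of that pipeline in Python, assuming scikit-image and SciPy are available (‘photo.jpg’ is a placeholder filename, and the Gaussian sigma is used as a stand-in for ImageJ’s radius):

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color, io

rgb = io.imread('photo.jpg') / 255.0        # uint8 image -> float RGB in [0, 1]
hsv = color.rgb2hsv(rgb)                    # HSV == HSB: hue, saturation, value

# Unsharp-mask only the brightness (V) channel; hue and saturation are
# left untouched, so no colour shift can occur.
v = hsv[..., 2]
weight = 0.5
blurred = gaussian_filter(v, sigma=10)
hsv[..., 2] = np.clip((v - weight * blurred) / (1.0 - weight), 0, 1)

sharpened = color.hsv2rgb(hsv)              # back to RGB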

Here is the original image:

Here is the RGB processed image (UM, radius=10, MW=0.5):

Notice the subtle changes in colour in the region surrounding the letters? This sort of colour shift should be avoided. Now below is the HSB-processed image, with the same parameters applied to only the brightness layer:

Notice that there are acuity improvements in both halves of the image; however they are more apparent in the right half, “rent K”. The black objects in the left half have had their contrast improved, i.e. the black got blacker against the yellow background, and hence their acuity has been marginally enhanced.

Different types of unsharp masking

There are various types of image sharpening which come under the banner of “unsharp masking”. Some are subtractive, i.e. they involve subtracting a blurred copy of an image, whilst others are additive, i.e. they add a high-pass filtered image to the original. Let’s look at a few of them, applying them again to the sub-images taken from the coffee image.

First off, let’s look at a simple example of an additive filter. The filter we are going to use is:

The result of this filter is:

Now, if we add this image to the original:

The image has been slightly sharpened, but at the expense of also sharpening any noise in the image (click on the image to see the detail). Here is another additive filter, one which is more commonly used:

And the corresponding result:
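In code, additive sharpening looks something like the sketch below. The exact kernels shown above are not reproduced here; as a stand-in, this uses the common 4-neighbour Laplacian high-pass, whose response is added back to the original:

import numpy as np
from scipy.ndimage import convolve

# 4-neighbour Laplacian high-pass: responds only at intensity edges.
HIGHPASS = np.array([[ 0, -1,  0],
                     [-1,  4, -1],
                     [ 0, -1,  0]], dtype=float)

def additive_sharpen(image):
    """Add the high-pass response to the original uint8 image.
    Equivalent to convolving with [[0,-1,0],[-1,5,-1],[0,-1,0]]."""
    edges = convolve(image.astype(float), HIGHPASS, mode='reflect')
    return np.clip(image + edges, 0, 255).astype(np.uint8)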

Now consider the use of traditional “subtract the blurred image” unsharp masking. It is actually not as easy as just subtracting a blurred version of the image. The diagram below shows the process.

Now the result of using a Gaussian blur filter with a radius of 10 (and k=1) is shown below:
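A minimal sketch of this subtractive pipeline, treating the blur radius as the Gaussian sigma (an approximation – ImageJ’s radius is not defined identically):

import numpy as np
from scipy.ndimage import gaussian_filter

def subtractive_unsharp_mask(image, radius=10, k=1.0):
    """Classic unsharp masking: original + k * (original - blurred)."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + k * (img - blurred), 0, 255).astype(np.uint8)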


Unsharp masking in ImageJ – changing parameters

In the previous post we looked at whether image blur could be fixed, and concluded that some of it could be slightly reduced, but heavy blur likely could not. Here is the image we used, showing blur at two ends of the spectrum.

Now the “Unsharp Mask” filter in ImageJ is not terribly different from that found in other applications. It allows the user to specify a “radius” for the Gaussian blur filter, and a mask weight (0.1–0.9). How does modifying the parameters affect the filtered image? Here are some examples using a radius of 10 pixels, and a variable mask weight.

Radius = 10; Mask weight = 0.25

Radius = 10; Mask weight = 0.5

Radius = 10; Mask weight = 0.75

We can see that as the mask weight increases, the contrast change begins to affect the colour in the image. Our eyes may perceive the “rent K” text to be sharper in the third image with MW=0.75, but the colour has been impacted in such a way that the image aesthetics have been compromised. There is little change to the acuity of the “Mölle” text (apart from the colour contrast). A change in contrast can certainly improve the visibility of details in the image (i.e. they are easier to discern), however maybe not their actual acuity. It is sometimes a trick of the eye.

What if we change the radius? Does a larger radius make a difference? Here is what happens when we use a radius of 40 pixels, and MW=0.25.

Again, the contrast is slightly increased, and perceptual acuity may be marginally improved, but again this is likely due to the contrast element of the filter.

Note that using a small filter size, e.g. 3–5 pixels, in a large image (12–16MP) will have little effect, unless there are features of that size in the image. For example, in an image containing features 1–2 pixels in width (e.g. a macro image), a small filter might be appropriate; however it will likely do very little in a landscape image. (More on this later.)


Can blurry images be fixed?

As I mentioned in a previous post, some photographs contain blur which is very challenging to remove. Large scale blur, which is the result of motion or defocus, can’t really be suppressed in any meaningful manner. What can usually be achieved by means of image sharpening algorithms is that finer structures in an image can be made to look more crisp. Take for example the coffee can image shown below, in which the upper lettering on the label is almost in focus, while the lower lettering has the softer appearance associated with defocus.

The problem with this image is partially the fact that the blur is not uniform. Below are two enlarged regions containing text from opposite ends of the blur spectrum.

Reducing blur involves a concept known as image sharpening (which is different from removing motion blur, a much more challenging task). The easiest technique for image sharpening, and the one most often found in software such as Photoshop, is known as unsharp masking. It is derived from analog photography, and basically works by subtracting a blurry version of an image from the original image. It is by no means perfect – it is problematic in images containing noise, as it tends to accentuate the noise – but it is simple.

Here I am using the “Unsharp Mask” filter from ImageJ. It subtracts a blurred copy of the image and rescales the image to obtain the same contrast of low frequency structures as in the input image. It works in the following manner:

  1. Obtain a Gaussian blurred image, by specifying a blur radius (in the example below the radius = 5).
  2. Scale the blurred image by a “mask weight”, which determines the strength of filtering – a value from 0.1–0.9 (in the example below, the mask weight = 0.4).
  3. Subtract the filtered image from the original image.
  4. Divide the resulting image by (1.0 − mask weight) – 0.6 in the case of the example.

1. Original image; 2. Gaussian blurred image (radius=5); 3. Filtered image (multiplied by 0.4); 4. Subtracted image (original-filtered); 5. Final image (subtracted image / 0.6)
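In code, the four steps look roughly like this (a minimal NumPy/SciPy sketch, again treating the radius as the Gaussian sigma):

import numpy as np
from scipy.ndimage import gaussian_filter

def imagej_style_unsharp_mask(image, radius=5, weight=0.4):
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=radius)   # step 1: Gaussian blur
    filtered = weight * blurred                    # step 2: scale by mask weight
    subtracted = img - filtered                    # step 3: subtract from original
    return subtracted / (1.0 - weight)             # step 4: rescale by 1/(1 - weight)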

If we compare the resulting images, using an enlarged region, we find the unsharp masking filter has slightly improved the sharpness of the text in the image, but this may also be attributed to the slight enhancement in contrast. This part of the original image has less blur though, so let’s apply the filter to the second image.

The original image (left) vs. the filtered image (right)

Below is the result on the second portion of the image. There is next to no improvement in the sharpness of the image. So while it may be possible to slightly improve sharpness where the picture is not badly blurred, excessive blur is impossible to “remove”. Improvements in acuity may be due more to the slight contrast adjustments and how they are perceived by the eye.

In the next post we’ll see if adjusting parameters in ImageJ makes a difference.

The usability of old blenders

In many respects, old household appliances hold quite a lot of information about usability, especially those from the 1960s and 1970s. Although “time-saving” kitchen appliances had already been around since the 1920s, it was when they started to become more complex and offer interface elements that things started to get interesting. Let’s consider the interface of the Oster “dual pulse matic 10” shown below:

The first thing to notice is the easy-to-push buttons – there are five buttons for ten speeds, with the dial alternating between the “Lo” and “Hi” settings. The confusing part of the interface is the fact that there is both an “ON” button, and an “Off” setting on the dial. So one would imagine the first choice was between “PULSE” and “ON”, and if “ON” was chosen then one could set the dial to “Lo” or “Hi”. More confusing still are the 10 blend settings:

Stir → Puree → Whip → Grate → Mix → Chop → Grind → Blend → Liquefy → Frappé

Some of these terms don’t even make sense with respect to blenders. Basically a 10-point scale doesn’t necessarily need a descriptive word for each level. For instance, the word Frappé is not a verb, mix and blend essentially mean the same thing, and blenders don’t really grate food. Here is another blender from Oster, with a simpler On/Off dial, a pulse button, and eight buttons, each with two functions – making the set of functions even more complex.

More ridiculous are some of the labels, like “crumb” and “crush”. So, while the interface has improved in some usability aspects, it still suffers from an inability to indicate what each button really does. Having 16 different functions is also overkill – having only 8, or better still a dial, would work infinitely better. Not that modern blenders are any better. Some use fewer buttons, but still use confusing nomenclature. One of the few with a well-designed user interface is the Vitamix series of blenders – a simple 1–10 dial, an On-Off toggle switch, and a toggle switch to convert between High and Variable speed.


Photographic blur you can’t get rid of

Photographs sometimes contain blur. Sometimes the blur is so bad that it can’t be removed, no matter the algorithm. Algorithms can’t solve everything, even those based on physics. Photography ultimately exists because of glass lenses – you can’t make a practical camera without them. Lenses have aberrations (although modern lenses are pretty good) – some of which can be dealt with in situ using corrective algorithms.

Some of this blur is attributable to vibration – no one has hands *that* steady, and tripods aren’t always convenient. Image stabilization, or vibration reduction, has done a great job of retaining image sharpness. This is especially important in low-light situations where the photograph may require a longer exposure. The rule of thumb is that a camera should not be hand-held at shutter speeds slower than the reciprocal of the lens’s focal length. So a 200mm lens should not be handheld at speeds slower than 1/200 sec.

Sometimes, though, the screen on a digital camera doesn’t tell the full story either. Its resolution may be too low to appreciate the sharpness present in the image – and a small amount of blur can reduce the quality of an image. Here is a photograph taken in a low-light situation which, with the wrong settings, resulted in a longer exposure time, and blur.

Another instance relates to close-up, or macro, photography, where the depth-of-field can be quite shallow. Here is an example of a close-up shot of the handle of a Norwegian mangle board. The central portion of the horse, near the saddle, is in focus; the parts to either side are not – and this form of blur is impossible to suppress. Ideally, in order to have the entire handle in focus, one would have to use a technique known as focus stacking (available in some cameras).

Here is another example of a can where the writing at the top is almost in focus, whereas the writing at the bottom is out of focus – due in part to the angle at which the shot was taken, and the shallow depth of field. It may be possible to sharpen the upper text, but reducing the blur at the bottom would be challenging.