Apple and design

Check out this post on why Apple is Really Bad at Design. Apple has made some nice products over the years, but its hold on design has somewhat waned recently. What’s cool about the latest iPhones? Not much. At the end of the day, all mobile devices are just that – mobile devices. They haven’t evolved that much in recent years. Cameras get more complex (dual cameras, optical zoom), and the aesthetics of the phones change – but beyond that there is little that is new. I certainly wouldn’t use an iPhone as my sole travel camera. Have we come to the point of design blah in the world of digital products?

Image sharpening in colour – how to avoid colour shifts

It is unavoidable – processing colour images with some types of algorithms can cause subtle shifts in the colour of an image which affect its aesthetic value. We have seen this with certain unsharp masking parameters in ImageJ. How do we avoid this? One way is to create a more complicated algorithm, but the reality is that without knowing exactly what object is at each pixel, this is impossible. Another way, which is far more convenient, is to use a separable colour space. RGB is not separable: the red, green and blue components must work together to form an image, and modifying one of them has an effect on the rest. However if we use a colour space such as HSV (Hue-Saturation-Value), HSB (Hue-Saturation-Brightness) or CIELAB, we can avoid colour shifts altogether. This is because these colour spaces separate luminance from colour information, so image sharpening can be performed on the luminance layer only – something known as luminance sharpening.

Luminance, brightness, or intensity can be thought of as the “structural” information in the image. For example, we first convert an image from RGB to HSB, process only the brightness layer, then convert back to RGB.
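Here is a minimal sketch of the process in Julia, assuming the Images.jl and ImageFiltering.jl packages (Julia’s colour types use HSV, which is the same model as HSB; the filename and parameters are placeholders):

using Images, ImageFiltering

img = load("coffee.png")                       # placeholder RGB input
hsv = HSV.(img)                                # separate colour from brightness

# Sharpen only the value (brightness) channel with a simple unsharp mask.
v = [p.v for p in hsv]
blurred = imfilter(v, Kernel.gaussian(10))
vsharp = clamp.(v .+ 0.5 .* (v .- blurred), 0, 1)

# Recombine the untouched hue/saturation with the sharpened brightness.
out = RGB.(HSV.([p.h for p in hsv], [p.s for p in hsv], vsharp))
save("coffee_lumsharp.png", out)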

Here is the original image:

Here is the RGB processed image (unsharp masking (UM), radius=10, mask weight (MW)=0.5):

Notice the subtle changes in colour in the region surrounding the letters? This sort of colour shift should be avoided. Now below is the HSB processed image, using the same parameters applied only to the brightness layer:

Notice that there are acuity improvements in both halves of the image, though they are more apparent in the right half (“rent K”). The black objects in the left half have had their contrast improved, i.e. the black got blacker against the yellow background, and hence their acuity has been marginally enhanced.

Different types of unsharp masking

There are various types of image sharpening which come under the banner of “unsharp masking”. Some are subtractive, i.e. they involve subtracting a blurred copy of an image, whilst others are additive, i.e. they add a high-pass filtered image to the original. Let’s look at a few of them, applying them again to the sub-images taken from the coffee image.

First off, let’s look at a simple example of an additive filter. The filter we are going to use is:

The result of this filter is:

Now, if we add this image to the original:

The image has been slightly sharpened, but at the expense of also sharpening any noise in the image (click on the image to see the detail). Here is another additive filter, one which is more commonly used:

And the corresponding result:
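Since the filter kernels above appear only as figures, here is a rough sketch of additive sharpening in Julia, assuming the Images.jl/ImageFiltering.jl packages and a common 3×3 Laplacian high-pass kernel (not necessarily the exact kernels shown above); the filename is a placeholder:

using Images, ImageFiltering

img = Float64.(Gray.(load("coffee.png")))    # placeholder input, as grayscale

# A common 3x3 Laplacian high-pass kernel.
highpass = centered([ 0.0 -1.0  0.0;
                     -1.0  4.0 -1.0;
                      0.0 -1.0  0.0])

detail = imfilter(img, highpass)             # extract high-frequency detail
sharpened = clamp.(img .+ detail, 0.0, 1.0)  # add the detail back to the original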

Now consider the use of traditional “subtract the blurred image” unsharp masking. It is actually not as easy as just subtracting a blurred version of the image. The diagram below shows the process.

Now the result of using a Gaussian blur filter with a radius of 10 (and k=1) is shown below:
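In code, the subtractive process amounts to something like the following sketch, assuming the standard formulation sharpened = original + k·(original − blurred); the filename is a placeholder:

using Images, ImageFiltering

img = Float64.(Gray.(load("coffee.png")))     # placeholder input, as grayscale
blurred = imfilter(img, Kernel.gaussian(10))  # Gaussian blur, radius (sigma) = 10
k = 1.0                                       # gain applied to the detail
sharpened = clamp.(img .+ k .* (img .- blurred), 0.0, 1.0)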

Unsharp masking in ImageJ – changing parameters

In the previous post we looked at whether image blur could be fixed, and concluded that some of it could be slightly reduced, but heavy blur likely could not. Here is the image we used, showing blur at two ends of the spectrum.

Now the “Unsharp Mask” filter in ImageJ is not terribly different from that found in other applications. It allows the user to specify a “radius” for the Gaussian blur filter, and a mask weight (0.1-0.9). How does modifying these parameters affect the filtered image? Here are some examples using a radius of 10 pixels and a variable mask weight.

Radius = 10; Mask weight = 0.25

Radius = 10; Mask weight = 0.5

Radius = 10; Mask weight = 0.75

We can see that as the mask weight increases, the contrast change begins to affect the colour in the image. Our eyes may perceive the “rent K” text to be sharper in the third image with MW=0.75, but the colour has been impacted in such a way that the image aesthetics have been compromised. There is little change to the acuity of the “Mölle” text (apart from the colour contrast). A change in contrast can certainly improve the visibility of detail in an image (i.e. it is easier to discern), but maybe not the actual acuity. It is sometimes a trick of the eye.
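For those who want to experiment outside of ImageJ, the sweep can be scripted. Here is a sketch in Julia, assuming Images.jl/ImageFiltering.jl and the (original − w·blurred)/(1 − w) formulation described in the previous post; the filename is a placeholder:

using Images, ImageFiltering

img = Float64.(Gray.(load("coffee.png")))   # placeholder input, as grayscale

# ImageJ-style unsharp mask: subtract the weighted blur, rescale by (1 - w).
unsharp(img, radius, w) =
    clamp.((img .- w .* imfilter(img, Kernel.gaussian(radius))) ./ (1 - w), 0, 1)

# Sweep the mask weight at a fixed radius of 10 pixels.
for w in (0.25, 0.5, 0.75)
    save("coffee_r10_mw$(w).png", Gray.(unsharp(img, 10, w)))
end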

What if we change the radius? Does a larger radius make a difference? Here is what happens when we use a radius of 40 pixels and MW=0.25.

Again, the contrast is slightly increased, and perceptual acuity may be marginally improved, but again this is likely due to the contrast element of the filter.

Note that using a small filter size, e.g. 3-5 pixels, in a large image (12-16MP) will have little effect, unless there are features in the image of that size. For example, in an image containing features 1-2 pixels in width (e.g. a macro image), this might be appropriate, but it will likely do very little in a landscape image. (More on this later.)

Can blurry images be fixed?

As I mentioned in a previous post, some photographs contain blur which is very challenging to remove. Large-scale blur, the result of motion or defocus, can’t really be suppressed in any meaningful manner. What can usually be achieved by means of image sharpening algorithms is that finer structures in an image can be made to look more crisp. Take for example the coffee can image shown below, in which the upper lettering on the label is almost in focus, while the lower lettering has the softer appearance associated with defocus.

The problem with this image is partially the fact that the blur is not uniform. Below are two enlarged regions containing text from opposite ends of the blur spectrum.

Reducing blur involves a concept known as image sharpening (which is different from removing motion blur, a much more challenging task). The easiest technique for image sharpening, and the one most often found in software such as Photoshop, is known as unsharp masking. It is derived from analog photography, and basically works by subtracting a blurry version of an image from the original image. It is by no means perfect, and is problematic in images where there is noise, as it tends to accentuate the noise, but it is simple.

Here I am using the “Unsharp Mask” filter from ImageJ. It subtracts a blurred copy of the image and rescales the image to obtain the same contrast of low frequency structures as in the input image. It works in the following manner:

  1. Obtain a Gaussian-blurred image, by specifying a blur radius (in the example below, radius = 5).
  2. Multiply the blurred image by the “Mask Weight”, which determines the strength of filtering – a value from 0.1-0.9 (in the example below, mask weight = 0.4).
  3. Subtract the filtered image from the original image.
  4. Divide the resulting image by (1.0 − mask weight) – 0.6 in the case of the example. (These steps are sketched in code below the figure.)

1. Original image; 2. Gaussian blurred image (radius=5); 3. Filtered image (multiplied by 0.4); 4. Subtracted image (original-filtered); 5. Final image (subtracted image / 0.6)
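Here is a sketch of those four steps in Julia, assuming Images.jl/ImageFiltering.jl, a grayscale input, and ImageJ’s radius used as the Gaussian sigma; the filename is a placeholder:

using Images, ImageFiltering

img = Float64.(Gray.(load("coffee.png")))             # placeholder input, as grayscale

function unsharp_mask(img; radius = 5, weight = 0.4)
    blurred = imfilter(img, Kernel.gaussian(radius))  # 1. Gaussian-blurred image
    scaled  = weight .* blurred                       # 2. multiply by the mask weight
    diff    = img .- scaled                           # 3. subtract from the original
    clamp.(diff ./ (1.0 - weight), 0.0, 1.0)          # 4. divide by (1 - mask weight)
end

result = unsharp_mask(img; radius = 5, weight = 0.4)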

If we compare the resulting images, using an enlarged region, we find the unsharp masking filter has slightly improved the sharpness of the text in the image, but this may also be attributed to the slight enhancement in contrast. This part of the original image has less blur though, so let’s apply the filter to the second image.

The original image (left) vs. the filtered image (right)

Below is the result on the second portion of the image. There is next to no improvement in the sharpness of the image. So while it may be possible to slightly improve sharpness where the picture is not badly blurred, excessive blur is impossible to “remove”. Improvements in acuity may be due more to the slight contrast adjustments and how they are perceived by the eye.

In the next post we’ll see if adjusting parameters in ImageJ makes a difference.

The usability of old blenders

In many respects, old household appliances hold quite a lot of information about usability, especially those from the 1960s and 1970s. Although “time-saving” kitchen appliances had been around since the 1920s, it was when they started to become more complex and offer interface elements that things got interesting. Let’s consider the interface of the Oster “dual pulse matic 10” shown below:

The first thing to notice is the easy-to-push buttons – there are five buttons for 10 speeds, with the dial’s “Lo” and “Hi” settings alternating between them. The confusing part of the interface is that there is both an “ON” button and an “Off” setting on the dial. So one would imagine that the first choice is between “PULSE” and “ON”, and if “ON” is chosen, the dial can then be set to “Lo” or “Hi”. More confusing are the 10 blend settings:

Stir → Puree → Whip → Grate → Mix → Chop → Grind → Blend → Liquefy → Frappé

Some of these terms don’t even make sense with respect to blenders. Basically, a 10-point scale doesn’t necessarily need a descriptive word for each level. For instance, the word Frappé is not a verb, mix and blend essentially mean the same thing, and blenders don’t really grate food. Here is another blender from Oster, with a simpler On/Off dial, a pulse button, and eight buttons, each with two functions – making the number of functions even more complex.

More ridiculous are some of the labels, like “crumb” and “crush”. So, while the interface has improved in some usability aspects, it still suffers from an inability to indicate what each button really does. Having 16 different functions is also overkill – having only 8, or better still a dial, would work infinitely better. Not that modern blenders are any better. Some use fewer buttons, but still use confusing nomenclature. One of the few with a well-designed user interface is the Vitamix series of blenders – a simple 1-10 dial, an On-Off toggle switch, and a toggle switch to convert between High and Variable speed.

Photographic blur you can’t get rid of

Photographs sometimes contain blur. Sometimes the blur is so bad that it can’t be removed, no matter the algorithm. Algorithms can’t solve everything, even those based on physics. Photography ultimately exists because of glass lenses – you can’t make any sort of camera without them. Lenses have aberrations (although modern lenses are pretty good) – some of these can be dealt with in-camera using corrective algorithms.

Some of this blur is attributable to vibration – no one has hands *that* steady, and tripods aren’t always convenient. Image stabilization, or vibration reduction, has done a great job of retaining image sharpness. This is especially important in low-light situations where the photograph may require a longer exposure. The rule of thumb is that a camera should not be hand-held at shutter speeds slower than the reciprocal of the equivalent focal length of the lens. So a 200mm lens should not be handheld at speeds slower than 1/200 sec.
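As a quick worked version of the rule in Julia (the crop-factor keyword is my own addition, for non-full-frame sensors):

# Reciprocal rule of thumb: slowest "safe" hand-held shutter speed, in seconds.
min_shutter(focal_mm; crop = 1.0) = 1 / (focal_mm * crop)

min_shutter(200)              # 0.005 s, i.e. 1/200 sec for a 200mm lens
min_shutter(50, crop = 1.5)   # 1/75 sec for a 50mm lens on an APS-C body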

Sometimes though, the screen on a digital camera doesn’t tell the full story either. Its resolution may be too low to appreciate the sharpness present in the image – and a small amount of blur can reduce the quality of an image. Here is a photograph taken in a low-light situation which, with the wrong settings, resulted in a longer exposure time, and blur.

Another instance relates to close-up, or macro, photography, where the depth-of-field can be quite shallow. Here is an example of a close-up shot of the handle of a Norwegian mangle board. The central portion of the horse, near the saddle, is in focus; the parts to either side are not – and this form of blur is impossible to suppress. Ideally, in order to have the entire handle in focus, one would have to use a technique known as focus stacking (available in some cameras).

Here is another example, of a can where the writing at the top is almost in focus, whereas the writing at the bottom is out-of-focus – due in part to the angle the shot was taken at, and the shallow depth of field. It may be possible to sharpen the upper text, but reducing the blur at the bottom may be challenging.

Is blogging an academic pursuit?

I don’t know many academics who have a blog. They are probably too busy writing important journal papers. Likely. I’m probably one of a handful. Why? Probably because in the overall scope of things it’s not considered a very “academic” activity. One is better off writing articles for “peer-reviewed” journals. Or so they would have you think. I doubt that many people read my blog, but likely more than those that read any article I have ever written. Did any of them change the world? No – they were never designed to. Nobody writes life changing journal articles. Don’t get me wrong, I enjoy writing, but I like to tell a story more than I like to pontificate about things I find to be wonderful. “This paper discusses the algorithm I created for… blah blah blah… and it works the best!”. If your algorithm could absorb CO2, or clean the oceans of plastic waste I would be impressed. But it can’t. Much research is meaningless, or at least hard to fathom for the average person (is research into the origins of the universe really purposeful?). Some of it is of course useful, e.g. research into things that make our lives better (yeah and that doesn’t include the latest iPhone).

Blogging is interesting because you can talk about snippets of things – that may ultimately be useful, or not. It’s like a notebook of ideas, concepts that can be put out into the ether – hopefully written in some manner that is accessible to everyone – that’s really the whole point of education, isn’t it? That, and it’s a readily-accessible format.

Blogging *is* academic writing and publication. The world has moved on from the likes of boring long-winded articles. And the reality is that blogging can help improve writing skills – not everyone has the ability to write good text – especially those from the science fields who don’t have the benefit of writing copious essays like those in the humanities. Blog posts are small pieces of text – not a 5000 word essay, most are likely 100 words or less. But they are good at discussing various points of interest, and especially good for writing books, because they allow you to concentrate ideas into a simple note that can be expanded upon at a later date. Blogging requires you to be concise. Posts that are far too long will be ignored – it should be about bullet points, lists, and short paragraphs. Some academic writing involves taking one point and making an argument around it – so blogging can actually help support those other things you write (on paper). It is also easy to include things like images, and pieces of code (very important for computer scientists).

Blogging has improved my writing and analytical skills, and blogging allows me to share ideas quickly. A journal paper may take me 14 months to write. So is it an academic pursuit? Yes. Even more so, blogging is a bridge between academia and the world. And yes, maybe blogging doesn’t work in fields like chemistry (or maybe it does?). But in computer science, a field which moves crazily fast, blogging just makes sense. No-one will publish an article on the usability of the iPhone X, but is it important? Unequivocally, yes. Journals don’t publish articles reviewing programming languages, or debating the validity of recursion. So blogging fills a void, so to speak. It also allows me to communicate information with students taking my classes. If someone asks a question about a technical issue, it is easier to write a blog post that benefits everyone than to reply in a single email.

Blogging is also fun, and who doesn’t like fun?

Writing a textbook ≠ get-rich-quick scheme

Ever been to a bookstore and wandered through the cookbook section? There are literally hundreds of titles (even more online), with more published every week. Ever wonder if the authors make money? The answer is that most likely they don’t, and the reasons are simple. Publishing a book costs money. Books have to be edited, printed, marketed, and distributed. So paying $35 for a hardcover cookbook is a bargain. One cookbook shop owner once told me that authors make their money off speaking engagements, demos, etc. Makes sense.

Now writing a textbook is slightly different, as the audience for the book is more constrained. Textbooks are also often overpriced, and exist in a captive market, i.e. students often *have* to buy them, and hence both publishers and university bookshops make $$$, while authors make very little. Of course many students who buy a $100 textbook think the author is making a fortune off it. The opposite is often true. Most authors get a royalty of roughly 10% of the wholesale price of the book. How does this work? Well, I’ll explain.

A few years ago I wrote a textbook on programming for my introductory class, “Confessions of a Coding Monkey“. It was meant to provide an easy-to-understand guide to programming, with a whole series of case studies. It was fun to write. However, I didn’t quite understand the intricacies of publishing when it came to pricing the book. The book sold at the campus bookstore for C$110. The wholesale price was somewhere around C$60. So the bookstore made $50, for basically having a book sit on a shelf. Amazing, right? My royalty was 10%, so C$6 per book. So:

Publisher: $54
Bookstore: $50
Author: $6

Now, I get that publishing costs money. But the author gets basically 5% of the price the student pays. Then of course you have to pay taxes, and it’s worse if your publisher is in the US. So if you are lucky, of that C$6 you end up with C$3. Barely enough for a coffee. Now, you may end up selling thousands of books. Let’s say you’re lucky and sell 3000 copies over a five-year period. That means you likely make $9,000 after tax. How much effort did it take to write? Six months? A year? Probably not enough to retire on. You do the math on how much the publisher makes, and the bookstore. Oh, and my book was printed as a paperback in black-and-white, and I did all the illustrations (bar the cover).
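Putting the post’s numbers together in a quick Julia snippet (all amounts in C$; the 50% tax/withholding figure is the rough estimate from above):

price     = 110.0               # campus bookstore price
wholesale = 60.0                # wholesale price
royalty   = 0.10 * wholesale    # author's cut: C$6 per copy
net       = royalty / 2         # roughly C$3 once taxes/withholding are paid
copies    = 3000

println("bookstore keeps: ", price - wholesale)    # C$50 per copy
println("publisher keeps: ", wholesale - royalty)  # C$54 per copy
println("author nets:     ", net * copies)         # about C$9,000 over five years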

People more often than not write books for the love of it. Sure, some authors do make money. Chef Jamie Oliver has over 20 books to his credit, and is hugely successful. His 30-Minute Meals book sold in excess of 1.5 million copies. Having experienced the whole process of publishing with a large publisher, I would never do it again – I’m planning on self-publishing the book I am currently working on (a hybrid book on digital photography/image processing), and offering it as inexpensively as possible to everyone, probably through something like Blurb.

P.S. Interested in more info? Read this article, Everything You Wanted to Know About Book Sales (But Were Afraid to Ask).
P.P.S. From what I gather, the textbook authors who *do* make money are those who write books in fields like math, where textbooks are widely used and go through a huge number of editions.

Why learning to code is easier in Julia (iii)

Now some will argue that the true tenets of programming aren’t being taught, but the reality is that many people want to learn how to program, not become programmers. No different from people who want to learn Norwegian, but don’t want to become translators. Explaining how the image becomes inverted is easy as well. First, propose a problem:

Problem: Given a digital image, write a program to derive the negative of the image.

Now explain some of the concepts:

Explanation: The digital image is in the form of a “text image”, where each “pixel” in the image has an “intensity” value from 0 to 255. So black has the value 0, white has the value 255, and 1 to 254 are various shades of gray from black to white. Here is an example, with the values of four pixels shown:

The negative of an image is its inverse – black becomes white, white becomes black, etc. This is achieved by subtracting 255 from each pixel and taking the absolute value of the result. Here is the result of calculating the negative of the example above:

So, for the pixel with an intensity value of 115 in the original: 115 − 255 = −140, the absolute value of which is 140. Hence 140 is the negative value of the original pixel.

Now write the algorithm:

  1. Read in the text image from a file.
  2. Calculate the negative of the image.
  3. Write the negative image to a file.

Finally show how the program is written:

using DelimitedFiles                      # readdlm/writedlm (needed in Julia >= 0.7)

image = readdlm("forthenry.txt", Int16)   # 1. read in the text image
imageNeg = abs.(image .- 255)             # 2. calculate the negative (element-wise)
writedlm("forthenryNEG4.txt", imageNeg)   # 3. write the negative image to a file

Three lines of code (plus an import in newer versions of Julia).