An example of the Hough transform – pupil segmentation

In a previous post I talked about using the Hough transform to count logs in a pile. Now let’s turn our sights to a simpler problem – extracting the pupil from an image of an eye, the first step in many eye-tracking algorithms. Extracting the pupil from an image of the eye is usually quite simple, even though taking an image of the eye is not without problems: it can capture reflections of whatever the eye is looking at, as well as lighting artifacts. Here is a sample image (from Wikipedia).

The pupil is normally uniformly dark in colour, while the iris which surrounds it is more variegated, but still well differentiated from the remainder of the eye. This is of course a near perfect image, because both the pupil and the iris are clearly visible. In many images of the eye, portions of the iris may be obstructed by the eyelids.

So extracting the pupil must be an easy task using the Hough transform, right? Think again. Even beautifully round objects are hard to extract automatically, partly because of the number of different parameters required by a function like OpenCV’s cv2.HoughCircles(). Some parameters control the edge-based segmentation performed on the grayscale version of the above image, and getting those parameters right often requires intensive tweaking. Once you have found the pupil in one image, try replicating that in another. It’s not easy. Oh, it works fine in many of the examples showing how to use the function, because those are often easy examples. It is a perfect illustration of an algorithm that seems useful, until you actually use it. There is no free lunch.

Here is an example of using the Hough transform. First we take an image containing some essentially randomly placed black circles on a white background. It should be an easy task to find the circles using a Hough transform.

Below is the code we used:

import cv2
import numpy as np

filename = input('Enter a file name: ')

# Read in grayscale version of image
imgG = cv2.imread(filename, 0)
# Read in colour version of image
imgO = cv2.imread(filename)

# Process the image for circles using the Hough transform
circles = cv2.HoughCircles(imgG, cv2.HOUGH_GRADIENT, 1, 50,
              param1=30, param2=15, minRadius=0, maxRadius=0)

# Determine if any circles were found
if circles is None:
    print("No circles found")
else:
    # convert the (x, y) coordinates and radius
    # of the circles to integers
    circles = np.round(circles[0, :]).astype("int")

    # draw the circles in cyan (BGR) on the colour image
    for (x, y, r) in circles:
        cv2.circle(imgO, (x, y), r, (255, 255, 0), 1)

    # display the image
    cv2.imshow("output", imgO)
    cv2.waitKey(0)

Below is the result, with the cyan circles denoting the circles found by the Hough transform.

It’s by no means perfect, *but* it has found all the circles. So now let’s test the same code on the eye image. First we convert the image to grayscale (done in the program), and then apply HoughCircles() with the same parameters. Here is the result:

Did it find the pupil? … Not really. It found plenty of things that aren’t circular objects, and while it got close, it certainly didn’t outline the pupil in any effective manner (never mind all the false positives). What if we preprocess the image? Say, convert it to grayscale, then apply a local threshold (say the algorithm of Phansalkar). This is the preprocessed image:

Now here is the Hough transformed image:

Worse? Oh, so much worse. It has turned every eyelash into a potential circle (because they are arcs, I would guess). Whether or not it actually found the pupil is hard to tell. We could likely get better results by tweaking the parameters – but how long would that take?

Another approach might be to apply Canny edge detection before submitting the image to cv2.HoughCircles(). Will this work? The problem is that Canny inherently has parameters too, in the guise of its high and low thresholds. This problem can be alleviated by using Otsu thresholding to determine the thresholds for Canny [1]. Here is the modified code:

import cv2
import numpy as np

filename = input('Enter a file name: ')

imgG = cv2.imread(filename, 0)
imgO = cv2.imread(filename)

# Use Otsu's threshold as the high Canny threshold,
# and half of it as the low threshold
thV, thI = cv2.threshold(imgG, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
highTH = thV
lowTH = thV / 2

# Find the binary image with edges from the thresholded image
imgE = cv2.Canny(imgG, threshold1=lowTH, threshold2=highTH)
cv2.imwrite('eyeCanny.png', imgE)

# Process the image for circles using the Hough transform
circles = cv2.HoughCircles(imgE, cv2.HOUGH_GRADIENT, 2, 30,
          param1=30, param2=150, minRadius=0, maxRadius=150)

# Determine if any circles were found
if circles is None:
    print("No circles found")
else:
    # convert the (x, y) coordinates and radius of the
    # circles to integers
    circles = np.round(circles[0, :]).astype("int")

    # draw the circles in cyan (BGR)
    for (x, y, r) in circles:
        cv2.circle(imgO, (x, y), r, (255, 255, 0), 1)

    cv2.imwrite("houghOutput.png", imgO)

We have modified some of the HoughCircles() parameters, and set maxRadius to 150. Here is the output from the Canny edge detector:

Here is the subsequent output from the Hough transform:

Did it find the pupil? No. So it seems that extracting the pupil from an image of the eye is not exactly a trivial task using a circular Hough transform.

[1] The Study of an Application of Otsu Method in Canny Operator


2 thoughts on “An example of the Hough transform – pupil segmentation”

  1. Hello, nice try. I’m trying to find the pupil too, and I also can’t. In the end, did you manage to find the pupil?
    I’m working on my graduation project and I really need to do this. Any help would be awesome! Thank you in advance.

    1. The Hough transform is not ideal… check out the post on “SEGMENTING EYES THROUGH THRESHOLDING”. Like many tasks in image processing, extracting pupils seems easy, but may be inherently challenging.
