IRIS RECOGNITION STEPS
Iris recognition relies on the unique patterns of the human iris to identify or verify an individual's identity. An input video stream is captured using an infrared-sensitive CCD camera and a frame grabber. The eye is localized within this video stream using various image processing algorithms, the area of interest, i.e. the iris, is then detected within the eye, and its features are extracted. These features are encoded into a pattern that is stored in the database during enrollment and matched against the database during authentication.
To achieve automated iris recognition, the following are the main steps:
1. Iris Isolation
Locating the iris is not a trivial task, since its intensity is close to that of the sclera and it is often obscured by eyelashes and eyelids. The pupil, however, with its regular shape and uniformly dark appearance, is relatively easy to locate. Because the pupil and iris can be approximated as concentric, the pupil provides a reliable entry point for automatic detection.
The first step in iris detection (also known as preprocessing) is to detect the pupil. The pupil's intensity and location are fairly consistent across images, so it lends itself well to automatic detection. The captured eye image usually contains unwanted parts such as the eyelids, eyelashes, and pupil, which must be excluded from the iris region.
The literature presents multiple ways of detecting the pupil: some use edge information, while others rely on thresholding. In its simplest form, the image is searched for the largest dark circular area, which is treated as the pupil.
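The simplest thresholding approach can be sketched in a few lines of NumPy. This is an illustrative sketch, not any specific published method: the function name, the threshold value, and the synthetic test image are all assumptions, and a real system would also reject non-circular dark regions such as eyelashes.

```python
import numpy as np

def detect_pupil(image, dark_thresh=50):
    """Estimate pupil center and radius from a grayscale eye image.

    Thresholds the image to isolate dark pixels, takes the centroid of
    the dark region as the pupil center, and derives the radius from
    the region's area, assuming a roughly circular pupil.
    """
    dark = image < dark_thresh                  # binary mask of dark pixels
    ys, xs = np.nonzero(dark)
    if len(xs) == 0:
        return None
    cx, cy = xs.mean(), ys.mean()               # centroid ~ pupil center
    radius = np.sqrt(len(xs) / np.pi)           # area = pi * r^2
    return cx, cy, radius

# Synthetic test image: light background with a dark pupil disk.
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
img = np.full((h, w), 180, dtype=np.uint8)      # sclera/iris background
img[(xx - 100) ** 2 + (yy - 90) ** 2 < 30 ** 2] = 20  # pupil at (100, 90), r = 30

cx, cy, r = detect_pupil(img)
print(int(round(cx)), int(round(cy)), int(round(r)))  # ≈ 100 90 30
```

The area-based radius estimate is what makes the circularity assumption explicit: if the dark region is not a disk, the recovered radius will not match its extent.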
The circular Hough transform, a standard computer vision algorithm, is commonly employed to deduce the radius and center coordinates of the pupil and iris regions. Some researchers have made use of the parabolic Hough transform to detect the eyelids, approximating the upper and lower eyelids with parabolic arcs.
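The circular Hough transform can be demonstrated with a minimal pure-NumPy accumulator. Each edge point votes for every center that would place it on a circle of a candidate radius; the accumulator cell with the most votes gives the circle parameters. This is a sketch for exposition (function names and sampling densities are assumptions); production code would normally use an optimized library implementation.

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Circular Hough transform over a list of (x, y) edge points."""
    h, w = shape
    best = (0, None)                                    # (votes, (cx, cy, r))
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for r in radii:
        acc = np.zeros((h, w), dtype=np.int32)
        for (x, y) in edge_points:
            # Candidate centers lie on a circle of radius r around (x, y).
            cxs = np.round(x - r * np.cos(thetas)).astype(int)
            cys = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (cxs >= 0) & (cxs < w) & (cys >= 0) & (cys < h)
            np.add.at(acc, (cys[ok], cxs[ok]), 1)       # cast votes
        if acc.max() > best[0]:
            cy, cx = np.unravel_index(acc.argmax(), acc.shape)
            best = (acc.max(), (cx, cy, r))
    return best[1]

# Edge points sampled from a circle centered at (60, 50) with radius 20.
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(int(round(60 + 20 * np.cos(a))), int(round(50 + 20 * np.sin(a))))
       for a in angles]

cx, cy, r = hough_circle(pts, (120, 120), radii=[18, 19, 20, 21, 22])
print(cx, cy, r)                                        # near 60 50 20
```

In a real pipeline the edge points would come from an edge detector applied to the eye image, and the radius range would be constrained by the expected pupil and iris sizes.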
Daugman’s integro-differential operator and active contour models are among the other algorithms proposed in the literature. Since pupil detection is a comparatively simple problem, most methods work well enough.
Once the location of the pupil is clearly defined, the complexity of locating the iris is somewhat reduced due to the relative concentricity of the pupil and iris. In contrast to pupil detection, efficiently locating the iris is more complicated because of (i) the obstruction of the iris by the eyelids in most eyes, (ii) its irregular pattern, and (iii) the similarity in intensity between the iris boundary and the sclera.
The outer boundary of the iris is detected with the help of the pupil center. The binarized image is taken, and concentric circles of different radii are drawn about the pupil center. For each circle, the change in intensity along the normal direction (pointing toward and away from the center) is measured; the radius with the highest change in intensity is taken as the outer boundary.
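The concentric-circle search above can be sketched as follows. The sketch works on a grayscale image rather than a binary one for clarity, and measures the change in mean intensity between consecutive radii; the function name, sampling counts, and synthetic image are assumptions for illustration.

```python
import numpy as np

def iris_boundary(image, cx, cy, r_min, r_max):
    """Find the iris/sclera boundary as the radius with the largest
    jump in mean intensity between consecutive concentric circles."""
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    means = []
    for r in range(r_min, r_max):
        xs = np.round(cx + r * np.cos(thetas)).astype(int)
        ys = np.round(cy + r * np.sin(thetas)).astype(int)
        means.append(image[ys, xs].mean())      # mean intensity on circle r
    diffs = np.diff(means)                      # intensity change with radius
    return r_min + int(np.argmax(diffs))        # largest change -> boundary

# Synthetic eye: pupil (20), iris annulus (90), sclera (200), center (100, 100).
h = w = 200
yy, xx = np.mgrid[0:h, 0:w]
d2 = (xx - 100) ** 2 + (yy - 100) ** 2
img = np.full((h, w), 200, dtype=np.uint8)      # sclera
img[d2 < 60 ** 2] = 90                          # iris, radius 60
img[d2 < 25 ** 2] = 20                          # pupil, radius 25

b = iris_boundary(img, 100, 100, 30, 90)
print(b)                                        # close to 60, the iris/sclera edge
```

This radial intensity-change search is essentially a discrete version of the idea behind Daugman's integro-differential operator, which also looks for the radius maximizing the intensity derivative along concentric contours.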
An image of an eye has roughly three intensity levels, from darkest to lightest: the pupil, the iris, and the sclera together with the eyelids. Hence a suitable thresholding method can separate the eyelids from the rest of the image. Typically, eyelid detection is incorporated into the iris-finding algorithm so that pixels on the eyelids are ignored, with the eyelid boundary locations re-estimated on each iteration. This approach has the advantage that no separate algorithm is needed to find the eyelids.
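The three-band intensity split can be expressed as a two-threshold classification. The threshold values below are hypothetical (real thresholds would be chosen per image, e.g. from the histogram), and the function name is an assumption:

```python
import numpy as np

def classify_regions(image, t_pupil=60, t_iris=140):
    """Label each pixel 0 (pupil), 1 (iris), or 2 (sclera/eyelid)
    using two intensity thresholds."""
    return np.digitize(image, bins=[t_pupil, t_iris])

img = np.array([[20, 90, 200],
                [55, 130, 180]], dtype=np.uint8)
print(classify_regions(img))
# [[0 1 2]
#  [0 1 2]]
```

Pixels labeled 2 that fall inside the fitted iris circle are the candidates for eyelid occlusion and are masked out of the feature extraction.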
2. Iris Normalization
Iris normalization is done in order to make the representation independent of the dimensions of the input image.
Once the iris region is successfully segmented from an eye image, the next stage is to transform the iris region so that it has fixed dimensions, in order to allow comparisons. The dimensional inconsistencies between eye images are mainly due to the stretching of the iris caused by pupil dilation under varying levels of illumination. Other sources of inconsistency include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The normalization process produces iris regions with the same constant dimensions, so that two photographs of the same iris taken under different conditions will have their characteristic features at the same spatial locations.
For normalization, Daugman’s Rubber Sheet Model, the image registration technique proposed by Wildes et al., and the virtual circles approach proposed by Boles are employed.
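The core idea of the rubber sheet model, remapping the iris annulus to a fixed-size rectangle so the result is independent of pupil dilation and image size, can be sketched as below. This is a simplified version assuming concentric pupil and iris circles (Daugman's full model also handles non-concentric boundaries), and the function name and output dimensions are illustrative choices:

```python
import numpy as np

def rubber_sheet(image, cx, cy, r_pupil, r_iris, n_radial=32, n_angular=128):
    """Rubber-sheet-style normalization: remap the iris annulus to a
    fixed n_radial x n_angular rectangle.  Each output row samples a
    circle interpolated between the pupil and iris boundaries."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    out = np.zeros((n_radial, n_angular), dtype=image.dtype)
    for i, rho in enumerate(np.linspace(0, 1, n_radial)):
        r = (1 - rho) * r_pupil + rho * r_iris   # between the two boundaries
        xs = np.round(cx + r * np.cos(thetas)).astype(int)
        ys = np.round(cy + r * np.sin(thetas)).astype(int)
        out[i] = image[np.clip(ys, 0, image.shape[0] - 1),
                       np.clip(xs, 0, image.shape[1] - 1)]
    return out

# Unwrap a synthetic iris annulus (value 90) between pupil and iris circles.
h = w = 200
yy, xx = np.mgrid[0:h, 0:w]
d2 = (xx - 100) ** 2 + (yy - 100) ** 2
img = np.full((h, w), 200, dtype=np.uint8)       # sclera
img[d2 < 60 ** 2] = 90                           # iris, radius 60
img[d2 < 25 ** 2] = 20                           # pupil, radius 25

strip = rubber_sheet(img, 100, 100, 26, 59)
print(strip.shape)                               # (32, 128), fixed output size
```

Because every input, regardless of pupil dilation or camera distance, maps to the same 32 x 128 strip, the encoded feature patterns of two images of the same iris line up for matching.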