Source: Gizmag
The worst has happened. You receive an emailed kidnap demand with a picture of your loved one in dire straits. You contact the authorities, and in a flash (relatively speaking), they have identified the kidnapper and possibly some accomplices, and are well on their way toward recovering the victim. How did this happen? By identifying the faces of the kidnappers caught in the reflection of your loved one's eyes.
The scenario above isn't yet standard practice, but the basic technology for accomplishing the task now exists. Familiar faces can be recognized from a very small number of pixels, as small as 7 x 10 pixels in one study. A very familiar example appears below. The image on the left has 16 x 20 pixel resolution, while on the right the same image is blurred to make recognition easier.
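The left/right pair described above is easy to reproduce. Here is a minimal sketch using Pillow; the filename and output size are placeholders, not part of the original demonstration:

```python
from PIL import Image

# Load a portrait and drop it to the 16 x 20 pixel resolution mentioned above.
# "face.jpg" is a placeholder filename; any reasonably tight head shot will do.
img = Image.open("face.jpg").convert("L")
tiny = img.resize((16, 20), Image.LANCZOS)

# Blocky version: every pixel enlarged as a hard-edged square.
blocky = tiny.resize((320, 400), Image.NEAREST)

# "Blurred" version: exactly the same 16 x 20 data, but smoothly interpolated.
# Removing the sharp block edges is what makes the face easier to recognize.
smooth = tiny.resize((320, 400), Image.BICUBIC)

blocky.save("face_blocky.png")
smooth.save("face_smooth.png")
```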
It is now commonplace for digital cameras to have 10-50 megapixel CMOS sensors. There is even a smartphone, the Nokia Lumia 1020, that has a 41-MP sensor. (Although this camera automatically generates an oversampled 5-MP image from the raw data, the raw data is still available for use.)
A 50 mm equivalent lens covers a horizontal angle of about 40 degrees. With a 40-MP sensor (and good optics), each pixel is about one-third of a minute of arc in size, enabling resolution about five times more acute than that of the human eye. In addition, a good picture captures everything within the bit depth of the pixels, whereas our eyes have a very small area of high resolution on the retina, and our brains fill in the details, often incorrectly. A camera captures a lot of information which we cannot "see at a glance," or even by careful examination.
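Those figures are easy to check. A quick back-of-the-envelope calculation in Python, assuming a full-frame (36 mm wide) sensor with a 3:2 aspect ratio:

```python
import math

# Full-frame sensor width and a 50 mm lens (the "50 mm equivalent" above).
sensor_width_mm = 36.0
focal_length_mm = 50.0

# Horizontal field of view: roughly 40 degrees, as stated.
fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 40-megapixel sensor with a 3:2 aspect ratio.
pixels_wide = math.sqrt(40e6 * 3 / 2)

arcmin_per_pixel = fov_deg * 60 / pixels_wide
print(f"FOV: {fov_deg:.1f} deg, {arcmin_per_pixel:.2f} arcmin per pixel")
# -> roughly 39.6 deg and about 0.31 arcmin per pixel, i.e. one-third of a
#    minute of arc, versus roughly 1 arcmin for 20/20 human vision.
```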
A study just carried out by Dr. Rob Jenkins of the University of York and Christie Kerr of the University of Glasgow, both in the UK, has found that photographs taken with a high-end camera can capture images reflected from the corneas of the subject being photographed. These reflections can be of high enough quality to identify people by their faces, and, owing to the curvature of the cornea, they cover most of the area in front of the subject. In essence, a fisheye view of the entire region in front of the subject can be found in the image of the subject's eyes.
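To get a feel for the scale involved, here is a rough estimate of how many pixels land on a cornea in such a photograph. The subject distance and pixel pitch are assumptions carried over from the calculation above, not the study's actual setup:

```python
import math

# Rough scale estimate, not the study's actual geometry.
cornea_diameter_mm = 11.5      # typical adult corneal diameter
subject_distance_mm = 1000.0   # assume the subject is about 1 m away
arcmin_per_pixel = 0.31        # from the 40-MP / 50 mm figure above

cornea_angle_arcmin = math.degrees(
    2 * math.atan(cornea_diameter_mm / (2 * subject_distance_mm))) * 60
pixels_across_cornea = cornea_angle_arcmin / arcmin_per_pixel
print(f"{pixels_across_cornea:.0f} pixels across the cornea")
# -> on the order of 130 pixels, so a bystander's face reflected in the eye
#    occupies only a few dozen pixels -- small, but above the 7 x 10 pixel
#    recognition floor mentioned earlier.
```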
The lead photograph provides an excellent example of just how much information can be contained in corneal reflections. One has the definite impression that, if these people were known to an observer, their images would be recognizable.
The Jenkins/Kerr study has shown quite clearly that there is enough information in a corneal reflection taken under rather favorable conditions to identify people near the camera with a good deal of certainty. Two pieces of the puzzle remain before this concept becomes a routine forensic (or surveillance) tool. First, cameras and image processing need to become somewhat more capable, since lighting will not normally be as favorable for capturing corneal reflections as it was in this study. Second, automated facial recognition methods that compare against criminal databases and social media, which have recently achieved remarkable fidelity, must be adapted to the distortions that appear naturally in corneal reflections.
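A minimal sketch of what the front end of such a pipeline might look like, using OpenCV. The filename, crop coordinates, and detector choice are assumptions for illustration; a real system would locate the eyes automatically, correct the corneal distortion, and then match any detected faces against a database, none of which is shown here:

```python
import cv2

# Skeleton of the pipeline described above, not a working identifier:
# crop the corneal region, enhance it, then hand it to an off-the-shelf
# face detector.
photo = cv2.imread("subject.jpg", cv2.IMREAD_GRAYSCALE)

x, y, w, h = 2150, 1380, 140, 140          # assumed corneal bounding box
cornea = photo[y:y + h, x:x + w]

# Upscale and stretch contrast so the dim reflection uses the full pixel range.
cornea = cv2.resize(cornea, None, fx=6, fy=6, interpolation=cv2.INTER_CUBIC)
cornea = cv2.equalizeHist(cornea)

# Off-the-shelf frontal-face detector; a production system would also need to
# undo the fisheye-like distortion introduced by the curved cornea.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(cornea, scaleFactor=1.05, minNeighbors=3)
print(f"candidate faces in the reflection: {len(faces)}")
```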