Incorporating news from O plus E magazine, Tokyo
TSUKUBA CITY: Researchers at the Institute of Information Sciences and Electronics at the University of Tsukuba have developed an identification system that uses neural networks to distinguish images of faces. The system can automatically form groups of similar images without being told which facial image belongs to which person (see figure). It uses an algorithm derived from a neural network called a self-organizing feature map.
The face images are made of pixels. A self-organizing map (SOM) is a type of neural network in which the gradient of the intensity of each pixel is treated as one component of an input vector; a neuron responds to the entire vector. The neurons are arranged in an orderly fashion in two-dimensional (2-D) space. When the 2-D array is in its initial state, without any prior input, a neuron responds to an input image without regard to the state of its neighboring neurons. Each neuron has a weight vector, and the neuron that responds to an image is the one whose weight vector is closest to the input image vector. When a neuron in the 2-D neuron space responds to an image, the weights of its neighboring neurons are updated so that they also respond to that image. The neurons learn in this manner. As the process is repeated, the neurons in the 2-D space begin to form clusters, and neighboring neurons within a cluster respond to similar images. This training allows the network to distinguish images even when the facial expression differs because of mood or health, or when the subject's eyes are closed.
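The learning rule described above can be sketched as follows. This is a minimal illustration, not the Tsukuba group's implementation: the grid size, vector dimension, learning rate, and neighborhood radius are all assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: a 10x10 neuron grid, 64-dimensional inputs.
GRID = (10, 10)
DIM = 64

# Each neuron holds a weight vector of the same dimension as the input.
weights = rng.random((*GRID, DIM))

def best_matching_unit(x, weights):
    """Return grid coordinates of the neuron whose weight vector is
    closest (Euclidean distance) to the input vector x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train_step(x, weights, lr=0.1, radius=2.0):
    """Move the winning neuron and its 2-D neighbors toward the input,
    with influence falling off with grid distance (Gaussian neighborhood)."""
    bi, bj = best_matching_unit(x, weights)
    ii, jj = np.indices(GRID)
    grid_dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
    influence = np.exp(-grid_dist2 / (2 * radius ** 2))
    weights += lr * influence[..., None] * (x - weights)
    return weights

# Repeated presentation of inputs: over time, neighboring neurons
# come to respond to similar input vectors, forming clusters.
for _ in range(200):
    x = rng.random(DIM)
    weights = train_step(x, weights)
```

Because the update pulls a whole neighborhood toward each input, nearby neurons end up with similar weight vectors, which is what produces the clusters of similar face images the article describes.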
In the identification process, 10 to 20 photographs of the subject are first taken with a charge-coupled-device camera and converted to bitmap files. The images vary in recording angle and facial expression. When there are many registered subjects, all of the images are input and stored in the system as training data, which then serves as the reference data for verifying registered subjects. When verifying a registered subject, the identification process begins by reading that subject's stored data.
The system uses the SOM for image recognition: it maps input images against the stored images and makes a judgment based on their relative distance. Not only can this method verify facial images for personal identification devices, but by recording a subject's data every day, it can also extract subtle connections between health and facial expression. The technology may therefore be applied to personal health-monitoring devices.
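The distance-based judgment described above can be sketched as a simple nearest-match check. The function name, the stored-vector representation, and the acceptance threshold are all illustrative assumptions, not details from the article.

```python
import numpy as np

def identify(input_vec, stored_vecs, labels, threshold=0.5):
    """Compare an input face vector against each registered subject's
    stored vector and judge by relative distance.
    `threshold` is an assumed acceptance cutoff for illustration."""
    dists = np.array([np.linalg.norm(input_vec - s) for s in stored_vecs])
    best = int(np.argmin(dists))
    # Accept the closest registered subject only if it is near enough.
    if dists[best] <= threshold:
        return labels[best], float(dists[best])
    return None, float(dists[best])

# Usage: two registered subjects, then a query close to the first.
stored = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
labels = ["subject_A", "subject_B"]
who, dist = identify(np.array([0.05, 0.0]), stored, labels)
# who is "subject_A", since the query is much closer to its stored vector
```

In the real system the compared vectors would come from the SOM's response to the face images rather than raw coordinates, but the accept/reject decision by relative distance works the same way.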