Neural network counts vehicle occupants

Freeway congestion and pollution levels continue to increase in the world’s large cities; one possible solution is the addition of high-occupancy-vehicle (HOV) lanes to alleviate traffic congestion and encourage carpooling.

Oct 1st, 2004

Freeway congestion and pollution levels continue to increase in the world’s large cities; one possible solution is the addition of high-occupancy-vehicle (HOV) lanes to alleviate traffic congestion and encourage carpooling. Sometimes, however, lone drivers choose to travel in these lanes despite the requirement for two or more occupants in the vehicle. Philip Birch and his colleagues at the University of Sussex School of Engineering and Information Technology (Brighton, England) are attempting to automate the vehicle-occupancy monitoring process using face detection and image recognition to eliminate the “honor system” for HOV traffic lanes.1

Current systems for enforcing HOV regulations rely either on humans checking vehicles manually or on semi-automated review of videotape. Both methods require human intervention and routinely miss between 21% and 51% of occupants. While no system may ever eliminate human review entirely, the team’s work is intended to reduce the workload on human operators by filtering out as many nonviolators as possible.

Subtleties in camera viewing angle, position of occupants within a vehicle, and changing lighting conditions throughout the day are obstacles to simple face-detection algorithms. To minimize these variables, the team concentrated on first identifying the windshield area of the vehicle before attempting face recognition.

Taking advantage of windshield tint

Segmentation of the image to identify the windshield area was simplified by noting that the windshield of a car or truck adds a blue tint to the image behind it. Converting from RGB (red, green, blue) color space, which has no invariance to scene brightness, to HSV (hue, saturation, value) color space allows the value (intensity) component to be dropped. Setting hard limits on H and S then isolates the blue-tinted areas of the windshield. Factoring out blue objects outside the windshield and non-blue objects within it, together with additional postprocessing to eliminate large gaps such as stickers and rear-view mirrors, resulted in successful identification of the windshield region for 80% of the 97 cars and trucks tested.
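As a sketch of that segmentation step, blue-tinted pixels can be isolated in NumPy roughly as follows; note that the hue and saturation limits here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV for an (H, W, 3) float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    v = maxc
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    # Hue in [0, 1); ill-defined where delta == 0, but those achromatic
    # pixels are rejected by the saturation test below anyway.
    d = np.maximum(delta, 1e-12)
    h = np.zeros_like(maxc)
    h = np.where(maxc == r, ((g - b) / d) % 6, h)
    h = np.where(maxc == g, (b - r) / d + 2, h)
    h = np.where(maxc == b, (r - g) / d + 4, h)
    return h / 6.0, s, v

def windshield_mask(rgb, h_lo=0.5, h_hi=0.75, s_lo=0.15):
    """Binary mask of blue-tinted pixels. The value channel is deliberately
    ignored, giving partial invariance to scene brightness."""
    h, s, _v = rgb_to_hsv(rgb)
    return (h >= h_lo) & (h <= h_hi) & (s >= s_lo)
```

In the reported system this thresholding was followed by further postprocessing (removing blue objects outside the windshield and filling gaps left by stickers and mirrors), which is omitted here.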

The next challenge was to count the number of people within the windshield area. After assessing several state-of-the-art face-detection techniques, including principal-component analysis (eigenfaces), support-vector machines (SVMs), and neural networks, the team settled on neural networks. The eigenface and SVM methods missed more faces (most likely because of variability in the lighting), while the neural network tended to produce more false positives. The neural network, however, has the advantage of being trainable on its own mistakes: false positives can be collected manually and fed back into the network during the next training cycle. A higher number of false positives is also the lesser evil here, because it is better to err toward reporting two occupants than to incorrectly flag a single-occupant car for a lane violation.
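That retraining cycle can be sketched as follows. To keep the sketch runnable, a trivial nearest-centroid classifier stands in for the neural network, and the function names are hypothetical, not from the paper:

```python
import numpy as np

class CentroidClassifier:
    """Stand-in for the neural network: classifies a sample by its
    nearest class centroid. Used only to make the loop below runnable."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)   # non-face centroid
        self.c1 = X[y == 1].mean(axis=0)   # face centroid
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

def retrain_on_false_positives(clf, X_train, y_train, X_new):
    """One bootstrapping cycle: windows the detector wrongly flags as faces
    (confirmed by manual review in the real system) rejoin the training
    set labeled as non-faces, and the classifier is retrained."""
    preds = clf.predict(X_new)
    false_pos = X_new[preds == 1]          # here, all of X_new is known non-face
    X_aug = np.vstack([X_train, false_pos])
    y_aug = np.concatenate([y_train, np.zeros(len(false_pos), dtype=int)])
    return clf.fit(X_aug, y_aug)
```

After one cycle the augmented non-face examples pull the decision boundary away from the regions that previously triggered false detections.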

Counting faces

The neural network used for the HOV face-detection task was a feedforward multilayer perceptron, a type of neural network that learns “concepts” by applying weights and biases to the inputs presented to it, converging on a solution in a finite amount of time. Training data for the network consisted of 1000 face images and 2000 nonface images, each 19 × 19 pixels. Given these image sets, the network’s output reported 1 for a face or 0 for a nonface. If the number of faces detected in a scene was zero or more than two, the image was flagged for manual inspection and the false results were fed back into the neural network for further training. The output from the neural network produced a series of adjacent pixels in predefined locations, allowing faces to be identified within the windshield area (see figure).
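A minimal feedforward multilayer perceptron of this kind can be sketched in plain NumPy. The network size, learning rate, iteration count, and toy data below are illustrative choices, not values from the paper, and 2-D points stand in for the 361-element (19 × 19 pixel) image vectors the real network was trained on:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for the face/nonface task: two well-separated clusters of
# 2-D points replace the 19 x 19 = 361-pixel image vectors.
X = rng.normal(0.0, 0.3, size=(200, 2))
X[:100] += 1.5                                  # "face" cluster
y = np.concatenate([np.ones(100), np.zeros(100)]).reshape(-1, 1)

# One hidden layer of 8 units: a minimal feedforward multilayer perceptron.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                             # cross-entropy gradient at output
    d_h = (d_out @ W2.T) * h * (1.0 - h)        # backpropagate to hidden layer
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

preds = (out > 0.5).astype(int)                 # 1 = face, 0 = nonface
n_faces = int(preds.sum())
```

In the deployed system, a count of zero or more than two detected faces sent the image to manual review, and the confirmed false detections then rejoined the training set for the next cycle.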


An automated vehicle-occupancy monitoring system achieves a 68% success rate in detection of faces by using an image-segmentation process to first identify the windshield area of a vehicle traveling in an HOV lane. These windshield images are then postprocessed with a neural-network face-detection algorithm (top to bottom); the white spots indicate the discovery of a face.

After processing the 97 test images from cars in HOV lanes, the neural-network technique successfully identified 68% of the faces. Despite an additional 42 false-positive identifications, the U.K. Department for Transport will fund the work for an additional year to build a roadside system and retrain the neural network for that particular camera system.

REFERENCE

1. P. Birch et al., Optical Engineering 43, 8 (August 2004).
