Expanding horizons

Novel lenses and optics make light-field cameras viable

Jun 1st, 2008
Conard Holton


High-contrast, sharply focused images are critical in the development of every imaging system. To obtain these images, many system designers use fixed-focus lenses, which have fewer moving parts and less chromatic aberration than their zoom-lens counterparts. And because these lenses usually have a larger maximum aperture than zoom lenses, they can be used in lower-light applications. Designers must, however, ensure that the objects being inspected are in focus before any image-processing functions are performed on the captured images.

To overcome this limitation, researchers have been developing new types of cameras based on novel light-field designs. “Currently, digital consumer cameras make decisions about what the correct focus should be before the picture is taken, which engineering-wise can be very difficult,” says Ren Ng, a computer-science graduate student in the lab of Pat Hanrahan, the Canon USA Professor in the School of Engineering at Stanford University (Stanford, CA). “However,” he adds, “with a light-field camera, one exposure can be taken, more information about the scene captured, and focusing decisions made after the image is taken.”

In a conventional camera, rays of light pass through the camera’s main lens and converge onto a photosensor behind it. Each point on the resulting 2-D image is the sum of all the light rays striking that location. Light-field cameras—often referred to as plenoptic cameras—can use several methods to separate these light rays into subimages that are then captured by the image sensor.

Because multiple images of a scene are taken, the relationship between aperture size and depth of field that governs traditional camera and lens design is decoupled. But since these images are all captured in a single exposure, custom postprocessing software must be used to compute sharp images focused at different depths.
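The postprocessing step can be sketched as a shift-and-add operation over the grid of sub-aperture views: each view is translated in proportion to its offset within the aperture, and the results are averaged. Varying the shift slope refocuses the synthetic image at different depths. The sketch below, in Python with NumPy, is illustrative only — the array layout, integer-pixel shifts, and the `refocus` helper are assumptions, not any vendor's actual software:

```python
import numpy as np

def refocus(subaperture, slope):
    """Shift-and-add refocusing over a grid of sub-aperture views.

    subaperture: array of shape (U, V, H, W) -- one image per position
                 in the lens array (an assumed layout)
    slope:       pixels of shift per unit of aperture offset; changing it
                 moves the synthetic focal plane to a different depth
    """
    U, V, H, W = subaperture.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0   # center of the view grid
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - cu) * slope))
            dx = int(round((v - cv) * slope))
            # integer shifts keep the sketch simple; real pipelines interpolate
            out += np.roll(np.roll(subaperture[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)
```

A scene point whose depth matches the chosen slope lines up across all views and sums to a sharp peak; points at other depths smear into a blur, mimicking a wide-aperture lens focused at that depth.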


One of the earliest prototypes of a light-field camera was built by Jason Wang at Massachusetts Institute of Technology (Cambridge, MA) in 1999. It was based on an adapted UMAX Astra 2000P flatbed scanner with a 2-D assembly of 88 lenses in an 8 × 11 grid. Using TWAIN software supplied by the scanner manufacturer, images were captured into a host computer. By taking a series of images from different viewpoints on the image plane, new views, not just those restricted to the original plane, could be generated.

Engineers at Point Grey Research (Vancouver, BC, Canada) have developed a light-field camera with a PCI Express interface, the ProFUSION 25, which uses multiple low-cost MT9V022 wide-VGA CMOS sensors from Micron Technology (Boise, ID). Like Wang’s prototype, the ProFUSION uses ray-tracing software to compute images at different depths.

Adobe Systems (San Jose, CA) uses a design that captures all of the light-field information in a single high-resolution image. Rather than use multiple cameras, the camera uses a large main lens with an array of negative lenses placed behind it. This arrangement produces virtual images on the other side of the main lens, which are then captured by a camera using a 16-megapixel sensor from Eastman Kodak (Rochester, NY), yielding refocused output images of 700 × 700 pixels.

While these techniques use multiple lens elements to achieve a light-field camera design, researchers at Stanford University and Mitsubishi Electric Research Labs (MERL; Cambridge, MA) are looking at ways to miniaturize the designs. Rather than use large multiple lens elements, Ng and his colleagues at Stanford have developed a microlens array built by Adaptive Optics Associates (Cambridge, MA) that has 296 × 296 lenslets that are 125 µm wide. This microlens array is mounted on a Kodak KAF-16802CE color sensor, which then forms the digital back of a camera. MERL has developed a method that exploits an optical version of heterodyning, in which a 2-D optical mask modulates the angular variations in the light field, shifting them to higher spatial frequencies that can be detected by a 2-D photosensor.
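The microlens approach trades spatial resolution for angular resolution: the lenslet grid fixes the output image size, while the sensor pixels behind each lenslet sample ray direction. A rough budget for the Stanford design can be worked out as follows, assuming a square sensor of roughly 16 megapixels (the article gives only the lenslet count and pitch, so the sensor dimensions here are an assumption):

```python
# Rough spatial/angular resolution budget for a microlens light-field camera.
# The lenslet grid (296 x 296) comes from the article; the sensor side length
# is an assumed value for an approximately 16-megapixel square sensor.
lenslets_per_side = 296
sensor_side_px = 4096                      # assumption: ~16 MP square sensor

px_per_lenslet = sensor_side_px / lenslets_per_side
print(round(px_per_lenslet, 1))            # sensor pixels behind each lenslet

# The final spatial resolution equals the lenslet count (296 x 296), while
# each lenslet's patch of sensor pixels becomes the directional samples.
```

Under these assumptions, each lenslet covers roughly 14 × 14 sensor pixels, so the camera exchanges a 16-megapixel sensor for a 296 × 296-pixel image with about 14 × 14 directional samples per pixel.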

Light-field cameras extend the depth of field while maintaining a wide aperture and therefore may provide significant benefits to several industries, including security surveillance. Within the next few years, expect to see them emerge into mainstream consumer and industrial camera markets.


CONARD HOLTON is editor in chief of Vision Systems Design; e-mail: cholton@pennwell.com.
