Expanding horizons

June 1, 2008
Novel lenses and optics make light-field cameras viable

Conard Holton

High-contrast, sharply focused images are critical in the development of every imaging system. To obtain them, many system designers use fixed-focus lenses, which have fewer moving parts and less chromatic aberration than their zoom-lens counterparts. And because these lenses usually have a larger maximum aperture than zoom lenses, they can be used in lower-light applications. Their drawback is that designers must ensure the objects being inspected are in focus before any image-processing functions are performed on the images.

To overcome this limitation, researchers have been developing new types of cameras based on novel light-field designs. “Currently, digital consumer cameras make decisions about what the correct focus should be before the picture is taken, which engineering-wise can be very difficult,” says Ren Ng, a computer-science graduate student in the lab of Pat Hanrahan, the Canon USA professor in the school of engineering at Stanford University (Stanford, CA). “However,” he adds, “with a light-field camera, one exposure can be taken, more information about the scene captured, and focusing decisions made after the image is taken.”

In a conventional camera, rays of light pass through the camera’s main lens and converge onto a photosensor behind it. Each point on the resulting 2-D image is the sum of all the light rays striking that location. Light-field cameras—often referred to as plenoptic cameras—can use several methods to separate these light rays into subimages that are then captured by the image sensor.
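To make the ray-summation idea concrete, the Python/NumPy sketch below (an illustration, not code from any of the groups mentioned here) represents a captured light field as a 4-D array indexed by aperture position (u, v) and sensor position (s, t), then collapses it into the single 2-D photograph a conventional sensor would record. The array dimensions and random contents are placeholder assumptions.

```python
import numpy as np

# A 4-D light field L[u, v, s, t]: (u, v) index the position on the main-lens
# aperture, (s, t) index the position on the sensor plane. Random placeholder
# data stands in for a real capture; the resolutions are hypothetical.
U, V, S, T = 9, 9, 128, 128
light_field = np.random.rand(U, V, S, T)

# A conventional photograph is the sum of all rays arriving at each sensor
# position, i.e. an integral of the light field over the aperture (u, v).
conventional_image = light_field.sum(axis=(0, 1)) / (U * V)

print(conventional_image.shape)   # (128, 128): one fully integrated 2-D image
```

A plenoptic camera simply defers this integration step, recording the full 4-D array so that the summation can be performed later, in software, for whatever focal plane is wanted.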

Because multiple images of a scene are captured at once, the traditional relationship between aperture size and depth of field in conventional camera and lens design is decoupled. The tradeoff is that custom postprocessing software must be used to compute sharp images focused at different depths.
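A common way such postprocessing computes a refocused image is shift-and-add: each angular sub-image is translated in proportion to its offset from the aperture center, and the results are averaged. The sketch below illustrates the idea under simplifying assumptions (integer-pixel shifts, random test data, a made-up slope parameter); it is not the actual software used by any of the vendors discussed here.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing sketch: shift each angular sub-image in
    proportion to its offset from the aperture centre, then average.
    `slope` selects the synthetic focal plane; 0 reproduces the focus
    set by the main lens."""
    U, V, S, T = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - cu)))   # integer shifts keep the example
            dv = int(round(slope * (v - cv)))   # simple; real code interpolates
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Placeholder light field; sweeping `slope` focuses at different depths.
lf = np.random.rand(9, 9, 128, 128)
near_focus = refocus(lf, slope=1.5)
far_focus = refocus(lf, slope=-1.5)
```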

Prototypical

One of the earliest prototypes of a light-field camera was built by Jason Wang at the Massachusetts Institute of Technology (Cambridge, MA) in 1999. It was based on an adapted UMAX Astra 2000P flatbed scanner fitted with a 2-D assembly of 88 lenses in an 8 × 11 grid. Using TWAIN software supplied by the scanner manufacturer, images were captured into a host computer. By taking a series of images from different viewpoints on the image plane, new views, not just those restricted to the original plane, could be generated.
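One simple way to generate such new views is to blend the captured views surrounding a virtual viewpoint. The sketch below, written against an assumed 8 × 11 array of grayscale views, bilinearly interpolates the four nearest lens positions; it ignores the ray re-parameterization against a focal plane that a full light-field renderer would perform and is only a rough approximation for distant scenes.

```python
import numpy as np

def synthesize_view(views, x, y):
    """Approximate a new viewpoint at fractional grid position (x, y) by
    bilinearly blending the four nearest captured views. Assumes the scene
    is far away relative to the lens spacing."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, views.shape[0] - 1)
    y1 = min(y0 + 1, views.shape[1] - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * views[x0, y0] + fx * (1 - fy) * views[x1, y0]
            + (1 - fx) * fy * views[x0, y1] + fx * fy * views[x1, y1])

# Placeholder for an 8 x 11 grid of captured views, each 120 x 160 pixels.
views = np.random.rand(8, 11, 120, 160)
novel = synthesize_view(views, x=3.4, y=6.7)   # a viewpoint between lenses
```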

Engineers at Point Grey Research (Vancouver, BC, Canada) have developed a light-field camera with a PCI Express interface, the ProFUSION 25, which uses multiple low-cost MT9V022 wide-VGA CMOS sensors from Micron Technology (Boise, ID). Like Wang’s prototype, the ProFUSION uses ray-tracing software to compute images at different depths.

Adobe Systems (San Jose, CA) uses a design that captures all of the light-field information in a single high-resolution image. Rather than use multiple cameras, the design pairs a large main lens with an array of negative lenses placed behind it. The array produces virtual images that form on the other side of the main lens and are then captured by a camera using a 16-megapixel sensor from Eastman Kodak (Rochester, NY). The result is refocused output images of 700 × 700 pixels.

While these techniques use multiple lens elements to achieve a light-field camera design, researchers at Stanford University and Mitsubishi Electric Research Labs (MERL; Cambridge, MA) are looking at ways to miniaturize the designs. Rather than use large multiple lens elements, Ng and his colleagues at Stanford have developed a microlens array, built by Adaptive Optics Associates (Cambridge, MA), with 296 × 296 lenslets, each 125 µm wide. This array is mounted on a Kodak KAF-16802CE color sensor, which then forms the digital back of a camera. MERL has developed a method that exploits an optical version of heterodyning, in which a 2-D mask modulates the angular variations of the light field to higher spatial frequencies that can be detected by a 2-D photosensor.
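For a microlens-based design such as the Stanford camera, the raw sensor image can be decoded into a 4-D light field by noting that each pixel under a lenslet corresponds to one direction through the main-lens aperture. The sketch below assumes an idealized, perfectly aligned array with a hypothetical 14 × 14 pixels per lenslet; a real decoder must also calibrate for rotation, offset, and vignetting.

```python
import numpy as np

# Decoding sketch for a microlens-based plenoptic capture, assuming each of
# the 296 x 296 lenslets covers exactly a P x P block of sensor pixels and
# the array is perfectly aligned with the pixel grid.
LENSLETS, P = 296, 14                              # P is a hypothetical value
raw = np.random.rand(LENSLETS * P, LENSLETS * P)   # stands in for raw sensor data

# Reshape so the axes become (lenslet row, pixel row, lenslet col, pixel col),
# then reorder to a 4-D light field indexed by (u, v, s, t).
light_field = raw.reshape(LENSLETS, P, LENSLETS, P).transpose(1, 3, 0, 2)

# Each (u, v) slice is a sub-aperture image: the scene seen through one
# small region of the main-lens aperture.
centre_view = light_field[P // 2, P // 2]
print(centre_view.shape)                           # (296, 296)
```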

Light-field cameras extend the depth of field while maintaining a wide aperture and therefore may provide significant benefits to several industries, including security surveillance. Within the next few years, expect to see them emerge into mainstream consumer and industrial camera markets.


CONARD HOLTON is editor in chief of Vision Systems Design; e-mail: [email protected].
