MIT researchers demonstrate single-photon LIDAR-like system

Dec. 10, 2013
Cambridge, MA--MIT Research Laboratory of Electronics researchers describe a new LIDAR-like system that can gauge depth when only a single photon is detected from each location.

As reported in Science, researchers at the Massachusetts Institute of Technology (MIT) Research Laboratory of Electronics (RLE) describe a new light detection and ranging (LIDAR)-like system that can gauge depth when only a single photon is detected from each location. Because a conventional LIDAR system would require about 100 times as many photons to make depth estimates of similar accuracy under comparable conditions, the new ultrasensitive system could yield substantial savings in energy and time, which are at a premium, for example, in autonomous vehicles trying to avoid collisions.

The system can also use the same reflected photons to produce images of a quality that a conventional imaging system would require 900 times as much light to match, and it works much more reliably than LIDAR in bright sunlight, when ambient light can yield misleading readings. All the hardware it requires can already be found in commercial LIDAR systems; the new system just deploys that hardware in a manner more in tune with the physics of low-light-level imaging and natural scenes.

As Ahmed Kirmani, a graduate student in MIT's Department of Electrical Engineering and Computer Science and lead author on the new paper, explains, the very idea of forming an image with only a single photon detected at each pixel location is counterintuitive. "The way a camera senses images is through different numbers of detected photons at different pixels," Kirmani says. "Darker regions would have fewer photons, and therefore accumulate less charge in the detector, while brighter regions would reflect more light and lead to more detected photons and more charge accumulation."

In a conventional LIDAR system, the laser fires pulses of light toward a sequence of discrete positions, which collectively form a grid; each location in the grid corresponds to a pixel in the final image. The technique, known as raster scanning, is how old cathode-ray-tube televisions produced images, illuminating one phosphor dot on the screen at a time.

The laser will generally fire a large number of times at each grid position, until it obtains enough consistent measurements of the interval between pulse emission and photon detection to rule out the misleading signals produced by stray photons. The MIT researchers' system, by contrast, fires repeated bursts of light from each position in the grid only until it detects a single reflected photon; then it moves on to the next position.

A highly reflective surface, one that would show up as light rather than dark in a conventional image, should yield a detected photon after fewer bursts than a less-reflective surface would. So the MIT researchers' system produces an initial, provisional map of the scene based simply on the number of times the laser has to fire to get a photon back.
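The fire-until-first-detection idea can be sketched in a few lines. In this minimal simulation (not the researchers' actual code), the per-pulse detection probability is assumed proportional to a pixel's reflectivity, so the number of pulses until the first photon is a geometric random variable, and inverting the mean pulse count recovers a provisional reflectivity estimate. The `detect_eff` factor and the scene values are illustrative assumptions.

```python
import random

def pulses_to_first_photon(reflectivity, detect_eff=0.5):
    """Fire pulses until one reflected photon is detected.

    Assumes per-pulse detection probability p = detect_eff * reflectivity;
    the pulse count is then geometrically distributed with mean 1/p.
    """
    p = detect_eff * reflectivity
    n = 1
    while random.random() >= p:  # no photon detected this pulse
        n += 1
    return n

def provisional_reflectivity_map(scene, trials=200, detect_eff=0.5):
    """Estimate reflectivity at each pixel from the mean pulse count.

    Since E[pulses] = 1/(detect_eff * reflectivity), invert the average.
    (The real system records a single detection per pixel; averaging here
    just stabilizes the toy estimate.)
    """
    est = []
    for row in scene:
        est_row = []
        for r in row:
            mean_n = sum(pulses_to_first_photon(r, detect_eff)
                         for _ in range(trials)) / trials
            est_row.append(1.0 / (detect_eff * mean_n))
        est.append(est_row)
    return est

random.seed(0)
scene = [[0.9, 0.2], [0.5, 0.7]]  # hypothetical true reflectivities
print(provisional_reflectivity_map(scene))
```

Bright pixels come back after only a pulse or two, while dark pixels need many, which is exactly the signal the provisional map exploits.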

Simply filtering out noise according to the Poisson statistics of photon detection would produce an image that would probably be intelligible to a human observer. But the MIT researchers' system does something cleverer: It guides the filtering process by assuming that adjacent pixels will, more often than not, have similar reflective properties and will occur at approximately the same depth. That assumption enables the system to filter out noise in a more principled way.
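The paper's actual algorithm is considerably more sophisticated, but the neighbor-similarity assumption itself can be illustrated with a simple stand-in: a 3x3 median filter, which replaces each pixel with the most typical value in its neighborhood and so suppresses isolated stray-photon outliers without blurring regions that really are uniform. This is a toy illustration of the principle, not the published method.

```python
def median_filter(img):
    """Replace each pixel with the median of its 3x3 neighborhood.

    Encodes the assumption that adjacent pixels usually share similar
    values, so a lone outlier (e.g., a stray-photon reading) is rejected.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - 1), min(h, i + 2))
                    for jj in range(max(0, j - 1), min(w, j + 2))]
            vals.sort()
            out[i][j] = vals[len(vals) // 2]
    return out

noisy = [[0.5, 0.5, 0.5],
         [0.5, 9.0, 0.5],   # single stray-photon outlier
         [0.5, 0.5, 0.5]]
print(median_filter(noisy)[1][1])  # → 0.5, outlier suppressed
```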

Kirmani developed the computational imager together with his advisor, Vivek Goyal, a research scientist in RLE, and other members of Goyal’s Signal Transformation and Information Representation Group. Researchers in the Optical and Quantum Communications Group, which is led by Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, and senior research scientist Franco Wong, ran the experiments reported in the Science paper, which contrasted the new system’s performance with that of a conventional LIDAR system.

SOURCE: MIT; http://web.mit.edu/newsoffice/2013/3-d-images-with-one-photon-per-pixel-1128.html

About the Author

Gail Overton | Senior Editor (2004-2020)

Gail has more than 30 years of engineering, marketing, product management, and editorial experience in the photonics and optical communications industry. Before joining the staff at Laser Focus World in 2004, she held many product management and product marketing roles in the fiber-optics industry, most notably at Hughes (El Segundo, CA), GTE Labs (Waltham, MA), Corning (Corning, NY), Photon Kinetics (Beaverton, OR), and Newport Corporation (Irvine, CA). During her marketing career, Gail published articles in WDM Solutions and Sensors magazine and traveled internationally to conduct product and sales training. Gail received her BS degree in physics, with an emphasis in optics, from San Diego State University in San Diego, CA in May 1986.
