Automotive lidar gains multispectral capabilities

Feb. 4, 2022
Adding more lidar wavelengths can provide richer details about the environment around the vehicle.

Over the past 20 years, 3D lidar (also written LiDAR) has quite literally added an extra dimension to traditional imaging with cameras. Originally developed for survey work, 3D lidar can now be found in countless applications, the most notable of which is assisting in partial or full autonomous driving.

Although there are many different lidar architectures available that fall into the general categories of scanning or flash, until recently they have all had one feature in common: They are monochromatic, analogous to a black-and-white camera. Just as a color camera provides more information than a monochrome one, adding more lidar wavelengths can provide richer details about the environment around the vehicle.

By going beyond the three red-green-blue (RGB) wavelengths of a typical color camera to several or many wavelengths, active multi- or hyperspectral sensing can be achieved. As is the case with passive spectral sensing, this allows identification of the materials from which objects are made. For autonomous systems, this can offer dramatic improvements in safety. Examples include detection of road conditions (for example, ice or water) at significant distances in front of the vehicle and the ability to distinguish people from similarly sized objects.
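As a rough illustration of how spectral signatures separate materials, the minimal sketch below classifies a per-band reflectance vector against reference signatures by cosine similarity. The band centers and reflectance values are illustrative assumptions, not measured data, and a real system must also contend with noise, range effects, and calibration.

```python
# Minimal sketch of material classification from multispectral returns.
# Band centers (nm) and reference reflectances are hypothetical values
# chosen for illustration; they are not from the article.
import numpy as np

BANDS_NM = [1450, 1550, 1650]          # hypothetical lidar bands

REFERENCE_SIGNATURES = {               # hypothetical normalized reflectances
    "dry asphalt": np.array([0.30, 0.32, 0.31]),
    "ice":         np.array([0.08, 0.20, 0.28]),
    "water":       np.array([0.02, 0.05, 0.09]),
}

def classify(reflectance):
    """Return the reference material whose signature is closest in shape."""
    v = reflectance / np.linalg.norm(reflectance)
    best = max(
        REFERENCE_SIGNATURES.items(),
        key=lambda kv: np.dot(v, kv[1] / np.linalg.norm(kv[1])),
    )
    return best[0]

print(classify(np.array([0.07, 0.21, 0.27])))  # -> "ice"
```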

Meeting the challenges

Despite the obvious advantages provided by a multispectral lidar, the development of a commercial system that could potentially be used in automotive applications did not come quickly or easily. This is due primarily to the requirement of real-time sensing (at least several frames per second) as well as the strict size, weight, power, and cost (SWAP-c) constraints for a mass-market automotive device. In addition, a multispectral lidar should offer range and resolution performance comparable to state-of-the-art monochromatic solutions, as well as tolerate the shock, vibration, and temperature ranges found on a moving vehicle.

Early examples of multiwavelength lidar used tunable laser sources and were primarily focused on atmospheric studies. Even several decades later, however, tunable sources such as optical parametric oscillators cannot meet many of the requirements previously discussed. The solution was a laser source that can generate all the desired wavelengths simultaneously. The first demonstrations of a compact multispectral lidar device based on this type of source were performed by MIT Lincoln Laboratory (Lexington, MA) in the late 1990s.1,2 In these proof-of-principle experiments, a spectrum was sequentially acquired at each point using many laser pulses and an integrating charge-coupled device (CCD) spectrometer. With this architecture, acquiring a single low-resolution frame took minutes.

About a decade later, in 2012, a 3D lidar was demonstrated that could acquire eight wavelengths simultaneously from a broadband source using a dispersive element, an avalanche photodiode array, and a multichannel high-speed digitizer.3 The received laser energy was divided by eight, with additional losses in the receiver path, so the range was limited to 20 m or less. In addition, the system contained more than $100,000 of components, which made the design unsuitable for widespread application. During the next several years, more than 100 scientific papers were published using these types of devices, primarily on the different measurements that were possible. But the basic architecture and its SWAP-c issues remained more or less unchanged.

In 2018, Outsight began developing a new type of multispectral lidar (see Fig. 1) from the ground up with a specific focus on mainstream applications of long-range lidar (hundreds of meters). From a light source specification perspective, this means a pulse repetition frequency of 500 kHz or higher, a pulse width of a few nanoseconds, a pulse energy of several microjoules, and a broad spectrum covering the spectral region of interest. The high repetition frequency is required to provide high-resolution point clouds at video frame rates; the short pulse width provides good distance resolution; and the pulse energy is required to achieve long-range operation with a moderately sized scanner aperture. The source must also be extremely compact and manufacturable at low cost in volume.
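Two of these specifications can be sanity-checked with back-of-the-envelope arithmetic, as in the sketch below: the pulse repetition frequency sets the number of points per frame, and the pulse width sets the range resolution. The 20 Hz frame rate and 2 ns pulse width are assumed values chosen within the ranges stated above.

```python
# Back-of-the-envelope numbers implied by the source specification.
C = 3.0e8             # speed of light, m/s

prf_hz = 500e3        # pulse repetition frequency (from the text)
frame_rate_hz = 20    # a typical video frame rate (assumed for illustration)
pulse_width_s = 2e-9  # "a few nanoseconds"; 2 ns assumed here

points_per_frame = prf_hz / frame_rate_hz   # one return per pulse
range_resolution_m = C * pulse_width_s / 2  # light makes a round trip

print(f"{points_per_frame:.0f} points/frame")               # 25000 points/frame
print(f"{range_resolution_m * 100:.0f} cm range resolution")  # 30 cm
```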

By this time, eye-safe fiber lasers at 1550 nm, based on a diode laser seed source and several amplification stages, were available. Their output could be broadened in a suitable nonlinear fiber so that all of the optical specifications could be met. But the complexity and cost of these systems remained obstacles to their use.

Our laser is composed of only two key components: a specially designed microchip solid-state laser oscillator and a single amplifier that also provides spectral broadening (see Fig. 2). The oscillator is passively Q-switched by a semiconductor saturable absorber mirror (SESAM) to provide 1 ns pulses with an energy of ~100 nJ and a repetition rate of 500 kHz. This energy is roughly two orders of magnitude higher than that available from a diode-based seed laser, so only a single amplification and spectral broadening stage is needed to generate the desired output specifications of several microjoules per pulse covering the 1400–1700 nm range. In addition, our seed source does not require any high-speed electronics to provide the short pulses and high repetition rate, which further reduces complexity and cost.
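To see why the higher seed energy matters, the sketch below compares the amplifier gain needed to reach several microjoules from a ~100 nJ microchip seed versus a ~1 nJ diode seed. The diode seed energy and the exact microjoule target are assumptions based on typical figures, not numbers from this work.

```python
# Rough gain bookkeeping showing why a ~100 nJ seed removes amplifier stages.
import math

target_uj = 3.0          # "several microjoules" per pulse (3 uJ assumed)
microchip_seed_uj = 0.1  # ~100 nJ from the SESAM Q-switched oscillator
diode_seed_uj = 0.001    # ~1 nJ, typical of a pulsed diode seed (assumed)

def gain_db(seed_uj):
    """Total amplifier gain required to reach the target pulse energy."""
    return 10 * math.log10(target_uj / seed_uj)

print(f"microchip seed: {gain_db(microchip_seed_uj):.0f} dB")  # ~15 dB
print(f"diode seed:     {gain_db(diode_seed_uj):.0f} dB")      # ~35 dB
```

A gain of roughly 15 dB is typically within reach of a single amplifier stage, whereas 35 dB or more generally forces a multistage design with isolation and filtering between stages.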

Development of a compact, low-cost laser source is only part of the challenge. A multispectral lidar should perform the more traditional lidar task of generating monochromatic point clouds at least as well as existing systems. Specifically, it should provide an equivalent maximum range at similar overall system efficiency. Previous multispectral lidar architectures suggest this should be impossible: all earlier designs either used a dispersive element and multiple detectors or a bandpass filtering device, such as an acousto-optic filter, to select a single wavelength at a time.

In both cases, the total laser energy is divided by the number of spectral bands, with the range reduced accordingly. To overcome this seemingly fundamental issue, we came up with a new approach: "inverse multispectral." Rather than select individual spectral bands, we remove them by notch filtering. In the case of five spectral bands, instead of 20% of the laser energy being available for ranging, we have 80%. By making a measurement with each band removed, as well as one with no band removed, we can recover exactly the same multispectral information as in the earlier architectures.
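The bookkeeping behind this subtraction can be shown in a few lines. The sketch below assumes five equal-width bands and made-up per-band return values; it verifies that subtracting each notched measurement from the full-spectrum measurement recovers the individual bands.

```python
# Sketch of the inverse-multispectral bookkeeping for five equal bands.
# The per-band return values are made up for illustration.
import numpy as np

true_bands = np.array([0.9, 0.7, 0.5, 0.3, 0.2])  # hypothetical band signals
n = len(true_bands)

full = true_bands.sum()       # measurement with no band removed
notched = full - true_bands   # measurement with band i notched out

print((n - 1) / n)            # 0.8: each shot keeps 80% of the energy,
                              # versus 1/n = 20% if one band were selected

recovered = full - notched    # subtraction recovers each band exactly
assert np.allclose(recovered, true_bands)
print(recovered)              # [0.9 0.7 0.5 0.3 0.2]
```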

The final requirement is that the lidar can acquire point cloud frames at video rates to provide sufficient information for the control of an autonomous vehicle. In principle, the multiple-detector approach could achieve the necessary speed, but it suffers from the division-of-energy issue discussed previously, as well as significant cost and complexity. Instead, the wavelengths (or the absence of a wavelength) should be measured sequentially. The first option would be to measure each wavelength at a fixed scanner angle and then move to the next. But at a laser repetition rate of 500 kHz, this would require an extremely fast notch-filtering mechanism and would divide the frame rate by the number of wavelengths.

Instead, we scan a complete frame at one wavelength and switch wavelengths between frames. By combining these data with our specialized simultaneous localization and mapping (SLAM) software, we can build up the multispectral information on a frame-by-frame basis, as sketched below. If we are scanning at a 20 Hz frame rate, we get the traditional 3D data at the full 20 Hz and multispectral information at the frame rate divided by the number of wavelengths (see Fig. 3).
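The sketch below shows the frame-interleaving logic in simplified form: the wavelength advances round-robin once per frame, and each registered point contributes one band of its spectrum to an accumulating map. The band labels, voxel size, and registration stub are illustrative assumptions, not Outsight's actual SLAM pipeline; in a real system, aligning frames of a moving scene is the hard part.

```python
# Simplified sketch of frame-interleaved multispectral acquisition.
from collections import defaultdict

FRAME_RATE_HZ = 20
BANDS = ["b1", "b2", "b3", "b4", "b5"]   # hypothetical band labels

spectral_map = defaultdict(dict)         # voxel -> {band: intensity}

def register(points):
    """Stand-in for SLAM: quantize points into world-frame voxel keys."""
    return [(round(x, 1), round(y, 1), round(z, 1)) for x, y, z in points]

def on_frame(frame_idx, points, intensities):
    """Record one frame: all points share this frame's wavelength."""
    band = BANDS[frame_idx % len(BANDS)]  # switch wavelength per frame
    for voxel, value in zip(register(points), intensities):
        spectral_map[voxel][band] = value

# Two frames at different wavelengths hitting the same voxel.
on_frame(0, [(1.23, 4.56, 0.02)], [0.81])
on_frame(1, [(1.21, 4.58, 0.04)], [0.65])
print(dict(spectral_map))                # voxel (1.2, 4.6, 0.0): b1 and b2

# Geometry refreshes every frame; a full 5-band spectrum refreshes at:
print(FRAME_RATE_HZ / len(BANDS), "Hz")  # 4.0 Hz
```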

Autonomous driving is only one of many uses for multispectral 3D lidar. Other applications that can benefit from a compact, low-cost multispectral lidar are emerging in fields as diverse as mining, agriculture, and smart infrastructure.

ACKNOWLEDGEMENT

This work was supported by the EIC Horizon 2020 program under grant agreement number 101010266 (Green LiDAR project).

REFERENCES

1. S. Buchter and J. J. Zayhowski, MIT Lincoln Laboratory Solid State Research, 1, 1–3 (1999).

2. B. Johnson et al., Proc. SPIE, 3710, 144–153 (1999).

3. T. Hakala et al., Opt. Express, 20, 7, 7119–7127 (2012).

About the Author

Scott Buchter | Co-founder, Outsight and UVC Photonics

Scott Buchter is co-founder of Outsight (Paris, France; www.outsight.ai) and UVC Photonics (Helsinki, Finland).

About the Author

Nadine Buard | Senior Director, Outsight

Nadine Buard is senior director of new products and integration at Outsight (Paris, France).
