Rangefinder allows digital camera system to render real-world scenes

June 1, 2001

Lars S. Nyland and Anselmo A. Lastra

A prototype digitizing system consisting of a laser rangefinder and a high-resolution digital camera is breaking new ground in three-dimensional (3-D) scanning and scene reconstruction. A time-of-flight modulated-beam laser rangefinder rapidly produces accurate panoramic 3-D point sets, which are then combined with color information captured by a high-resolution digital camera to produce extraordinarily detailed and accurate models of large 3-D scenes and objects. Such real-world scene data is useful in numerous applications, including accident and crime-scene capture and reconstruction, archaeology and historic preservation, engineering verification, renovation, and surveillance.

Data acquisition
To capture range data, the system uses a line-scanning rangefinder with an effective range of 0 to 50 ft that can take up to 50,000 samples per second. Performance of the device improves as the sampling rate is reduced; the same is true if the maximum range is limited. The system is typically run at no more than 25,000 samples per second, with the maximum distance limited to 24 to 30 ft. A PC collects the data, which consists of range, signal strength, motor position, and other information.

FIGURE 1. Data from a single point of view of Thomas Jefferson's library (Monticello, VA) were acquired with a DeltaSphere-3000 scene digitizer. On the left side, 1/100 of the points appear as a triangle mesh, blended into the color samples rendered on the right (still only 1/4 of the data). Shadows indicate the viewpoint has moved substantially from the acquisition location. (Photo courtesy of University of North Carolina)


A scanning mirror with a 4,096-position shaft encoder attached to its drive motor is coupled with the rangefinder. Supporting the rangefinder and scanning mirror is a panning motor that allows 360° panoramas to be acquired. A 35-mm digital camera provides color data, and a 14-mm flat-projection lens acquires a wide field of view on the smaller-than-film charge-coupled device (CCD). The field of view, approximately 77° by 55°, is nearly the same as that of a 24-mm lens on 35-mm film.
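
For illustration, the fragment below sketches how one raw sample might be converted to a 3-D point from the pan-motor angle, the mirror-encoder angle, and the measured range. It is a minimal sketch in Python; the function name is ours, and the real device's mirror and axis offsets are ignored.

```python
import numpy as np

def sample_to_xyz(pan_deg, tilt_deg, range_ft):
    """Convert one rangefinder sample to Cartesian coordinates.

    Assumes a simple spherical model: `pan_deg` comes from the panning
    motor, `tilt_deg` from the 4,096-position mirror encoder
    (tilt = encoder_ticks / 4096 * 360), and `range_ft` is the measured
    distance. Offsets between mirror and rotation axes are ignored.
    """
    pan = np.radians(pan_deg)
    tilt = np.radians(tilt_deg)
    x = range_ft * np.cos(tilt) * np.cos(pan)
    y = range_ft * np.cos(tilt) * np.sin(pan)
    z = range_ft * np.sin(tilt)
    return np.array([x, y, z])
```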

Color and range data are acquired sequentially from the same position by physically replacing the rangefinder with the camera on the panning motor. After collection of the range data, a panoramic set of photographs is shot in 12 positions spaced 30° apart (see Fig. 1).

The camera is calibrated using public-domain software to determine intrinsic parameters. These parameters are then used to undistort the images, using bicubic resampling to create images that match a pinhole model. This is the only time the color data is resampled, in an effort to keep it as close to the original as possible.
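
The article does not name the calibration package. As a sketch of the same procedure with a modern library, the fragment below uses OpenCV to build an undistortion map from previously estimated intrinsics and applies it once with bicubic resampling, matching the pinhole model described above.

```python
import cv2

# Assumed: K (3x3 intrinsic matrix) and dist (distortion coefficients)
# were estimated beforehand, e.g. with cv2.calibrateCamera().
def undistort_to_pinhole(image, K, dist):
    """Resample a distorted image to match an ideal pinhole model.

    Bicubic interpolation is used so that the color data is
    resampled only this one time.
    """
    h, w = image.shape[:2]
    map1, map2 = cv2.initUndistortRectifyMap(
        K, dist, None, K, (w, h), cv2.CV_32FC1)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_CUBIC)
```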

Merging the data
The original goal of the system was to augment photographs with range data. To meet this goal, range and color data are fused together. Panoramas taken from different locations must also be registered. Simplification is usually required, because the number of samples is too large to render interactively (10 to 20 million samples per panorama).

FIGURE 2. The DeltaSphere-3000 system built by 3rdTech is a commercial version of the described system that weighs about 22 lb. and mounts on photographic or surveying tripods. (Photo courtesy of 3rdTech)


When multiple scans of a single scene are made from different locations, the offset (and rotation) between the scans must be determined. Post-processing alignment can be done using features in the scene. A simple, software-based registration method allows the user to move and rotate one scan with respect to another while viewing the result in three dimensions.
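
The article describes manual registration; for reference, once a few corresponding feature points have been identified in two scans, the rigid transform between them can be computed in closed form with the SVD-based (Kabsch) method sketched below. This is a standard technique, not necessarily the one the authors used.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ p + t ≈ q.

    `src` and `dst` are (N, 3) arrays of corresponding 3-D feature
    points picked in two scans (N >= 3, not all collinear).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```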

Fusion is the process of matching the range data with the color images to determine the depth of every pixel in the color images. One method relies on matching automatically detected edges in the different images to find the best alignment. One difficulty with edge detection is that CCD images are typically noisy, especially in the blue channel, creating many false edges in the color images.
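
As a toy illustration of edge-based fusion (our construction, not the authors' code), the sketch below scores a candidate alignment by counting coincident pixels between binary edge maps of the color and range images, then searches a small window of pixel offsets; a real fusion step would also search over rotation.

```python
import numpy as np

def edge_overlap_score(color_edges, range_edges, dx, dy):
    """Count coincident edge pixels after shifting the range edge map.

    Both inputs are boolean arrays of the same shape (e.g. from a
    Canny detector); (dx, dy) is a candidate pixel offset. np.roll
    wraps at the borders, which is acceptable for a sketch.
    """
    shifted = np.roll(np.roll(range_edges, dy, axis=0), dx, axis=1)
    return int(np.count_nonzero(color_edges & shifted))

def best_offset(color_edges, range_edges, radius=5):
    """Exhaustively search a small window of offsets for the best score."""
    candidates = [(dx, dy) for dx in range(-radius, radius + 1)
                           for dy in range(-radius, radius + 1)]
    return max(candidates,
               key=lambda o: edge_overlap_score(color_edges, range_edges, *o))
```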

There are two approaches to alleviating edge-detection problems, both of which are used. The first method is to take multiple color images at the same exposure and blend them together, reducing random noise. The second method uses a special kind of "blurring" called variable conductance diffusion, in which smooth areas are treated as conductors and edges are treated as insulators to provide edge-detection algorithms with suitable input for finding actual edges in the scene. The edges in the range images are easier to detect, because there is little noise involved and the edges stand out clearly. Edges in the rangefinder reflectance images also are used, as these have more in common with the color images.
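
Variable conductance diffusion is closely related to Perona-Malik anisotropic diffusion. The following minimal sketch applies the idea to a grayscale image: the conductance function is near 1 in smooth regions (heat flows, so noise is averaged away) and near 0 across strong gradients (so edges are preserved). The parameter values are illustrative.

```python
import numpy as np

def variable_conductance_diffusion(img, n_iter=20, kappa=15.0, step=0.2):
    """Perona-Malik-style diffusion on a grayscale float image.

    `kappa` sets the gradient magnitude treated as an edge; `step`
    must stay <= 0.25 for stability. Values here are illustrative.
    """
    u = img.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance: 1 smooth, ~0 edge
    for _ in range(n_iter):
        # Finite differences to the four neighbors (np.roll wraps at
        # the borders, which is acceptable for a sketch).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```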

Data produced
The output of the alignment and fusion process is a set of multilayer IBR-Tiff images that have the undistorted color as the first layer and the range (usually stored as disparity) as the second layer, followed by a 4 x 3 matrix as the third "layer" that encodes viewpoint and orientation. Subsequent processing is usually performed to eliminate areas that are covered by more than one view; the current choice is to keep the data from the view that is most nearly orthogonal to the sampled surface. The result is a file suitable for rendering. Future plans include using all of the data to partially reconstruct view-dependent lighting.
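
The IBR-Tiff layout is specific to the authors' pipeline, so the structure below is only a guessed in-memory equivalent: it assumes the disparity layer stores inverse depth and that the 4 x 3 matrix stacks a 3 x 3 rotation over a camera-position row. Every convention in it is an assumption, labeled as such in the comments.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LayeredView:
    """In-memory stand-in for one multilayer IBR-Tiff image.

    Assumptions (not from the article): `disparity` stores inverse
    depth in 1/ft, `pose` stacks a 3x3 rotation over a 1x3 camera
    position, and `K` is the pinhole intrinsic matrix.
    """
    color: np.ndarray      # (H, W, 3) undistorted color layer
    disparity: np.ndarray  # (H, W) range layer, assumed 1/depth
    pose: np.ndarray       # (4, 3) viewpoint/orientation "layer"
    K: np.ndarray          # (3, 3) pinhole intrinsics

    def backproject(self, u, v):
        """Return the world-space point seen at pixel (u, v)."""
        depth = 1.0 / self.disparity[v, u]
        ray = np.linalg.inv(self.K) @ np.array([u, v, 1.0])
        R, origin = self.pose[:3], self.pose[3]
        return origin + depth * (R @ ray)
```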

A short-term research goal is to explore drastic simplification and efficient representation of the models created, because a single scene can swamp a computer system—not only in terms of rendering, but in memory and disk storage as well. Although work remains to improve the system, the ability to create real-world scenes is fueling research in areas such as representation, registration, data fusion, polygonization, rendering, simplification, and reillumination.

A compact, portable version of this system has been commercialized by 3rdTech (Chapel Hill, NC) as the DeltaSphere-3000 (see Fig. 2). The device uses a class IIIa laser and acquires panoramic range and color data at 5 to 20 samples per degree. For a medium-density 360° panorama, the scanning time is about 20 minutes.

ACKNOWLEDGMENT
This work would not have been possible without the diligent help of our research assistants David McAllister, Voicu Popescu, and Chris McCue. We would also like to thank the users of our data for showing our work so well.

LARS NYLAND and ANSELMO LASTRA are with the University of North Carolina Computer Science Department, CB #3175, Chapel Hill, NC 27599-3175; e-mail: [email protected].
