DIGITAL IMAGING - Digital interferometer reconstructs 3-D images

Researchers at the University of Illinois Urbana-Champaign (Urbana, IL) have shown that visible-ray projection data obtained from digital analysis of interferometric data can be combined with tomographic algorithms to reconstruct three-dimensional (3-D) objects. The goal of their research was to find a digital replacement for analog processor technology that improves on the analog algorithm.1

Daniel Marks and colleagues at the university's Beckman Institute for Advanced Science and Technology worked with Rachael Brady of the university's National Center for Supercomputing Applications to develop a lensless interferometric camera that has an infinite depth of focus. According to Marks, this improvement over analog-device capability allows making the geometrical-optics assumption that the field propagates in nondiffracting rays. The same assumption works in medical x-ray tomography, where it is acceptable to resolve features that are large compared to the wavelength of the illuminating radiation.

X-ray tomography involves gathering a cone of projection data by placing a planar sensor on the opposite side of the object volume from a point source. The researchers report that a mathematically equivalent cone of data for a self-luminous or ambiently illuminated visible object is obtained by measuring the mutual intensity on a plane centered on an equivalent, but now virtual, point source.
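The cone-beam geometry described above can be sketched numerically. The following is a minimal illustration of our own (function names, dimensions, and geometry are assumptions for the example, not the researchers' setup): each detector pixel records the line integral of the object volume along the ray from the point source to that pixel.

```python
import numpy as np

def cone_beam_projection(volume, src, det_center, det_u, det_v, n_det=8, n_samp=64):
    """Line-integrate `volume` along rays from point `src` to each detector pixel.

    Hypothetical sketch of the cone-beam geometry: `det_center` is the center
    of the planar sensor, and `det_u`/`det_v` span the detector plane.
    """
    nx, ny, nz = volume.shape
    proj = np.zeros((n_det, n_det))
    for i in range(n_det):
        for j in range(n_det):
            # Position of detector pixel (i, j) in 3-D space
            pix = det_center + (i - n_det / 2) * det_u + (j - n_det / 2) * det_v
            # Sample the volume along the ray src -> pix
            ts = np.linspace(0.0, 1.0, n_samp)
            pts = src[None, :] + ts[:, None] * (pix - src)[None, :]
            idx = np.round(pts).astype(int)
            inside = np.all((idx >= 0) & (idx < [nx, ny, nz]), axis=1)
            proj[i, j] = volume[tuple(idx[inside].T)].sum() / n_samp
    return proj

# Usage: a single bright voxel projects onto the central detector pixel.
vol = np.zeros((16, 16, 16))
vol[8, 8, 8] = 1.0
p = cone_beam_projection(vol, np.array([8.0, 8.0, -20.0]),
                         np.array([8.0, 8.0, 40.0]),
                         np.array([4.0, 0.0, 0.0]), np.array([0.0, 4.0, 0.0]))
```

The equivalence the researchers exploit is that the same set of line integrals can be assembled for a visible object from mutual-intensity measurements, with the point source replaced by a virtual one.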


Two-arm Michelson-style interferometer, combined with digital computation, achieves infinite depth of field, which makes cone-beam tomography a flexible tool to synthesize a 3-D structure from coherence information. The coordinate system of the object corresponds to (xs, ys, zs), and the plane of the sensor array corresponds to the correlation space (Δx, Δy).

The researchers used a rotational shear interferometer to measure planes of interference data in parallel with an electronic sensor array in the output aperture. This two-arm Michelson-style device comprised a 50-cm-aperture beamsplitter and 5-cm folding mirrors, each composed of a pair of planar mirrors joined at right angles. The folding mirrors, each of which inverted the incident field across its axis, were positioned to produce an optical path difference of zero between the arms. The interference term was separated from background terms by dithering the relative optical path delay with a translation stage on one arm (see figure).
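The dithering step can be illustrated with the standard four-step phase-shifting algorithm (our illustration of the general technique; the group's exact dithering sequence is not given in the article). Recording frames at relative phase shifts of 0, π/2, π, and 3π/2 lets the complex interference term be recovered while the incoherent background cancels.

```python
import numpy as np

# Four-step phase shifting (illustrative sketch). Each recorded frame is
#   I_k = B + Re[J * exp(i * k * pi/2)]
# where B is the incoherent background and J the complex interference term.
rng = np.random.default_rng(0)
shape = (4, 4)
B = rng.uniform(1.0, 2.0, shape)                          # incoherent background
J = rng.normal(size=shape) + 1j * rng.normal(size=shape)  # complex mutual intensity

frames = [B + (J * np.exp(1j * k * np.pi / 2)).real for k in range(4)]

# Recover the complex term; the background cancels in the differences:
#   I_0 - I_2 = 2*Re(J),   I_3 - I_1 = 2*Im(J)
J_rec = 0.5 * ((frames[0] - frames[2]) + 1j * (frames[3] - frames[1]))
```

The recovered `J_rec` equals `J` exactly in this noiseless model, which is why dithering the path delay isolates the interference from the background terms.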

Other elements in the optical system were a mechanical shutter at the interferometer input, a 3-nm bandpass spectral filter centered on a wavelength of 633 nm at the output plane, and a 512 x 512-pixel back-illuminated charge-coupled-device detector array.

"Cone-beam tomography uses ray projections through vertices lying on a curve called the vertex path," said Marks. "Exact reconstruction of an object volume is possible if all planes through the object volume intersect that path.

"Our test object was placed 1.61 m from the interferometer sensor plane and illuminated by a white halogen lamp. We sampled the vertex path by rotating the object in front of the device, recording planes of coherence data from 128 vertex points equally spaced in angle over one revolution. At each point, we captured eight frames of 128 x 128 intensity samples. Fourier-transform analysis of the mutual-intensity data was then used to reconstruct the 128 x 128 x 128 data volume. The resulting data cube, including the image, was 10.6 cm on a side with a resolution of 830 µm."
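The quoted numbers are internally consistent: a 10.6-cm cube sampled at 128 voxels per side gives a voxel pitch of about 830 µm (note this is the sampling pitch; the optical resolution limit is a separate question).

```python
# Quick arithmetic check of the reported reconstruction numbers:
# a 10.6-cm data cube divided into 128 voxels per side.
side_cm = 10.6
n_voxels = 128
pitch_um = side_cm / n_voxels * 1e4   # cm -> micrometers
print(round(pitch_um))  # prints 828, consistent with the quoted 830-µm resolution
```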

Marks and his colleagues believe these results show that neither point-by-point scanning, as in confocal microscopy or coherence tomography, nor heuristic analysis, as in computer vision, is necessary for 3-D reconstruction and that diffraction-limited 3-D optical reconstruction is possible with purely physical field analysis.

Paula M. Noaker

REFERENCES

  1. D. Marks et al., Science 284, 2164 (June 25, 1999).

OPTICS FABRICATION


Continuous polishing smooths Megajoule project

Critical optical components for the French Laser Megajoule (LMJ) project to create inertial confinement fusion are now in production. Since February 1999, sets of Nd:glass amplifier plates have been polished by a continuous polishing machine designed by REOSC (Massy, France) under contract to the Commissariat à l'Energie Atomique (CEA). The Megajoule project, located at the CEA-CESTA laboratory near Bordeaux, France, will contain 240 beamlines and is designed to produce 1.8-MJ pulses.


The large (760 x 400-mm) Nd:glass amplifier plates will be used to equip the eight-beam Light-Integrating Line, a prototype intended to validate the LMJ technology. Four of the eight beamlines will be operational in 2001, and target experiments will start shortly thereafter at the 60-kJ level. The LMJ itself will be completed in two phases, with the full 240 beamlines finished at the end of the decade.

The high-quality Nd:glass was obtained by a continuous casting process developed jointly by Hoya (Tokyo, Japan) and Schott Glassworks (Mainz, Germany). The 4.2-m-diameter revolving plate on the polishing machine is one of the largest in the world (see photo). It is equipped with three satellites of 1.3-m diameter, each able to support two Nd:glass plates; thus, six plates can be polished simultaneously. A 2-m-diameter Zerodur conditioning disk ensures stability.

Roland Roux
