WAVEFRONT ANALYSIS: Digital wavefront sensors boost resolution, sensing options

Analog optical-wavefront sensors are gradually being replaced by fully digital wavefront cameras, which offer the simultaneous capability to visualize video images and measure wavefronts at a resolution limited only by the camera's pixel size.

Oct 28th, 2009
PhaseView Table

Industrial wavefront sensing has its roots in astronomy: in 1980, an array of microlenses called a Shack-Hartmann sensor was used to improve the transmission efficiency of wavefront sensing in low-light situations. The Shack-Hartmann technology reached maturity in 1990, when wavefront sensors first became commercially available.

The search for higher resolution in wavefront sensing has since resulted in the development of wavefront sensors based on multilateral shearing interferometry (introduced in 1995) and in wavefront-curvature sensors (introduced in 2000); both are systems that are based on 2-D diffraction gratings. These instruments are "analog," making use of a mix of more-or-less complex optical-hardware components and electronics to achieve better results for parameters such as dynamic range, sensitivity, and resolution. More recently, the advent of digital wavefront sensors has taken the technology even further, offering a number of advantages over conventional analog instruments (see table).

Conventional wavefront sensors

In a Shack-Hartmann sensor, a wavefront is decomposed into elementary subwavefronts by a grid of microlenses placed at the plane of wavefront analysis. Each of the microlenses creates a beam focused into a spot on a focal plane where a CCD camera is placed. The displacement of the spot with respect to a precalibrated position (corresponding to an undisturbed wavefront) is proportional to the local slope of the wavefront. Detecting the spots and integrating their displacements all across the focal plane in a very short time results in an instantaneous estimate of the wavefront shape.
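The spot-to-slope principle can be sketched in a few lines of Python (an illustrative toy, not vendor code; the focal length and pixel pitch are arbitrary example values):

```python
import numpy as np

def spot_centroid(subimage):
    """Intensity-weighted centroid (row, col) of one subaperture image."""
    rows, cols = np.indices(subimage.shape)
    total = subimage.sum()
    return (rows * subimage).sum() / total, (cols * subimage).sum() / total

def local_slope(subimage, ref_centroid, focal_length, pixel_pitch):
    """Local wavefront slope (y, x) from the spot's displacement.

    The displacement of the focused spot from its calibrated reference
    position, divided by the microlens focal length, approximates the
    average wavefront tilt over that subaperture.
    """
    cy, cx = spot_centroid(subimage)
    dy = (cy - ref_centroid[0]) * pixel_pitch
    dx = (cx - ref_centroid[1]) * pixel_pitch
    return dy / focal_length, dx / focal_length

# Synthetic subaperture: one bright spot shifted 2 pixels in x.
img = np.zeros((16, 16))
img[8, 10] = 1.0                # calibrated reference position is (8, 8)
sy, sx = local_slope(img, (8.0, 8.0), focal_length=5e-3, pixel_pitch=10e-6)
# sx = 2 px * 10 um / 5 mm = 4e-3 rad of local tilt
```

Integrating such slope samples across the whole grid of subapertures reconstructs the wavefront shape.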

A disadvantage of Shack-Hartmann sensors is their limited resolution, particularly in detecting higher-order aberrations: several square pixels are required to obtain a measurement of the wavefront at one point in the analysis plane. The first Shack-Hartmann sensors were used in closed-loop adaptive-optics systems in astronomical telescopes, in which the speed of the measurement and its convergence to an accurate measurement over several iterations are of prime concern. Improving the resolution by reducing the area allocated to one spot reduces the dynamic range of the sensor. Moreover, reducing the size of a microlens increases crosstalk between the microlenses, as each microlens creates several diffraction orders with elevated sidelobes, making it difficult to identify the center of each spot. Placing a high-frequency signature on each microlens so that each spot can be recognized increases the manufacturing cost of the microlenses.

To improve on resolution, lateral-shearing interferometers feature a two-dimensional diffraction grating, whereby the incident beam is split into several sub-beams that interfere in the plane of the camera. The interference pattern so created is processed and the local slope of the wavefront at the analysis plane is measured. The resolution of the lateral shearing interferometer is several times finer than that for Shack-Hartmann sensors.

Lateral-shearing interferometers were first used in off-line deconvolution of images, where the quality of the measurement was important. In these instruments, the distance between the analysis plane and the detector plane can be adjusted to optimize the tradeoff between dynamic range and resolution for a given application. As a natural generalization of the Shack-Hartmann sensor, lateral-shearing interferometers share its disadvantages, though to a lesser extent, at the cost of more complex grating manufacture and processing of the individual interferograms.

A further drive to improve wavefront-sensing resolution resulted in curvature sensors, in which the second-order derivative of the wavefront is estimated by measuring the longitudinal variation of the wave's intensity. Real-time sensing of two or more intensity profiles requires a parabolic-shaped diffraction grating with spatially varying period and pitch, with images recorded on a CCD behind the grating. The resolution of curvature sensors is the highest among the three major types of analog wavefront sensors: roughly two neighboring pixels yield one measurement point for the wavefront, whereas several pixels are required per measurement point in the other two sensors. This resolution, however, comes at the expense of a decreased allowed optical bandwidth (because the grating is designed for a specific central wavelength); increased light energy required to measure the wavefront (because the division into diffraction orders splits the light energy); and increased grating complexity, with the resultant higher cost of manufacturing special diffracting elements.

Today, conventional wavefront sensors are handicapped by their use of complex hardware elements. To achieve the better resolution demanded in industries where wavefront sensors are used as quality-control tools, they must be equipped with still-more-complex hardware components, such as special microlenses containing high-frequency signatures or rotations for better localization of spots and reduced crosstalk between lenses, or complex multiple-order two-dimensional tri- or tetrahedral diffraction-lens arrays. Although these sensors operate over a relatively broad range of optical frequencies, their use for measurements in the UV, IR, or x-ray range relies on hardware components specially designed for the given application or wavelength.

Digital wavefront sensors

Digital wavefront-sensing technology combines minimal hardware with intensive use of specialized algorithms: software, rather than conventional hardware elements, carries the burden of achieving the highest wavefront-sensing performance. As a result, digital wavefront cameras simultaneously provide complete information about an electromagnetic wave's two major characteristics, intensity and phase, complementing the usually available information about the wave's state of polarization and wavelength (see Fig. 1).

FIGURE 1. The trend in wavefront-sensing technology is toward more wavefront points per 1000 pixels. (Courtesy of PhaseView)

Digital wavefront cameras rely upon measurements of the energy redistribution in 3-D space: like curvature sensors, they measure the variation of the wave's intensity along the optical axis, and like Shack-Hartmann sensors and lateral-shearing interferometers, they measure the redistribution of the wave's intensity in the transverse direction. The evolution of the beam through space is sensed by imaging it at different planes transverse to the optic axis onto a CCD camera, demultiplexing the images, and applying fast mathematical differential-equation solvers to obtain the beam's wavefront. Thus, no diffracting elements or microlenses are required, although at the cost of increased computational effort.
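One widely used differential-equation formulation behind such solvers is the transport-of-intensity equation (TIE), which relates the longitudinal intensity derivative to the Laplacian of the phase. The sketch below is a minimal Fourier-domain TIE inversion under a uniform-intensity assumption; it illustrates the general principle only and is not a reconstruction of any vendor's proprietary algorithm.

```python
import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, pixel_pitch):
    """Recover a phase map from two intensity images recorded a distance
    2*dz apart along the optic axis, via the transport-of-intensity
    equation under a uniform-intensity approximation:
        laplacian(phi) = -(k / I0) * dI/dz
    The resulting Poisson equation is inverted in the Fourier domain.
    """
    k = 2 * np.pi / wavelength
    i0 = 0.5 * (i_plus + i_minus)
    didz = (i_plus - i_minus) / (2 * dz)        # finite-difference dI/dz
    rhs = -k * didz / i0.mean()
    ny, nx = rhs.shape
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    freq_sq = (2 * np.pi) ** 2 * (fxx ** 2 + fyy ** 2)
    freq_sq[0, 0] = 1.0                         # avoid division by zero
    phi_hat = -np.fft.fft2(rhs) / freq_sq
    phi_hat[0, 0] = 0.0                         # piston term is unobservable
    return np.fft.ifft2(phi_hat).real

# Synthetic check: a sinusoidal phase of known amplitude is recovered.
n, pitch, wl = 64, 1e-6, 633e-9
x = np.arange(n) * pitch
phi_true = 0.1 * np.cos(2 * np.pi * 4 * x / (n * pitch)) * np.ones((n, 1))
k = 2 * np.pi / wl
lap = -(2 * np.pi * 4 / (n * pitch)) ** 2 * phi_true
didz = -lap / k                                 # TIE with I0 = 1
dz = 1e-7
phi = tie_phase(1 - dz * didz, 1 + dz * didz, dz, wl, pitch)
```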

Today's digital wavefront cameras typically feature a wavefront-measurement sensitivity of λ/100 over a dynamic range of several hundred wavelengths; a resolution of about 250,000 measurement points over an aperture diameter of 5 mm is achievable. With no specialized hardware elements, digital wavefront sensors are well suited for measurements over a broad range of wavelengths, from IR to x-ray. In spite of the increased computational burden, a measurement frequency of 15 Hz or higher is achieved, and these performance figures can readily be improved upon to address specific applications such as high-resolution imaging of retinal-nerve terminations in ophthalmology.

Digital wavefront cameras can measure wavefront tilt, divergence, and convergence, reducing the burden on the adaptive optics and thus increasing the capability of measuring very rapid turbulence phenomena. Digital sensors have high sensitivity in low-light situations, as well as a considerably relaxed tradeoff between resolution and dynamic range. Thus, the cameras can produce high-fidelity measurements of both low- and high-order aberrations of the aspheric lenses used in DVD pickup heads, cell phones, and intraocular lenses.

Another advantage of digital wavefront sensors is their capability of sensing broadband wavefronts. This capability offers substantial benefits in applications such as enhancing the contrast of white-light-microscopy imaging of biological cells without introducing toxic contrast agents, as well as in low-dose x-ray applications. Other advantages of digital vs. analog are portability, better reproducibility, and ease of setting up by a nonexpert (due to lack of specialized hardware).

Laser-beam profiling

Digital wavefront cameras can be used for measuring laser beam-propagation parameters and wavefronts in pulsed and continuous modes for lasers operating at visible to far-IR wavelengths. They provide parameters necessary to assess laser beam quality, such as beam propagation ratio (M²), width of the laser beam at its waist, laser-beam divergence angle and ellipticity, waist location, Rayleigh range, beam wavefront, and Zernike aberration modes.

Propagation parameters are measured by focusing the beam with a fixed-position lens of known focal length, and then measuring the characteristics of the artificially created beam waist and divergence. Measurement is based on the simultaneous acquisition of high-resolution images of intensity and wavefront. From the wavefront, the beam propagation parameters are obtained by straightforward but tedious computations. Beam shape and intensity distribution can be visualized at any distance; users can make beam adjustments in real time by visualizing the beam along the Z propagation axis. A graph of waist and other 2-D and 3-D intensity profiles can be easily extracted by selecting a Z position with a slider (see Fig. 2).
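The relations underlying these propagation parameters are standard (ISO 11146 second-moment definitions); a minimal sketch, with hypothetical measurement values:

```python
import math

def m_squared(waist_radius, half_divergence, wavelength):
    """Beam propagation ratio M^2 from the measured waist radius w0 and
    far-field half-angle divergence theta (second-moment definitions):
    M^2 = pi * w0 * theta / wavelength; equals 1.0 for an ideal Gaussian.
    """
    return math.pi * waist_radius * half_divergence / wavelength

def rayleigh_range(waist_radius, wavelength, m2):
    """Distance from the waist at which the beam area doubles:
    zR = pi * w0^2 / (M^2 * wavelength).
    """
    return math.pi * waist_radius ** 2 / (m2 * wavelength)

# Hypothetical measurement: 100-um waist radius and 4.3-mrad half-angle
# divergence of a HeNe beam (632.8 nm) behind the focusing lens.
m2 = m_squared(100e-6, 4.3e-3, 632.8e-9)      # ~2.13
zr = rayleigh_range(100e-6, 632.8e-9, m2)     # ~23 mm
```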

FIGURE 2. Beam parameters (energy density, peak power, and so on) can be computed using digital beam-propagation algorithms, using data from simultaneous acquisition of beam intensity and wavefront in only one plane by digital wavefront cameras. (Courtesy of PhaseView)

Optics measurement

For optics testing, digital wavefront cameras can measure wavefront representations (measured, expected, and residual wavefronts); Zernike and Seidel polynomial coefficients derived from the measured wavefront; the point-spread function; and the modulation-transfer function (see Fig. 3). Aspheric lenses as well as concave and convex spherical lenses can be measured in either a single-pass (transmission) or a double-pass (reflection) configuration using reference spheres. Lenses with diameters from 0.7 mm up to 50 mm can be measured at a resolution limited only by the dimensions of the digital camera.
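Extracting Zernike coefficients from a measured wavefront map is commonly done by least squares; a minimal sketch, using a small unnormalized basis for brevity (real instruments fit many more modes with normalized polynomials):

```python
import numpy as np

def zernike_fit(wavefront, mask, basis):
    """Least-squares fit of Zernike-mode coefficients to a measured
    wavefront map, using only the pixels inside the circular pupil mask."""
    a = np.column_stack([z[mask] for z in basis])
    coeffs, *_ = np.linalg.lstsq(a, wavefront[mask], rcond=None)
    return coeffs

# A unit-disk grid and a few low-order modes (unnormalized for brevity).
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
mask = r2 <= 1.0
basis = [np.ones_like(x),   # piston
         x,                 # tilt
         y,                 # tip
         2 * r2 - 1]        # defocus
# Synthetic wavefront: 0.5 units of defocus plus 0.2 units of tilt.
w = 0.5 * (2 * r2 - 1) + 0.2 * x
c = zernike_fit(w, mask, basis)   # ≈ [0.0, 0.2, 0.0, 0.5]
```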

FIGURE 3. An optics-testing system based on digital wavefront cameras performs visual-defect analysis and aberration measurement at the full CCD resolution. (Courtesy of PhaseView)

Ophthalmic aberrometry

Ophthalmologists measure aberrations in the human eye using devices called aberrometers, which pass eye-safe light through the eye and monitor the light as it exits the eye. Ophthalmic aberrometry relies on wavefront sensing to determine the way the eye resolves outside objects, to determine lens prescriptions and parameters for laser eye surgery, and to allow diffraction-limited imaging of the retina for purposes including the detection of age-related macular degeneration.

Conventional aberrometers are mostly based on analog (usually Shack-Hartmann) wavefront sensors. Because around 100 sensor pixels are typically required for one measurement point, wavefront analyzers currently on the market can provide only low-resolution wavefront data (up to 1256 points for a 7-mm-diameter pupil) and, at best, low-resolution intensity distributions.

Digital wavefront cameras allow a high resolution (600 × 600 points for a 6-mm pupil) limited only by the size of one CCD pixel, while measuring both a high-resolution wavefront and a high-resolution intensity, making them "all-in-one" instruments (retina imager plus aberrometer). Dealing with images directly on a camera, rather than with measurement spots that must undergo image processing such as spot-center detection, results in higher resolution, allowing the characterization of difficult-to-image eyes (see Fig. 4). These sensors are capable of measuring from -16 to +12 diopters of sphere, as well as up to 8 µm RMS of higher-order aberrations (compared to the conventional 1.6 µm).
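As a rough consistency check on such figures, the standard ophthalmic conversion between an ANSI-normalized Zernike defocus coefficient and spherical power can be sketched as follows (illustrative only; sign conventions differ between instruments, and the pupil radius below is an assumed example value):

```python
import math

def defocus_diopters(c_defocus, pupil_radius):
    """Magnitude of the spherical refractive error (diopters) equivalent
    to an ANSI-normalized Zernike defocus coefficient c (meters) over a
    pupil of radius r (meters): |S| = 4*sqrt(3)*|c| / r^2.
    The sign depends on the wavefront sign convention, so only the
    magnitude is returned here."""
    return 4 * math.sqrt(3) * abs(c_defocus) / pupil_radius ** 2

# Defocus coefficient corresponding to 16 D over a 6-mm (r = 3 mm) pupil:
c_required = 16.0 * (3e-3) ** 2 / (4 * math.sqrt(3))   # ~20.8 um of defocus
diopters = defocus_diopters(c_required, 3e-3)          # 16.0 D
```

A 16-D error thus corresponds to tens of micrometers of defocus wavefront, which is why a dynamic range of hundreds of wavelengths matters for difficult eyes.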

FIGURE 4. Images of the retina and the wavefront are obtained with a Shack-Hartmann sensor (left) and with a digital wavefront camera (right). (Courtesy of PhaseView)

Cell imaging

Cellular analysis is of major interest in domains such as drug testing, cell pathology, biological research, and the food industry. Measurements of cell volume and refractive index permit quantification of cell growth and protein concentration; however, cell imaging is a challenging task, as cells exhibit very weak contrast due to poor light absorption. Semitransparent or transparent biological specimens are phase objects, in which the phase shift under transmitted illumination is induced by differences in the thickness and refractive index of the cellular components. Conventional methods for cellular imaging rely on specific labeling of biological structures and on contrast-enhancing techniques, including phase contrast and differential interference contrast (DIC). Labeling with fluorescent markers is a time-consuming task with undesirable side effects such as photobleaching and cell toxicity, while phase contrast and DIC suffer from optical artifacts such as halo and shading effects and are therefore mostly qualitative rather than quantitative tools. For accurate analysis of cellular dynamics, a combination of different techniques is often required.
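The phase shift in question follows a simple relation; a minimal sketch with hypothetical cell parameters (the thickness and index contrast below are assumed example values):

```python
import math

def phase_shift(delta_n, thickness, wavelength):
    """Phase delay of light transmitted through a transparent specimen:
    delta_phi = 2 * pi * delta_n * t / wavelength (radians)."""
    return 2 * math.pi * delta_n * thickness / wavelength

# Hypothetical example: a cell 5 um thick with a refractive-index
# contrast of 0.03 against its growth medium, imaged at 550 nm.
dphi = phase_shift(0.03, 5e-6, 550e-9)   # ~1.71 rad
```

Phase delays of this order are invisible to an intensity-only camera but map directly onto cell thickness and refractive index once the wavefront is measured quantitatively.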

In contrast, digital wavefront cameras allow free-space manipulation of cells and imaging without use of contrast agents; in addition, the cells can remain in situ in their growth medium (see Fig. 5).

FIGURE 5. Cells are visualized with a digital wavefront camera. (Courtesy of PhaseView)

Other advantages include simultaneous acquisition of fluorescence, confocal, and TIRF images with phase contrast, and the acquisition of quantitative phase data for further post-processing. In cellular imaging, digital wavefront cameras serve both to enhance contrast in qualitative 3-D visualizations of cell structure, and to quantitatively measure dynamic morphology changes in cells.

IGOR LYUBOSHENKO is chief executive officer at PhaseView, 7, rue de la Croix Martre, 91120 Palaiseau, France; phaseview.com; email: igor.lyuboshenko@phaseview.com.
