Optical information processing awaits optoelectronic devices

Optical information processing can perform a myriad of signal processing operations, primarily because of its complex amplitude processing capability.

Sep 1st, 2002

Coherent illumination enables one- or two-dimensional operations using simple linear operators. Translation of these capabilities into real-world applications, however, still awaits development of appropriate electro-optic interface devices.

The roots of optical information processing can be traced back to Ernst Abbe's work in the 1870s, when he developed a method that led to the discovery of spatial filtering to improve the resolution of microscopes. Mention should also be made of Herbert E. Ives' work on color-image retrieval in 1906. Optical information processing was not widely appreciated, however, until the work of Louis J. Cutrona and his colleagues on optical data processing in 1960, and later the work of Anthony Vander Lugt on complex spatial-filter synthesis in 1964. Since then, techniques, architectures, and algorithms have been developed to construct efficient optical systems for optical information processing (see "The legacy of optical information processing dates back to the 1800s," p. 72).

Of course, pure optical processors have severe drawbacks that make certain data processing operations difficult or even impossible to carry out.

Therefore, while optical processors can be designed for specific purposes, it would be difficult to make them programmable like electronic digital computers. Also, unlike electronic digital computers, optical processors cannot make decisions. Thus, combining optical and electronic processors becomes the obvious means of applying the rapid processing and parallelism of optics to a wider range of data processing applications.

Two architectures

Two well-known optical architectures are used to implement information processing: the Fourier-domain (filter) processor (FDP) and the joint-transform (spatial-domain filter) processor (JTP). One of the major distinctions between the two is that the FDP uses a Fourier-domain filter (FDF), while the JTP uses a spatial-domain filter (SDF). Also, the FDF is independent of the input signal, while the joint-transform power spectrum (JTPS) is input-signal dependent, which produces poor diffraction efficiency under intensely bright-background or multitarget input scenes. Nevertheless, these shortcomings can be easily mitigated by removing the zero-order diffraction with computer intervention. The major advantages of the JTP are flexibility of implementation and robustness to environmental perturbation.
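As a rough numerical illustration of the JTP principle (a sketch, not a model of any particular hardware), the joint transform power spectrum of a side-by-side reference and target can be Fourier-transformed a second time to yield correlation terms; zeroing the DC bin here is a crude stand-in for the zero-order removal described above. The function name and image sizes are illustrative assumptions.

```python
import numpy as np

def joint_transform_correlate(reference, target, remove_dc=True):
    """Sketch of a joint-transform correlator step: the reference and
    target share one input plane; the squared magnitude of its Fourier
    transform is the joint transform power spectrum (JTPS), and a
    second transform of the JTPS yields the correlation plane."""
    h, w = reference.shape
    # Joint input plane: reference on the left, target on the right.
    joint = np.zeros((h, 2 * w))
    joint[:, :w] = reference
    joint[:, w:] = target
    jtps = np.abs(np.fft.fft2(joint)) ** 2   # joint transform power spectrum
    if remove_dc:
        jtps[0, 0] = 0.0                     # crude zero-order suppression
    corr = np.abs(np.fft.fft2(jtps))         # correlation plane
    return np.fft.fftshift(corr)

# Identical reference and target produce off-axis cross-correlation peaks.
img = np.zeros((8, 8)); img[3:5, 3:5] = 1.0
plane = joint_transform_correlate(img, img)
```

Because the JTPS is formed from the joint input, the filter is re-created on every frame, which is the sense in which it is input-signal dependent.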

Because classical spatial matched filters are sensitive to rotational and scale variances, a score of approaches to developing composite distortion-invariant filters have been reported. Among them, the synthetic discriminant-function (SDF) filter has been one of the most important. The SDF filter can be understood as a linear combination of classical matched filters, and numerous techniques have been proposed to improve its performance. For instance, higher noise tolerance can be achieved if the variance of the SDF filter is minimized, and sharper correlation peaks can be obtained if its average correlation energy is minimized. Beyond these two relatively simple techniques, simulated-annealing-algorithm (SAA) filters have also been used for pattern recognition, in which a spatial-domain bipolar filter can be directly implemented on an input phase-modulating spatial light modulator (SLM) in a JTP.
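The "linear combination of matched filters" idea can be sketched numerically. Assuming the classical equal-correlation-peak formulation (the function name and training data below are illustrative), the filter is a weighted sum of training views whose weights are chosen so that each view yields a prescribed correlation value at the origin:

```python
import numpy as np

def sdf_filter(training_images, constraints):
    """Sketch of equal-correlation-peak SDF synthesis: the filter h is a
    linear combination of the training images x_i, with weights chosen
    so each x_i correlates with h to the prescribed peak value c_i."""
    # Column matrix X: one vectorized training image per column.
    X = np.stack([img.ravel() for img in training_images], axis=1)
    c = np.asarray(constraints, dtype=float)
    # Weights a solve (X^T X) a = c; the filter is h = X a.
    a = np.linalg.solve(X.T @ X, c)
    return (X @ a).reshape(training_images[0].shape)

rng = np.random.default_rng(0)
views = [rng.random((16, 16)) for _ in range(3)]  # stand-ins for rotated views
h = sdf_filter(views, [1.0, 1.0, 1.0])
# Each training view now produces the same central correlation value.
peaks = [float(np.sum(v * h)) for v in views]
```

The variance- and correlation-energy-minimizing variants mentioned above modify this synthesis with additional criteria, but the linear-combination structure is the common core.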

Although SLMs can be used to display complex spatial filters, they are typically low-resolution, low-capacity devices. Photorefractive (PR) materials, on the other hand, offer real-time recording, high resolution, and massive storage capacity. A thick photorefractive material, however, has limited shift invariance because of Bragg diffraction. Thinner crystals can be used, but diffraction efficiency and storage capacity are then substantially reduced. Nevertheless, high storage capacity, high diffraction efficiency, and large shift invariance can all be achieved by using reflection-type wavelength-multiplexed PR matched filters.

Two approaches

Two basic approaches can be used to optimize information processing. One is to maximize the output signal-to-noise ratio, which leads to matched filtering; the other is to minimize the mean-square error (MSE), which leads to the Wiener-Hopf solution. In other words, if we have prior knowledge of the target (or signal), then maximizing the output signal-to-noise ratio (under the assumption of additive white-Gaussian noise with zero mean) yields a matched filter. The matched filter gives rise to autocorrelation detection, which provides the highest correlation-peak intensity. This approach has been widely used in radar systems as well as in optical pattern recognition and detection.
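A minimal digital sketch of the matched-filter approach (the scene, template, and function name are illustrative assumptions): under white noise, the matched filter is the conjugate of the target's spectrum, so filtering reduces to cross-correlation, with the peak marking the target's location.

```python
import numpy as np

def matched_filter_detect(scene, template):
    """Sketch: the filter matched to a known target under additive
    white-Gaussian noise is its conjugate spectrum H = S*, so the
    output is the cross-correlation, peaking where the target sits."""
    S = np.fft.fft2(template, s=scene.shape)
    out = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(S)))
    return np.unravel_index(np.argmax(out), out.shape)

rng = np.random.default_rng(1)
template = rng.random((4, 4))
scene = 0.01 * rng.random((32, 32))      # weak background
scene[10:14, 20:24] += template          # embed the target at (10, 20)
loc = matched_filter_detect(scene, template)  # correlation peak at the target
```

Note that this maximizes peak intensity for detection; it does not attempt to restore the signal, which is the province of the MSE approach described next.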

On the other hand, if the incoming signal (or target) is unknown, then restoration of the incoming signal is essential for the receiver. For optimal restoration or reconstruction of the detected target (or signal), an optimal filter can be realized by minimizing the MSE of the actual output signal (such as the restored target) with respect to the desired output signal, under the constraint of physical realizability. Minimizing the MSE generally leads to an open-ended Wiener-Hopf solution, although under specific conditions a unique solution can be obtained. Nevertheless, neither route to optimum information processing is free; in most cases it requires an excessive amount of entropy. A fundamental question is whether we can afford it, and the answer is that sometimes we can and sometimes we cannot.
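One of the specific conditions under which a closed-form solution exists is the noncausal, frequency-domain case, where the MSE-minimizing restoration filter takes the familiar Wiener form H*/(|H|² + N/S). The sketch below assumes a known blur and a constant noise-to-signal ratio (`nsr`), both illustrative simplifications:

```python
import numpy as np

def wiener_restore(degraded, psf, nsr=0.01):
    """Sketch of the noncausal frequency-domain Wiener solution:
    W = H* / (|H|^2 + N/S), with N/S approximated by the scalar `nsr`
    that regularizes the inverse filter where |H| is small."""
    H = np.fft.fft2(psf, s=degraded.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(degraded) * W))

# Blur a test image with a 3 x 3 box PSF, then restore it.
rng = np.random.default_rng(2)
sig = rng.random((32, 32))
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sig) * np.fft.fft2(psf, s=sig.shape)))
restored = wiener_restore(blurred, psf, nsr=1e-3)
```

As `nsr` tends to zero this reduces to the inverse filter, which is why the restoration is never free: suppressing noise costs fidelity at frequencies the blur has nearly destroyed.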

There is, however, a common starting point between computer image processing and optical information processing—and that is spatial-domain processing. In other words, the temporal signal has to be converted to a spatial signal before the optical information processing can begin. One example is synthetic-aperture-radar (SAR) processing, in which a temporal microwave signal is converted into a spatially scanned format for optical processing. Because optical information processing offers high resolution and parallel processing capacity, it was initially successful for SAR processing until the recent emergence of more-efficient electronic processors, which have totally replaced conventional optical processing.

Electronic digital computing is essential for a sequential processor, and its flexibility for various implementations has overshadowed optical techniques, which remain limited by the lack of suitable electro-optic devices for optical processing. Until such devices are developed, even the significant advantages of optical parallelism, high resolution, and massive connectivity will not be able to compete with electronic methods. Ultimately, the development of better optical-digital interface devices will enable simultaneous exploitation of the merits of both optics and electronics for data processing.

FRANCIS YU is the Evan Pugh Professor of Electrical Engineering at Pennsylvania State University, 216 Electrical Engineering East, University Park, PA 16802; e-mail: fty1@psu.edu.

The legacy of optical information processing dates back to the 1800s

The following researchers have contributed to the development of optical information processing:

Ernst Abbe—for his discovery of spatial filtering (1873);
Herbert E. Ives—for his development of spectral filtering (1906);
Dennis Gabor—for his discovery of holography (1948);
Peter Elias—for his contribution of Fourier domain processing (1952);
Edward L. O'Neill—for his contribution of spatial filtering (1956);
Louis J. Cutrona—for his contribution in optical data processing (1960);
Yuri Denisyuk—for his development of reflecting-type holography (1962);
Emmett N. Leith—for his development of transmitting-type holography (1963);
Anthony Vander Lugt—for his contribution of complex spatial-filter synthesis (1964);
Joseph W. Goodman—for his contribution in Fourier optics (1968);
Adolf W. Lohmann—and many others for the development of digital holography (1969);
David Casasent—for his contribution in synthetic discriminant function filter (1980);
Demetri Psaltis and Nabil Farhat—for their contribution in optical neural networks (1985).
Many others have also contributed over the years.
