Jan 1st, 1999

DIGITAL-SIGNAL PROCESSING

Optics circumvent bottleneck in A/D conversion

Researchers at the University of California, Los Angeles (UCLA) have found an optical way to speed up the analog-to-digital (A/D) conversion process that currently limits the use of available digital-signal-processing capacity for high-performance communication and radar systems. The new method is based on technology that has already been developed for telecommunications, particularly in the area of wavelength-division multiplexing, but requires better signal-to-noise performance than do telecom applications.

A wide range of defense and telecom applications stand to benefit from development of the UCLA method. The Defense Advanced Research Projects Agency (DARPA) has provided multimillion-dollar funding for a four-year, multi-institution consortium led by UCLA to build a 40-Gsample/s, 8-bit, A/D converter.

The underlying problem might be thought of as a need to balance the contradictory effects of Moore's Law and Murphy's Law. The doubling of chip capacity every 18 months described by Moore's Law has driven digital-signal-processor performance to several billion operations per second, according to Bahram Jalali, the principal investigator on the project.

Such fast system speeds offer tremendous opportunities for high-performance digital-signal processing in applications ranging from radar and electronic warfare to cellular telephony. But as Murphy's Law would have it, a bottleneck has arisen in the mature technologies used for A/D conversion. Instead of doubling every 18 months as chip capacity does, converter resolution at a given sampling speed improves by only about 1 bit every five years.

"The fastest analog-to-digital converter that I can buy with 10-bit resolution is about 100 Mbit/s," Jalali said. "If you compare that to several giga operations per second of processing power, there is a huge gap. As a result, the A/D converter is the bottleneck in system performance."

New approach

Because it didn't seem likely that A/D conversion speeds would catch up with processor speeds anytime soon, Jalali's group took another approach. "We said, 'Instead of making the A/D converter faster, can we make the [analog] signal slower?'" If so, the high-speed analog signal could be segmented and interleaved into parallel channels, each of which could be slowed down enough for A/D conversion with available technology, without losing any information from the original signal.
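To make the segment-and-interleave idea concrete, here is a minimal numerical sketch. It is not the UCLA hardware (which slows the signal optically); the function name, channel count, and stretch factor are illustrative assumptions. The point it demonstrates is that a converter clocked at 1/M the original rate can still recover every original sample once each segment has been stretched in time by a factor M.

    import numpy as np

    def segment_stretch_digitize(signal, n_channels, stretch, n_bits):
        """Split signal into n_channels segments, time-stretch each copy,
        and quantize the slowed copy with an n_bits converter."""
        digitized = []
        for seg in np.array_split(signal, n_channels):
            # Stretching by M: the same waveform now spans M times as many
            # ticks of the (unchanged) converter clock.
            t_slow = np.arange(stretch * len(seg)) / stretch
            slowed = np.interp(t_slow, np.arange(len(seg)), seg)
            # The converter keeps every M-th stretched point: it samples the
            # slow copy at 1/M the original rate yet sees every original sample.
            samples = slowed[::stretch]
            levels = 2 ** n_bits
            q = np.round((samples + 1) / 2 * (levels - 1))
            digitized.append(q / (levels - 1) * 2 - 1)  # back to [-1, 1]
        return np.concatenate(digitized)

    # A 'fast' test tone digitized through 4 parallel, 8x-slowed channels.
    t = np.linspace(0, 1, 4096, endpoint=False)
    x = np.sin(2 * np.pi * 40 * t)
    y = segment_stretch_digitize(x, n_channels=4, stretch=8, n_bits=8)
    print(np.max(np.abs(y - x)))  # residual is quantization error only (~half an LSB)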

The UCLA researchers developed a method of time-stretching incoming analog signals based on work already done in the optical community on stretching and compressing Gaussian pulses. The method developed for optical pulses involves three steps: dispersing the signal, chirping it, and dispersing it again. To apply this technique to real-time signals, the UCLA team devised the segmenting and interleaving scheme described above and also observed that the stretching process could be simplified by using a chirp bandwidth much larger than the bandwidth of the signal to be stretched.

"If I use a very short [femtosecond-width] optical pulse, then I can obtain a bandwidth of several hundred terahertz, whereas the signal I am trying to stretch is, at most, several hundred gigahertz," Jalali said. "So by performing a chirp using ultrashort pulses, we can satisfy the condition [of a much higher chirp bandwidth]. That gives us a very simple system that we can implement using commercially available components."

The wide-bandwidth (7.5-THz) optical chirp pulse for the time-stretching technique is generated by passing a 160-fs pulse from a mode-locked, erbium-doped fiber ring laser through 1.1 km of single-mode optical fiber. Intensity modulation of the chirped pulse by the analog input signal is performed in an electro-optic (lithium niobate) modulator. The signal is then time-stretched by passing it through a second, 7.6-km length of single-mode fiber (see figure).2
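Under that large-chirp-bandwidth condition, a stretch factor can be estimated directly from the two fiber lengths. A minimal sketch follows, assuming the commonly cited time-stretch relation M = 1 + L2/L1 for identical fiber dispersion in both stages (the formula and the back-end converter rate are assumptions here; the article supplies only the lengths).

    # Estimated stretch factor from the two fiber stages, assuming the
    # relation M = 1 + L2/L1 for identical fiber dispersion per km.
    L1_KM = 1.1   # first stage: chirps the 160-fs pulse to 7.5 THz
    L2_KM = 7.6   # second stage: stretches the modulated chirp

    M = 1 + L2_KM / L1_KM
    print(f"stretch factor M ~ {M:.1f}")    # ~7.9x slower signal

    # A converter digitizing the stretched copy at rate f_adc captures an
    # effective input rate of M * f_adc.
    F_ADC_GSPS = 1.0                        # hypothetical 1-Gsample/s back end
    print(f"effective rate ~ {M * F_ADC_GSPS:.1f} Gsamples/s per channel")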

Hassaun Jones-Bey

REFERENCES

1. A. S. Bhushan, F. Coppinger, and B. Jalali, Electron. Lett. 34(9), 839 (April 30, 1998).

2. B. Jalali, A. S. Bhushan, and F. Coppinger, "Photonic Time-Stretch: A Potential Solution for Ultrafast A/D Conversion," IEEE Microwave Photonics Conference, MWP '98, Princeton, NJ (October 1998).
