CMOS image sensors may not outshine CCDs in image quality, but they offer far greater potential for on-chip processing.
Smart image sensors based on complementary metal-oxide semiconductor (CMOS) technology are opening up a broad range of consumer applications that charge-coupled-device (CCD) technology cannot serve. But image quality remains the primary proving ground for CMOS in the applications that CCD technology does serve. At the same time, scientific demands are driving further development of smart-chip capabilities that are also likely to find commercial applications. The two directions can be seen in the diverging commercial and scientific development paths of the CMOS active-pixel sensor (APS) technology developed at the Jet Propulsion Laboratory (JPL; Pasadena, CA) in 1993.
Photobit (Pasadena, CA), which spun off from JPL in 1995 with exclusive licenses to commercialize APS technology, introduced a 1280 × 720-pixel "camera on a chip" targeted at the high-definition television (HDTV) market in October at the 1998 Vision Show (San Jose, CA). In addition, the company expects to deliver products with megapixel resolution in the near future. The JPL lab that originally spawned the Photobit technology (and supplied the core of Photobit's staff) has also developed a camera on a chip, but its focus has been less on the resolution needed to compete for consumer applications and more on integrating high-level processing capabilities, such as autonomous navigation and foveal vision, with immediate applications in space exploration and military use.
Granted, there is significant overlap between these approaches, much of it stemming from the primary advantage that CMOS brings to imaging: the ability to fabricate image sensors and processing circuitry on a single integrated chip. The jury is still out, however, on whether that capability can deliver a level of image quality that will actually wean consumers away from CCDs.
Image quality spectrum
Currently available CMOS image sensor technology tends to focus on a resolution range from roughly 100,000 pixels for the common intermediate format (CIF; 352 × 288) to just over 300,000 pixels for the video graphics array (VGA; 640 × 480). At the low end, the range starts with toys and moves up through fingerprint recognition to higher-resolution VGA applications, such as PC video conferencing, digital cameras, and video cell phones. A lot of commercial activity is currently targeted at this potentially lucrative low-to-medium resolution zone. CMOS image quality for the megapixel realm of camcorders and 35-mm cameras, however, has yet to be demonstrated.
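To put those format names in perspective, here is a quick back-of-the-envelope script; the 30-frame/s, 8-bit readout and the ~1-Mpixel example resolution are our own illustrative assumptions, not figures from the article:

```python
# Pixel counts and raw data rates for the formats discussed above.
# The 30 frame/s, 8-bit readout assumption is illustrative only.
formats = {
    "CIF": (352, 288),
    "VGA": (640, 480),
    "HDTV (Photobit chip)": (1280, 720),
    "megapixel still": (1152, 864),   # hypothetical ~1-Mpixel example
}

for name, (w, h) in formats.items():
    pixels = w * h
    mbytes_per_s = pixels * 30 / 1e6  # 8 bits/pixel at 30 frames/s
    print(f"{name:22s} {w} x {h} = {pixels:>9,d} pixels, "
          f"~{mbytes_per_s:.1f} MB/s raw")
```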
Eric Fossum, who led the original CMOS APS development effort at JPL before leaving in 1996 to become chief scientist at Photobit, said that CMOS already outperforms CCDs at the low end of the resolution continuum, is roughly equal in the mid-zone, and is beginning to catch up at the high end. Jerry Berger, who directs the image sensing and image capture business group at Sony (Montvale, NJ), which added two more 400-megabit systems to its high-performance stable of CCD cameras at the Vision Show, isn't convinced that CMOS is ready to compete in the mid-resolution range, however. Berger said Sony's OEM customers have stated that they are not willing to compromise CCD image quality, even for videoconferencing, to get the power-consumption and fabrication-cost advantages offered by CMOS.
From Fossum's perspective, though (a view based partly on Photobit's success in delivering about 20 custom-designed CMOS chips to OEM customers during the past three years), part of the CMOS image problem is a holdover from early low-resolution systems that used passive-pixel sensors rather than APS technology, which puts an amplifier on every pixel.1 There is also just as much art as science in chip design. So even though shrinking design-rule dimensions will tend to improve the resolution of CMOS devices automatically, a good bit of tweaking is still needed to deliver high-quality images.
"When we started, the first pixel was 40 µm in size, and we brought it down to a 10-µm pixel size [when minimum design rules went from 2 to 0.5 µm]," said Bedabrata Pain (pronounced Pine), who heads the current CMOS APS effort at JPL. "At 40 µm the spatial resolution was poor. But at 10 µm it became very good." He also described some of the tinkering that went into image improvements. A lot of time was spent in offset cancellation to get fixed pattern noise between adjacent circuits down to less than 0.1%. And pixel amplifier design had to be optimized to cut random read noise at the processor output to an uncertainty level on the order of four electrons. Pixel design and timing were juggled to avoid ghost images due to image lag. In addition, the JPL group abandoned the commonly used "rolling shutter" exposure mode for simultaneous electronic exposures or "snapshots" to get rid of image artifacts caused by moving subjects.
Of course, CMOS could be optimized for image quality in the foundry by procedures such as doping of the depletion region to improve collection of photoelectrons, Pain said. "Motorola, for instance, has changed its standard CMOS line to incorporate some of the changes," he said. "It remains to be seen how many other companies do that."
Brain over beauty
Others argue that the real value of the CMOS APS lies elsewhere. Abbas El Gamal, head researcher on a Stanford University (Palo Alto, CA) project to move analog-to-digital conversion from the chip level to the pixel level, holds that the point is unadulterated CMOS that can be fabricated and processed inexpensively while integrating timing and control circuitry, analog-to-digital conversion, and other functions onto the same chip. Regardless of the degree to which CMOS ultimately meets or exceeds the image quality of CCDs, El Gamal said, the miniaturization and power efficiency required to "put a camera on your pager" will never be achieved using CCDs.
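To see what moving conversion into the pixel might look like, here is a generic first-order sigma-delta sketch in which every pixel runs its own slow 1-bit converter and the bits are accumulated into a digital value. This illustrates the general idea only; it is not the Stanford design:

```python
import numpy as np

# Each pixel integrates its input and emits a 1 whenever the integrator
# crosses threshold; the fraction of 1s over many cycles approximates the
# light level.  All array sizes and cycle counts are illustrative.
rng = np.random.default_rng(1)
H, W, n_cycles = 64, 64, 256

x = rng.uniform(0.0, 1.0, (H, W))      # normalized photocurrent per pixel
acc = np.zeros((H, W))                 # integrator state, one per pixel
counts = np.zeros((H, W))              # number of 1 bits output so far

for _ in range(n_cycles):
    acc += x                           # integrate the input
    bit = acc >= 1.0                   # 1-bit quantizer
    acc -= bit                         # feedback subtracts the output
    counts += bit

digital = counts / n_cycles            # ~8-bit estimate of x
print("max conversion error:", np.abs(digital - x).max())  # < 1/n_cycles
```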
An example of where this type of work may lead can be seen in a collaborative foveal machine-vision project between JPL and Amherst Systems (Buffalo, NY), which specializes in electronic warfare and defense avionics. The idea behind the project is to give a machine-vision system the human visual capability of surveying a broad area while focusing on items of interest.
Machine-vision systems are generally based on a uniform distribution of photoreceptors that passively observe a designated field of view, according to Cesar Bandera, head of the machine-vision department at Amherst Systems. The vision of most vertebrates, including humans, is instead based on a dense concentration of photoreceptors at the center of the retina, the fovea centralis, that thins out progressively toward the periphery, losing about two orders of magnitude in spatial resolution. This arrangement provides high spatial resolution in the area of visual attention (where we point our eyes), a wide but low-resolution surrounding field of view, and high temporal resolution in the periphery, so that when we sense motion in our peripheral vision we can reorient our gaze to see what it is.
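A toy model makes the payoff concrete. Assuming (our simplification, not the Amherst or JPL design) that superpixels double in side length with each concentric ring away from the gaze point:

```python
import numpy as np

# Toy foveation model: spatial resolution falls roughly two orders of
# magnitude from the fovea to the periphery, and data volume collapses,
# because each s-by-s superpixel is read out as a single value.
H = W = 2048
cy, cx = H // 2, W // 2                       # gaze direction (fovea center)

rows, cols = np.mgrid[0:H, 0:W]
ecc = np.maximum(np.abs(rows - cy), np.abs(cols - cx))   # distance from gaze
ring = np.floor(np.log2(ecc // 8 + 1)).astype(int)       # concentric rings
size = np.minimum(2 ** ring, 128)             # 1x1 fovea ... 128x128 periphery

foveated_samples = (1.0 / size.astype(float) ** 2).sum()
print(f"uniform readout:  {H * W:,} pixels")
print(f"foveated readout: ~{foveated_samples:,.0f} values "
      f"({H * W / foveated_samples:.0f}x less data)")
```

On this toy layout, a 4-Mpixel uniform array collapses to a few thousand values per frame, which is the kind of reduction that lets the downstream processing budget shrink.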
Is that you, R2?
Prior to working with JPL, Amherst had managed to replicate the human visual system using a combination of CCD and CMOS APS devices, Bandera said. The resulting system reproduced the desired advantages of human vision, along with one major disadvantage: the fovea centralis had to be pointed mechanically. About two years ago, however, Pain's group at JPL developed a reconfigurable multiresolution sensor based on an array of sensors that can reconfigure themselves electronically to gaze in any direction while remaining motionless. Remember the cute little R2-D2 droid from the movie Star Wars? That character would have seemed a lot less cute, and even a bit ominous, if it hadn't had to spin its little dome around to see where it was going.
After working with the JPL technology-transfer program for about a year and a half, Amherst has produced commercially available foveal cameras, based on the JPL chip, that connect to a personal computer. These foveal machine-vision systems feed their image data into a multiresolution video processor and can reconfigure their fields of view within microseconds by electronically combining individual pixels into larger foveal regions. Because resolution and field of view are controlled at the chip level, data-processing requirements are cut significantly compared with traditional uniform machine-vision sensors. Bandera said that in a recent project, reconfigurable foveal vision allowed the team to cut 10 pounds from the weight of a 75-pound robot and to increase its battery life by 15%.
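Because a foveation pattern like the toy model above is just index arithmetic, redirecting the gaze amounts to changing a parameter rather than moving any hardware. A hypothetical continuation of that sketch (the function name and ring constants are ours, not Amherst's):

```python
import numpy as np

H = W = 2048

def binning_map(cy, cx):
    """Superpixel side length for every pixel, for a fovea centered at (cy, cx)."""
    rows, cols = np.mgrid[0:H, 0:W]
    ecc = np.maximum(np.abs(rows - cy), np.abs(cols - cx))
    return np.minimum(2 ** np.floor(np.log2(ecc // 8 + 1)).astype(int), 128)

center_gaze = binning_map(H // 2, W // 2)   # fovea in the middle
corner_gaze = binning_map(64, 64)           # "look" toward the upper-left corner

# The same physical pixel goes from full resolution to coarse binning with
# no mechanical pointing: only the readout configuration changed.
print("binning at (1024, 1024):",
      center_gaze[1024, 1024], "->", corner_gaze[1024, 1024])
```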
Bandera and Pain are currently working on second- and third-generation devices with multiple independent foveae to support multiple small reconfigurable cameras, which Bandera hopes to use in autonomous vehicles and weapon systems. Potential consumer and industrial applications of this technology are numerous, irrespective of how CCD and CMOS compare on image quality. Imagine a watchdog that you never have to feed, housebreak, or take to the vet.
Beyond CMOS
Of course, when CMOS technology transitions to silicon-on-insulator (SOI) in the next decade or so, similar concerns are likely to arise about optimizing the performance of image sensors. So, as one might expect from a group that is looking five to ten years beyond immediate commercial needs, Pain's research group at JPL has already solved that problem.
"Initially, it was thought that, if SOI becomes the dominant technology, APS would fall flat on its face, because SOI is a pixel technology and will not yield enough optical collection for a good imager to be built," he said. "So we have come up with a new way of imaging on SOI to improve imaging performance, while taking full advantage of SOI to get higher packing density, radiation hardness, and integration on chip. These are the front-end one-of-a-kind devices that we build."
REFERENCE
1. E. R. Fossum, IEEE Micro 18(3), 8 (May/June 1998).