Image fusion triples resolution of color sensors


Jun 1st, 2005

Yun Zhang, associate professor in the Department of Geodesy and Geomatics Engineering at the University of New Brunswick (Fredericton, New Brunswick, Canada), has introduced an architecture for producing a triple-sensitive color digital frame sensor based on image fusion of low-resolution multispectral (color) images with high-resolution panchromatic (black-and-white) images.1

Zhang proposes that any multispectral imaging system can produce images with a threefold improvement in fundamental resolution by adding a panchromatic image sensor to the hardware and applying his image-fusion algorithms. The fusion technique developed by Zhang yields minimal color distortion, maximum spatial detail, and natural color and feature integration, with the highest reported processing speed among tested fusion techniques.2 Zhang has applied his algorithms to existing Ikonos and QuickBird satellite images (the Ikonos and QuickBird satellites are operated by Space Imaging of Denver, CO, and DigitalGlobe of Longmont, CO, respectively) to prove the concept.


A 1-m-resolution panchromatic image of the city of Fredericton, NB, from the Ikonos satellite (left, top) is fused with a 4-m-resolution multispectral image (left, bottom) using techniques developed at the University of New Brunswick. The resulting 1-m-resolution color image (above) has minimal color distortion compared to other image-fusion techniques. (Space Imaging photos courtesy City of Fredericton, NB, Canada.)

Because of limitations in spectral bandwidth, the sensitivity of a color digital sensor is usually three times lower than that of a panchromatic digital sensor, whose spectral bandwidth spans the entire visible range, or a range from visible to near-IR. Most digital sensors (CCD or CMOS) are inherently monochrome and therefore cannot by themselves record color information in red, green, and blue (RGB). Several techniques can be used to obtain this RGB information: covering individual cells of a digital sensor with red, green, and blue color filters; taking three separate exposures on a single sensor through three different color filters; using a beamsplitter to image onto three different sensors, each with its own color filter; or even using a sensor such as one from Foveon (Santa Clara, CA) that uses three layers of pixels embedded in silicon to capture the three primary colors in one exposure. The current spatial resolutions of Ikonos and QuickBird multispectral satellite images are 4 and 2.8 m, respectively.
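The threefold sensitivity gap follows from simple bandwidth arithmetic; the figures below are illustrative round numbers, not actual sensor specifications:

```python
# Illustrative back-of-the-envelope comparison (not real sensor specs):
# a panchromatic band spanning the full visible range (~400-700 nm)
# collects roughly the photons of all three color bands combined,
# each of which covers only about a third of that range.
pan_bandwidth_nm = 700 - 400                   # full visible range
color_bandwidth_nm = pan_bandwidth_nm / 3      # one RGB band
sensitivity_ratio = pan_bandwidth_nm / color_bandwidth_nm
print(sensitivity_ratio)  # -> 3.0
```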

Color distortion a problem

The panchromatic sensors on Ikonos and QuickBird provide high-resolution (1.0 and 0.7 m, respectively) black-and-white satellite images taken concurrently with the lower-resolution color images. Researchers are interested in increasing overall image resolution through image fusion without upgrading the sensors already installed in these satellites. Several image-fusion techniques have been applied with varying degrees of success. Intensity-hue-saturation (IHS) fusion converts a color image from RGB into the IHS color space, replaces the intensity band with the panchromatic image, and converts the result back to RGB. Principal-components analysis (PCA) fusion transforms the intercorrelated multispectral bands into a set of uncorrelated components, replaces the first component with the panchromatic image, and applies the inverse transform. In addition, various arithmetic combinations and wavelet-fusion techniques have been developed. The most significant remaining problem is color distortion, and fusion quality often depends on the experience of the operator or on the dataset being fused.
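As a sketch of the IHS idea, the fast additive variant injects panchromatic detail by swapping the intensity component (here approximated as the simple band mean) for the pan band. The function and array names below are hypothetical, and the code is only a minimal illustration of the technique:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast IHS pan-sharpening sketch (additive variant).

    ms  : (3, H, W) RGB bands, already resampled to the pan grid
    pan : (H, W) panchromatic band
    """
    intensity = ms.mean(axis=0)   # simple intensity component of IHS
    detail = pan - intensity      # high-resolution detail to inject
    return ms + detail            # broadcast the detail into all bands
```

After fusion, the band mean of the output equals the pan image, which is how the spatial detail is carried into the color result; the color distortion discussed above arises when the pan band's gray values differ systematically from the original intensity.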

The problem of color distortion becomes even more complex for images from satellites launched after 1999: the wavelength range of their panchromatic sensors extends into the near-IR, which significantly alters the gray values of the panchromatic images and produces even more color distortion in fusion algorithms such as IHS and PCA.

Least-squares technique

To solve the color-distortion and operator/dataset-dependency problems, the fusion technique developed by Zhang differs fundamentally from other approaches. First, it uses a least-squares technique to find the best fit between the gray values of the image bands being fused, and it adjusts the contribution of each band to the fusion result to reduce color distortion. Second, it uses a set of statistical approaches to estimate the gray-value relationship among all the input bands, eliminating the problem of dataset dependency and automating the fusion process.
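The details of the licensed algorithm are proprietary, but the least-squares fitting step can be illustrated as follows: fit the panchromatic gray values as a weighted sum of the multispectral bands, so that each band's contribution to the fusion can be scaled by its fitted weight. All names here are hypothetical and the code is a sketch of the idea, not Zhang's implementation:

```python
import numpy as np

def fit_band_weights(ms, pan):
    """Least-squares fit of the panchromatic band as a weighted sum
    of the multispectral bands (illustrative sketch only).

    ms  : (n_bands, H, W) multispectral bands on the pan grid
    pan : (H, W) panchromatic band
    Returns one weight per band minimizing ||A @ w - pan||^2.
    """
    A = ms.reshape(ms.shape[0], -1).T      # pixels-by-bands design matrix
    b = pan.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

With synthetic data in which the pan band is an exact weighted mix of the color bands, the fit recovers the mixing weights; on real imagery the fitted weights indicate how strongly each band should contribute to the fused result.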

The image-fusion architecture developed by Zhang has been applied successfully to images obtained by the Ikonos and QuickBird satellites (see figure) and provides a high-resolution color image that effectively triples the resolution of the existing multispectral sensors. The image-fusion algorithm and software developed by Zhang have been licensed to PCI Geomatics (Richmond Hill, Ontario, Canada) and are sold as PCI “Pansharp” fusion software.

“To date, the PCI Pansharp users who sent e-mails to me have not found any other image-fusion technique that is competitive in terms of fusion quality and processing speed; the University of New Brunswick and I have filed a U.S. patent application, which will protect our intellectual property,” says Zhang.

REFERENCES

1. Y. Zhang, SPIE Defense & Security Symposium, Orlando, FL, paper 5813-05 (March 30, 2005).

2. Y. Zhang, J. American Soc. for Photogrammetry and Remote Sensing 70(6), 657 (June 2004).
