ULTRA-HIGH-SPEED IMAGING: High-speed and ultra-high-speed imaging offers broad application coverage
Many high-speed and ultra-high-speed imaging systems are commercially available and provide multiple capabilities for imaging events over a wide range of length and time scales.
James W. Bales
In almost all cases, additional lighting beyond the ambient is required to acquire high-quality images. For some applications, more specialized techniques, such as Schlieren, synchroballistic, or streak photography, can be used to enhance the phenomenon of interest or to reduce the sensitivity required of the camera. Our focus here is on electronic imagers rather than film-based systems or specialized lighting techniques (such as strobes and pulsed lasers).
By high-speed video systems, we mean those based on CCD or CMOS imagers capable of recording at maximum rates between 2,000 and 200,000 images per second (see Fig. 1). The imager is electronically shuttered for an exposure time on the order of 1 ms to 1 µs, and each image is read off the chip during one inter-image interval. The total number of images stored is usually set by the size of a memory buffer integral to the camera, typically thousands of full-frame images. Some systems stream images to a hard disk or videotape for extended recording times.
Ultra-high-speed video systems, in contrast, have maximum frame rates from 100,000 to 200 million images per second. These systems use intensified CCDs to capture the image, gating the intensifier to achieve exposure times as short as 1 to 10 ns (see Fig. 2). The total number of images recorded is typically between 4 and 100 for the higher imaging rates.
High-speed video systems
Early high-speed video systems were based on standard CCD imagers that were greatly overclocked. These systems provided modest resolution, approximately 200 × 200 pixels at 1000 images per second. The introduction of CMOS imagers designed for high image rates transformed the field, eventually enabling standard systems to deliver megapixel-scale images at 500 or 1000 images per second. Current technology can produce HDTV-quality images at 1000 images per second.
In general, high-speed video systems are electronically shuttered by the imager. The default exposure time is nominally the reciprocal of the image rate (slightly shorter in practice). Still shorter exposure times can be selected, with the lower limit dependent on the specific system used; typical values for new cameras are on the order of 1 to 10 µs. Such short exposures are excellent for reducing the blur induced by motion of the object being viewed, but extremely bright lighting is required.
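The tradeoff between exposure time and motion blur can be sketched with a back-of-the-envelope calculation. All numbers here are illustrative, not taken from any specific camera:

```python
# Back-of-the-envelope motion-blur estimate (illustrative numbers only).
def blur_pixels(speed_m_s, exposure_s, field_of_view_m, pixels_across):
    """Pixels traversed by the object during one exposure."""
    return speed_m_s * exposure_s * pixels_across / field_of_view_m

# A 100 m/s object, 0.5 m field of view, 1024 pixels across the frame:
print(blur_pixels(100, 10e-6, 0.5, 1024))  # ~2 pixels at a 10 us exposure
print(blur_pixels(100, 1e-3, 0.5, 1024))   # ~205 pixels at a 1 ms exposure
```

Shortening the exposure from 1 ms to 10 µs cuts the blur a hundredfold, which is why short exposures (and correspondingly bright lighting) matter for fast-moving subjects.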
The maximum rate at which data can be transferred off the sensor limits the system performance. One might think of the sensor as having a maximum readout rate in pixels per second. For example, a 1-megapixel imager that can read out 1 billion pixels per second can provide 1000 1-megapixel images per second, or 2000 0.5-megapixel images per second. Most commercial cameras implement this approach, providing their maximum image rate (typically 50,000 to 200,000 images per second) at their lowest resolution (sometimes as low as 32 × 32 pixels).
As each image is read off the sensor, it is stored in a memory buffer. Typically, this buffer can hold from 1000 to 10,000 full-frame images. The finite size of the buffer limits the maximum record time for a given resolution and frame rate; usually 1 to 10 seconds of images can be held. The memory is configured as a ring buffer: once the buffer is full, the next image overwrites the oldest image in memory. So, if the buffer could store two seconds of images and the camera recorded for, say, five seconds, only the last two seconds of images would be stored, while the images from the first three seconds would be lost.
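The ring-buffer behavior is easy to demonstrate in miniature. A minimal sketch (the four-image capacity is purely illustrative):

```python
from collections import deque

# A ring buffer in miniature: once full, each new image evicts the oldest.
capacity = 4                      # images the buffer can hold (illustrative)
buffer = deque(maxlen=capacity)   # a deque with maxlen acts as a ring buffer

for frame_number in range(10):    # "record" 10 images into a 4-image buffer
    buffer.append(frame_number)

print(list(buffer))  # [6, 7, 8, 9] -- only the most recent images survive
```

Just as in the two-seconds-of-five example, everything before the final window of `capacity` images is silently overwritten.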
Most systems offer great flexibility in triggering the camera to stop taking images. In the simplest case, you have the system stop collecting data when the trigger signal is received. The trigger signal might be a mouse click on a user interface, a switch closure, or a transistor-transistor-logic (TTL) signal. Alternatively, one might set the trigger at any point inside the buffer: for example, upon trigger, collect enough images to fill 40% of the buffer, with the other 60% being the images captured immediately before the trigger signal. For the 2-second-deep buffer considered above, this would be 1.2 seconds before the trigger and 0.8 seconds after the trigger.
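The pre/post-trigger split is simple proportional arithmetic; a sketch reproducing the 40/60 example:

```python
# Split a ring buffer's record time around the trigger point.
# post_fraction is the share of the buffer filled *after* the trigger fires.
def pre_post_split(buffer_seconds, post_fraction):
    post = buffer_seconds * post_fraction
    pre = buffer_seconds - post
    return pre, post

print(pre_post_split(2.0, 0.4))  # (1.2, 0.8) -- the 2-second-buffer example
```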
Once the images are collected, they are typically transferred to a computer via a high-speed data link such as FireWire, Ethernet (where the 1000BASE-T standard for Gigabit Ethernet over copper wiring is becoming more common), or optical fiber. For some protocols this can be a slow process, with the transfer time limiting how rapidly one can conduct experiments. To help in this case, many cameras allow you to partition the memory: the camera records to the first partition, stopping when triggered, then starts recording to the second partition while awaiting a second trigger. Multiple events can thus be recorded in rapid succession, with the images from each stored in a separate partition, before they are all transferred off the system.
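The partitioned-recording scheme can be sketched as a simple state machine. The frame stream and trigger flags below are hypothetical, purely to illustrate the control flow:

```python
# Sketch of partitioned recording: each trigger closes out one partition,
# and recording continues into the next, so several events can be captured
# before any images are transferred off the camera.
def record_events(frames, n_partitions):
    """frames: iterable of (image, trigger) pairs; trigger=True ends a partition."""
    partitions = [[] for _ in range(n_partitions)]
    current = 0
    for image, trigger in frames:
        if current >= n_partitions:
            break                   # all partitions full; stop recording
        partitions[current].append(image)
        if trigger:
            current += 1            # the next images go to the next partition

    return partitions

stream = [("a", False), ("b", True), ("c", False), ("d", False), ("e", True)]
print(record_events(stream, 2))  # [['a', 'b'], ['c', 'd', 'e']]
```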
Most commercial high-speed video systems offer some tradeoff between spatial resolution (pixel count) and temporal resolution (imaging rate). Consider a 1-megapixel imager that can record a full-frame image at up to 1000 images per second. Assume that it has a 1-Gigabyte memory buffer and that it digitizes each pixel to 8-bit grayscale (although newer cameras can use 10, 12, or even 14 bits to digitize each pixel).
Now, a 1-megapixel image digitized to 8 bits requires 1 megabyte of storage, so our hypothetical camera can store 1024 images. If the recording rate were 100 images per second, it would take just over 10 seconds to fill the memory buffer. At the maximum rate for full-frame images, 1000 images per second, the memory buffer fills in 1.024 seconds. If one were to increase the recording rate by reducing the size of the image (for example, 4000 images per second at 512 × 512 pixels), the total recording time remains 1.024 seconds, because the imager is still being read out at its maximum rate of 1 billion pixels per second.
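The buffer-fill arithmetic can be checked directly. A sketch assuming the hypothetical camera above, taking 1 megapixel as 1024 × 1024 pixels and 1 gigabyte as 1024³ bytes so the figures come out exactly:

```python
# Record time before a fixed-size buffer fills, for the hypothetical camera
# in the text: 1-GB buffer, 8-bit (1-byte) pixels, readout-limited rates.
BUFFER_BYTES = 1024 * 1024 * 1024   # 1 GB

def record_seconds(width, height, frames_per_second, bytes_per_pixel=1):
    """Seconds of recording the buffer holds at this resolution and rate."""
    frames_stored = BUFFER_BYTES // (width * height * bytes_per_pixel)
    return frames_stored / frames_per_second

print(record_seconds(1024, 1024, 1000))  # 1.024 s at full frame, 1000 fps
print(record_seconds(512, 512, 4000))    # 1.024 s at quarter pixels, 4x rate
```

The record time is the same in both cases because the product of pixel count and frame rate, and hence the rate at which bytes flow into the buffer, is unchanged.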
Ultra-high-speed video systems
Ultra-high-speed systems typically couple an image intensifier with a CCD as their imager. Intensifiers are vacuum devices that use a photocathode to convert light into electrons, use electric fields to accelerate the electrons (and, in some systems, multiply them in microchannel plates), and then smash the electrons into a phosphor to recreate the image as a distribution of light rather than of charge.
The intensifier provides two critical capabilities for ultra-high-speed imaging. First, the exposure time is set by how long the voltage is applied to the accelerating electrodes, which can be as short as 1 to 10 ns. Second, the intensifier amplifies the image (by a factor of 100 to 10,000). This amplification is sorely needed, as very little light can be collected from even a bright source during an exposure time measured in nanoseconds.
Two methods are commonly used to allow the collection of multiple images in rapid succession with an ultra-high-speed system. In one, the voltages on the plates in the intensifier are manipulated to display successive images over different portions of the phosphor. For example, the intensifier might paint the first image in the upper-left quadrant of the phosphor, the second in the upper right, the third in the lower right, and the fourth in the lower left. As with high-speed systems, this results in an increase in temporal resolution by sacrificing spatial resolution.
The second approach uses a multiple-way, image-preserving beamsplitter. The beamsplitter is placed after the camera lens and divides the light into multiple identical, albeit fainter, images. A dedicated intensified CCD captures each of these images (typically 4 to 16). This approach provides extraordinary resolution in time and space, although at significant expense. Some systems are capable of combining these two methods, providing 50 to 100 images separated by as little as 5 ns or so in time.
Resources for high-speed imaging
• professionalinstitute.mit.edu/imaging. A professional development course at MIT, offered over four days in June.
• web.mit.edu/Edgerton/www/HSILinks.html. The Edgerton Center at MIT maintains a Web page of links to companies manufacturing high-speed and ultra-high-speed imagers and related equipment.
• www.rit.edu/~andpph. Professor Andrew Davidhazy, of the Imaging and Photographic Technology Department at the Rochester Institute of Technology, has a comprehensive Web site devoted to high-speed imaging.
• www.hiviz.com. Presented by Loren Winters, of the North Carolina School of Science and Mathematics, this site describes how to take high-speed photos with simple equipment. It also includes a good listing of manufacturers of high-speed equipment.
• The 27th Int’l. Congress on High-Speed Photography and Photonics (www.27hspp.cn) took place in Xi’an, China, in September 2006. Recent proceedings of this biennial conference are available through SPIE (www.spie.org).
• Two out-of-print books may be available via online services: High Speed Photography and Photonics, ed. Sidney F. Ray, Focal Press, Oxford, 1997; and Electronic Flash, Strobe, 3rd ed., Harold E. Edgerton, MIT Press, Cambridge, MA, 1987.
JAMES W. BALES is the assistant director of the MIT Edgerton Center, Room 4-406, 77 Massachusetts Avenue, Cambridge, MA 02139, where he leads a professional course on high-speed imaging; e-mail: firstname.lastname@example.org.