July 6, 2012 2:02 PM by Jeff Hecht

    FPA - focal-plane array, an array of light detectors placed in the focal plane of a lens or optical system to record images, such as the sensor chip that takes the place of film in a camera. Often refers to infrared detector arrays, but can also be applied to arrays that respond to other parts of the spectrum, particularly the visible and radio bands.

    The concept of a focal-plane array is a natural one in the age of digital photography, but a Google Book search finds 10 references dating from the 1960s, mostly referring to infrared detectors for military or astronomical applications. It may have originated in the military; one reference was to a 1963 government document on the defense budget. Extending the search to the start of 1980 found another 450 publications that used the term, too many to examine in detail.

    FPA wasn't a common term even then. It's not listed in the index of my 1978 edition of The Infrared Handbook. The only sub-entries under "array" are "silicon diode vidicon," "staring CCD signal processing," and "in scanners." But those entries do show how imaging technology was evolving toward the FPA concept.

    The vidicon was an imaging tube used in analog television, which recorded images as charges on a photoconductor and read them out with an electron beam. By 1978, arrays of silicon photodiodes were available, and had begun to be used for electronic imaging, leading to odd hybrid terms to describe how silicon diode arrays had come to replace television-like cameras in the near infrared.

    Scanning and staring refer to different ways of recording images with a limited number of detectors. Today pixels are cheap, so it's natural to put a chip containing a million or more light-detecting elements into the focal plane, where it stares at the scene being imaged. In the 1970s detectors were much more expensive, especially in the infrared, so electronic images often were recorded by scanning a linear array of detectors across the image plane (or by scanning the image across the detector array).

    Both technologies live on today. Scanning arrays are standard in flatbed scanners, which drag a linear sensing array across a page. Staring arrays have gone much farther. They are standard in cameras for applications including high-resolution scientific imaging in the near infrared. A gigapixel FPA with response in three bands stretching from 250 to 1100 nm will be launched next year on the European Space Agency's Gaia satellite.

    An FPA composed of Geiger-mode avalanche photodiodes, packaged in a ceramic case with a microlenslet array, readout circuit, and thermoelectric cooler by Princeton Lightwave (Cranbury, NJ).


    June 21, 2012 3:49 PM by Jeff Hecht

    POF - Plastic Optical Fiber (also Polymer Optical Fiber): An optical fiber made from a polymer or plastic.

    Plastic light guides have a surprisingly long history. Soon after its invention in 1928, the transparent plastic polymethyl methacrylate (PMMA) began replacing quartz in a variety of applications, including bent rods used as dental illuminators. Early developers of fiber-optics tested transparent plastics as well as glass in the 1950s, and plastic was used as a cladding on some of the first clad optical fibers.

    For many applications, plastic had important advantages over glass. Plastic is lightweight, inexpensive, and flexible rather than brittle. Thin sheets of clear plastic, like thin sheets of window glass, seem quite transparent. But reducing the attenuation of plastic proved far more difficult than improving the clarity of glass, and plastic was left in the dust when the first low-loss silica fibers were demonstrated in 1970.

    Developers of plastic fibers turned to other light-guiding applications where fiber loss was less important than in telecommunications. By the early 1970s, bundles of plastic fibers were being used in decorative lamps, with the fiber ends splayed out to sparkle with light at their ends. Fiber-optic pioneer Will Hicks strung plastic fibers through a plastic Christmas tree, which he hoped to sell for holiday decoration until it failed an impromptu fire test at a New York trade show. Despite such reverses, plastic fiber-optic decorations live on.

    POF communications also survives in short-distance applications where the low cost and ease of termination of plastic fibers offsets their high attenuation. One example is the Media Oriented System Transport (MOST) network for automobiles, which uses red LED transmitters to send signals through up to 10 meters of POF linking electronic systems in cars. Auto mechanics don't need an expensive fusion splicer to connect the large-core step-index fibers. Japanese researchers have developed graded-index POFs with bandwidths high enough to transmit 4.4 gigahertz over up to 50 m at 670 to 680 nm, which developers hope could lead to applications in home networking.

    Plastic optical fiber does have plenty of competition for the acronym POF, including polymer optical fiber and another optical term, "plane of focus." Acronymfinder.com lists 34 possible definitions ranging from the journal Physics of Fluids and the Pakistan Ordnance Factory to the dating site "Plenty of Fish" and "pontificating old fart." But plastic fiber fans can take heart -- the site ranks Plastic Optical Fiber as the most-used definition of POF.

    Modern decorative fiber-optic lamp, courtesy of Keck Observatory.


    June 11, 2012 11:33 AM by Jeff Hecht
    QCL: Quantum Cascade Laser, a junction-free semiconductor laser in which an electron passes through a series of quantum wells. In each quantum well, the electron emits a photon on an inter-subband transition before tunneling through to the next quantum well. QCLs are important sources in the mid- and far-infrared, including the terahertz band.

    Semiconductor and diode lasers were long considered synonymous after demonstration of the first semiconductor diode laser in 1962, although other types had been proposed and lasing had been demonstrated in semiconductors without junctions that were pumped optically or with electron beams.

    Russian physicists Rudolf Kazarinov and R. A. Suris took the first step toward the QCL in 1971 by suggesting electrons in a superlattice could tunnel between adjacent quantum wells, but the technology needed to make them was not yet available. The development of molecular beam epitaxy (MBE) revived interest in such complex semiconductor structures, and in 1986 Federico Capasso, Khalid Mohammed, and Alfred Cho of Bell Labs suggested that electrons tunneling through stacks of quantum wells might be used to make infrared lasers. 

    Their 1986 paper clearly shows the basic idea, but demonstrating QCLs took eight years, as long as it took to go from the first pulsed cryogenic diode laser to room-temperature operation. Not until 1994 did Jerome Faist, Capasso, Cho, and Deborah Sivco report the first QCL in a Science paper, where they coined the evocative phrase "quantum cascade" to describe its operation; a Google Book search fails to find any earlier use of the phrase. Their device produced 8-milliwatt (mW) pulses at 4.2 micrometers (µm), but like the first diode lasers it required cryogenic cooling, delivering its highest power at 10 kelvins (K) and operating at up to 90 K with a threshold of 14 kiloamperes (kA) per square centimeter (cm²), comparable to the threshold of the first diode lasers.

    Today, QCLs are in the mainstream of laser technology, operating continuous-wave at room temperature with multiwatt output in the mid-infrared. Available commercially, QCLs operate through much of the infrared all the way to the terahertz band.

    Electron emits a cascade of photons as it tunnels through a series of quantum wells in this simplified view of a QCL.


    June 4, 2012 4:03 PM by Jeff Hecht

    DWDM: Dense Wavelength Division Multiplexing: Wavelength-division multiplexing with signals closely spaced in frequency.

    CWDM: Coarse Wavelength Division Multiplexing: Wavelength-division multiplexing with signals broadly spaced in wavelength.

    The division of the radio spectrum into broadcast channels made the idea of wavelength-division multiplexing (WDM) obvious to any serious student of communications by the time the laser was invented. But how to divide the optical spectrum was far from obvious. In the early 1980s AT&T picked three widely spaced channels for the first commercial system linking Boston to Washington, GaAlAs lasers at 825 and 875 nm, and an InGaAsP LED at 1300 nm. But single-mode fiber transmission quickly eclipsed its capacity and WDM was largely abandoned.

    Invention of the erbium-doped fiber amplifier (EDFA) in the late 1980s revived interest in WDM because it could amplify multiple signals across a range of wavelengths with little crosstalk. The question quickly became how tightly wavelengths could be packed across the 1550 nm erbium-fiber gain band. That required developing new filter technology to slice the spectrum finely. By 2000, channel spacing was down to about 0.8 nm or 100 GHz.

    To make systems compatible, the International Telecommunication Union (ITU) defined a standard dense frequency grid spanning the erbium gain band. Each DWDM channel was 100 GHz wide, with the standard specifying channels in frequency units, such as 193.10, 193.20, and 193.30 THz, although optical engineers translated them into wavelengths (1552.52, 1551.72, and 1550.92 nm, respectively).
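    The translation between the two conventions is just λ = c/f. A quick sketch of the conversion for the grid channels mentioned above (the helper name is my own):

```python
C_NM_THZ = 299792.458  # speed of light expressed in nm·THz

def thz_to_nm(f_thz):
    """Convert an ITU grid frequency in THz to vacuum wavelength in nm."""
    return C_NM_THZ / f_thz

for f in (193.10, 193.20, 193.30):
    print(f"{f:.2f} THz -> {thz_to_nm(f):.2f} nm")
# Higher frequency means shorter wavelength, so the 193.30 THz channel
# sits at a shorter wavelength than the 193.10 THz channel.
```

    Note the inversion: stepping up 0.1 THz in frequency steps the wavelength down by roughly 0.8 nm near 1550 nm.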

    DWDM was designed for expensive high-performance long-haul systems, but WDM also could enhance capacity of shorter fiber systems, if costs could be cut by using cheaper optics with less-demanding specifications. That led ITU to develop a "coarse" grid, for which they specified CWDM channels in wavelength units, spaced 20 nm apart from 1271 to 1611 nm, for use in metro and access networks.
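    The coarse grid is simple enough to generate directly; a sketch of the channel centers from the figures quoted above:

```python
# CWDM channel centers: every 20 nm from 1271 to 1611 nm per the coarse grid.
cwdm_channels = list(range(1271, 1612, 20))

print(len(cwdm_channels), "channels:", cwdm_channels[0], "nm ...", cwdm_channels[-1], "nm")
```

    Eighteen channels in all, though in practice the water-absorption peak near 1383 nm made some mid-grid channels unattractive in older fibers.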

    That's the official CWDM grid, but it hasn't stopped designers from multiplexing other combinations of widely-spaced WDM signals, such as cable-television or fiber to the home (FTTH) systems transmitting downstream at 1550 and (sometimes) 1480 nm, and upstream at 1310 nm. 

    So these divisions of the spectrum do have standard meanings. 

    Comparison of CWDM and DWDM spacing. 


    May 31, 2012 9:46 AM by Jeff Hecht

    SWIR - Short-Wavelength (or Short-Wave) Infrared, wavelengths at or near the short-wave end of the infrared.

    You might think the infrared band stretching from the red end of the visible spectrum to three micrometers (µm) is used widely enough that it would have a well-accepted definition. Dream on. Everybody has their own take, and they can't even agree whether that part of the spectrum should contain one or two bands.

    Wikipedia's "Infrared" entry lists several definitions of SWIR, without trying to resolve the conflicts or explain the differences. It first cites a "commonly used subdivision scheme" that defines SWIR as wavelengths from 1.4 to 3 µm, where water absorption is high, with a separate near-infrared (NIR) band at 0.75 to 1.4 µm where water absorption is low. It's not a bad definition, but why is it referenced to a book titled Unexploded Ordnance Detection and Mitigation? The International Commission on Illumination makes a similar split at 1.4 µm because shorter wavelengths (the IR-A band) can reach the retina, but longer wavelengths (the IR-B band) are absorbed within the eye. Both divisions make sense from the standpoint of illumination.

    Other divisions are based on sensor response. Wikipedia cites a split between NIR extending to the long-wave limit of silicon detectors near 1.0 µm, and SWIR from 1.0 to 3.0 µm, given in Miller and Friedman's Photonic Rules of Thumb. A Raytheon wall chart similarly defines NIR as 0.7 to 1 µm, and SWIR as 1 to 2.7 µm. However, makers of other detectors pick other dividing points. An article by Nova Sensors (Solvang, CA) defines SWIR as 0.9 to 1.7 µm, the response range of the InGaAs sensors used in their SWIR cameras. However, sometimes the two ranges overlap. Headwall Photonics (Fitchburg, MA) lists its NIR hyperspectral imaging sensor as responding to 0.9 to 1.7 µm and its SWIR version as responding to 0.95 to 2.5 µm.

    Some don't bother splitting the band at all. The International Organization for Standardization (ISO) 20473 standard calls 0.78 to 3 µm NIR, and the McGraw-Hill Dictionary of Scientific and Technical Terms calls NIR 0.75 to 3 µm. Neither lists SWIR.

    I could go on in agonizing, nit-picking detail, but the lesson is clear -- SWIR (and NIR) can be useful labels for parts of the infrared, but you need to check the numbers to be sure what they mean.


    May 22, 2012 3:34 PM by Jeff Hecht

    VCSEL - Vertical Cavity Surface-Emitting Laser, a type of semiconductor laser in which the resonant cavity is perpendicular to the junction layer and light is emitted from the surface of the chip.

    All early diode lasers oscillated in the plane of the junction or active layer and emitted from the edge of the semiconductor chip. This design is logical because it keeps laser oscillation in the plane of the active layer, where recombination of current carriers produces a population inversion, so the round-trip gain in the cavity is high. However, because the active layer is very thin, the beams from edge-emitting diode lasers diverge rapidly, particularly in the direction perpendicular to the active layer.

    VCSELs oscillate vertically, in a cavity formed by reflective layers on the top and bottom of the chip. Single-pass gain is much lower than in edge emitters because only a very thin layer of gain material lies between the cavity mirrors, but the emitting aperture typically is much wider than the active layer is thick, producing a higher-quality, circular beam. First demonstrated in 1979, VCSELs went through a series of structural refinements to improve their performance and fabrication processes. In current designs, one or often both of the reflectors are multilayer Bragg reflectors containing the tens of pairs of layers needed to produce the very high reflectivity required to sustain oscillation with only a thin gain medium.
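    Why tens of pairs? For an ideal quarter-wave stack, each high/low pair multiplies the effective admittance seen at the surface by (n_H/n_L)², pushing the reflectivity toward 1. A rough sketch (the GaAs/AlAs-like indices and the air/GaAs boundary media are illustrative assumptions, and dispersion and absorption are ignored):

```python
def dbr_reflectivity(n_pairs, n_hi=3.5, n_lo=2.9, n_in=1.0, n_sub=3.5):
    """Peak reflectivity of an ideal quarter-wave Bragg stack of n_pairs
    high/low-index layer pairs between an incident medium and a substrate."""
    # Each quarter-wave pair scales the admittance seen at the top surface.
    y = n_sub * (n_hi / n_lo) ** (2 * n_pairs)
    return ((n_in - y) / (n_in + y)) ** 2

for pairs in (5, 10, 20, 30):
    print(pairs, "pairs ->", round(dbr_reflectivity(pairs), 5))
```

    With these indices, 5 pairs reflect only about 84%, while 20 pairs push past 99.9% -- the sort of reflectivity a VCSEL's thin gain layer demands.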

    The short length of VCSEL cavities brings some advantages, including allowing direct current modulation at speeds to 40 gigabits per second (Gbit/s) without the mode hopping possible in edge emitters. Yet ironically, a commercial attraction of the more complex VCSEL design is that it makes them more economical. All the hard parts of VCSEL production are done by highly automated semiconductor manufacturing techniques. The resulting VCSELs can be tested on the wafer, unlike edge emitters, which can't be tested until the wafer is diced into chips. That, combined with their larger emitting area, greatly reduces packaging expenses, which account for more of finished-product costs than the laser chips do. So more complex winds up being cheaper, as well as better for many applications.

    And VCSEL types and applications keep growing. Recently, a VCSEL-type cavity was used in optically pumped colloidal quantum dot lasers emitting red, green, and blue light.

    Complexity in a VCSEL:  Beam Express (Lausanne, Switzerland) makes 1310 nm VCSELs by bonding AlGaAs/GaAs distributed Bragg reflectors on top and bottom of an InAlGaAs/InP gain layer containing strained quantum wells and a tunnel junction because high-contrast Bragg reflectors are not practical in InP-based materials.


    May 15, 2012 10:34 AM by Jeff Hecht
    EUV or XUV:  Extreme Ultraviolet, the short-wavelength or high-energy end of the ultraviolet spectrum, from 120 (or 200) nanometers to about 10 nanometers (nm).

    Atmospheric transmission in the ultraviolet decreases sharply with wavelength, and air absorption is so strong that wavelengths shorter than 150 to 200 nm must be studied in a vacuum. The first explorers of the EUV spectrum were astronomers using satellite instruments. The quest to squeeze more and smaller transistors onto semiconductor chips has changed that by shrinking chip features to dimensions on the scale of EUV waves. The semiconductor industry is now testing the first wave of EUV photolithography systems operating at 13.5 nm.

    Such a sudden technological interest in a long-neglected part of the spectrum is a great recipe for muddied definitions of spectral bands. Astronomers consider the 121 nm Lyman alpha line of hydrogen to be the major landmark in the EUV spectrum, so they picked 120 nm as the long-wave end of the EUV band. However, the major technological landmark for the semiconductor industry is the 13.5 nm lithography wavelength, so they often define EUV as having a wavelength of 13.5 nm. The laser community uses a broader definition of 10 to 120 or 200 nm as it explores a broader range of applications, enabled by new techniques such as high-harmonic generation.

    The EUV largely overlaps the older vacuum-ultraviolet (VUV) band, usually defined as 10 to 200 nm. A draft document from the International Organization for Standardization (developed for space observations) also defines two other overlapping bands, the far-ultraviolet (FUV) at 122 to 200 nm and the germicidal Ultraviolet C (UVC) band at 100 to 280 nm.

    XUV often is an alternative abbreviation for extreme ultraviolet, substituting the fashionable X for the relatively drab E, but the ISO lists it as an abbreviation for soft X rays at 0.1 to 10 nm. A quick Google search gives the impression XUV is the more popular form, with 5.6 million hits compared to 3.2 million for EUV and 1.8 million for VUV. But that's misleading because XUV also is shorthand for "crossover utility vehicle," a sport-utility vehicle based on a car rather than a truck. That usage may be popular on the Internet, but it was new to me--and for years I've been driving a Toyota RAV4, which is classed as an XUV.


    May 7, 2012 5:05 PM by Jeff Hecht
    FIR - Far InfraRed, the long-wavelength end of the infrared spectrum. As is typical of infrared bands, definitions vary.

    When Arthur Schawlow and Charles Townes proposed extending the maser principle to frequencies well above the 24 gigahertz (GHz) of the ammonia microwave maser reached in 1954, they targeted frequencies three orders of magnitude higher, near 300 terahertz (THz), corresponding to one micrometer (1 µm) in the near-infrared. Their choice reflected the technological reality of the time: the long-wavelength end of the infrared was a terra incognita of the electromagnetic spectrum, little explored because few sources and detectors were available. One reason that wavelengths longer than about 15 µm came to be called "far-infrared" in the early days of lasers probably was that that part of the spectrum seemed far out of reach.

    Technology has come a long way since then, but the far-infrared remains beyond the well-developed parts of the infrared, and as with other parts of the infrared, the specified wavelengths differ among definitions.

    Wikipedia's Infrared article lists multiple definitions of the far-infrared. The first is 15 to 1000 µm, putting it in the thermal infrared, where blackbody emission peaks around room temperature, longer than the mid-infrared and some definitions of the long-wavelength IR. It also cites the International Organization for Standardization's definition of 50 to 1000 µm, a definition also used in the McGraw-Hill Dictionary of Scientific and Technical Terms. The infrared article notes that astronomers have their own definition, with 25 to 40 µm the short end and 200 to 350 µm the long end.

    Oddly, the bands specified for "far-infrared lasers" differ. Wikipedia says their wavelengths range from 30 to 1000 µm, close to the 40 to 1000 µm I used in The Laser Guidebook. The McGraw-Hill Dictionary lists a far more limited range, from "well above 100 µm" to 500 µm.

    A couple decades ago, such inconsistent definitions didn't matter much, because wavelengths longer than 30 µm were a sparsely inhabited part of the spectrum, largely absorbed by air, hard to detect, and even harder to use. Now new technology is upscaling the unfashionable far-infrared neighborhood and redefining it as the terahertz band, which Wikipedia defines as from 100 to 1000 µm, or 300 GHz to 3 THz.

    Plot of atmospheric opacity shows the strength of atmospheric absorption in the far-infrared. (Wikipedia art, modified)


    May 2, 2012 7:28 AM by Jeff Hecht
    LWIR - Long-Wavelength InfraRed: an infrared band at wavelengths longer than the mid-infrared. One common definition is from 8 to 15 micrometers, also known as the thermal infrared, but there is no generally accepted standard.

    The infrared spectrum sprawls from the edge of the visible, nominally 0.7 µm, to about one millimeter, such a broad range that it demands subdivision. The definitions of mid- and long-wavelength bands may have grown from the atmospheric transmission windows at 3-5 µm, and 8-14 µm. Those bands generally require different detectors, and also were a handy division between the blackbody peaks of "hot" objects and those of "body-temperature" objects. The strongly absorbed wavelengths in between didn't matter much as long as the infrared was mostly used for looking through the air. Longer wavelengths were lumped as the "far-infrared," a vast region extending to about one millimeter that seemed of little use because atmospheric transmission was spotty and instrumentation was poor.

    Atmospheric transmission has not changed, but new infrared detectors and sources have opened up previously little-used regions of the infrared, and satellites have opened the whole infrared spectrum to astronomers. New applications have emerged, such as LWIR monitoring of beehives. That has made drawing dividing lines problematic, particularly at the ends of the LWIR. Should the ends be defined by the atmospheric windows or at some other points? One suggestion was to define each band as an octave wide, spanning a factor of two in wavelength or frequency, but that logical idea failed a crucial practical test because it could not fit both the 3-5 µm and 8-14 µm bands in adjacent octaves. So we're stuck with informal definitions that depend on things like atmospheric windows and detector ranges, and differ between fields like lasers, astronomy, and night vision.
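    The arithmetic behind that failure is easy to check: two adjacent octaves [a, 2a] and [2a, 4a] jointly span only a factor of four, while reaching from 3 µm to 14 µm requires a factor of about 4.7. A brute-force sketch:

```python
# Can any pair of adjacent octaves [a, 2a] and [2a, 4a] contain both the
# 3-5 um and 8-14 um atmospheric windows? Containing the first window
# needs a <= 3; containing the second needs 4a >= 14, i.e. a >= 3.5.
# Those constraints are contradictory, as a scan of octave starts confirms.
def octaves_fit(a):
    covers_mwir = a <= 3 and 2 * a >= 5        # 3-5 um inside the first octave
    covers_lwir = 2 * a <= 8 and 4 * a >= 14   # 8-14 um inside the second
    return covers_mwir and covers_lwir

candidates = [a / 100 for a in range(100, 1001)]  # octave starts 1.00-10.00 um
print(any(octaves_fit(a) for a in candidates))    # no starting point works
```

    Each window fits inside a single octave on its own (5/3 and 14/8 are both less than 2); it is only the requirement that the two octaves be adjacent that cannot be satisfied.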

    It could be worse. Geologists built their time scale for the Earth's history on the boundaries between solid rocks, then found that their calendar changed every time a better way was found to date the rocks.


    April 24, 2012 2:56 PM by Jeff Hecht
    MWIR or MIR - Mid-Wave-InfraRed or Mid-InfraRed, a range of wavelengths nominally in the middle part of the infrared spectrum, defined in a variety of ways.

    The terms hiding behind acronyms sometimes can be more confusing than the acronyms themselves. The diverse definitions for "mid-infrared" illustrate the problem. The name clearly implies that MWIR/MIR should be the middle of the infrared spectrum, but the infrared is a vast region, sprawling from 700 nm at the long end of the visible range to around 1 mm at the upper end of the radio/microwave band. What is the middle?

    On a logarithmic scale, it seems simple enough--10 to 100 µm fits right in the middle. But nobody uses that definition. Wikipedia, today's great arbiter of popular culture, defines the MWIR as 3 to 8 µm, but that section of the "infrared" article bears a warning dating from July 2006 that it needs to be cleaned up! The 3–8 µm definition follows the recommendations of the International Commission on Illumination, which considers the MWIR as the short end of the IR-C band from 3 µm to 1 mm.
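    That logarithmic claim checks out: dividing the 0.7 µm to 1 mm span into equal thirds on a log axis puts the middle third at roughly 8 to 90 µm, which rounds naturally to the 10-100 µm decade. A sketch:

```python
import math

lo, hi = 0.7, 1000.0          # infrared limits in micrometers
span = math.log10(hi / lo)    # about 3.15 decades in all

# Boundaries dividing the logarithmic span into equal thirds:
third1 = lo * 10 ** (span / 3)
third2 = lo * 10 ** (2 * span / 3)
print(f"log-scale middle third: {third1:.1f} to {third2:.1f} um")
```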

    However, other definitions abound. In its tabulation of optical spectrum bands, ISO, the International Organization for Standardization, defines MWIR as 3-50 µm. NASA's Infrared Processing and Analysis Center at Caltech defines the astronomical mid-IR as stretching from 5 µm to 25 or 40 µm. Military system designers traditionally define 3-5 µm as MIR, the window used by heat sensors on missile guidance systems. The McGraw-Hill Dictionary of Scientific and Engineering Terms does not list mid-infrared, but defines "intermediate infrared radiation" as from 2.5 to 50 µm.

    Who's right? It depends on your viewpoint. In my recent mid-infrared lasers and applications webcast, I spoke from the laser industry viewpoint and set the boundaries at 2 to 12 µm--starting beyond the telecommunications band and extending to include the carbon-dioxide laser band, as shown in the image. But when I wrote about uncooled infrared cameras in the April issue, I wrote from the detector viewpoint, and called the thermal imaging band from 7.5 to 14 µm "long-wave infrared." In truth, detectors used in the 3-5 µm and 7.5-14 µm bands differ much more than the lasers used. But on reflection I have to wonder why the spectral bands should differ between the light source and the detector.


    April 13, 2012 4:45 PM by Jeff Hecht
    An avalanche photodiode (APD) is a semiconductor photodetector in which incident light generates a photocurrent, which is then multiplied by an avalanche process to give a stronger signal.

    An avalanche photodiode is a single device that incorporates two distinct semiconductor stages. The first is a photodiode detector, in which light with energy above the bandgap of a semiconductor delivers enough energy to valence electrons for them to enter the conduction band. The electron leaves behind a hole in the valence band, which also functions as a current carrier. Application of a voltage across the device pulls the electrons and holes in opposite directions.
    The second stage applies a strong reverse bias across the semiconductor to accelerate electrons. When electrons reach high enough velocities, they can ionize other atoms in the semiconductor, producing an avalanche of electrons. This multiplies the original photocurrent and produces a much stronger response than a simple photodiode. Typically, silicon APDs are biased at about 100 V to multiply photocurrent by around a factor of 100. Further increasing the reverse voltage increases the dark current as it approaches the ionization threshold of the semiconductor. Breakdown occurs at a reverse bias of about 150 to 200 V for silicon, depending on device design, and at different voltages in other semiconductors.
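    The sharp rise of gain toward breakdown is often approximated by Miller's empirical formula, M = 1/(1 − (V/V_B)^n), where V_B is the breakdown voltage and n a fitted exponent. A sketch with illustrative numbers (the 150 V breakdown and n = 4 are assumptions for a generic silicon device; real devices are fitted individually, so these values won't reproduce the gain-voltage figures quoted above exactly):

```python
def apd_gain(v_bias, v_breakdown=150.0, n=4.0):
    """Miller's empirical approximation for APD multiplication below breakdown."""
    if v_bias >= v_breakdown:
        raise ValueError("formula is only valid below breakdown")
    return 1.0 / (1.0 - (v_bias / v_breakdown) ** n)

for v in (75, 120, 140, 149):
    print(f"{v} V -> gain {apd_gain(v):.1f}")
```

    Whatever the fitted parameters, the qualitative behavior is the same: gain climbs slowly at moderate bias and diverges as the bias approaches breakdown, while in a real device the rising dark current caps the usable gain.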

    You can think of APDs as solid-state counterparts of photomultiplier tubes (PMTs), but they are far from plug-in replacements. As you expect from solid-state devices, APDs are much smaller and their bias voltages are much lower--about a tenth of those in PMTs. However, PMTs are less subject to noise, their gain does not depend as strongly on bias voltage, and they can be engineered to respond to different wavelengths than APDs, so they are among the few survivors of the vacuum-tube era.
    New designs and new materials are extending the range of APDs for applications ranging from single-element fiber-optic detectors to focal-plane arrays for imaging. Germanium/silicon APDs have reached a gain-bandwidth product of 105 GHz, attractive for high-speed optical interconnects. APD elements designed for single-photon counting can be assembled in arrays to make a single-photon counting camera.
    Gain or multiplication factor of an APD increases sharply as voltage approaches breakdown, but so does dark current, limiting usable gain. (From Jeff Hecht, Understanding Fiber Optics: 5th edition [Pearson Prentice-Hall, 2006])


    April 4, 2012 1:33 PM by Jeff Hecht
    PMT - Photomultiplier Tube, a vacuum tube light sensor in which input photons cause a cathode to emit electrons that are amplified through a chain of electron amplifiers called a multiplier. Invented in 1934, PMTs are still in use. They offer high gain, low noise, reasonably fast response, and a large collecting area, all important for detecting faint signals.

    First observed in 1887 by Heinrich Hertz, the light-induced emission of electrons became an important puzzle because classical physics could not explain why electron emission occurred only for wavelengths shorter than a threshold value rather than depending on light intensity. Albert Einstein won the Nobel prize in 1921 for showing that the photoelectric effect was due to photons needing to have a threshold energy to free electrons from atoms.
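    Einstein's relation makes the wavelength threshold easy to compute: a photon frees an electron only if its energy hc/λ exceeds the surface's work function φ, so the cutoff is λ_max = hc/φ. A sketch (the ~2.1 eV cesium work function is an assumed round number for illustration):

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def cutoff_wavelength_nm(work_function_ev):
    """Longest wavelength that can eject photoelectrons from a surface."""
    return HC_EV_NM / work_function_ev

# For a low-work-function alkali metal like cesium (~2.1 eV), the cutoff
# falls near 590 nm: orange light ejects electrons, deep red light does not,
# no matter how intense the red beam is.
print(round(cutoff_wavelength_nm(2.1)), "nm")
```

    That intensity independence is exactly what classical physics could not explain and what alkali-metal cathodes exploit for visible-light detection.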

    The first photoemissive detectors were vacuum photodiodes, in which light illuminated a metal cathode, freeing electrons collected by an anode when a voltage was applied across the tube. Alkali metal cathodes were used to detect visible light. In the days before semiconductor electronics, these devices were called photodiodes or photocells. They were too insensitive for use in early electronic television cameras, so engineers added amplification stages, called dynodes, inside the tube; electrons collided with the dynodes, producing additional or secondary electrons. A series of acceleration stages and dynodes multiplied the photocurrent, thus earning the name photomultiplier. Developed in the 1930s, PMTs became the detectors of choice for applications demanding high sensitivity, low noise, and high speed.
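    The multiplication along the dynode chain is geometric: if each dynode releases δ secondary electrons per incident electron, N dynodes give an overall gain of δ^N. A sketch with typical textbook values (δ = 4 and 10 stages are illustrative assumptions, not the specs of any particular tube):

```python
def pmt_gain(secondary_yield, n_dynodes):
    """Overall PMT current gain: the secondary-emission yield per dynode,
    compounded over the whole dynode chain."""
    return secondary_yield ** n_dynodes

# With 4 secondary electrons per dynode and 10 dynodes, a single
# photoelectron becomes roughly a million electrons at the anode.
print(f"{pmt_gain(4, 10):.2e}")
```

    That million-fold, nearly noiseless gain is what made PMTs the detectors of choice for faint signals.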

    This high performance has made the PMT a remarkably durable technology, one of the last vacuum tubes that is still a standard product in the age of solid-state photonics. Continuing refinements in design and packaging have adapted PMTs for modern applications such as single-photon counting. PMTs continue to face new challengers, such as silicon photomultipliers--also called multichannel APDs--which contain a hundred to several thousand tiny avalanche photodiodes (APDs) connected in parallel for single-photon detection. But PMTs just keep plugging along.

    Modern metal-channel dynode PMT, shown in cutaway. (Image courtesy of Hamamatsu)


    March 29, 2012 3:15 PM by Jeff Hecht
    WDM: Wavelength-division multiplexing, transmission of separate signals simultaneously at separate wavelengths through the same transmission medium, usually an optical fiber.

    Multiplexing combines two or more signals for simultaneous transmission through the same medium. It was first used to increase capacity of 19th century telegraph wires. Later, frequency-division multiplexing divided the radio spectrum into separate broadcast channels. Each station was assigned a fixed transmission frequency, and listeners tuned the frequency of their receiver to select a station. Frequency-division multiplexing let cable television networks pack many video channels into frequency slots for transmission through copper cable.

    Optics specialists think of wavelength rather than frequency, so when Bell Labs tried sending multiple laser wavelengths through the same hollow light pipe in the 1960s, they called it wavelength-division multiplexing. The appeal was understandable, and the Bell System tried WDM again in 1980 when it designed its first high-capacity fiber-optic system along the Northeast Corridor from Boston to Washington. But that system had a fatal flaw; it used multimode fibers, so every seven kilometers it needed a repeater to separate the wavelengths, detect the signals separately, amplify each one electronically, and drive separate transmitters that had to be multiplexed together. Single-mode fiber won hands down.

    Wideband erbium-doped fiber amplifiers (EDFAs) revived interest in WDM because they could amplify many separate laser signals near 1550 nanometers. For long-haul, high-speed transmission, optical channels were packed close together for dense-WDM or DWDM systems. However, a committee from the International Telecommunication Union, apparently dominated by radio engineers, specified DWDM channels in frequency units. Typically 50 gigahertz wide, those channels initially transmitted 2.5 or 10 gigabits per second, and now can carry up to 100 gigabits per second using coherent transmission.

    Coarse-WDM (CWDM) came later, allowing the use of lower-cost multiplexing and demultiplexing optics in lower-speed, shorter-distance WDM systems, such as dividing 10-Gigabit Ethernet traffic among four lower-speed CWDM channels. Optical engineers apparently won on the ITU committee handling CWDM, because they specified CWDM channels in wavelength, setting center wavelengths every 20 nm from 1270 to 1610 nm.
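    The wavelength-defined CWDM grid described above is simple enough to generate in a line of Python; this sketch just reproduces the stated center wavelengths:

```python
# CWDM center wavelengths per the text: every 20 nm from 1270 to 1610 nm.
cwdm_centers_nm = list(range(1270, 1611, 20))

print(len(cwdm_centers_nm))                       # 18 channels
print(cwdm_centers_nm[0], cwdm_centers_nm[-1])    # 1270 1610
```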

    CWDM and DWDM grids. The wavelengths for the DWDM grid are approximate; the specifications are defined in terahertz.


    March 23, 2012 12:04 PM by Jeff Hecht
    CPA - Chirped Pulse Amplifier, an optical amplifier that generates very high peak powers in very short pulses by stretching the pulse duration before amplification, then compressing the pulse after amplification. Chirped pulse amplification can produce peak powers in the terawatt range from small systems, and is vital in building petawatt lasers.

    Nonlinear effects inherently limit the amount of optical amplification possible in a gain medium. Effects such as Brillouin scattering reduce gain, and effects such as self-focusing can cause optical damage. These effects are proportional to the peak power in the medium, putting an upper limit on the gain possible.

    Chirped pulse amplification circumvents this limit by spreading the energy in the pulse over a longer period of time, thus reducing the peak power throughout the longer pulse. This is done by sending the input pulse through a medium with a high wavelength dispersion, such as a pair of gratings or prisms, or a length of dispersive optical fiber. The pulse that emerges from the dispersive medium is chromatically dispersed, with the short wavelengths at one end and the long wavelengths at the other. The degree of dispersion depends both on the medium and the spectral width of the pulse. In practice, chirped pulse amplification works best with pulses lasting tens to hundreds of femtoseconds, which are inherently broadband.
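    A quick back-of-envelope sketch (with purely illustrative numbers) shows why stretching helps: the pulse energy is unchanged, so the peak power drops by roughly the stretch factor:

```python
# Crude estimate of peak power as energy divided by duration.
# The numbers are hypothetical, chosen only to illustrate the scaling.
def peak_power_w(energy_j, duration_s):
    return energy_j / duration_s  # P_peak ~ E / tau

energy = 1e-3            # 1 mJ pulse
short = 100e-15          # 100 fs before stretching
stretched = short * 1000 # stretched by a factor of 1000

print(peak_power_w(energy, short))      # ~1e10 W = 10 GW
print(peak_power_w(energy, stretched))  # ~1e7 W = 10 MW
```

    Keeping the peak power a factor of 1000 lower throughout the amplifier keeps the pulse safely below the nonlinear thresholds until after compression.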

    The longer dispersed pulse is amplified in a broadband gain medium, then passed through a medium with dispersion of the opposite sign, so the wavelengths that passed first through the amplifier are delayed and those that passed through the amplifier later in the pulse can catch up. The pre-amplification and post-amplification dispersion do not have to cancel each other out exactly, although the minimum pulse duration still depends on the spectral bandwidth.

    Chirped pulse amplification also can be used in optical parametric amplifiers, which have broader bandwidth than laser oscillators and thus can be chirped more strongly to generate higher peak powers. Optical parametric chirped-pulse amplification (OPCPA) will be used in Europe's Extreme Light Infrastructure.

    How a CPA works. A pair of gratings that delay the blue end of the spectrum stretches input pulses by about a factor of 1000 in duration. Those pulses then pass through a broadband amplifier, and the higher-power output is compressed by a second pair of gratings that delay red wavelengths to produce a high-energy ultrashort pulse.


    March 16, 2012 9:12 AM by Jeff Hecht
    LIDAR or LADAR: LIght Detection And Ranging or LAser Detection And Ranging, sometimes called "laser radar," the optical counterpart of radar, which measures the distance to an object by timing how long a pulse of light takes to make a round trip between the transmitter and the object.

    As acronyms go, LIDAR and LADAR are a rarity--near-identical twins with essentially the same meaning. They were coined to describe the same concept, using pulses of laser light instead of radio waves to measure distance. Radar itself is an acronym for RAdio Detection And Ranging, coined by the U.S. Navy in 1941, so it was logical to replace the radio part of the acronym with an optical term. However, some people replaced the radio with light to make LIDAR and others replaced it with laser to make LADAR. Both terms are still used--although Google searches put LIDAR far in the lead, with 19.8 million hits compared to a mere 503,000 for LADAR.

    The earliest and simplest lidars were laser rangefinders, which used laser pulses to measure the distance to a military target or some other fixed object. Lidars also can measure speed by firing a series of pulses and calculating how fast the measured distance changes, an approach used in police laser radars because it's simpler than Doppler measurements.
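    Both measurements reduce to simple arithmetic. The sketch below, with hypothetical round-trip times, computes range from time of flight and speed from how fast the range changes between pulses:

```python
# Range from round-trip time of a laser pulse, and speed from the
# change in range between successive pulses. Times are hypothetical.
C = 299_792_458.0  # speed of light, m/s

def range_m(round_trip_s):
    return C * round_trip_s / 2.0  # divide by 2: light travels out and back

# Two pulses fired 0.1 s apart at a receding target:
r1 = range_m(2.00e-6)   # ~299.8 m
r2 = range_m(2.01e-6)   # ~301.3 m
speed = (r2 - r1) / 0.1 # ~15 m/s away from the lidar

print(r1, r2, speed)
```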

    More advanced lidars scan the beam across a target area to measure the distance to points across its field of view, producing a three-dimensional profile. This technique has a wide range of uses. Lidars looking down from aircraft or satellites have profiled terrestrial terrain, and the laser altimeter on the Mars Global Surveyor spacecraft similarly profiled the surface of Mars. Combining lidar profiles of terrain before and after an earthquake can reveal changes caused by the tremor. Lidars can map archeological dig sites or dinosaur trackways too large or too fragile to record in any other way.

    Specialized lidars make other measurements. Differential absorption lidar can profile the abundance of water vapor in the atmosphere. Doppler lidars measure changes in the spectrum of pulses scattered by the air to determine wind speed and turbulence.


    March 6, 2012 3:03 PM by Jeff Hecht
    Yttrium aluminum garnet (YAG) is a synthetic crystal doped with neodymium or other rare earths for use in bulk solid-state lasers. Although neodymium lasers were first demonstrated in a calcium tungstate host, YAG has long been the most common host for solid-state lasers. YAG also has been used in jewelry, and may be doped with other rare earths for use in lasers or phosphors.

    YAG can be a puzzling acronym to decode if you think of crystals as chemical compounds. The Y stands for yttrium and the A for aluminum, but G is for garnet, which is a class of minerals with a particular cubic crystalline structure, not an element. In fact, the chemical formula of YAG, Y3Al5O12, does not fit the usual definition of garnet. Dictionaries define a garnet as a silicate mineral consisting of three SiO4 groups plus three atoms of a divalent metal (A) and two atoms of a trivalent metal (B), with a chemical formula A3B2(SiO4)3. Yet YAG contains no silicate groups, and no divalent atoms. Both yttrium and aluminum are trivalent, but they combine with a dozen oxygen atoms to form a unit cell containing the same number of atoms as a unit cell of a standard garnet, producing a crystal with a garnet-like structure.

    The optical and mechanical properties that make YAG attractive for laser use include high thermal conductivity, high energy storage, and long fluorescence lifetime. For use in Nd:YAG lasers, YAG is doped with a molar concentration of roughly 1% neodymium atoms, which replace yttrium atoms in the crystal. Other rare earths including ytterbium, erbium, holmium, and thulium also can be doped into YAG to make lasers, and additional dopants may be added to aid energy transfer. An important emerging application for cerium-doped YAG is as a yellow-emitting phosphor used with blue LEDs to produce white light.

    Traditionally, YAG laser rods have been fabricated from crystalline boules, limiting their size. New processes can produce ceramic YAG in much larger sizes, for use as laser slabs, disks, or rods. Ceramic slab lasers have reached 100-kilowatt powers in experimental military lasers.

    FIGURE. 10-centimeter-square slab of ceramic Nd:YAG glows in 808-nm pump light during tests of the Lawrence Livermore National Laboratory's Heat Capacity Laser.


    March 4, 2012 5:11 PM by Jeff Hecht
    A fiber Bragg grating (FBG) is a fiber-optic device that strongly reflects a narrow band of wavelengths and transmits other light, like a thin-film mirror. A Bragg grating consists of thin layers of two dielectric materials, one with high refractive index and the other with a lower index, with each layer a quarter-wave thick at the wavelength to be reflected. Reflection from layer junctions a half-wavelength apart produces constructive interference at the selected wavelength. The more layers, the more reflective the structure becomes at the selected wavelength, and the narrower the reflected band. Other wavelengths are transmitted.
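    The quarter-wave rule above is easy to put in numbers. This sketch uses illustrative refractive indices, since the actual values depend on the materials chosen; the key point is that each layer is a quarter wavelength thick inside the material:

```python
# Quarter-wave layer thickness for a Bragg stack: t = lambda / (4 n),
# since the wavelength inside the material is lambda / n.
# The indices below are illustrative values, not specific materials.
def quarter_wave_nm(wavelength_nm, refractive_index):
    return wavelength_nm / (4.0 * refractive_index)

wl = 1550.0  # target reflection wavelength in nm

print(quarter_wave_nm(wl, 2.25))  # high-index layer, ~172 nm thick
print(quarter_wave_nm(wl, 1.45))  # low-index layer, ~267 nm thick
```

    The higher the index, the thinner the layer, so the two layer types in a real stack have different physical thicknesses even though both are a quarter wave optically.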

    Multilayer thin-film mirrors fabricated on bulk optics are Bragg gratings, but their reflective behavior depends on the angle of incidence as well as the layer thicknesses. In a fiber Bragg grating, the layers are normal (perpendicular) to light propagating along the fiber axis, fixing the angle of incidence and limiting reflection to a single narrow band. Fiber gratings are fabricated by illuminating special fibers made of light-sensitive glass with light from an ultraviolet laser, which passes through a phase mask to form a series of light and dark interference fringes along the length of the fiber. The ultraviolet light breaks chemical bonds illuminated by the light fringes, changing the refractive index of the glass to create the grating within the fiber. A new process allows fabrication of fiber Bragg gratings continuously on freshly drawn fiber.

    The simplest fiber Bragg gratings have uniformly spaced layers along their length so they have high reflectivity in a narrow band, making them valuable as cavity mirrors in fiber lasers. Narrow-band fiber gratings also can be used in telecommunications systems, where they select one wavelength from a signal containing several optical channels and transmit the others, as shown in the figure. Another communications application is wavelength-selective time delays, with the grating chirped in spacing so different wavelengths are reflected at different points along the grating. Fiber Bragg gratings also can be used in sensing applications such as oil-well monitoring, where the peak reflected wavelength changes with temperature and strain.


    February 24, 2012 2:45 PM by Jeff Hecht
    DFB: Distributed Feedback. Feedback is essential to sustaining laser oscillation. In a Fabry-Perot laser, that feedback comes from light on the laser transition reflected back into the resonator by the cavity mirrors. Distributed feedback comes from a source distributed through the laser cavity, which in practice means a grating that scatters light back into the laser medium.

    The spacing of the lines in the grating determines the wavelength at which the feedback is strongest, and light amplification by stimulated emission concentrates emission at that wavelength. For a grating with line spacing D in a material with refractive index n, the peak wavelength λ is

    λ = 2nD / m

    where m is the order of the grating, usually 1 or 2. That means that for a first-order grating in InGaAsP, with n = 3.4, a grating period of 228 nm is needed to generate 1550-nm light.
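    The relation λ = 2nD/m is easy to check numerically with the InGaAsP figures from the text:

```python
# Peak DFB feedback wavelength from grating period D, index n, and order m:
# lambda = 2 * n * D / m
def dfb_wavelength_nm(period_nm, n, order=1):
    return 2.0 * n * period_nm / order

# First-order grating in InGaAsP, per the example above:
print(dfb_wavelength_nm(228.0, 3.4))  # ~1550 nm
```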

    Distributed feedback is used mostly in semiconductor diode lasers, where parallel lines etched in the active layer form a conventional diffraction grating (see figure). In fiber lasers, distributed feedback is produced by fabricating fiber Bragg gratings -- alternating regions of high and low refractive index perpendicular to the fiber axis -- in the optically pumped fiber. Note that distributed Bragg reflector (DBR) lasers are not considered DFB lasers because the feedback comes from gratings outside the gain region; DBR cavities are used for both diode and fiber lasers.

    FIGURE. Distributed feedback laser. (From Jeff Hecht, Understanding Fiber Optics, 5th edition)

    The big advantage of distributed feedback is its ability to stabilize lasers so they emit in a fixed, narrow range of frequencies, and DFB diode lasers were crucial for high-speed fiber-optic systems. A typical Fabry-Perot diode oscillates on multiple longitudinal modes spread across a few nanometers, but a temperature-stabilized DFB laser can prevent mode-hopping and limit oscillation to a single longitudinal mode with megahertz linewidth. DFB diode lasers provide the narrow linewidth essential for dense wavelength-division multiplexing (DWDM) and high-speed transmission. DFB lasers also can be tuned over limited ranges, and provide narrow-line emission for sensing and other demanding applications.


    February 19, 2012 9:53 AM by David Pozerycki
    LASER: Light Amplification by Stimulated Emission of Radiation
    In 1957, Gordon Gould began his notes on the feasibility of a LASER by spelling out the acronym: light amplification by stimulated emission of radiation. The acronym, like the concept itself, was inspired by the maser, the laser's microwave counterpart, invented earlier by Charles Townes. Gould's catchy acronym gives the gist of the idea, but is not a full definition.

    Light sources from candles to LEDs emit light spontaneously, when excited atoms or molecules release energy on their own, so the photons are not identical and travel in various directions. Stimulated emission occurs when a photon stimulates an excited atom to emit an identical photon, traveling in the same direction and coherent with the first. The effect amplifies the light of the first photon, and the additional photon also can stimulate emission, producing a beam of identical photons.

    However, the acronym glosses over an essential point -- a laser also requires a resonant cavity to generate a beam. A pair of mirrors facing each other bounces light back and forth through the laser medium, amplifying it on each pass, and part of the light emerges through the output mirror as the laser beam. Oscillation makes the laser beam directional, coherent, and monochromatic, qualities considered essential for a laser.

    Townes and Arthur Schawlow saw the laser as a variant of the maser, and in 1958 they described it as an "optical maser" in a pioneering paper. The terms "laser" and "optical maser" competed for public favor as the two sides fought for patent rights and credit for the invention. Schawlow was quick to point out that Gould's choice was a poor one because the device needed to be an oscillator in order to generate a beam. He said that meant the "laser was a loser," because it should be spelled out as "Light Oscillation by Stimulated Emission of Radiation."

    Schawlow had a point, but laser won the popularity race. In time, Townes won a Nobel Prize, and Gould got rich from his patents. But Schawlow got the most laughs.

Photonics Building Blocks

Photonics Acronyms from Jeff Hecht

Photonics Building Blocks decodes and explains the fundamentals of photonics. Starting with the acronyms so common in the world of lasers, optics, and photonics, Jeff Hecht writes about the basic technologies, the history, and the applications. It’s a blog that educates and amuses, and tries to make the world of photonics just a little bit easier to understand.

Copyright © 2007-2014. PennWell Corporation, Tulsa, OK. All Rights Reserved.