MIT CIPS shows vital technology at all stages
CAMBRIDGE, MA--One benefit of going to a photonics conference is the feeling of excitement that arises from learning about a new technological achievement--and getting the sense that not many other people around the world yet know about it. This feeling was in abundance at the sixth annual meeting of the Massachusetts Institute of Technology’s Center for Integrated Photonic Systems (MIT CIPS), held May 20–21, 2009.
The plenary sessions were both overviews that laid the groundwork for the more detailed presentations to follow. Bernie Meyerson, VP for strategic alliances and CTO of the IBM Systems and Technology Group (Armonk, NY), gave the first plenary on innovation in microelectronics and integrated photonics. His talk on the past and future of lithography and computer chips emphasized that Moore's Law (which, in various forms, predicts the doubling of either the number of transistors on a chip or the performance of a chip over a two-year time span) is obsolete, and has been for some time.
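The doubling rule Meyerson referred to is simple to state quantitatively; the following minimal sketch (with a hypothetical baseline transistor count, not a figure from the talk) shows the exponential growth the various forms of Moore's Law all share:

```python
def moores_law_projection(base_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a chip metric after `years`, doubling every `doubling_period` years."""
    return base_count * 2 ** (years / doubling_period)

# Starting from a hypothetical 1-million-transistor chip, a decade of
# doubling every two years means five doublings -- a 32x increase.
print(moores_law_projection(1_000_000, 10))  # 32000000.0
```

It is exactly this unbounded exponential that Meyerson argued can no longer be sustained by conventional scaling.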
Meyerson described approaching limits in conventional lithography and in traditional chip design as reasons; in just one example, as feature dimensions on a computer chip get smaller, intrachip electrical communication actually becomes slower. A potential solution, though difficult to implement, is integrated photonics. Meyerson discussed both the difficulties and the possibilities, such as a 3-D layering of electronics with a layer of photonics in between.
The second plenary, given by Bob Byer, professor at Stanford University (Palo Alto, CA) and co-director of the Stanford Photonics Research Center, addressed the status and future of solid-state lasers. He spoke extensively on the National Ignition Facility (NIF) laser at the Lawrence Livermore National Laboratory (Livermore, CA), a Nd:glass flashlamp-pumped laser/amplifier system that this year produced pulses with energies greater than 1 megajoule for the first time, and that will be used for nuclear-fusion experiments, including achieving breakeven, obtaining fundamental physics data, and making progress toward future fusion/fission hybrid reactors (see www.laserfocusworld.com/articles/346691).
In one of the many sessions (which ran in parallel tracks), Michael Watts of Sandia National Laboratories (Albuquerque, NM) detailed some potential uses of integrated photonics, as well as some centers of development. Innovations in silicon (Si) optical circuits have arisen at MIT, such as those from the university's optical add-drop multiplexer project. At Sandia, researchers are pursuing Si microphotonics-based high-performance networks, as well as the use of Si photonics in sensors.
Researchers in Si photonics are looking at (in order of technological difficulty) long-haul/metro, board-to-board, chip-to-chip, and core-to-core photonic communications, said Watts. He described an approach called integrated polarization diversity, in which an unpolarized signal enters a photonic circuit, where the signal is separated into two orthogonal polarizations and one polarization is rotated so that it is parallel with the other. This allows processing of the signal and recombination of its polarizations to create an unpolarized output, negating the need for polarization-maintaining fiber.
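The split-rotate-process-recombine sequence Watts described can be sketched abstractly with Jones vectors. The sketch below is an illustration of the general polarization-diversity idea, not Sandia's actual circuit; the function names and the identity "processing" stage are hypothetical:

```python
# Jones vector: (Ex, Ey) complex field amplitudes.

def split(jones):
    """Polarization splitter: separate (Ex, Ey) into two single-polarization channels."""
    ex, ey = jones
    return (ex, 0j), (0j, ey)

def rotate_y_to_x(jones):
    """Polarization rotator: map the y-polarized channel onto x."""
    ex, ey = jones
    return (ey, 0j)

def rotate_x_to_y(jones):
    """Inverse rotator, applied before recombination."""
    ex, ey = jones
    return (0j, ex)

def process(jones):
    """Placeholder for on-chip processing, which now sees only x-polarized light."""
    return jones

def diversity_circuit(jones_in):
    ch_x, ch_y = split(jones_in)
    ch_y = rotate_y_to_x(ch_y)            # both channels are now co-polarized
    ch_x, ch_y = process(ch_x), process(ch_y)
    ch_y = rotate_x_to_y(ch_y)            # restore the original polarization
    return (ch_x[0] + ch_y[0], ch_x[1] + ch_y[1])

# An arbitrarily polarized input emerges with its polarization state intact,
# even though the processing stage only ever handled one polarization.
print(diversity_circuit((1 + 0j, 0.5j)))
```

Because the circuit itself only ever handles one polarization internally, its components need not be polarization-insensitive, which is the practical payoff Watts highlighted.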
He noted that in telecom, Si photonics is already beginning to have an impact. For datacom, the requirements are different: for example, much less optical power is needed in datacom. Supercomputers are at the petaflop (10^15 floating-point operations per second) stage now, said Watts, who noted that the question being asked now is: What is needed for exaflop (10^18 flops) processing?
Nicole DiLello of MIT discussed her group's development of germanium-on-silicon (Ge-on-Si) photodiodes, noting how Ge is less expensive than III-V semiconductors and easier to integrate; selective growth of Ge reduces defects and allows better integration. Hyunil Byun of MIT talked about his group's fabrication of a laser source for a photonic analog-to-digital converter that contains a Si-integrated femtosecond laser. The device, which emits at 1560 nm, demonstrates a timing jitter of less than 24 fs and a pulse-repetition rate of 394 MHz.
Textiles with functions
Yoel Fink, who a few years ago developed hollow waveguides coated on their insides with omnidirectional photonic-bandgap (PBG) coatings (see www.laserfocusworld.com/articles/209823), introduced the session on integrated multimaterial fibers. He showed a video of a PBG-coated hollow fiber--one of the early examples of a multimaterial fiber--being used to channel light from a carbon dioxide laser. The fiber, which has been in commercial production and is now sold at the rate of 20,000 per year for surgical applications, was shown being used by a surgeon to remove a tumor from the spinal cord of an infant--a delicate procedure that would have been far more risky if ordinary surgical tools were used.
Optoelectronic fibers are being developed that can contain polymers, glasses, semiconductors, and metals. As Ofer Shapira of MIT described, such fibers can sense light along their length, along with the position of the light; these fibers can be woven together to create a two-dimensional image-sensing cloth. Two of these cloths can be placed in parallel to create a lensless imaging system.
Walter Margulis of Acreo (Kista, Sweden) presented a technique to create many parallel metallic electrodes along the interior length of a fiber by forcing molten metal at 200°C under pressure through holes in the fiber; in an added benefit, the fiber’s polarization properties can be controlled by tailoring the internal strain created by the electrodes. Organic LED fibers were the topic of a talk by Max Shtein of the University of Michigan (Ann Arbor, MI), who noted that unlike ordinary OLEDs, OLED-cloth light sources have virtually no change in color with viewing angle.
A presentation by Ramesh Raskar of MIT, called “Camera Culture,” was about the ubiquitous cameras found in cell phones and digital cameras, and how they can serve as a springboard for innovation. He detailed a so-called “coded-aperture” technique in which a mask placed in a camera’s optics allows the camera to collect 3-D information in a single shot; as a result, the viewer of the subsequent “photo” can change the focal plane at will, focusing in on near or far objects as desired.