<b>Spectral Bytes</b><br /><br /><b>Navy laser weapon deployment</b><br /><i>Jeff Hecht</i><br /><br />The U.S. Navy will deploy a high-energy laser weapon on the USS Ponce in fiscal 2014, Chief of Naval Research <a href="" target="_blank">Rear Admiral Matthew Klunder announced April 8, 2013</a> at the Sea-Air-Space exposition. The Navy Laser Weapon System (LaWS) will be the first high-energy laser deployed for field use by the armed services. The Navy has tested the laser system against its prime targets, moving small surface boats and <a href="" target="_blank">remotely piloted vehicles</a>.<br /><br />The at-sea deployment comes two years earlier than the Navy had planned. That may be a first in laser weapon development, where schedule slippage and cost overruns have been common. The <i>New York Times</i> reports LaWS cost just under $32 million, roughly two orders of magnitude less than the <a href="">Airborne Laser</a>, dropped from the fiscal 2011 budget after it failed to reach the required 200 km range.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="212" src="" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Navy LaWS on board a ship during tests of the laser weapon. (Image courtesy of the U.S. Navy)</td></tr></tbody></table><br />LaWS is part of the new generation of <a href="">electrically powered solid-state laser weapons</a>, which Navy officials say offer two advantages. One is a "deep magazine," the ability to fire pulses as long as electrical power is available--and ships have plenty of power. The other is cost. 
Klunder said, "Our conservative data tells us a shot of directed energy costs under $1," compared to $100,000 or more to fire a missile. <br /><br />The choice of LaWS marks a big success for fiber lasers. When the Pentagon launched the Joint High Power Solid-State Laser (JHPSSL) program in 2002, developers focused on diode-pumped slab lasers, which at the time seemed the technology most likely to reach the 100 kW sought for defense against rockets, artillery, and mortars. JHPSSL reached that level in 2009, but fiber lasers have been catching up. The Naval Sea Systems Command reached 30 kW by combining the beams from six 5.5 kW industrial fiber lasers to shoot down a drone in 2010. LaWS has been upgraded since then, but Navy officials did not disclose the output power of the current system.<br /><br />The laser is not the only challenge. For the current version of LaWS, L-3 Integrated Optical Systems (Pittsburgh, PA) upgraded the pointing and tracking system, improving the accuracy of the fine steering mirror and controls and refining the software and user interface. "We took scientists out of the loop to make it operable by seamen," said Don Linnell, director of business development and strategy. The Navy considers that a must for fielding laser weapons.<br /><br /><b>… self-assembly</b><br /><i>Jeff Hecht</i><br /><br />One reward of exploring "Photonic Frontiers" every month for <i>Laser Focus World</i> is discovering new and emerging technologies that could have an important impact. In investigating extreme-ultraviolet (EUV) lithography for my May Frontiers article, I discovered an intriguing concept called "directed self-assembly," which has surfaced since I last <a href="">covered EUV development four years ago</a>. Practical applications of directed self-assembly remain a ways off, but it could be crucial to sustaining the Moore's Law trend of shrinking electronic components on semiconductor chips.<br /><br />Simple self-assembly builds structures from the bottom up. 
On a nano-scale, it starts with molecular building blocks that assemble themselves into larger structures. An example is atoms or molecules adding themselves to bonding sites on the edge of a growing crystal. DARPA has studied ways to self-assemble small building-block modules into robots that could reassemble themselves in different configurations for other purposes, much as children reassemble Lego blocks into new structures. <br /><br />Robotic modules can be programmed to build desired structures, but external controls are needed to make atoms and molecules grow specific nanostructures. Directed self-assembly does that by applying forces from the top down to control assembly. For making semiconductor chips, the top-down control would come from patterns written by the photolithographic light source onto the material. <br /><br />Dan Herr became intrigued by the idea of the functional self-assembly of materials while working on lithographic photoresists at Semiconductor Research Corp. (Research Triangle Park, NC). <a href="">Resists</a> are central to photolithography, and their chemistry can limit the minimum feature size, edge roughness, and writing speed. Conventional resists were designed to match visible and near-ultraviolet light sources, but EUV lithography poses additional challenges because the photons carry an order of magnitude more energy, enough to blast cascades of electrons from the resist. <br /><br />"Directed self-assembly is a replacement for conventional resists," says <a href="" target="_blank">Herr, who is now developing the materials at the University of North Carolina, Greensboro</a>. It's based on combining two polymers--one water-soluble, the other oil-soluble--which arrange themselves in regular patterns so that the oil- and water-soluble parts stay apart from each other. 
Light sources then direct their assembly by writing lines on the substrate for the polymers to start building upon.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="" height="320" width="231" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Directed self-assembly involves three steps: writing a pattern, depositing the two block copolymers, and removing one to form a pattern. (<i>Courtesy of Wikipedia</i>)</td></tr></tbody></table>The short-term focus is finding an alternative to conventional photoresists. But Herr says "the holy grail would be materials that self-assemble into shapes and structures that define active components of the circuit." That could change the rules for light sources as well as lithography.<br /><br /><b>… single photons</b><br /><i>Jeff Hecht</i><br /><br />Stimulated emission makes lasers excellent sources of large numbers of coherent photons, which is fine for most applications. But <a href="">quantum information networks</a> pose a problem because they work best with coherent photons that come one at a time, and lasers generally are not amenable to generating single photons. <a href="">Single-photon sources</a> have been developed for quantum computing, but they lack the coherence needed to create quantum entanglement at a distance through quantum interference.<br /><br />Now, a team at the Cavendish Laboratory at Cambridge University (Cambridge, England) led by Mete Atature has found a way to generate single photons with laser-like coherence. Their starting point was optical pumping of quantum dots, which is one way of producing single photons. 
They first fabricated a Schottky diode containing self-assembled indium-arsenide quantum dots, which could be individually addressed with a pump laser to generate single photons by resonance fluorescence. Resonance fluorescence does not optically excite the host material, reducing interactions in the solid that decrease coherence of emitted photons, but charge fluctuations and other interactions remain to degrade coherence. <br /><br />In <a href="" target="_blank"><i>Nature Communications</i></a> they report avoiding photon decoherence by weak laser excitation, which generates photons primarily by elastic scattering. This avoided charge fluctuations and allowed them to generate single photons from one quantum dot that remained coherent with the excitation laser for more than three seconds. Taking advantage of this mutual coherence, they report they could "synthesize near-arbitrary coherent photon waveforms by shaping the excitation laser field." That, in turn, let them show that as long as the photons emitted by the quantum dot remained coherent with the pump laser field, the separate photons were "fundamentally indistinguishable," so quantum interference among them can create quantum entanglement at a distance. That makes it possible to combine quantum computing with quantum communications, producing a more powerful tool for tasks such as quantum cryptography. 
<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="" height="274" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Ways to encode a qubit.</td></tr></tbody></table><br />"The ability to generate quantum entanglement and perform quantum teleportation between distant quantum-dot spin qubits with very high fidelity is now only a matter of time," says Atature. That's still a long way from science-fiction teleportation. However, the ability to generate single photons that maintain coherence well enough that they can be combined to produce novel waveforms may lead to real-world capabilities almost as attractive as avoiding airport lines.<br /><br /><b>… asteroid defense</b><br /><i>Jeff Hecht</i><br /><br />Could lasers protect the Earth from wayward asteroids? A number of schemes have been proposed for <a href="">gradually pushing asteroids to move their orbits away from the planet</a>. Now, two California professors are proposing a bold scheme to build solar-powered space lasers powerful enough to evaporate a 500 m asteroid in about a year--or to make short work of a 17 m asteroid like the one that exploded near Chelyabinsk, Russia, on February 15. <br /><br /><a href="" target="_blank">Philip Lubin</a> of the University of California (Santa Barbara, CA) and Gary Hughes of California Polytechnic State University (San Luis Obispo, CA) began planning the project they call DE-STAR--for Directed Energy Solar Targeting of Asteroids and exploRation--a year ago. On February 14, they issued a <a href="" target="_blank">press release</a> timed to the close approach by asteroid 2012 DA14. They were as stunned by the Russian explosion as everyone else. 
<br /><br />Their proposal seeks to take advantage of the dramatic improvements in high-power diode lasers and solid-state lighting to build giant orbital phased arrays of lasers powered by electricity from huge solar panels. They envision starting with a desktop 1 m array called DE-STAR 0, then scaling up to a 10 m array called DE-STAR 1. They have proposed that NASA support a conceptual study of scaling up to a 10 km DE-STAR 4 array, powerful enough to vaporize a half-kilometer asteroid 150 million kilometers away. Even bigger versions could be used for <a href="">laser propulsion</a>; they estimate that a 1000 km DE-STAR 6 array could accelerate a 10 ton spacecraft close to the speed of light.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="319" src="" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Future DE-STAR array samples the composition of an asteroid as it propels an interplanetary spacecraft. (<i>Courtesy of Philip Lubin</i>)</td></tr></tbody></table>The scheme may sound fantastic, but Lubin says it violates no laws of physics and requires no "technological miracles." It merely envisions continuing technological progress at the rate of the past 50 years, which took us from the feeble LEDs and diode lasers of 1963 to today's powerful emitters. They assume photovoltaic cells that can convert 70% of incident solar energy into electricity, and diodes that can convert 70% of the input electrical power into light. <br /><br />Lubin doesn't think it will be easy. He worries about issues including the mass needed to build the giant array, and controlling output phase across the array with the precision needed to tightly focus the emission. 
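The focusing worry can be put in rough numbers with the ordinary diffraction limit. The sketch below is a back-of-the-envelope estimate, not a figure from the DE-STAR proposal; the near-infrared wavelength and the simple spot-size formula are my assumptions.

```python
# Rough diffraction-limited spot diameter for a laser aperture:
# spot ~ 2 * wavelength * range / aperture (order-of-magnitude only).

def spot_diameter(wavelength_m, range_m, aperture_m):
    """Approximate diffraction-limited spot diameter in meters."""
    return 2.0 * wavelength_m * range_m / aperture_m

RANGE = 1.5e11     # meters: the ~150-million-km engagement range cited
WAVELENGTH = 1e-6  # meters: a typical near-infrared diode wavelength (assumed)

# A 10 km DE-STAR 4 class aperture:
print(spot_diameter(WAVELENGTH, RANGE, 10e3))  # 30.0 (meters)
# A 1 m DE-STAR 0 class aperture at the same range:
print(spot_diameter(WAVELENGTH, RANGE, 1.0))   # 300000.0 (meters, ~300 km)
```

Only a kilometer-scale aperture concentrates the beam onto a spot smaller than a half-kilometer asteroid at that range, which is why the concept jumps to the enormous DE-STAR 4, and why phase control across the array is so critical.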
But he predicts his assumptions will be considered "extraordinarily conservative and modest" in 30 to 50 years.<br /><br />That remains to be seen, but space-based solar-powered diode arrays are worth investigating. They could go beyond asteroid defense to help move asteroids, collect valuable materials from them, or provide power resources in space--as well as inspiring some fun science-fiction stories.<br /><br /><b>… visions for space telescopes</b><br /><i>Jeff Hecht</i><br /><br />NASA stumbled into a rare bit of good luck recently when the National Reconnaissance Office did some housecleaning. NRO decided that a pair of space-qualified 2.4 m telescopes dating from the late 1990s were no longer suitable for their original mission in <a href="">spy satellites</a>. So NRO offered the surplus optics to one of its poorer relations, NASA, for use in new space-based instruments.<br /><br />Unexpected hand-me-downs can bring opportunity, like the piles of <i>Scientific American</i> and <i>Sky &amp; Telescope</i> that came with a house my family rented when I was in high school. Astronomers and NASA scientists are pondering what to do with the windfall. The f/8 Cassegrain telescopes lack instruments, electronics, or spacecraft, but NRO long ago paid for the optics, saving NASA serious money. The <a href="" target="_blank">Study on Applications for Large Space Optics workshop</a>, held February 5 and 6, 2013, in Huntsville, AL, heard and discussed 34 proposals for building new instruments around the mirrors. These will be narrowed to six proposals and submitted to NASA management in May.<br /><br />The range of ideas is impressive. <a href="">Adaptive optics can do wonders on the ground</a>, but ultraviolet astronomy must remain above the atmosphere, so three proposals call for studying the ultraviolet sky. 
Other common themes are spectroscopy, planetary science inside the solar system, and attempts to image challenging targets including extrasolar planets.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="146" src="" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Bare-bones surplus telescopes inherited by NASA. (Government work not subject to copyright)</td></tr></tbody></table><br /><br />Some proposals are intriguing. Alfred McEwen of the University of Arizona (Tucson, AZ) envisions the Mars Orbiting Space Telescope, and Zachary Bailey of the Jet Propulsion Laboratory (Pasadena, CA) proposes "high-resolution surface science at Mars." Rebecca Farr of the NASA Marshall Space Flight Center (Huntsville, AL) proposes using both mirrors as a deep-space binocular telescope stationed at the lunar L2 Lagrange point.<br /><br />Not everything is exactly a telescope. Abhijit Biswas of JPL wants to use a mirror as an optical communications node in space. J. H. Clemmons of the Aerospace Corp. (El Segundo, CA) wants to use one in a lidar to explore the Earth's thermosphere. Richard Eastes of the University of Central Florida (Orlando, FL) has a plan for "Atmospheric TeleConnections on Earth." <br /><br />There are <a href="" target="_blank">plenty more listed on the program</a>, and NASA will be recording the proceedings for later viewing. The ideas are not fully formed, of course, and some seem to duplicate others. 
But there are enough bright ideas to make one hope that NRO can find more goodies sitting in storage for its needy relatives.<br /><br /><b>… view for adaptive optics</b><br /><i>Jeff Hecht</i><br /><br />Adaptive optics has become standard on large ground-based telescopes because it offers far sharper images than otherwise obtainable. However, standard adaptive optics can compensate for atmospheric turbulence only over small areas, so it doesn't let ground-based telescopes match the celestial panoramas imaged by the Hubble Space Telescope. Now a new generation of adaptive optics has demonstrated high-resolution imaging over a larger field of view with the Gemini South telescope in Chile.<br /><br />Proposed more than a decade ago by François Rigaut, now at Australian National University (Canberra, Australia), the <a href="" target="_blank">Gemini Multi-conjugate adaptive optics System (GEMS)</a> uses five laser guide stars and three deformable mirrors to measure atmospheric distortion and compensate for its effects. Sampling at 500 to 1000 Hz, GEMS can compensate for turbulence over an area of sky 16 times larger than previously possible. <br /><br />The picture below tells the story, alternating images of the "Orion Bullets" region in the Orion Nebula taken with GEMS on December 28, 2012 and of the same region taken in 2007 with the previous-generation ALTAIR adaptive-optics system, which uses a single laser guide star. The larger field of view is 85 arcsec across. Without the adaptive optics, the telescope's resolution at the observation time was 0.8 to 1.1 arcsec. Adding GEMS improved resolution by a factor of ten, to 0.084 to 0.103 arcsec. 
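Those figures are self-consistent; a quick sanity check on the quoted factor of ten, using only the numbers in the text:

```python
# Ratio of seeing-limited to GEMS-corrected resolution (arcseconds),
# using the figures quoted above.
seeing = (0.8, 1.1)    # without adaptive optics
gems = (0.084, 0.103)  # with GEMS

ratios = [s / g for s, g in zip(seeing, gems)]
print(ratios)  # both ratios come out close to 10
```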
The bright spots are "bullets" of gas ejected from the core of the nebula that are ripping through molecular hydrogen at speeds of up to 400 km/s, leaving behind wakes of hot hydrogen.<br /><br />GEMS also benefits from processing enhancements, which use <a href="">tomographic techniques</a> to map air turbulence in three dimensions, and correct uniformly across the entire field of view. "This is huge when it's time for astronomers to reduce their data," says Adam Ginsburg, a graduate student at the University of Colorado (Boulder, CO), because observers often need to compare objects in the same field. <br /><br />Field size has long been a crucial limitation on adaptive optics. The 85-arcsec width of the GEMS image still falls well short of the nearly 200-arcsec width of the <a href="">Ultra Deep Field image</a> taken by the Hubble Space Telescope, but it's an important step. With Hubble now well into its third decade in orbit, astronomers need new ways to study the depths of the sky from the ground.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="319" src="" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">Comparison of images of the same field in the Orion Nebula recorded with GEMS and ALTAIR. The white "Orion Bullets" are fast-moving gas clouds leaving hot hydrogen in their wake. 
Their motion is fast enough to detect in the five years between the 2007 ALTAIR and the 2012 GEMS images.</span></td></tr></tbody></table><br /><br /><b>… refractive index</b><br /><i>Jeff Hecht</i><br /><br />The latest example of the amazing versatility of metamaterials is the demonstration of one that has a refractive index of zero, <a href="" target="_blank">just reported in <i>Physical Review Letters</i></a>. Theorists had predicted the possibility of zero-refractive-index materials, and some similar effects have been reported, but the metal-clad glass waveguide developed by <a href="" target="_blank">Albert Polman's group</a> at the Center for Nanophotonics of the FOM Institute AMOLF (Amsterdam, Netherlands) with Nader Engheta of the University of Pennsylvania (Philadelphia, PA) is the first to have a near-zero index throughout.<br /><br />Zero-index materials, like <a href="">negative-index materials</a>, do not occur in nature, but can be built by assembling subwavelength elements into a structure designed to have the desired characteristics. The left part of the figure shows an electron microscope image of the metamaterial, a small slab of glass encased in silver forming a waveguide 200 nm wide and 2 &micro;m long. The strong interaction between the metal and the glass on that scale gives the entire waveguide an effective refractive index of 0 at 770 nm.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td><a href="" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="196" src="" width="320" /></a></td></tr><tr><td class="tr-caption" style="font-size: 13px;"><span style="font-size: small; text-align: start;">Electron microscope image of a zero-index waveguide, showing a silver-coated nanoscale glass slab 200 nm wide and 2 &micro;m long. 
The images at right compare the standing-wave pattern, visible in a 400-nm-wide waveguide but absent in a 190-nm-wide waveguide, showing the material has a refractive index of zero at 770 nm. (</span><i style="font-size: medium; text-align: start;">Courtesy of Albert Polman</i><span style="font-size: small; text-align: start;">)</span></td></tr></tbody></table><br />The phase velocity of light is the speed of light divided by the refractive index of the medium, so phase velocity should be infinite for a zero-index material. Similarly, wavelength in a zero-index material should be infinite because it equals the wavelength in vacuum divided by refractive index. To study how the light behaved, Polman and colleagues used a technique they had developed earlier called "cathodoluminescence spectroscopy" to examine light waves in waveguides of various widths. When the index was above zero in a 400 nm waveguide, the light formed standing waves showing normal light propagation, as shown in the figure. But for a 190 nm waveguide the index was near zero, and the standing waves disappeared, as shown at right in the figure, indicating nearly constant phase and nearly infinite phase velocity and wavelength through the waveguide.<br /><br />Infinite phase velocity does not violate Einstein's cosmic speed limit because phase velocity cannot carry information. Group velocity, the speed of a modulated optical signal, decreases as the refractive index drops below one, eventually reaching zero for a zero-index material.<br /><br />That's not all that happens. "As the index approaches n=0 the losses increase, damping out the waves. The index then becomes a complex number of which the real part is 0," Polman told me in an email. That means no light is left to travel at infinite speed after a short distance. 
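The two relations at work here are simply v<sub>phase</sub> = c/n and &lambda; = &lambda;<sub>0</sub>/n; a minimal numerical sketch of how both blow up as n approaches zero (the constants are standard values, not figures from the paper):

```python
# Phase velocity and in-medium wavelength both scale as 1/n, so both
# grow without bound as the refractive index n approaches zero.
C = 2.998e8        # speed of light in vacuum, m/s
LAMBDA0 = 770e-9   # vacuum wavelength used in the experiment, m

for n in (1.5, 0.5, 0.1, 0.01):
    v_phase = C / n            # phase velocity, m/s
    wavelength = LAMBDA0 / n   # wavelength inside the medium, m
    print(f"n={n}: v_phase={v_phase:.3g} m/s, wavelength={wavelength:.3g} m")
# At exactly n = 0 both ratios are undefined -- the "infinite" limit
# described in the text.
```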
Wenshan Cai of Georgia Tech, who wrote a <a href="" target="_blank">Viewpoint for the online publication <i>Physics</i></a>, told me the light should travel about 50 to 100 &micro;m--far enough to be useful in integrated optics, but not over macroscopic distances.<br /><br />A <a href="">2011 report of zero refractive index</a> was based on different physics, combining two photonic-crystal materials, one with positive index and the other with negative index, so the net phase advance through the entire structure is zero. A key difference is that the building blocks of photonic-crystal materials are large enough to be seen by the wave, typically half a wavelength, but those of metamaterials are much smaller, so the incident wave responds as if it were a bulk material.<br /><br /><b>… laser 'printing' builds DNA</b><br /><i>Jeff Hecht</i><br /><br />The concept of using lasers to synthesize DNA with a specified genetic sequence intrigued me so much that I tried to describe it in <a href="">my October Photonic Frontiers feature</a>. After receiving a <a href="" target="_blank">grant from the National Science Foundation</a>, the company behind the idea, Cambrian Genomics (San Francisco, CA), has released new details on the process, and my speculation about its nature turned out to be wrong. <br /><br />Previously, DNA synthesis has been a two-stage assembly process. First, individual base pairs are assembled into "oligonucleotide" sequences of 60 to 100 base pairs. Then, a number of those longer chains are stitched together into the synthetic DNA. The process is time-consuming and costs 30 to 50 cents per base pair, a number that adds up for long sequences. I had thought they might be using lasers to manipulate the base pairs into place.<br /><br />Instead, Cambrian Genomics uses microarray cloning to mass-produce a million oligonucleotides in parallel, a process that has been tried before but was hampered by the high error rates of microarray synthesis. 
To overcome that problem, Cambrian synthesizes large volumes of oligonucleotide fragments on microarrays, then uses massively parallel <a href="">DNA sequencing</a> to sort the different DNA variants and identify those with the desired sequence. Then, says Cambrian founder and CEO Austen Heinz, "we use laser catapulting, also known as laser-induced forward transfer, to eject clonal DNA populations," which were identified as having the desired sequences. The process is a variation on <a href="" target="_blank">laser capture microdissection</a>, which can excise part of a cell and move it to a desired location without damaging DNA. High-speed laser pulses then eject beads carrying the desired sequences in the right order to assemble into genes on 384-well plates, as shown in the diagram.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="" style="margin-left: auto; margin-right: auto;"><img border="0" height="104" src="" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The Cambrian Genomics process uses lasers to select oligonucleotides with the desired sequence. </td></tr></tbody></table>The goal, Cambrian wrote in a summary of its application for a phase-one Small Business Innovation Research (SBIR) grant, "is to be able to recover tens of thousands of sequence-verified oligonucleotides in several hours from sequencer flowcells." NSF announced on December 5, 2012, a $150,000 grant that will run through the first six months of 2013. Cambrian hopes that will open the door to disruptive reductions in the cost of DNA synthesis.<br /><br /><b>… technology getting ahead of the market</b><br /><i>Jeff Hecht</i><br /><br />Peter Jackson's decision to shoot <i>The Hobbit</i> at 48 frames per second brought optical technology into many holiday-party conversations, at least among technologists and movie buffs. 
Together with demonstrations of video screens with horizontal resolution of 8000 pixels, it raises the question of whether the cutting edge of large-screen display technology is getting too far ahead of the market.<br /><br />From the production side, it makes sense to record a movie in the best quality available at reasonable cost. It's easy to reduce resolution or frame rate to current mass-distribution standards. Theaters can charge extra for the highest quality screenings, as they have done for 3D. And archival copies should be compatible with the next generation or two of technologies.<br /><br /><center><iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe></center><br />From the display side, reviewers had mixed reactions. They found some parts spectacular, but sometimes too revealing. As Lucy O'Brien wrote on the gaming site, "<a href="" target="_blank">The problem with doubling the frame-rate in The Hobbit is a problem of scrutiny; you can see all its tricks.</a>"<br /><br />The push for higher video screen resolution comes largely from the consumer electronics industry. Aided by government mandates to convert to digital broadcasting, the industry persuaded the public to switch to flat-panel high-definition televisions showing 720 or 1080 lines, corresponding to widths of 1280 or 1920 pixels respectively. But the <a href="">public largely passed on 3D television</a>, and in uncertain times they have been slow to step up to larger screens, so manufacturers have slashed prices to bolster sales.<br /><br />Two ultra-high-definition formats are in development. One that doubles resolution is called 4K, for a nominal width of 4000 pixels (actually 3840 x 2160 pixels). An alternative called 8K quadruples resolution to a nominal width of 8000 pixels (actually 7680 x 4320 pixels). Some 4K equipment is available, and 8K has been demonstrated. 
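The bandwidth problem follows directly from the pixel counts. A back-of-the-envelope sketch of raw (uncompressed) data rates, assuming 24 bits per pixel and 60 frames per second; the assumptions are illustrative, not any particular broadcast standard:

```python
# Raw (uncompressed) video data rate for HD and the two UHD formats.

def raw_gbps(width, height, bits_per_pixel=24, fps=60):
    """Uncompressed data rate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

for name, (w, h) in [("1080p HD", (1920, 1080)),
                     ("4K UHD", (3840, 2160)),
                     ("8K UHD", (7680, 4320))]:
    print(name, round(raw_gbps(w, h), 1), "Gbit/s")
# Each step quadruples the pixel count, and the raw bandwidth with it:
# roughly 3, 12, and 48 Gbit/s before compression.
```

Compression shrinks those numbers enormously in practice, but the 4x jump per generation persists through the whole distribution chain.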
However, big challenges remain, <a href="" target="_blank">writes Pete Putman of <i>Display Daily</i></a>, including lack of production equipment and cameras, high screen costs, and the need for much more bandwidth to carry the larger files.<br /><br />Unlike 3DTV, ultra-high-def won't give you a headache or <a href="">require special glasses</a>. It makes sense for future-proofing video production, and it could be a selling point for video venues or sports bars. But for now, ultra-high-def has gotten far ahead of the home television market, which is getting to like today's low prices.<br /><br /><b>… future for silicon</b><br /><i>Jeff Hecht</i><br /><br />The Wiley-VCH journal <i>ChemPhysChem</i> issued an embargoed press release early on the morning of November 21, 2012, heralding "a bright future for silicon." Just eight hours later, they lifted the embargo, citing "early reporting" of the research by Brian Korgel of the University of Texas (Austin, TX) and colleagues.<br /><br />Embargo breaks often indicate hot stories, and the headline hinted at an important step toward the elusive goal of efficient light emission from silicon. Yet the next line was more muted: "Ordered nanocrystal arrays may provide a new platform to study and tailor the light-emitting properties of silicon." What is the real story?<br /><br />Silicon is a wonderful material for electronics, but its photonic uses have been hobbled by an indirect bandgap that makes it very hard for electrons dropping into the valence band to release their energy as photons. That leaves silicon far behind III-V compounds like gallium arsenide for LEDs and diode lasers. Yet silicon is far ahead of other semiconductors in electronics, and companies like Intel (Santa Clara, CA) want to integrate photonics into their integrated circuits.<br /><br />So far they have demonstrated <a href="">"silicon lasers" by optically pumping Raman lines in silicon</a> and <a href="">III-V diode laser chips bonded to silicon</a>. 
Both were important advances. But neither met the real goal--electrically powered emitters based on silicon that could be integrated into standard semiconductor chip production processes.<br /><br />In their <a href=""><i>ChemPhysChem</i> paper</a>, Korgel and colleagues take a different approach, tapping the bright luminescence produced by silicon quantum dots. They write that their major achievement is devising a chemical technique that causes self-assembly of "the first colloidal Si nanocrystal superlattices." Self-assembly is essential because individual dots are too small to fabricate by conventional photolithography, and transmission electron microscope images show the dots are closely spaced in regular face-centered-cubic arrangements (see photo).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div>TEM images of silicon nanocrystals in the 111-oriented (c) and 112-oriented (d) planes, with depictions of the crystalline structures shown in insets. (<i>Courtesy Yixuan Yu et al.</i>,&nbsp;ChemPhysChem,<i>&nbsp;Wiley-VCH Verlag GmbH &amp; Co. KGaA</i>,&nbsp;<a href="" target="_blank"></a>&nbsp;[2012].&nbsp;<i>Reproduced with permission</i>)<br /><br />The authors say that covalent bonds with the hydrocarbon solvent make the silicon-nanocrystal superlattices stable to 350 degrees Celsius, higher than other similar superlattices. That's encouraging news, because self-organized nanocrystals are a promising fresh approach to structuring silicon to emit light more efficiently. But so far electrical excitation--sought for integrated optoelectronics--has far to go to match the efficiency of optical excitation of isolated silicon quantum dots. 
Korgel is understandably optimistic about having "a new playground for understanding and manipulating the properties of silicon in new and unique ways," yet appropriately cautious in not claiming silicon lasers are just around the corner.<br /><br />
&hellip;solid-state lighting fun (Jeff Hecht)<br /><br />
Solid-state lighting is a clean, green new market for optical technology, but it's hard to get very excited about white LEDs that merely replace older incandescent and fluorescent bulbs. Now Philips is trying to make solid-state lighting fun with <a href="" target="_blank">wirelessly controlled color-tunable bulbs called "Hue"</a>.<br /><br />A Hue bulb screws into a standard light socket and contains red, green, and blue LEDs. A smartphone or iPad app controls the bulb's output through a wireless controller and a wireless receiver in the bulb. The app matches the LED outputs to colors selected from a rainbow palette or from the user's favorite photos. Users can pick bright disco colors, shades of white from candlelight to sunlight, or anything in between.<br /><br />A $200 starter set including the controller and three bulbs sounds like an impulse buy at the Apple Store -- and that's exactly where Philips is selling it, as a fun gadget. A single 600-lumen Hue bulb will set you back $60, more than triple the price of a Philips Ambient bulb that emits a pleasant white light. But playing with colored lights is much more fun, as Philips shows in a <a href="" target="_blank">video</a>.<br /><br />The Hue isn't just a party light. You can set it to emit shades of white from a bright "energize" tone to start the morning to a warm "relax" shade to unwind in the evening. You can set each bulb to turn on and off whenever you want. So it's an all-purpose adjustable light ready to put into any socket in the house, without costly rewiring.<br /><br />Philips is first to market, but company is coming.
<a href="" target="_blank">LiFx</a> (San Francisco, CA) in September sought support <a href="" target="_blank">on Kickstarter to develop its own smart bulb</a>, and was surprised to receive $1.3 million in pledges when it had sought only $100,000. The company has demonstrated a bench version and is now designing a production prototype, which will include a white LED as well as the RGB emitters.<br /><br />So far press attention has focused on controls and tunable colors, but I wonder what the green sources are. Philips is using a "lime green" LED from its LumiLEDs division because it gives better color rendering than standard green LEDs, but won't disclose the wavelength or composition. Is it a <a href="">hard-to-make green LED</a>, a <a href="">phosphor-LED hybrid</a>, or something else? If anybody out there has a spectrophotometer and a Hue at hand, it would be interesting to see a spectrum.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="" width="320" /></a></div><br />An iPhone sets a Philips Hue bulb to "relax" for a calming evening. (<i>Courtesy of Philips Lighting</i>)<br /><br />
&hellip;recognizes fiber laser milestone (Jeff Hecht)<br /><br />
Fiber lasers and amplifiers can do incredible things, but the technology is not as new as you think. Half a century ago, Elias Snitzer and a handful of colleagues at the American Optical Company's Research Center in Southbridge, MA, pioneered both technologies.
On October 26, 2012, I attended the dedication of a plaque recognizing the achievement as a <a href="" target="_blank">Milestone in electrical technology by the Institute of Electrical and Electronics Engineers</a>.<br /><br />Founded in the 19th century to make spectacles, <a href="" target="_blank">American Optical</a> in 1954 became the first company to try to develop practical fiber-optic imaging bundles, which were first demonstrated by academics and an independent inventor working on shoestring budgets. Initially funded by the Central Intelligence Agency to develop image-scrambling bundles for secure messaging, AO later turned to imaging bundles.<br /><br />AO hired Snitzer to work on fiber optics in 1959. At his job interview, he recognized the puzzling patterns in a fiber bundle as evidence of lateral modes, and later published the first analysis of single-mode transmission. Interested in the laser, Snitzer took advantage of AO's glass expertise to make a solid-state laser of glass rather than crystals. He formed barium crown glass doped with 2% neodymium oxide into a three-inch rod thinner than a millimeter, covered with a low-index glass cladding to improve light transmission. Pumping with a coiled flashlamp like the one Theodore Maiman used in the ruby laser, Snitzer demonstrated pulsed lasing in the stiff neodymium-doped fiber at room temperature in 1961.<br /><br />In 1963, Snitzer and Charles Koester amplified pulses by up to a factor of 50,000 in a meter-long fiber without reflective end coatings, coiled around a linear flashlamp. Their goal was to measure gain dynamics, but the demonstration also showed the potential of fiber amplifiers.<br /><br />The lack of good pump diodes kept fiber lasers and amplifiers from being practical until the 1980s.
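For scale, the factor-of-50,000 amplification Snitzer and Koester reported converts directly to the decibel figure fiber-amplifier engineers quote today -- a quick check:

```python
import math

# Snitzer and Koester (1963) boosted pulses by up to a factor of 50,000
# in a meter of fiber. In the decibel units used for amplifier gain:
gain_linear = 50_000
gain_db = 10 * math.log10(gain_linear)
print(f"{gain_db:.0f} dB of gain in one meter of fiber")  # 47 dB
```

That is comparable to the total gain of a modern erbium-doped fiber amplifier, which helps explain why the demonstration looks so prescient in hindsight.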
Snitzer played an important role in that development: he worked on doped fiber sensors, demonstrated 1480 nm diode pumping for erbium-fiber amplifiers, and developed the dual-core fibers now used in high-power fiber lasers. <a href="">Snitzer died in May</a>, but lived to see developments including <a href="">multi-kilowatt fiber lasers</a> and high-speed communications through <a href="">fiber amplifiers in the global telecommunications network</a>. Four of his children attended the Milestone dedication and were pleased by the recognition of the man they knew as "dad."<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="240" /></a></div><br />IEEE Milestone for fiber lasers and amplifiers, across the street from the former American Optical headquarters in Southbridge, MA. (<i>Courtesy of Dick Whitney</i>)<br /><br />
&hellip;Prize for quantum optics (Jeff Hecht)<br /><br />
The award of the 2012 Nobel Prize in Physics to Serge Haroche and David Wineland is the latest in a series of Nobel Prizes honoring elegant experiments using light to illuminate fundamental physics. The <a href="" target="_blank">Royal Swedish Academy of Sciences cited the two</a> "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems." By examining individual photons and atoms, they resolved big questions about quantum mechanics.<br /><br />Physicists long wondered how seriously they should take the paradoxes that arise from applying quantum mechanics rigorously to the behavior of individual particles. Albert Einstein famously called the concept of entangled particles "spooky action at a distance," but recent experiments have shown that such entanglement is real, and can be used for <a href="">quantum encryption</a>.
Other recent experiments have observed quantum behavior of individual particles, and manipulated that behavior so that quantum states can be superposed for purposes such as <a href="">quantum computing</a>.<br /><br />Haroche and Wineland developed complementary techniques for quantum manipulation of single particles. Haroche pioneered cavity quantum electrodynamics, which studies how an electromagnetically resonant cavity can affect quantum properties of an atom contained inside it, including spontaneous and stimulated emission. Working with microwave and optical cavities, his group measured photon properties without destroying the quantum states. Wineland and his colleagues used light to trap ions in ways that allowed them to transfer and superpose states of an ion. They were able to create single-quantum "Schrödinger's cat" states in the laboratory and watch them change from a quantum superposition to a classical mixture. Their work has opened the door to quantum computing and new types of optical clocks.<br /><br />Haroche holds the chair in Quantum Physics at the Coll&egrave;ge de France (Paris, France), and is well known for his research in quantum optics and quantum computing, and for his major contributions to cavity quantum electrodynamics, the behavior of atoms and light in high-<i>Q</i> cavities. His work has earned him a long list of awards, including the <a href="">Townes Award in 2007</a> from the Optical Society of America and the Herbert Walther Award from the German Physical Society and OSA in 2010. His deep roots in the optics community include doing his doctoral dissertation under Claude Cohen-Tannoudji and postdoctoral research under Arthur Schawlow, both future Nobel laureates.<br /><br />Wineland wrote his doctoral dissertation at Harvard University under Norman Ramsey, another Nobel laureate, and heads the ion-storage group at the National Institute of Standards and Technology (Boulder, CO).
He demonstrated the first laser cooling in 1978, and has used that technique to study quantum mechanics and develop applications. He demonstrated the first single-atom quantum logic gate in 1995, showing the potential of quantum computing, and later demonstrated entanglement of two and four ions. Other achievements include demonstrating quantum teleportation and a quantum logic atomic clock, which is now the world's most precise atomic clock. His long list of awards includes the Schawlow award in laser science from the American Physical Society, OSA's Frederic Ives Medal, and the <a href="">first Herbert Walther award in 2008</a>.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="318" /></a></div><div style="text-align: center;">David Wineland has won the 2012 Nobel Prize in Physics, along with Serge Haroche. (<i>Image courtesy of Geoffrey Wheeler/NIST</i>)</div><br />
&hellip;falls short of ignition (Jeff Hecht)<br /><br />
The National Ignition Facility (NIF) will not meet its goal of igniting a fusion plasma before the end of September, the Lawrence Livermore National Laboratory (Livermore, CA) said on Friday. A spokeswoman said Livermore "will continue working toward achieving ignition." The laser is delivering the desired energy, but the target shots are not yielding the expected fusion energy.<br /><br /><a href="">NIF was declared complete on March 31, 2009</a>, after it had delivered 1.1 MJ pulses at 355 nm. The 192-beam system was designed to deliver 1.8 MJ pulses, which simulations indicated would be sufficient to ignite a pellet of deuterium-tritium fusion fuel, producing fusion reactions that yield more energy than the input pulse.
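In the simplest bookkeeping -- my shorthand here, not Livermore's formal definition -- the goal described above is a target gain of at least one: fusion output exceeding laser input,

```latex
G \;=\; \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}} \;\ge\; 1
```

with <i>E</i><sub>laser</sub> the roughly 1.8 MJ delivered to the target.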
The Department of Energy set a target of reaching ignition by September 30, 2012--the end of the fiscal year.<br /><br />Wary of optical damage, Livermore ramped pulse power and energy slowly. The first 1.8 MJ pulse was not fired until March of this year. <a href="">On July 5, NIF delivered peak power of 500 TW to a target for the first time in a 1.85 MJ pulse</a>. From outside, it looked like NIF should be closing in on ignition.<br /><br />But now NIF has become the latest in a long list of fusion lasers that yielded experimental results well short of predictions. A news story in the September 21 issue of <i>Science</i> magazine reports that although computer models predict NIF shots should achieve ignition, <a href="" target="_blank">the yield of fusion energy from NIF experiments has so far reached only one-tenth of the ignition level</a>.<br /><br />The National Nuclear Security Administration (NNSA) has already begun studying its options. The first draft of a report is due October 1, with a final report due to Congress on November 30.<br /><br />Meanwhile, NIF continues firing shots that can produce temperatures and pressures far beyond anything previously possible on the surface of the Earth.
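A back-of-envelope check on the quoted shot numbers: if the 1.85 MJ had been delivered at the 500 TW peak power for the whole shot (real NIF pulses are shaped, so this is only an effective duration), the pulse would last a few nanoseconds:

```python
# Effective pulse duration implied by the July 5 shot figures:
# energy divided by peak power, treating peak power as constant.
energy_j = 1.85e6       # 1.85 MJ pulse energy
peak_power_w = 500e12   # 500 TW peak power
duration_s = energy_j / peak_power_w
print(f"effective duration: {duration_s * 1e9:.1f} ns")  # 3.7 ns
```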
Livermore fusion researchers will keep pressing for ignition, and NNSA weapon scientists will get additional shots for their simulations of nuclear explosions as part of the agency's Stockpile Stewardship program.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><br />NIF's laser bay, showing 96 of the 192 beams.<br /><br />
&hellip;guides light up 3M solid-state bulbs (Jeff Hecht)<br /><br />
3M has added a new twist to solid-state lighting--embedding light guides in the outer shell of the bulb to redistribute light emission evenly across its surface like the venerable frosted-glass incandescent bulb.<br /><br />Solid-state lighting has been widely touted for its <a href="">outstanding energy efficiency</a>. LED bulbs now in hardware stores draw 13 W of electric power, emit as much visible light as 60 W incandescents, and have lifetimes of 25,000 hours, far beyond the 1000 hours of incandescents. But high prices and some subtle but significant problems are slowing their adoption.<br /><br />The 3M bulb is aimed at one of those subtle problems. LEDs emit directionally from a small area. Hot filaments and fluorescent tubes radiate omnidirectionally, and although filaments are small, frosted incandescent bulbs scatter the light so it seems to radiate from the entire surface. Directionality is good news for applications that want light concentrated in one direction, such as street lighting outdoors and downlighting in homes and offices. But it can be a problem in light fixtures in the line of sight, especially when the light comes from a small area. An example is a no-name solid-state lamp I bought earlier this year from a big-box hardware store.
Light comes from a small zone where blue LEDs and yellow phosphor are mounted, not from the bulb's frosted surface, producing an unpleasant glare.<br /><br />Deep inside, the 3M bulb contains similar blue LEDs with yellow phosphors to generate directional white light. But instead of shining directly into the room, the light is coupled into light guides embedded in the bulb. Total internal reflection guides the light around the bulb to areas where it is scattered out the surface and into the room, as shown in the figure. That reduces brightness to an acceptable level, making the bulb much more presentable in a light fixture.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="" width="400" /></a></div><br />The light guide in the 3M LED bulb carries light from the LED source to diffusing areas on the bulb surface. (<i>Courtesy of 3M</i>)<br /><br />The bulb, shown in the photo below, can't be mistaken for an incandescent. It needs slits to dissipate heat, a cooling problem it shares with other LED bulbs, and requires heat sinks that add to <a href="">its environmental impact</a>. But the design is an innovative step in the right direction, making LED lamps an attractive piece of decor rather than an efficient eyesore.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="213" /></a></div><br />3M's Advanced LED light distributes light like an incandescent bulb. (<i>Courtesy of 3M</i>)<br /><br />
&hellip;PULSE program (Jeff Hecht)<br /><br />
Ultrafast laser research has produced some elegant science, from slicing time into incredibly thin slivers to generating combs of frequencies uniformly spaced across a wide band of the spectrum.
These capabilities, in turn, have led to a similarly wide range of applications, including transferring time and frequency standards, measuring short intervals of time, and producing pulses so short that they generate extremely high peak powers with only modest amounts of energy.<br /><br />However, ultrafast lasers traditionally have been bulky and complex things, custom-assembled on optical tables and delicately aligned in a laboratory. That complexity makes it hard to realize many potential practical applications, such as putting frequency combs in space to boost the precision of GPS systems or to measure stellar spectra with extreme precision. Now the Defense Advanced Research Projects Agency (Arlington, VA) is trying to do something about the problem by creating the <a href="" target="_blank">Program in Ultrafast Laser Science and Engineering</a>.<br /><br />DARPA is not the first to think of making smaller and more durable ultrafast lasers. I mentioned the need for <a href="">"robust frequency combs" for telecommunications systems or space-based instruments in the January Photonic Frontiers</a>. A web search found four pages that include the phrase "rugged femtosecond laser," but all of them cite <a href="">an Army contract awarded last year to Arbor Photonics</a>. Such references are few and far between, and Google could not find a single page using the phrase "rugged frequency comb" (or combs) when I was writing this blog.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="" width="320" /></a></div><br />Shrinking the size and improving the robustness of ultrafast lasers is a big challenge, but success could pay off in important ways. DARPA cites some potential military applications that require rugged sources.
One is using the time stability of the microwave-band repetition rate of a femtosecond laser to greatly reduce the close-to-carrier phase noise in a microwave oscillator. Others include transferring time or frequency measurements across the spectrum, and generating high-flux isolated attosecond pulses. Civilian science and technology also would benefit from compact sources of ultrashort pulses.<br /><br />As is normal with DARPA, success is not guaranteed, but the payoff could be high. In fact, somebody at DARPA surely has already earned credit in the Pentagon bureaucracy for exceptional skill in acronym creation. Program in Ultrafast Laser Science and Engineering neatly translates into an entirely appropriate acronym -- PULSE.<br /><br />
&hellip;nanolasers (Jeff Hecht)<br /><br />
My Photonic Frontiers article coming up in the September issue of <i>Laser Focus World</i> describes recent progress on nanoscale lasers, which have volumes smaller than a cubic wavelength. Such emerging technologies are fascinating, but also raise a peculiar problem for those of us who write about them: what do we call the things?<br /><br />Some groups call their nanoscale lasers "spasers," an acronym for <a href="">Surface Plasmon Amplification by the Stimulated Emission of Radiation</a>. Surface plasmons are involved in the process, and the catchy term has gained its own Wikipedia entry, some 266,000 hits in a web search, and a fair amount of press coverage even before a paper in the July 27 issue of <i>Science</i>. Score a few points for savvy PR.<br /><br />But other researchers prefer more general terms like "nanolaser." One reason is that the acronym for spaser defines a specific process--surface plasmon amplification by stimulated emission of radiation.
Yet it's not clear that all nanoscale lasers demonstrated so far rely on that process, and some researchers wonder how stimulated emission in a tiny piece of semiconductor can amplify a surface plasmon, a collective oscillation of electrons on a conductive surface.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="352" src="" width="400" /></a></div>A second reason is more philosophical: "laser" has become a generic term. As Shaya Fainman of the University of California at San Diego (La Jolla, CA) told me, "any time I see light amplification by stimulated emission, I call it a laser." By that logic, if a nanoscale device is amplifying light by stimulated emission, it's a laser.<br /><br />There are points to be made for both sides, but there also is another dimension to the discussion--defining a new term can be part of claiming credit for a discovery. The International Commission on Zoological Nomenclature has elaborate rules on the proper naming of living or extinct animal species. No such rules exist in physics, so terms compete on their own merits. Interestingly, Gordon Gould's term "laser" won the popularity contest over Charles Townes' original suggestion of "optical maser," but the Nobel Prize went to Townes.<br /><br />Who eventually will be credited with inventing nanoscale lasers remains to be determined. For now, I'm using "nanolaser" as a generic term for nanoscale laser, as I did in an <a href="">earlier article</a>. But I'm also watching for future developments in the fast-moving field.<br /><br />
&hellip;falls flat for Olympics (Jeff Hecht)<br /><br />
The past few years have seen some impressive innovations in three-dimensional displays. New digital projectors have made 3D movies come alive in theaters, and high-resolution flat-panel displays can bring 3D television to homes.
Digital image processing and 3D helped make <i>Avatar</i> the highest-grossing movie of all time. At the peak of 3D enthusiasm, some in Hollywood predicted that 3D production would soon become standard for movies.<br /><br />Live sports was supposed to be the next great frontier for 3D, and Panasonic and Olympic Broadcasting Services sent crews to London to record 200 hours of the Summer Olympics in 3D. But the effort seems to have fallen flat. Chris Chinnock reports in <a href=""><i>Display Daily</i></a> that while the BBC logged an average UK audience of 24 million people for the opening ceremonies, only 111,000 households watched the 3D simulcast, a figure he called "pretty dismal." My own informal poll of a small newsgroup discussing the Olympics found no one who cared about 3D, and one member who had never bothered to set up the 3D on his Playstation 3.<br /><br />Why did 3D fall flat for the world's biggest sports spectacular? It's tempting to blame the lack of promotion, the difficulty of finding 3D coverage, and the decision to delay all 3D broadcasts by 24 hours. But the truth is that few people outside of the consumer industry show much interest in 3D television. Properly done, 3D can be fun -- for a limited time. I enjoyed playing with a 3D set in the video store, but the amusement wore off in 15 minutes. I can see where the 3D versions of some movies might be worth a few extra dollars in the theater. But the monsters-in-your-lap gimmick gets old fast, viewers dislike the <a href="">active shutter glasses for 3D televisions</a>, and too much intense 3D can cause eyestrain and nausea.<br /><br /><img src="" /><br /><span style="font-size: small;">A refreshable holographic image of an F-4 Phantom jet is created on a photorefractive polymer.
(<i>Courtesy of the University of Arizona</i>)</span><br /><br />New technology from <a href="">NLT Technologies</a> (Kawasaki, Japan) presents different views to each eye of several viewers, allowing them to see depth by the parallax effect without special glasses. However, that's no panacea, because the brain senses depth in multiple ways, and conflicts between different cues lead to eyestrain, headache, and nausea. Perhaps we'll have to wait for further development of <a href="">holographic video</a>.<br /><br />
&hellip;Pluto (Jeff Hecht)<br /><br />
In July, the Hubble Space Telescope spotted the fifth moon of Pluto, an icy ball 10 to 25 km across that was just a pinprick of light in the image. <a href="">Much of the press coverage focused on whether that discovery should make Pluto a full-scale planet</a>. But I was far more interested in Pluto, its moons, and the amazing optical feat of finding something so small and so far away.<br /><br />My interest in optics grew from a fascination with astronomy. I'm old enough to remember the 1978 discovery of Pluto's largest moon, Charon. The discovery images show <a href="">a small bump on the fuzzy ball of Pluto, recorded on a photographic plate by a ground-based telescope</a>. Comparison of a series of images showed that the bump moved as the unresolved moon orbited Pluto. In the days before adaptive optics, seeing even that much seemed amazing.<br /><br />Hubble resolved Pluto and Charon soon after its launch in 1990. It was a badly needed success for Hubble in its troubled early years, but scattered light in the background of the photo clearly shows the spherical aberration of the telescope's primary mirror. Pluto and Charon are both blurry and diffuse, but the photo clearly shows them as separate worlds, with Pluto the larger and Charon roughly half its size.
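How Hubble splits the pair comes down to diffraction. Plugging the telescope's 2.4 m primary mirror and an assumed mid-visible wavelength of 550 nm (illustrative numbers of mine, not from the article) into the Rayleigh criterion gives a resolution limit far finer than the roughly one arc second separating Pluto and Charon:

```python
import math

wavelength_m = 550e-9   # assumed mid-visible wavelength
aperture_m = 2.4        # Hubble primary mirror diameter

# Rayleigh criterion: smallest resolvable angular separation
theta_rad = 1.22 * wavelength_m / aperture_m
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"diffraction limit: {theta_arcsec:.3f} arcsec")  # ~0.058 arcsec
```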
Once NASA added corrective optics to fix the spherical aberration, the <a href="">Faint Object Camera produced much sharper photos in 1994</a>.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="256" /></a></div><br /><span style="background-color: white;">Further upgrades have made Hubble even better. In 2005, it spotted two moons roughly 100 km across, later named Nix and Hydra. Last year, astronomer Mark Showalter of the SETI Institute (Mountain View, CA) began a series of Hubble observations to check for other little moons that might scatter dust into the path of the New Horizons spacecraft when it visits Pluto in July 2015. Earlier this month, Showalter downloaded a new batch of Hubble data, and within an hour was on the phone reporting the discovery. A few days later, he told me: "I'm still struck by just what an amazing instrument Hubble is. This little object, [called] P5, is fainter than Pluto by a factor of 100,000 and separated by one arc second."</span><br /><br />It's amazing and wonderful. And so far Hubble's images show New Horizons is on a good path to avoid any dangerous dust, so we can see close-ups of Pluto three years from now.
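Showalter's factor of 100,000 translates directly into the astronomers' magnitude scale, where every factor of 100 in brightness is five magnitudes:

```python
import math

# P5 is roughly 100,000 times fainter than Pluto (Showalter's figure).
flux_ratio = 100_000
delta_mag = 2.5 * math.log10(flux_ratio)
print(f"{delta_mag:.1f} magnitudes fainter")  # 12.5
```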