Digital optical holograms are the foundation of quantitative phase microscopy (QPM) systems for long-term live-cell imaging, in which low-level illumination is used to capture cells at subcellular resolution as they go about their activities in their native environment.
But “the quality of an optical hologram is often linked to the brightness of laser light,” says Steve Lee of the Australian National University (ANU; Canberra, Australia). “So we asked ourselves, how can we make an optical hologram in almost complete darkness? Usually if you form an optical hologram using extremely low light, the hologram will look very grainy.” This graininess reflects the shot-noise limit, a fundamental consequence of the quantum-mechanical nature of light and a hard floor for many imaging systems.
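To see why, consider a quick numerical sketch (ours, not the study's): the snippet below applies Poisson shot noise to an ideal two-beam fringe pattern at a few assumed photon budgets, and the signal-to-noise ratio falls roughly as the square root of the photon count.

```python
import numpy as np

# Illustrative only (not from the study): an ideal off-axis two-beam fringe
# pattern, I(x) ~ 1 + cos(2*pi*f*x), detected at a few assumed photon budgets.
x = np.linspace(0.0, 1.0, 512)
ideal = 1.0 + np.cos(2 * np.pi * 40 * x)            # noise-free fringe intensity

rng = np.random.default_rng(0)
for mean_photons in (10_000, 300, 30):              # assumed photons per pixel
    expected = ideal / ideal.mean() * mean_photons  # expected counts per pixel
    detected = rng.poisson(expected)                # shot noise: Poisson counts
    noise = detected - expected
    # The noise floor grows relative to the signal as the light level drops:
    # SNR is about sqrt(N) for a mean of N photons per pixel.
    print(f"~{mean_photons:>6} photons/pixel: SNR ~ {expected.mean() / noise.std():.1f}"
          f"  (sqrt(N) = {np.sqrt(mean_photons):.1f})")
```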
However, Lee and his team found a way around this problem using machine learning. “We’ve shown that using very little light—almost pitch-black at submillisecond imaging speeds—we can still restore a hologram to close to perfect condition,” says Lee. Here, “almost pitch-black” refers to intensities in the submilliwatt-per-square-centimeter range.
The neural network used to achieve this feat, called Holo-UNet, takes a noisy digital hologram as input and outputs a denoised version. “The machine masters the look of an ideal hologram through thousands of learning cycles,” says Zhiduo Zhang, one of the researchers. “After training, we then show the machine a hologram with lots of missing optical information . . . the machine ‘remembers’ how to digitally fill in those missing photons and so restore the hologram to near-perfect condition.”
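The article does not spell out the network's internals, but the general recipe it describes, an image-to-image network trained on paired noisy and well-lit holograms, can be sketched roughly as follows. This is a deliberately small, illustrative U-Net-style denoiser in PyTorch, not the authors' Holo-UNet; the layer widths, loss function, and training step are all assumptions.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyHologramUNet(nn.Module):
    """Illustrative single-channel denoiser: one encoder level, a bottleneck,
    and one decoder level with a skip connection. Real U-Nets are deeper."""
    def __init__(self):
        super().__init__()
        self.enc = block(1, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)                 # 64 = 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e = self.enc(x)                          # encoder features
        m = self.mid(self.down(e))               # bottleneck on the downsampled map
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.out(d)                       # restored hologram

# One illustrative training step on stand-in data (real training would loop
# over paired low-light / well-lit hologram crops, e.g. 512 x 512 patches).
model = TinyHologramUNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                           # the paper's loss may differ

noisy = torch.rand(2, 1, 256, 256)               # stand-in for low-light holograms
clean = torch.rand(2, 1, 256, 256)               # stand-in for well-lit holograms
optim.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optim.step()
```

The skip connection is the key U-Net ingredient here: it lets fine detail from the input bypass the downsampling path, which matters when the quantity being restored is a high-frequency interference fringe pattern.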
In the experimental setups, laser light in an off-axis configuration illuminated either 6-μm-diameter microspheres at a 632.8 nm wavelength or fibroblast cells (type L929, a mouse connective-tissue cell line), which are on the order of 10 to 20 μm in size, at a 514 nm wavelength. Each of the two QPM setups had its own objective lens and camera: a scientific CMOS (sCMOS) camera from Thorlabs for the red-light imaging, and a camera with a CMOS sensor from Sony for the green-light imaging. Neutral-density filters were used to reduce the light intensity from 140 mW/cm² to 5, 0.6, or 0.3 mW/cm².
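For context, the attenuation steps quoted above correspond to the following neutral-density optical densities (our back-of-envelope figures, not values from the paper):

```python
import math

I0 = 140.0                   # unattenuated intensity, mW/cm^2 (from the article)
for I in (5.0, 0.6, 0.3):    # attenuated levels, mW/cm^2 (from the article)
    od = math.log10(I0 / I)  # ND optical density: OD = log10(I0 / I)
    print(f"{I0:.0f} -> {I} mW/cm^2 needs OD of about {od:.2f}")
```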
Video-rate imaging
Set to a 10 ms exposure time, the Thorlabs camera took video-rate images of the microspheres. The Sony sensor, set to a 200 μs exposure time, imaged the fibroblast cells at intensities of about 140 and 5 mW/cm², the latter equivalent to about 300 photons per pixel and producing a shot-noise-limited image.
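Those numbers give a feel for the shot-noise penalty. Taking the article's roughly 300 photons per pixel at 5 mW/cm² and assuming the photon count scales linearly with intensity, the shot-noise-limited SNR drops from roughly 90 at full power to under 20 at the low-light setting (our arithmetic, not the paper's):

```python
import math

# ~300 photons per pixel at 5 mW/cm^2 is from the article; the 140 mW/cm^2
# figure assumes photon counts scale linearly with intensity.
# The shot-noise-limited SNR is sqrt(N) for N detected photons per pixel.
photons_low = 300
photons_high = photons_low * 140.0 / 5.0

print(f"5 mW/cm^2:   N ~ {photons_low},  shot-noise SNR ~ {math.sqrt(photons_low):.0f}")
print(f"140 mW/cm^2: N ~ {photons_high:.0f}, shot-noise SNR ~ {math.sqrt(photons_high):.0f}")
```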
Holo-UNet learns the variations along parallel intensity fringes in the region of a phase object in the field of view and, after training, removes superfluous, shot-noise-related intensity variations and improves fringe visibility. The network was trained separately for the microspheres and the fibroblasts by randomly selecting holograms (cropped to 512 × 512 pixels from the full 1920 × 1080 pixel frame) in both low- and high-optical-power versions. About 800 cropped images were used to train the network. The results show large improvements in the shot-noise-limited images (see figure).
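The training-data preparation the article describes, matched 512 × 512 crops taken at random from low- and high-light 1920 × 1080 holograms, might be assembled along the lines of the sketch below; the function, array names, and number of crops per frame are illustrative, not the authors' code.

```python
import numpy as np

def random_paired_crops(low_light, high_light, size=512, n_crops=8, rng=None):
    """Cut matching patches from a low-light hologram and its well-lit
    counterpart so the network sees aligned noisy/clean pairs.
    Both inputs are 2-D arrays of the same shape (e.g. 1080 x 1920)."""
    rng = rng or np.random.default_rng()
    h, w = low_light.shape
    pairs = []
    for _ in range(n_crops):
        top = rng.integers(0, h - size + 1)
        left = rng.integers(0, w - size + 1)
        pairs.append((low_light[top:top + size, left:left + size],
                      high_light[top:top + size, left:left + size]))
    return pairs

# Illustrative usage with stand-in data in place of recorded holograms.
low = np.random.rand(1080, 1920).astype(np.float32)
high = np.random.rand(1080, 1920).astype(np.float32)
patches = random_paired_crops(low, high, rng=np.random.default_rng(0))
print(len(patches), patches[0][0].shape)   # 8 crops of 512 x 512 each
```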
The study results could have important implications for biological imaging, as living cells are very sensitive to light and easily damaged. “Our method can be used to track cells over long periods of time under almost complete darkness without worrying about light damage to the cells,” says Lee. “We can now also record holograms of live cells in less than a hundredth of a second with very little light, and see events like cell division with much greater clarity.”
REFERENCE
1. Z. Zhang et al., Biomed. Opt. Express (2020); https://doi.org/10.1364/BOE.395302.