Generative adversarial networks (GANs) have recently gained notoriety for their ability to produce “deep fakes”: images, video, and audio that convincingly mimic reality. For example, fake social media accounts have been populated with realistic GAN-generated profile photos.
The ability of artificial intelligence (AI) models to generate realistic images can also be put to more productive use, such as training AI routines for image segmentation. It can likewise be used to “sharpen” real images, enhancing their detail beyond the optical resolution of the imaging system. Now, researchers at Harvard University (Cambridge, MA) and the University of Massachusetts-Dartmouth (North Dartmouth, MA) have demonstrated a GAN-based approach that allows budget-priced optics to produce high-resolution cellular images.
What your computer can do for you
Microscope images are essential elements of many front-line diagnostic procedures, but key information can be hidden in the fine details. Higher-resolution microscopes and advanced imaging techniques can reveal those details, but at significant cost, both in the instrument itself and in the physical infrastructure and personnel training needed to support image acquisition. Ideally, an inexpensive optical instrument would produce the finely detailed images necessary for clinical diagnoses. Professor Y. Shrike Zhang, at Harvard Medical School, recognized the potential value of “obtaining higher resolutions approaching those provided by some conventional, high-end microscopy—but without the high costs.” He turned to Daniel Shao and his team at the University of Massachusetts-Dartmouth, looking to leverage their expertise in AI image processing.
Shao’s team addressed the challenge with a GAN. The GAN approach plays two competing models off against each other. One, the discriminator, is trained with a set of real images. After training, when the discriminator is presented with an image file, it assigns the image a kind of “realism score”: a number between 0 and 1, with values near 1 indicating the discriminator judges the image to be real and values near 0 that it judges it to be fake. The second model is the generator. As its name suggests, a generator creates something; in this case, the “something” is an image. That image is presented to the discriminator, which returns a realism score. The generator trains itself to improve that score. The training cycles alternate, with the generator held fixed while the discriminator trains, and the discriminator held fixed during generator training. The result is a generator able to produce “realistic” images.
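The alternating training described above can be sketched with a toy one-dimensional GAN. Here the “real images” are just scalars drawn from a normal distribution, the generator is a linear map, and the discriminator is a single logistic unit; the setup, parameter values, and names are purely illustrative, not the researchers’ actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0      # generator parameters
w, c = 0.5, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    # --- Discriminator update (generator held fixed) ---
    real = rng.normal(4.0, 0.5, batch)       # "real" samples ~ N(4, 0.5)
    fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * real + c)           # realism score: should -> 1
    d_fake = sigmoid(w * fake + c)           # realism score: should -> 0
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c
    gw = np.mean(-(1 - d_real) * real + d_fake * fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator update (discriminator held fixed) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # Gradients of the non-saturating loss -log D(fake) w.r.t. a and b
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

# After training, generated samples should cluster near the real mean (4.0).
gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(gen_mean, 1))
```

The structure is the important part: in each cycle, one model’s parameters are frozen while the other’s are updated, exactly the alternation described in the article.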
Machine learning approaches generally require very large datasets to train any given model. To offset that requirement, the researchers began with an existing model, one trained on standard, non-cellular images. That existing model recovers high-quality images quickly and smoothly, but its lack of training on biomedical images means it may not recover cell-specific textures or patterns. So they trained an analog of that existing model on their own sets of regular microscopy images. They then combined the pre-existing model and their newly trained model to create a hybrid with both smooth, rapid performance and the ability to reconstruct features unique to biomedical, cell-based images.
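The article does not specify how the two models are combined. One simple way to picture a hybrid is a weighted blend of the two reconstructions; in this sketch, `generic_sr` stands in for the pretrained, non-cellular model and `cell_sr` for the microscopy-trained one. Both stand-in functions and the blend weight `alpha` are hypothetical.

```python
import numpy as np

def generic_sr(img):
    # Stand-in for the pretrained generic model: plain 2x upsampling.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def cell_sr(img):
    # Stand-in for the microscopy-trained model: the same upsampling
    # plus a (fake) cell-texture correction term.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return np.clip(up * 1.05 - 0.02, 0.0, 1.0)

def hybrid_sr(img, alpha=0.5):
    # Weighted combination of the two reconstructions.
    return alpha * generic_sr(img) + (1 - alpha) * cell_sr(img)

low_res = np.random.default_rng(1).random((32, 32))
high_res = hybrid_sr(low_res)
print(high_res.shape)  # (64, 64)
```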
Low cost, high resolution?
The team had previously developed a bare-bones mini-microscope, built from a conventional, inexpensive webcam. Spacers between the lens and CMOS sensor vary the system’s magnification in five steps from 2X to 40X. The hardware cost is less than ten dollars. They also used a “regular” optical microscope, a Zeiss Axio Observer D1. With both microscopes they acquired images of various cell types, including, for example, A549 human lung carcinoma cells and HepG2 human hepatocellular carcinoma cells. They downsampled the images and then used their hybrid GAN model to reconstruct high-resolution versions (see figure).
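The evaluation protocol just described, downsample an acquired image, reconstruct it, and keep the original as ground truth, can be sketched as follows. The `reconstruct` function is a nearest-neighbor placeholder standing in for the hybrid GAN model; the downsampling factor and function names are illustrative assumptions.

```python
import numpy as np

def downsample(img, factor=4):
    """Block-average the image by `factor` in each dimension."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def reconstruct(img, factor=4):
    """Placeholder for the GAN model: nearest-neighbor upsampling."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

ground_truth = np.random.default_rng(2).random((128, 128))  # acquired image
low_res = downsample(ground_truth)      # degraded input for the model
restored = reconstruct(low_res)         # same size as ground truth, ready to score
print(low_res.shape, restored.shape)    # (32, 32) (128, 128)
```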
They used two metrics to compare the performance of their new model with other super-resolution AI methods: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). “The PSNR measures the pixel-wise difference between super-resolved images and ground-truth high-resolution images,” explained Shao, “while SSIM indicates the degree to which the system produces ‘vivid, high-fidelity, and visually pleasant’ images that are preferred by human visual systems.” Although the performance of the new resolution enhancement model is not uniform over all cell types, it performed better than all six previous state-of-the-art models tested. That held true for all cell types, and for images acquired with both the regular microscope and the mini-microscope.
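Both metrics have compact definitions. The sketch below implements them for 8-bit images: PSNR in its standard form, and SSIM in its global (single-window) form with the usual constants; the researchers may well have used the windowed SSIM variant common in the literature.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim(ref, test, max_val=255.0):
    """Global SSIM with the standard stabilizing constants C1, C2."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

flat = np.zeros((64, 64), dtype=np.uint8)
flat_off = np.full((64, 64), 2, dtype=np.uint8)       # uniform error of 2 levels
grad = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
print(round(psnr(flat, flat_off), 2))  # 42.11 dB
print(round(ssim(grad, grad), 3))      # 1.0 for identical images
```

Higher is better for both: PSNR grows without bound as the reconstruction approaches the ground truth, while SSIM is capped at 1.0 for a perfect match.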
Although the team has not yet attempted to improve the resolution of as-acquired images, their results are promising enough that Zhang is excited about “the ability to achieve high resolutions using a very cheap mini-microscope; critical in many resource-limited settings where they simply cannot afford a high-end scope.” With additional refinement of the algorithm and further practical demonstrations, he said, “The ultimate stage might be an inexpensive and powerful system of optimal hardware/software.”