Scientists have created a camera lens about the size of a grain of salt that could take higher-quality images of the inside of the body than current technology can.
Today, cameras are everywhere. They’re on laptops, phones, doorbells and more, and are so small thanks to the miniaturization of light intensity sensors.
But traditional imaging systems rely on a series of curved glass or plastic lenses to focus light and keep images from blurring, and these impose a limit on how small cameras can physically be.
Another approach is meta-optics. These make use of hundreds of thousands of tiny “nano-antennas,” small structures that can capture and re-emit light at a scale of nanometers. For reference, a sheet of paper is about 100,000 nanometers thick.
Cameras using this technology have been made before, but their images have typically been of poor quality or covered only narrow fields of view. Now, researchers have proposed what they call "neural nano-optics," a combination of meta-optics with machine learning.
The researchers behind it say it is capable of producing full-color photos with a 40-degree field of view thanks in part to a deep learning computer algorithm that helps construct the images.
The quality of the images is "on par" with those taken by a commercially available compound lens 550,000 times larger in volume, the team says in its study.
Accompanying images do appear to show that the neural nano-optics images are similar to the compound lens ones, which include photos of bowls of fruit, a chameleon, and a flower.
According to a study outlining the technology, the tiny camera "could facilitate new capabilities" in medical imaging inside the body, including brain imaging. The study even notes that many such cameras could be scattered like "optical 'dust'" onto surfaces.
The new optical system containing the nano-antennas is just half a millimeter wide, according to a press release from the Princeton University Engineering School.
Ethan Tseng, a computer science Ph.D. student at the university who co-led the research, said in the press release: “It’s been a challenge to design and configure these little microstructures to do what you want.
“For this specific task of capturing large field of view RGB images, it’s challenging because there are millions of these little microstructures, and it’s not clear how to design them in an optimal way.”
To get around this, co-lead author Shane Colburn of the University of Washington's Department of Electrical and Computer Engineering designed a computer simulation that automatically tested different configurations, helping the team identify ones that worked.
Their research, titled “Neural nano-optics for high-quality thin lens imaging,” was published in the journal Nature Communications on November 29.