Full-Color Imaging with Learned Meta-Optics

Researchers at Tunoptix, the University of Washington, and Princeton University have demonstrated that combining machine-learning algorithms with meta-optics – 2D arrays of subwavelength scatterers – can miniaturize imaging systems without sacrificing performance. Specialized scatterer designs that compensate for chromatic dispersion have been used for broadband focusing, but such lenses are fundamentally limited to very small physical apertures and low numerical apertures.

These limits can potentially be overcome by adopting a computational-imaging paradigm, in which the meta-optic does not directly produce an image but instead works in concert with post-processing computation to recover a high-quality one. By designing the meta-optic to blur all colors identically at the sensor plane, the researchers can calibrate this blur once and then apply a low-latency deconvolution routine to extract a full-color image from the raw sensor data. Because both the meta-optic model and the reconstruction algorithm are fully differentiable, the structural parameters of the meta-optic and the weights of the computational reconstruction network were designed jointly, driven by an image-quality loss at the end of the imaging pipeline. This end-to-end approach yielded image quality comparable to that of a compound optic with six refractive lenses while reducing camera volume by five orders of magnitude.
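
To make the reconstruction step concrete, the Python sketch below applies a simple Wiener deconvolution to a raw measurement using a calibrated, spectrally invariant point spread function (PSF). This is a minimal stand-in rather than the authors' pipeline: the function name, the snr regularization parameter, and the use of a fixed Wiener filter are assumptions made for illustration, whereas the published system uses a learned reconstruction network optimized jointly with the meta-optic.

import numpy as np

def wiener_deconvolve(measurement, psf, snr=100.0):
    """Recover a full-color image from a blurred sensor measurement.

    measurement : (H, W, 3) float array, raw sensor data blurred by the optic.
    psf         : (H, W) float array, calibrated point spread function,
                  assumed identical for all color channels (the spectrally
                  invariant blur the meta-optic is designed to produce).
    snr         : assumed signal-to-noise ratio used as Wiener regularization.
    """
    measurement = np.asarray(measurement, dtype=np.float64)
    otf = np.fft.fft2(np.fft.ifftshift(psf))                 # optical transfer function
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)   # Wiener filter in Fourier space
    recovered = np.empty_like(measurement)
    for c in range(measurement.shape[-1]):                   # same filter for R, G, B
        recovered[..., c] = np.real(
            np.fft.ifft2(np.fft.fft2(measurement[..., c]) * wiener)
        )
    return np.clip(recovered, 0.0, 1.0)

Because the same blur applies to every color channel, a single calibrated PSF and a single filter suffice, which is what keeps the reconstruction low-latency. In the published end-to-end design, this fixed filter is replaced by a trainable reconstruction network whose weights are optimized together with the meta-optic's structural parameters under the image-quality loss.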
