Details: |
Resolution vs. field-of-view (FOV) is an age-old trade-off in the design of imaging systems such as
cameras, microscopes, and telescopes. This trade-off arises because conventional system
designs inherently follow the point-to-point imaging model of the human eye. A common solution for
extending the FOV without loss of resolution is to scan either the object (e.g., the slide scanners used by
pathologists) or the imaging system itself (e.g., panorama photography). We show that introducing a
designer phase mask in the aperture of a camera allows one to gather scrambled data that includes
information from within as well as from outside the physical sensor boundary. Unscrambling
this coded raw image record to obtain an extended-FOV image is an incomplete-data problem that can
be solved practically via a custom-made sparse optimization algorithm. The resultant imaging
system is analogous to a super-resolution structured illumination microscope (SIM), except that the
roles of real space and Fourier space are exchanged. Whereas SIM systems use real-space structuring
of illumination to expand the Fourier-space coverage of the image beyond the diffraction limit at a
fixed FOV, our camera design retains diffraction-limited image resolution while effectively offering an
extended detector area. Phase structuring of the Fourier space, as in our design, offers the additional
advantage of single-shot operation, unlike SIM systems, which require
multiple structured-illumination exposures. I will conclude the talk by showing that the seemingly extra
information needed to form an extended-FOV image arises from the bit-depth dimension associated with
the sensor pixels. In the process, we introduce a raw-data-domain empirical information measure
that incorporates the usual space-bandwidth product as well as the pixel bit depth. This
information measure is useful for assessing or predicting the performance enhancement offered by
generic computational imaging systems (including quantum imaging systems). |
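As a rough illustration of the raw-data information measure described above, the sketch below takes the measure to be the space-bandwidth product (here approximated by the pixel count of a Nyquist-sampled sensor) multiplied by the pixel bit depth. The function name and the exact formula are illustrative assumptions, not the definition used in the talk.

```python
# Illustrative sketch only: assumes the empirical information measure is
# (space-bandwidth product) x (bit depth), with the space-bandwidth
# product approximated by the sensor pixel count.

def raw_data_information(n_x: int, n_y: int, bit_depth: int) -> int:
    """Return an empirical raw-data information content in bits.

    n_x, n_y   : sensor pixel counts along the two axes
    bit_depth  : bits per pixel of the digitized raw record
    """
    return n_x * n_y * bit_depth

# Compare an 8-bit and a 12-bit sensor of identical pixel count:
bits_8 = raw_data_information(1024, 1024, 8)
bits_12 = raw_data_information(1024, 1024, 12)

# The 12-bit record carries 1.5x the raw bits at the same pixel count,
# suggesting headroom that a computational design could trade for an
# extended FOV at fixed resolution.
print(bits_8, bits_12, bits_12 / bits_8)
```

Under this reading, the measure depends on the raw data record alone, so it can be compared across conventional and computational designs without reference to a specific reconstruction algorithm.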