Toward Depth Estimation Using Mask-Based Lensless Cameras
Recently, coded masks have been used to demonstrate a thin form-factor
lensless camera, FlatCam, in which a mask is placed immediately on top of a
bare image sensor. In this paper, we present an imaging model and algorithm to
jointly estimate depth and intensity information in the scene from one or
more FlatCams. We use a light field representation to model the mapping of a
3D scene onto the sensor, in which light rays from different depths yield
different modulation patterns. We present a greedy depth pursuit algorithm to
search the 3D volume and estimate the depth and intensity of each pixel within
the camera field-of-view. We present simulation results to analyze the
performance of our proposed model and algorithm with different FlatCam
settings.
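The greedy depth pursuit idea can be illustrated with a matching-pursuit-style sketch: pick the (depth, pixel) atom whose modulation pattern best correlates with the measurement residual, estimate its intensity by least squares, and subtract. The per-depth transfer matrices below are random stand-ins, not the paper's actual FlatCam mask model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensor, n_scene, n_depths = 64, 32, 4
# Hypothetical per-depth transfer matrices: each depth plane modulates
# the mask shadow differently (a stand-in for the FlatCam imaging model).
A = [rng.standard_normal((n_sensor, n_scene)) for _ in range(n_depths)]

# Ground-truth scene: two point sources at known depths.
truth = [(1, 5, 2.0), (3, 20, 1.5)]   # (depth index, pixel index, intensity)
y = sum(a * A[d][:, p] for d, p, a in truth)

def greedy_depth_pursuit(y, A, n_iter):
    """Greedily pick the (depth, pixel) atom most correlated with the
    residual, estimate its intensity by least squares, and subtract."""
    residual = y.copy()
    estimates = []
    for _ in range(n_iter):
        best = None
        for d, Ad in enumerate(A):
            corr = Ad.T @ residual          # correlation with every pixel at depth d
            p = int(np.argmax(np.abs(corr)))
            score = abs(corr[p]) / np.linalg.norm(Ad[:, p])
            if best is None or score > best[0]:
                best = (score, d, p)
        _, d, p = best
        col = A[d][:, p]
        a = float(col @ residual) / float(col @ col)  # least-squares intensity
        residual = residual - a * col
        estimates.append((d, p, a))
    return estimates

est = greedy_depth_pursuit(y, A, n_iter=2)
print(sorted((d, p) for d, p, _ in est))
```

With well-separated atoms, both (depth, pixel) pairs are recovered; the same loop extended over a 3D volume of candidate depths is the spirit of the search described in the abstract.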
Analysis and Optimization of Aperture Design in Computational Imaging
There is growing interest in the use of coded aperture imaging systems for a
variety of applications. Using an analysis framework based on mutual
information, we examine the fundamental limits of such systems---and the
associated optimum aperture coding---under simple but meaningful propagation
and sensor models. Among other results, we show that when thermal noise
dominates, spectrally-flat masks, which have 50% transmissivity, are optimal,
but that when shot noise dominates, randomly generated masks with lower
transmissivity offer greater performance. We also provide comparisons to
classical pinhole cameras.
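A spectrally flat mask with roughly 50% transmissivity can be illustrated with a binary maximum-length sequence, a classic example of such a code (a toy 1D sketch, not the paper's optimization):

```python
import numpy as np

def lfsr_msequence(taps, n_bits):
    """Binary maximum-length sequence from a Fibonacci LFSR.
    taps: feedback tap positions of a primitive polynomial."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

# x^4 + x + 1 is primitive, giving a length-15 m-sequence.
mask = lfsr_msequence(taps=(4, 1), n_bits=4)
print("transmissivity:", mask.mean())   # 8/15, close to 50% open

# In +/-1 form, an m-sequence has a perfectly flat power spectrum
# away from DC -- the "spectrally flat" property the abstract refers to.
spec = np.abs(np.fft.fft(2 * mask - 1)) ** 2
print(spec)
```

Every non-DC bin of `spec` equals N + 1 = 16, so no spatial frequency of the scene is attenuated more than any other, which is what makes such codes attractive when thermal noise dominates.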
4D Frequency Analysis of Computational Cameras for Depth of Field Extension
Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography---wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems, and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show either that they spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.
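The link between the defocus kernel's power spectrum and deconvolution quality can be sketched in 1D with a box-blur stand-in for defocus (a toy model, not the lattice-focal design itself):

```python
import numpy as np

def defocus_kernel(radius, n=64):
    """1D stand-in for a defocus blur: a normalized box of width 2r+1
    (radius 0 is a perfectly focused delta)."""
    k = np.zeros(n)
    k[: 2 * radius + 1] = 1.0 / (2 * radius + 1)
    return k

def worst_case_power(radius, n=64):
    """Minimum of the kernel's power spectrum over nonzero frequencies.
    Deconvolution amplifies noise by roughly 1/spectrum, so this floor
    governs restoration quality at that defocus level."""
    spec = np.abs(np.fft.fft(defocus_kernel(radius, n))) ** 2
    return spec[1:].min()

print([worst_case_power(r) for r in (0, 2, 8)])  # drops sharply with defocus
```

The in-focus kernel keeps unit power at every frequency, while larger blurs develop near-nulls that deconvolution cannot recover; extended-DOF designs like the one in the abstract aim to keep this spectral floor high across the whole depth range.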