Compressive light field photography using overcomplete dictionaries and optimized projections
Light field photography has gained significant research interest over the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken to acquire a high-resolution light field. We propose a compressive light field camera architecture that allows higher-resolution light fields to be recovered from a single image than previously possible. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
Funding: Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); United States. Defense Advanced Research Projects Agency (DARPA SCENICC program); Alfred P. Sloan Foundation (Sloan Research Fellowship); United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award).
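The reconstruction step described above — recovering a sparse coefficient vector over a dictionary of light field atoms from a single coded 2D projection — can be illustrated with a toy numpy sketch. This is not the authors' implementation: the dictionary, coding matrix, and all sizes are random placeholders, and Orthogonal Matching Pursuit stands in for whatever sparse solver the paper actually uses.

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: find a k-sparse alpha with A @ alpha ~ y."""
    residual = y.copy()
    support = []
    alpha = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on the current support
    alpha[support] = coef
    return alpha

# Toy sizes (hypothetical): a 4D light-field patch flattened to n samples,
# optically coded down to m sensor measurements.
rng = np.random.default_rng(0)
n, m, atoms, k = 64, 16, 128, 3
D = rng.standard_normal((n, atoms))      # overcomplete dictionary of "light field atoms"
D /= np.linalg.norm(D, axis=0)
Phi = rng.standard_normal((m, n))        # coded 2D projection (the optical design)
alpha_true = np.zeros(atoms)
alpha_true[rng.choice(atoms, k, replace=False)] = rng.standard_normal(k)
x = D @ alpha_true                       # ground-truth light-field patch
y = Phi @ x                              # single coded 2D measurement
alpha_hat = omp(Phi @ D, y, k)           # sparse decode through the combined system
x_hat = D @ alpha_hat                    # recovered light-field patch
```

The key structural point is that the solver operates on the product `Phi @ D`, i.e. the optical code and the learned dictionary are treated as one sensing operator.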
A switchable light field camera architecture with Angle Sensitive Pixels and dictionary-based sparse coding
We propose a flexible light field camera architecture that lies at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, contrary to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
Funding: National Science Foundation (NSF Grants IIS-1218411 and IIS-1116452); MIT Media Lab Consortium; National Science Foundation (NSF Graduate Research Fellowship); Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); Alfred P. Sloan Foundation (Research Fellowship); United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award).
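The switchable decode — the same single measurement processed either linearly (fast, low resolution) or with a sparsity constraint (slow, high resolution) — can be sketched abstractly. This toy uses a random matrix in place of the Angle Sensitive Pixel response and plain ISTA soft-thresholding as a stand-in for the paper's sparsity-constrained optimization; sizes and parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 64
A = rng.standard_normal((m, n))     # stand-in for the ASP sensor's angular response
x = np.zeros(n)
x[:4] = 1.0                         # a scene with simple (sparse) structure
y = A @ x                           # one and the same sensor measurement

# Decode path 1 -- fast linear processing: minimum-norm solution via pseudoinverse.
x_linear = np.linalg.pinv(A) @ y

# Decode path 2 -- sparsity-constrained optimization: iterative soft-thresholding
# (ISTA) on the same measurement y, a generic stand-in for the paper's solver.
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the data-term gradient
lam = 0.1                           # sparsity weight (arbitrary toy value)
x_sparse = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x_sparse - y)
    z = x_sparse - grad / L
    x_sparse = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

Both decoders consume the identical vector `y`; only the amount of computation spent on it differs, which is the architectural point of the abstract.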
Fast Sublinear Sparse Representation using Shallow Tree Matching Pursuit
Sparse approximation using highly overcomplete dictionaries is a state-of-the-art tool for many imaging applications, including denoising, super-resolution, compressive sensing, light-field analysis, and object recognition. Unfortunately, the applicability of such methods is severely hampered by the computational burden of sparse approximation: these algorithms are linear or super-linear in both the data dimensionality and the size of the dictionary. We propose a framework for learning the hierarchical structure of overcomplete dictionaries that enables fast computation of sparse representations. Our method builds on tree-based strategies for nearest-neighbor matching and introduces domain-specific enhancements that are highly efficient for the analysis of image patches. Contrary to most popular methods for building spatial data structures, our methods rely on shallow, balanced trees with relatively few layers. We present extensive experiments on several applications, such as image denoising/super-resolution and compressive video/light-field sensing, where we achieve a 100-1000x speedup in practice (with less than 1 dB loss in accuracy).
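The core idea — pruning the atom search at each pursuit iteration by first matching against a shallow layer of cluster surrogates — can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the branch assignment here is a naive round-robin (the real method learns the hierarchy), and plain matching pursuit replaces the authors' solver.

```python
import numpy as np

def build_shallow_tree(D, branches=8):
    """Group dictionary atoms into a few clusters (a one-level 'shallow tree').
    Each cluster's normalized centroid acts as a cheap surrogate for its atoms."""
    n_atoms = D.shape[1]
    labels = np.arange(n_atoms) % branches   # naive assignment; the real method learns this
    centroids = np.stack([D[:, labels == b].mean(axis=1) for b in range(branches)], axis=1)
    centroids /= np.linalg.norm(centroids, axis=0)
    return labels, centroids

def tree_matching_pursuit(D, y, k, labels, centroids):
    """Matching pursuit that scores only the atoms inside the best-matching
    branch at each iteration, instead of the whole dictionary."""
    residual = y.copy()
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        b = int(np.argmax(np.abs(centroids.T @ residual)))  # cheap branch selection
        idx = np.where(labels == b)[0]
        scores = D[:, idx].T @ residual                     # exact scoring within branch only
        j = idx[np.argmax(np.abs(scores))]
        c = D[:, j] @ residual
        alpha[j] += c
        residual = residual - c * D[:, j]
    return alpha

# Toy usage: a random normalized dictionary and a signal near one atom.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
labels, centroids = build_shallow_tree(D)
y = D[:, 5] + 0.01 * rng.standard_normal(32)
alpha = tree_matching_pursuit(D, y, 3, labels, centroids)
```

Per iteration this scores `branches + atoms/branches` inner products rather than all `atoms`, which is the source of the sublinear behavior the title refers to.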
Toward Depth Estimation Using Mask-Based Lensless Cameras
Recently, coded masks have been used to demonstrate a thin form-factor lensless camera, FlatCam, in which a mask is placed immediately on top of a bare image sensor. In this paper, we present an imaging model and algorithm to jointly estimate depth and intensity information in the scene from one or multiple FlatCams. We use a light field representation to model the mapping of the 3D scene onto the sensor, in which light rays from different depths yield different modulation patterns. We present a greedy depth pursuit algorithm to search the 3D volume and estimate the depth and intensity of each pixel within the camera's field of view. We present simulation results to analyze the performance of our proposed model and algorithm under different FlatCam settings.
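The depth-dependent forward model above — each depth plane inducing its own mask-modulation pattern — suggests a simple greedy search: fit the intensities under each candidate depth and keep the depth whose model best explains the measurement. The sketch below simplifies the paper's per-pixel depth pursuit to a single global depth, with random matrices standing in for the actual FlatCam transfer functions.

```python
import numpy as np

# Hypothetical sketch: each candidate depth d has its own transfer matrix A[d]
# mapping scene intensities at that depth plane to sensor measurements.
rng = np.random.default_rng(2)
m, n, n_depths = 40, 20, 5
A = rng.standard_normal((n_depths, m, n))   # one modulation model per depth plane

true_depth = 3
x_true = rng.random(n)
y = A[true_depth] @ x_true                  # single FlatCam measurement

# Greedy depth search (simplified to one global depth): solve for intensity by
# least squares under each depth hypothesis; keep the best-fitting hypothesis.
best = None
for d in range(n_depths):
    x_hat, *_ = np.linalg.lstsq(A[d], y, rcond=None)
    err = np.linalg.norm(A[d] @ x_hat - y)
    if best is None or err < best[0]:
        best = (err, d, x_hat)
err, depth_hat, intensity_hat = best
```

The paper's greedy depth pursuit applies this explain-the-measurement test per pixel over the 3D volume rather than once globally, but the selection principle is the same.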
Fast Disparity Estimation from a Single Compressed Light Field Measurement
The abundant spatial and angular information in light fields has enabled the development of multiple disparity estimation approaches. However, acquiring light fields entails high storage and processing costs, limiting the use of this technology in practical applications. To overcome these drawbacks, compressive sensing (CS) theory has enabled optical architectures that acquire a single coded light field measurement. This measurement is decoded using an optimization algorithm or a deep neural network, both of which incur high computational cost. The traditional approach to disparity estimation from compressed light fields first recovers the entire light field and then applies a post-processing step, resulting in long runtimes. In contrast, this work proposes fast disparity estimation from a single compressed measurement, omitting the recovery step required by traditional approaches. Specifically, we propose to jointly optimize an optical architecture for acquiring a single coded light field snapshot and a convolutional neural network (CNN) for estimating the disparity maps. Experimentally, the proposed method estimates disparity maps comparable to those obtained from light fields reconstructed with deep learning approaches. Furthermore, the proposed method is 20 times faster in training and inference than the best method that estimates disparity from reconstructed light fields.
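The optical encoder in this pipeline — a coded aperture that modulates each angular view before they sum into one snapshot — has a simple forward model, sketched below with toy sizes. Only the encoder is shown; in the paper the code is a learnable parameter optimized jointly with a CNN that maps the snapshot directly to a disparity map, and all names and dimensions here are placeholders.

```python
import numpy as np

# Hypothetical sketch of the single-snapshot sensing model: a binary coded
# aperture modulates each angular view before the views sum on the sensor.
rng = np.random.default_rng(3)
views, H, W = 9, 16, 16              # 3x3 angular views, toy spatial size
L = rng.random((views, H, W))        # 4D light field with the angular dims flattened
mask = rng.integers(0, 2, size=(views, H, W)).astype(float)  # the learnable code
y = (mask * L).sum(axis=0)           # single coded 2D measurement

# In the paper, `mask` is trained jointly with a CNN that maps y straight to a
# disparity map, skipping light-field recovery entirely; only the encoder is
# sketched here.
```

Because the disparity CNN consumes `y` directly, the expensive inverse problem of recovering `L` never has to be solved, which is where the reported 20x speedup comes from.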