Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths
We report the first computational super-resolved, multi-camera integral
imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR
Lepton cameras was assembled, and computational super-resolution and
integral-imaging reconstruction employed to generate video with light-field
imaging capabilities, such as 3D imaging and recognition of partially obscured
objects, while also providing a four-fold increase in effective pixel count.
This approach to high-resolution imaging enables a fundamental reduction in the
track length and volume of an imaging system, while also enabling use of
low-cost lens materials.
Comment: Supplementary multimedia material in http://dx.doi.org/10.6084/m9.figshare.530302
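The recognition of partially obscured objects mentioned above rests on synthetic-aperture refocusing across the camera array. A minimal sketch of that idea, assuming integer-pixel disparities and illustrative names (`refocus`, `baselines` are not from the paper):

```python
import numpy as np

def refocus(views, baselines, depth):
    # Synthetic-aperture refocusing (illustrative sketch): shift each
    # camera's image by its baseline scaled by the inverse focus depth,
    # then average. Content on the focused plane aligns and stays sharp,
    # while foreground occluders blur across the synthetic aperture.
    acc = np.zeros(views[0].shape, dtype=float)
    for img, (bx, by) in zip(views, baselines):
        sx = int(round(bx / depth))
        sy = int(round(by / depth))
        acc += np.roll(img, shift=(sy, sx), axis=(0, 1))
    return acc / len(views)
```

With enough cameras, an occluder covers the focused object in only a few of the shifted views, so its contribution is averaged down rather than blocking the object entirely.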
Free-Viewpoint Images Captured Using Phase-Shifting Synthetic Aperture Digital Holography
Free-viewpoint images obtained from phase-shifting synthetic aperture digital holography are presented for scenes that include multiple objects and a concave object. The synthetic aperture technique is used to enlarge the effective sensor size, making it possible to widen the range of perspective change in the numerical reconstruction. The lensless Fourier setup and its aliasing-free zone are used to avoid aliasing errors arising at the sensor edge and to overcome a common problem in digital holography, namely a narrow field of view. A change of viewpoint is realized by a double numerical propagation and by clipping the wave field with a given pupil. The computational complexity of calculating an image at a given perspective from the base complex-valued image is on the order of a double fast Fourier transform. The experimental results illustrate the natural change of appearance for both multiple objects and a concave object.
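The double-propagation viewpoint change can be sketched with angular-spectrum propagation. This is a simplified illustration, not the paper's method: the lensless Fourier geometry and exact kernel may differ, and this version spends a forward and inverse FFT per propagation.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    # Angular-spectrum propagation of a sampled complex wave field over
    # distance z (evanescent components are passed through unchanged here
    # for simplicity). One forward and one inverse FFT per call.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def view_through_pupil(base_field, wavelength, dx, z, pupil_mask):
    # Double propagation: base field -> pupil plane, clip by the pupil,
    # then back-propagate to form the image for that viewpoint.
    return propagate(propagate(base_field, wavelength, dx, z) * pupil_mask,
                     wavelength, dx, -z)
```

Moving the pupil mask across the pupil plane selects different bundles of rays from the recorded wave field, which is what produces the change of perspective.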
Variational Disparity Estimation Framework for Plenoptic Image
This paper presents a computational framework for accurately estimating the
disparity map of plenoptic images. The proposed framework is based on the
variational principle and provides intrinsic sub-pixel precision. The
light-field motion tensor introduced in the framework allows us to combine advanced robust data terms and provides explicit treatment of the different color channels. A warping strategy is embedded in our framework to tackle the large-displacement problem. We also show that applying a simple regularization term and guided median filtering can greatly enhance the accuracy of the displacement field in occluded areas. We demonstrate the excellent performance of the proposed framework through extensive comparisons with the Lytro software and contemporary approaches on both synthetic and real-world datasets.
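The occlusion-cleanup idea behind guided median filtering can be sketched with a brute-force weighted median, where weights come from guide-image similarity. The Gaussian weighting and all names below are assumptions standing in for the paper's exact filter:

```python
import numpy as np

def guided_weighted_median(disp, guide, radius=2, sigma=0.1):
    # Replace each disparity value by the weighted median of its
    # neighbourhood; neighbours whose guide-image value resembles the
    # centre pixel get high weight, so edges in the guide are respected
    # while impulse-like disparity errors near occlusions are removed.
    h, w = disp.shape
    out = disp.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = disp[y0:y1, x0:x1].ravel()
            diff = (guide[y0:y1, x0:x1] - guide[y, x]).ravel()
            wgt = np.exp(-diff ** 2 / (2.0 * sigma ** 2))
            order = np.argsort(patch)
            cdf = np.cumsum(wgt[order])
            out[y, x] = patch[order][np.searchsorted(cdf, 0.5 * cdf[-1])]
    return out
```

A practical implementation would use a histogram-based or guided-filter formulation for speed; the double loop here is purely for clarity.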
Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions
Depth estimation is a fundamental problem for light field photography
applications. Numerous methods have been proposed in recent years, which either
focus on crafting cost terms for more robust matching, or on analyzing the
geometry of scene structures embedded in the epipolar-plane images. Significant
improvements have been made in terms of overall depth estimation error;
however, current state-of-the-art methods still show limitations in handling
intricate occluding structures and complex scenes with multiple occlusions. To
address these challenging issues, we propose a very effective depth estimation
framework which focuses on regularizing the initial label confidence map and
edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. A series of shrinkage/reinforcement operations is then applied to the label confidence map and edge strength weights over the POBR. We show that after these weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimation than state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.
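A low-complexity weighted least squares refinement of the kind mentioned can be sketched as Jacobi iterations on a confidence-and-edge-weighted system. All names and the 4-connected model below are illustrative, not the paper's exact formulation:

```python
import numpy as np

def wls_refine(depth, conf, wh, wv, lam=1.0, iters=500):
    # Jacobi iterations for the weighted-least-squares system
    #   (C + lam * L) u = C d,
    # where C holds per-pixel label confidences and L is a 4-connected
    # graph Laplacian built from edge-strength weights (wh: shape
    # (h, w-1) for horizontal edges, wv: shape (h-1, w) for vertical).
    # Pixels with low confidence are filled in from their neighbours,
    # while strong edge weights let depth discontinuities survive.
    d = depth.astype(float)
    u = d.copy()
    for _ in range(iters):
        num = conf * d
        den = conf.astype(float)
        num[:, 1:] += lam * wh * u[:, :-1]; den[:, 1:] += lam * wh
        num[:, :-1] += lam * wh * u[:, 1:]; den[:, :-1] += lam * wh
        num[1:, :] += lam * wv * u[:-1, :]; den[1:, :] += lam * wv
        num[:-1, :] += lam * wv * u[1:, :]; den[:-1, :] += lam * wv
        u = num / den
    return u
```

This is where the confidence shrinkage over the POBR pays off: an unreliable label with its confidence driven to zero simply inherits a consensus value from its neighbours.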
A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map Super-Resolution
High-resolution depth maps can be inferred from low-resolution depth
measurements and an additional high-resolution intensity image of the same
scene. To that end, we introduce a bimodal co-sparse analysis model, which is
able to capture the interdependency of registered intensity and depth
information. This model is based on the assumption that the co-supports of
corresponding bimodal image structures are aligned when computed by a suitable
pair of analysis operators. No analytic form of such operators exists, so we propose a method for learning them from a set of registered training signals.
This learning process is done offline and returns a bimodal analysis operator
that is universally applicable to natural scenes. We use this to exploit the
bimodal co-sparse analysis model as a prior for solving inverse problems, which
leads to an efficient algorithm for depth map super-resolution.
Comment: 13 pages, 4 figures
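One way such a co-sparse prior enters an iterative solver is through a joint shrinkage step on the analysis coefficients of both modalities. The sketch below is a generic proximal-gradient-style step, not the paper's algorithm, and every operator and parameter name is a placeholder:

```python
import numpy as np

def cosparse_shrink_step(depth, intensity, Om_d, Om_i, step=0.1, tau=0.05):
    # One step under a joint (group) sparsity penalty on the analysis
    # coefficients of both modalities: depth coefficients are shrunk only
    # where the paired intensity coefficients are also small, mimicking
    # aligned co-supports. Om_d / Om_i stand in for the learned bimodal
    # analysis-operator pair.
    cd = Om_d @ depth
    ci = Om_i @ intensity
    joint = np.sqrt(cd ** 2 + ci ** 2)              # group magnitude
    shrink = np.maximum(1.0 - tau / np.maximum(joint, 1e-12), 0.0)
    # move depth so its analysis coefficients approach the shrunk ones
    return depth - step * (Om_d.T @ (cd - shrink * cd))
```

The key property is visible in the shrinkage factor: a depth edge is preserved whenever the intensity image shows structure at the same location, which is exactly the aligned-co-support assumption.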
Light field super resolution through controlled micro-shifts of light field sensor
Light field cameras enable new capabilities, such as post-capture refocusing
and aperture control, through capturing directional and spatial distribution of
light rays in space. Micro-lens array based light field camera design is often
preferred due to its light transmission efficiency, cost-effectiveness and
compactness. One drawback of micro-lens array based light field cameras is low spatial resolution, because a single sensor is shared to
capture both spatial and angular information. To address the low spatial
resolution issue, we present a light field imaging approach, where multiple
light fields are captured and fused to improve the spatial resolution. For each
capture, the light field sensor is shifted by a pre-determined fraction of a
micro-lens size using an XY translation stage for optimal performance.
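The fusion step can be illustrated in the ideal noise-free case, where exact half-pixel shifts make the four captures interleave directly. Real captures would additionally need registration and deconvolution; the 2x2 pattern and names here are assumptions:

```python
import numpy as np

def interleave_captures(captures):
    # captures[i][j] is the low-resolution image taken with the sensor
    # shifted by (i/2, j/2) of a micro-lens pitch. Interleaving the four
    # captures doubles the sampling rate in each direction, i.e. a
    # four-fold increase in pixel count.
    h, w = captures[0][0].shape
    hi = np.empty((2 * h, 2 * w), dtype=captures[0][0].dtype)
    for i in range(2):
        for j in range(2):
            hi[i::2, j::2] = captures[i][j]
    return hi
```

Because the sub-pixel shifts are imposed mechanically rather than estimated from the images, the registration problem that limits many multi-frame super-resolution methods largely disappears.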
- …