Variational multi-image stereo matching
In two-view stereo matching, the disparity of occluded pixels cannot be estimated accurately in a direct way: it must be inferred through, e.g., regularisation. When capturing scenes using a plenoptic camera or a camera dolly on a track, more than two input images are available, and, contrary to the two-view case, pixels in the central view will only very rarely be occluded in all of the other views. By explicitly handling occlusions, we can limit the depth estimation of a pixel p⃗ to only those cameras that actually observe p⃗. We do this by extending variational stereo matching to multiple views, and by explicitly handling occlusion on a view-by-view basis. The resulting depth maps are shown to be sharper and less noisy than those of typical recent techniques working on light fields.
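The view-by-view occlusion handling described above can be illustrated with a toy cost aggregation: the matching cost for a pixel is averaged only over the views believed to observe it. This is a minimal sketch with hypothetical names and data, not the paper's implementation:

```python
import numpy as np

def aggregated_cost(center_val, view_vals, visible):
    """Average absolute intensity difference over non-occluded views only."""
    view_vals = np.asarray(view_vals, dtype=float)
    visible = np.asarray(visible, dtype=bool)
    if not visible.any():
        return np.inf  # no view observes the pixel; depth must be regularised
    return np.abs(view_vals[visible] - center_val).mean()

# Toy example: four side views; views 1 and 3 are occluded at this pixel,
# so the large mismatches 0.9 and 0.1 do not corrupt the cost.
cost = aggregated_cost(0.5, [0.52, 0.9, 0.48, 0.1], [True, False, True, False])
```

Occluded views would otherwise dominate the cost and bias the depth estimate; masking them keeps the cost consistent with the views that actually see the point.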
Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions
Depth estimation is a fundamental problem for light field photography
applications. Numerous methods have been proposed in recent years, which either
focus on crafting cost terms for more robust matching, or on analyzing the
geometry of scene structures embedded in the epipolar-plane images. Significant
improvements have been made in terms of overall depth estimation error;
however, current state-of-the-art methods still show limitations in handling
intricate occluding structures and complex scenes with multiple occlusions. To
address these challenging issues, we propose a very effective depth estimation
framework which focuses on regularizing the initial label confidence map and
edge strength weights. Specifically, we first detect partially occluded
boundary regions (POBR) via superpixel-based regularization. A series of
shrinkage/reinforcement operations is then applied to the label confidence map
and edge strength weights over the POBR. We show that after weight
manipulations, even a low-complexity weighted least squares model can produce
much better depth estimation than state-of-the-art methods in terms of average
disparity error rate, occlusion boundary precision-recall rate, and the
preservation of intricate visual features.
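The role of the label confidence map and edge strength weights can be seen in a minimal 1-D weighted least squares sketch: low confidence lets a pixel's depth be filled in from its neighbours, while edge weights control how strongly smoothing crosses boundaries. All names and the toy data are illustrative (the paper operates on 2-D maps over the POBR):

```python
import numpy as np

def wls_refine(init, confidence, edge_w, lam=1.0):
    """Minimise sum_i c_i (d_i - init_i)^2 + lam * sum_i w_i (d_{i+1} - d_i)^2
    for a 1-D depth signal; the optimum is the solution of a linear system."""
    n = len(init)
    A = np.diag(confidence.astype(float))
    for i in range(n - 1):
        w = lam * edge_w[i]
        A[i, i] += w
        A[i + 1, i + 1] += w
        A[i, i + 1] -= w
        A[i + 1, i] -= w
    return np.linalg.solve(A, confidence * init)

init = np.array([0.0, 0.0, 5.0, 1.0, 1.0])  # noisy spike at index 2
conf = np.array([1.0, 1.0, 0.0, 1.0, 1.0])  # zero confidence at the bad pixel
edge = np.ones(4)                           # no edge: smooth freely across
d = wls_refine(init, conf, edge)            # spike replaced by interpolation
```

Because the confidence at index 2 is zero, the solver ignores the spurious value 5.0 there and interpolates smoothly between its trusted neighbours.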
Neural View-Interpolation for Sparse Light Field Video
We suggest representing light field (LF) videos as "one-off" neural networks (NNs), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: first, an NN LF will likely have lower quality than a same-sized pixel-basis representation; second, only little training data, e.g., 9 exemplars per frame, is available for sparse LF videos; third, there is no generalization across LFs, but across view and time instead, so a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: unlike the linear pixel basis, an NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is interpolatable: if the image output is plausible for the sparse view coordinates, it is plausible for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
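The "one-off" coordinate network idea can be sketched as a tiny MLP mapping a continuous (u, v, t) view-plus-time coordinate to an RGB colour. Layer sizes, activations, and the random weights below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer coordinate network: (u, v, t) -> RGB.
W1 = rng.normal(size=(3, 64))
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3))
b2 = np.zeros(3)

def lf_color(coord):
    """Evaluate the learned mapping at any continuous coordinate."""
    h = np.maximum(coord @ W1 + b1, 0.0)           # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid keeps RGB in [0, 1]

# Because the input is continuous, views between the sparse training
# coordinates can be queried directly, with no explicit interpolation step.
rgb = lf_color(np.array([0.25, 0.5, 0.1]))
```

Training would fit the weights so that the network reproduces the sparse input views; interpolated views then come from evaluating the same function at intermediate coordinates.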
Variational Disparity Estimation Framework for Plenoptic Image
This paper presents a computational framework for accurately estimating the
disparity map of plenoptic images. The proposed framework is based on the
variational principle and provides intrinsic sub-pixel precision. The
light-field motion tensor introduced in the framework allows us to combine
advanced robust data terms as well as provides explicit treatments for
different color channels. A warping strategy is embedded in our framework for
tackling the large displacement problem. We also show that by applying a simple
regularization term and guided median filtering, the accuracy of the displacement
field in occluded areas can be greatly enhanced. We demonstrate the excellent
performance of the proposed framework through intensive comparisons with the Lytro
software and contemporary approaches on both synthetic and real-world datasets.
Depth Estimation Through a Generative Model of Light Field Synthesis
Light field photography captures rich structural information that may
facilitate a number of traditional image processing and computer vision tasks.
A crucial ingredient in such endeavors is accurate depth recovery. We present a
novel framework that allows the recovery of a high quality continuous depth map
from light field data. To this end we propose a generative model of a light
field that is fully parametrized by its corresponding depth map. The model
allows for the integration of powerful regularization techniques such as a
non-local means prior, facilitating accurate depth map estimation.
Comment: German Conference on Pattern Recognition (GCPR) 201
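The core generative idea, that a depth (disparity) map fully parametrises how neighbouring sub-aperture views arise from the central view, can be sketched on a 1-D scanline. This is a hypothetical toy using nearest-neighbour warping, not the paper's full forward model:

```python
import numpy as np

def synthesize_view(center, disparity, baseline=1.0):
    """Warp a 1-D scanline by a per-pixel disparity (nearest-neighbour gather)."""
    x = np.arange(len(center))
    src = np.clip(np.round(x - baseline * disparity).astype(int),
                  0, len(center) - 1)
    return center[src]

center = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
disp = np.ones(5)                      # constant disparity of one pixel
view = synthesize_view(center, disp)   # shifted copy of the scanline
```

Depth estimation in such a model amounts to finding the disparity map whose synthesized views best match the observed light field, which is where priors such as non-local means regularise the solution.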