Fast and Accurate Depth Estimation from Sparse Light Fields
We present a fast and accurate method for dense depth reconstruction from
sparsely sampled light fields obtained using a synchronized camera array. In
our method, the source images are over-segmented into non-overlapping compact
superpixels that are used as basic data units for depth estimation and
refinement. Superpixel representation provides a desirable reduction in the
computational cost while preserving the image geometry with respect to the
object contours. Each superpixel is modeled as a plane in the image space,
allowing depth values to vary smoothly within the superpixel area. Initial
depth maps, which are obtained by plane sweeping, are iteratively refined by
propagating good correspondences within an image. To ensure the fast
convergence of the iterative optimization process, we employ a highly parallel
propagation scheme that operates on all the superpixels of all the images at
once, making full use of the parallel graphics hardware. A few optimization
iterations of the energy function, which incorporates superpixel-wise smoothness and
geometric consistency constraints, suffice to recover depth with high accuracy in
textured and textureless regions as well as areas with occlusions, producing
dense globally consistent depth maps. We demonstrate that while the depth
reconstruction takes about a second per full high-definition view, the accuracy
of the obtained depth maps is comparable with state-of-the-art results.
Comment: 15 pages, 15 figures
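The plane-sweep initialization used above can be illustrated with a minimal sketch: for each candidate depth, warp the neighboring view toward the reference by the induced disparity and keep, per pixel, the depth with the lowest matching cost. This is a toy NumPy version for a rectified horizontal camera pair with an absolute-difference cost, not the authors' superpixel-based GPU pipeline; the function name and pinhole parameters are illustrative.

```python
import numpy as np

def plane_sweep_depth(ref, src, depths, baseline, focal):
    """Brute-force plane sweep for a rectified horizontal camera pair.

    For each candidate depth, the source view is shifted by the
    corresponding disparity and compared against the reference image;
    every pixel keeps the depth with the lowest matching cost.
    (Toy illustration: grayscale images, horizontal baseline only.)
    """
    h, w = ref.shape
    best_cost = np.full((h, w), np.inf)
    best_depth = np.zeros((h, w))
    cols = np.arange(w)
    for d in depths:
        disparity = baseline * focal / d              # pinhole model
        shifted = np.clip((cols - disparity).astype(int), 0, w - 1)
        warped = src[:, shifted]                      # warp source to reference
        cost = np.abs(ref - warped)                   # per-pixel matching cost
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth
```

In the paper this winner-take-all result is only the initialization; the subsequent superpixel-plane fitting and iterative propagation refine it.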
Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions
Depth estimation is a fundamental problem for light field photography
applications. Numerous methods have been proposed in recent years, which either
focus on crafting cost terms for more robust matching, or on analyzing the
geometry of scene structures embedded in the epipolar-plane images. Significant
improvements have been made in terms of overall depth estimation error;
however, current state-of-the-art methods still show limitations in handling
intricate occluding structures and complex scenes with multiple occlusions. To
address these challenging issues, we propose a very effective depth estimation
framework which focuses on regularizing the initial label confidence map and
edge strength weights. Specifically, we first detect partially occluded
boundary regions (POBR) via superpixel-based regularization. A series of
shrinkage/reinforcement operations are then applied on the label confidence map
and edge strength weights over the POBR. We show that after weight
manipulations, even a low-complexity weighted least squares model can produce
much better depth estimation than state-of-the-art methods in terms of average
disparity error rate, occlusion boundary precision-recall rate, and the
preservation of intricate visual features.
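The weighted least squares model referred to above can be sketched in its simplest 1D form: each pixel's refined depth balances fidelity to the initial estimate (scaled by its confidence) against smoothness with its neighbors (scaled by edge-strength weights), so low-confidence pixels are filled in from neighbors while strong edges block smoothing. This toy version uses a dense solve on a short signal; it is not the paper's POBR-regularized weighting scheme, and the function name and weight setup are illustrative.

```python
import numpy as np

def wls_refine(depth, confidence, edge_weight, lam=1.0):
    """Edge-aware weighted least squares refinement of a 1D depth signal.

    Minimizes  sum_i c_i (x_i - d_i)^2 + lam * sum_i w_i (x_i - x_{i+1})^2,
    where c_i is the per-pixel label confidence and w_i the smoothness
    weight between neighbors (small w_i = strong edge).
    """
    n = len(depth)
    A = np.diag(confidence.astype(float))     # data-fidelity terms
    for i in range(n - 1):                    # smoothness terms (graph Laplacian)
        w = lam * edge_weight[i]
        A[i, i] += w
        A[i + 1, i + 1] += w
        A[i, i + 1] -= w
        A[i + 1, i] -= w
    return np.linalg.solve(A, confidence * depth)
```

The paper's contribution is precisely in how `confidence` and `edge_weight` are manipulated over the POBR before this kind of solve is applied.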
Holographic particle localization under multiple scattering
We introduce a novel framework that incorporates multiple scattering for
large-scale 3D particle-localization using single-shot in-line holography.
Traditional holographic techniques rely on single-scattering models which
become inaccurate under high particle-density. We demonstrate that by
exploiting multiple-scattering, localization is significantly improved. Both
forward and back-scattering are computed by our method under a tractable
recursive framework, in which each recursion estimates the next higher-order
field within the volume. The inverse scattering is presented as a nonlinear
optimization that promotes sparsity, and can be implemented efficiently. We
experimentally reconstruct 100 million object voxels from a single 1-megapixel
hologram. Our work promises utilization of multiple scattering for versatile
large-scale applications.
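The recursive field estimation described above follows the general pattern of a Born series, in which each recursion adds the next higher-order scattered field: u_{k+1} = u_in + G (V u_k), with G a discretized Green's operator and V the scattering potential. The sketch below is a generic matrix-form toy of that recursion, not the paper's holographic formulation; the variable names and the small-norm operator are assumptions made for illustration.

```python
import numpy as np

def born_series(u_in, G, V, orders=5):
    """Recursive Born series for a multiple-scattering field.

    Each recursion folds one more scattering order into the field:
        u_{k+1} = u_in + G @ (V * u_k),
    where G is a discretized Green's function (matrix) and V the
    scattering potential (per-voxel). Converges to the full
    multiply-scattered field when the scattering is weak enough.
    """
    u = u_in.copy()
    for _ in range(orders):
        u = u_in + G @ (V * u)   # add the next higher-order field
    return u
```

With enough recursions this converges to the solution of (I - G diag(V)) u = u_in, i.e. the field including all scattering orders, which is what distinguishes it from the single-scattering (first Born) approximation.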
4D Temporally Coherent Light-field Video
Light-field video has recently been used in virtual and augmented reality
applications to increase realism and immersion. However, existing light-field
methods are generally limited to static scenes due to the requirement to
acquire a dense scene representation. The large amount of data and the absence
of methods to infer temporal coherence pose major challenges in storage,
compression and editing compared to conventional video. In this paper, we
propose the first method to extract a spatio-temporally coherent light-field
video representation. A novel method to obtain Epipolar Plane Images (EPIs)
from a sparse light-field camera array is proposed. EPIs are used to constrain
scene flow estimation to obtain 4D temporally coherent representations of
dynamic light-fields. Temporal coherence is achieved on a variety of
light-field datasets. Evaluation of the proposed light-field scene flow against
existing multi-view dense correspondence approaches demonstrates a significant
improvement in the accuracy of temporal coherence.
Comment: Published in 3D Vision (3DV) 201
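The EPI construction at the heart of the method above is easy to illustrate: for a horizontal camera array, fixing one image row and stacking it across all views yields a 2D slice in which each scene point traces a line whose slope encodes its disparity (and hence depth). A minimal NumPy sketch, assuming the light field is already assembled as a `(views, height, width)` array:

```python
import numpy as np

def epipolar_plane_image(lightfield, row):
    """Extract a horizontal EPI from a light field.

    `lightfield` has shape (views, height, width): images from a
    horizontal camera array. Fixing an image row and stacking it
    across views yields a (views, width) slice in which each scene
    point traces a line; the line's slope is its disparity.
    """
    return lightfield[:, row, :]
```

Constraining scene flow with these slopes is what gives the paper its per-point depth prior; dense arrays make the lines continuous, while the sparse arrays targeted here leave larger view gaps.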
Playing for Data: Ground Truth from Computer Games
Recent progress in computer vision has been driven by high-capacity models
trained on large datasets. Unfortunately, creating large datasets with
pixel-level labels has been extremely costly due to the amount of human effort
required. In this paper, we present an approach to rapidly creating
pixel-accurate semantic label maps for images extracted from modern computer
games. Although the source code and the internal operation of commercial games
are inaccessible, we show that associations between image patches can be
reconstructed from the communication between the game and the graphics
hardware. This enables rapid propagation of semantic labels within and across
images synthesized by the game, with no access to the source code or the
content. We validate the presented approach by producing dense pixel-level
semantic annotations for 25 thousand images synthesized by a photorealistic
open-world computer game. Experiments on semantic segmentation datasets show
that using the acquired data to supplement real-world images significantly
increases accuracy and that the acquired data enables reducing the amount of
hand-labeled real-world data: models trained with game data and just 1/3 of the
CamVid training set outperform models trained on the complete CamVid training
set.
Comment: Accepted to the 14th European Conference on Computer Vision (ECCV 2016)
Light field image processing: an overview
Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
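Post-capture refocusing, one of the capabilities this survey covers, has a classic shift-and-sum formulation: translate each sub-aperture image in proportion to its position in the camera array and average, so that scene points at the chosen focal plane align and stay sharp while others blur. The sketch below uses integer shifts via `np.roll` for simplicity (real pipelines resample at sub-pixel precision); the function name and `(u, v)` coordinate convention are illustrative.

```python
import numpy as np

def refocus(lightfield, view_coords, slope):
    """Synthetic-aperture refocusing by shift-and-sum.

    `lightfield` is a sequence of sub-aperture images; `view_coords`
    gives each view's (u, v) position in the array; `slope` selects
    the focal plane (shift per unit of view offset). Points at the
    chosen depth align across shifted views and stay sharp.
    """
    acc = np.zeros_like(lightfield[0], dtype=float)
    for img, (u, v) in zip(lightfield, view_coords):
        # shift rows by slope*v and columns by slope*u (integer toy version)
        shifted = np.roll(img, (int(round(slope * v)), int(round(slope * u))),
                          axis=(0, 1))
        acc += shifted
    return acc / len(lightfield)
```

Sweeping `slope` produces a focal stack, which is also the starting point for several of the depth-from-light-field methods the survey discusses.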