Detail-preserving and Content-aware Variational Multi-view Stereo Reconstruction
Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view
images is a fundamental yet active research area in computer vision. Despite
the steady progress in multi-view stereo reconstruction, most existing methods
are still limited in recovering fine-scale details and sharp features while
suppressing noise, and may fail to reconstruct poorly textured regions.
To address these limitations, this paper presents a Detail-preserving and
Content-aware Variational (DCV) multi-view stereo method, which reconstructs
the 3D surface by alternating between reprojection error minimization and mesh
denoising. In reprojection error minimization, we propose a novel inter-image
similarity measure, which is effective to preserve fine-scale details of the
reconstructed surface and builds a connection between guided image filtering
and image registration. In mesh denoising, we propose a content-aware
ℓp-minimization algorithm by adaptively estimating the p value and
regularization parameters based on the current input. It is much more promising
in suppressing noise while preserving sharp features than conventional
isotropic mesh smoothing. Experimental results on benchmark datasets
demonstrate that our DCV method is capable of recovering more surface details,
and obtains cleaner and more accurate reconstructions than state-of-the-art
methods. In particular, our method achieves the best results among all
published methods on the Middlebury dino ring and dino sparse ring datasets in
terms of both completeness and accuracy. Comment: 14 pages, 16 figures. Submitted to IEEE Transactions on Image Processing.
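The content-aware ℓp idea, i.e. a sub-quadratic (p &lt; 1) penalty on local differences that flattens noise while leaving sharp jumps intact, can be illustrated on a 1D signal. This is only a minimal sketch via iteratively reweighted least squares (IRLS), not the paper's mesh-denoising algorithm; the function name, the fixed p, and the parameter `lam` are illustrative choices.

```python
import numpy as np

def lp_denoise_1d(y, p=0.8, lam=2.0, iters=30, eps=1e-6):
    """Approximately minimize  ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|^p
    by IRLS: each step solves a weighted quadratic surrogate, where small
    differences get large weights (strong smoothing) and large differences
    (sharp features) get small weights and are preserved."""
    n = len(y)
    x = y.copy()
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator, (n-1) x n
    for _ in range(iters):
        d = np.abs(D @ x)
        w = p * (d + eps) ** (p - 2)         # IRLS weights for the |.|^p penalty
        A = np.eye(n) + lam * (D.T * w) @ D  # normal equations of the surrogate
        x = np.linalg.solve(A, y)
    return x

# piecewise-constant signal with noise: the lp prior keeps the step sharp
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x = lp_denoise_1d(y, p=0.8)
```

With p = 2 the same scheme reduces to isotropic (Laplacian-style) smoothing, which blurs the step; the sub-quadratic exponent is what makes the denoising feature-preserving.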
DCTM: Discrete-Continuous Transformation Matching for Semantic Flow
Techniques for dense semantic correspondence have provided limited ability to
deal with the geometric variations that commonly exist between semantically
similar images. While variations due to scale and rotation have been examined,
practical solutions for more complex deformations such as affine
transformations remain lacking because of the tremendous size of the associated solution
space. To address this problem, we present a discrete-continuous transformation
matching (DCTM) framework where dense affine transformation fields are inferred
through a discrete label optimization in which the labels are iteratively
updated via continuous regularization. In this way, our approach draws
solutions from the continuous space of affine transformations in a manner that
can be computed efficiently through constant-time edge-aware filtering and a
proposed affine-varying CNN-based descriptor. Experimental results show that
this model outperforms the state-of-the-art methods for dense semantic
correspondence on various benchmarks.
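The alternation at the heart of DCTM, a discrete choice among sampled transform labels followed by continuous regularization of the field, can be caricatured on a 1D toy problem. This is a loose illustration only: the paper infers dense affine fields with edge-aware filtering and a CNN descriptor, whereas here the "transform" is a single scalar per site, the data cost is synthetic, and the smoothing is a plain box filter.

```python
import numpy as np

def dctm_toy(obs, candidates, iters=5, alpha=0.5):
    """Toy discrete-continuous alternation: (1) discrete step snaps each
    site to the candidate minimizing data cost plus deviation from the
    current field; (2) continuous step regularizes the field by local
    averaging. `obs` plays the role of a per-site data term."""
    field = np.zeros_like(obs)
    for _ in range(iters):
        # discrete step: per-site label choice over the candidate set
        cost = (candidates[None, :] - obs[:, None]) ** 2 \
             + alpha * (candidates[None, :] - field[:, None]) ** 2
        field = candidates[np.argmin(cost, axis=1)]
        # continuous step: smooth the label field (edge-unaware here)
        field = np.convolve(field, np.ones(3) / 3, mode="same")
    return field

# smooth ground-truth field observed with noise
rng = np.random.default_rng(1)
true = np.linspace(0.0, 1.0, 50)
obs = true + 0.1 * rng.standard_normal(50)
candidates = np.linspace(-0.2, 1.2, 57)
field = dctm_toy(obs, candidates)
```

The point of the alternation is that the discrete step keeps the search tractable over a sampled label set while the continuous step pulls solutions off the sample grid, which is how the framework escapes the enormous raw space of per-pixel affine parameters.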
Semantically Guided Depth Upsampling
We present a novel method for accurate and efficient upsampling of sparse
depth data, guided by high-resolution imagery. Our approach goes beyond the use
of intensity cues only and additionally exploits object boundary cues through
structured edge detection and semantic scene labeling for guidance. Both cues
are combined within a geodesic distance measure that allows for
boundary-preserving depth interpolation while utilizing local context. We
model the observed scene structure by locally planar elements and formulate the
upsampling task as a global energy minimization problem. Our method determines
globally consistent solutions and preserves fine details and sharp depth
boundaries. In our experiments on several public datasets at different levels
of application, we demonstrate superior performance of our approach over the
state-of-the-art, even for very sparse measurements. Comment: German Conference on Pattern Recognition 2016 (Oral).
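A geodesic distance on the image grid, where each step pays extra for crossing an intensity change, is what makes the interpolation boundary-preserving: a sparse measurement's influence stops at object edges. The sketch below propagates each sparse depth to its geodesically nearest pixels with Dijkstra's algorithm; it is a minimal nearest-seed version under an intensity-only cost, whereas the paper additionally uses structured edges, semantic labels, locally planar elements, and a global energy. The function name and the edge weight `beta` are illustrative.

```python
import heapq
import numpy as np

def geodesic_upsample(image, sparse_depth, beta=10.0):
    """Assign every pixel the depth of its geodesically nearest sparse
    measurement. A 4-connected step costs 1 + beta * |intensity change|,
    so paths that cross strong image edges are expensive and depth does
    not leak across object boundaries."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    depth = np.zeros((h, w))
    heap = []
    for (y, x), d in sparse_depth.items():   # seeds: {(row, col): depth}
        dist[y, x] = 0.0
        depth[y, x] = d
        heapq.heappush(heap, (0.0, y, x))
    while heap:                              # multi-source Dijkstra
        c, y, x = heapq.heappop(heap)
        if c > dist[y, x]:
            continue                         # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 + beta * abs(image[ny, nx] - image[y, x])
                if c + step < dist[ny, nx]:
                    dist[ny, nx] = c + step
                    depth[ny, nx] = depth[y, x]
                    heapq.heappush(heap, (c + step, ny, nx))
    return depth

# two flat intensity regions: each side inherits depth from its own seed
img = np.zeros((8, 8))
img[:, 4:] = 1.0
dense = geodesic_upsample(img, {(4, 1): 2.0, (4, 6): 5.0})
```

In the toy image, crossing the intensity boundary costs 1 + 10·1 = 11 versus 1 for a within-region step, so every left-half pixel takes the left seed's depth and every right-half pixel the right seed's, reproducing the sharp depth boundary a plain Euclidean nearest-neighbor fill would blur.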