Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity
In this paper we present a scalable approach for robustly computing a 3D
surface mesh from multi-scale multi-view stereo point clouds that can handle
extreme jumps of point density (in our experiments three orders of magnitude).
The backbone of our approach is a combination of octree data partitioning,
local Delaunay tetrahedralization and graph cut optimization. Graph cut
optimization is used twice, once to extract surface hypotheses from local
Delaunay tetrahedralizations and once to merge overlapping surface hypotheses
even when the local tetrahedralizations do not share the same topology. This
formulation allows us to obtain constant memory consumption per sub-problem
while at the same time retaining the density independent interpolation
properties of the Delaunay-based optimization. On multiple public datasets, we
demonstrate that our approach is highly competitive with the state-of-the-art
in terms of accuracy, completeness and outlier resilience. Further, we
demonstrate the multi-scale potential of our approach by processing a newly
recorded dataset with 2 billion points and a point density variation of more
than four orders of magnitude, requiring less than 9 GB of RAM per process.
Comment: This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017. The copyright was transferred to IEEE
(ieee.org). The official version of the paper will be made available on IEEE
Xplore (R) (ieeexplore.ieee.org). This version of the paper also contains the
supplementary material, which will not appear on IEEE Xplore (R).
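The octree partitioning that underpins the constant per-sub-problem memory claim can be sketched as follows: cells are split until each leaf holds at most a fixed number of points, so a leaf stays bounded even under extreme density jumps. This is a minimal illustration, not the paper's implementation; `capacity` and `max_depth` are illustrative parameters.

```python
def build_octree(points, lo, hi, capacity=64, depth=0, max_depth=20):
    """Recursively split an axis-aligned cell until each leaf holds at most
    `capacity` points; returns the list of leaf point sets. Dense regions
    simply recurse deeper, so per-leaf memory stays bounded."""
    if len(points) <= capacity or depth == max_depth:
        return [points]
    mid = [(l + h) / 2.0 for l, h in zip(lo, hi)]
    children = {}
    for p in points:
        # Octant key: one bit per axis, depending on which side of the midpoint.
        key = tuple(int(p[d] >= mid[d]) for d in range(3))
        children.setdefault(key, []).append(p)
    leaves = []
    for key, pts in children.items():
        clo = [mid[d] if key[d] else lo[d] for d in range(3)]
        chi = [hi[d] if key[d] else mid[d] for d in range(3)]
        leaves.extend(build_octree(pts, clo, chi, capacity, depth + 1, max_depth))
    return leaves
```

A dense cluster mixed with sparse points ends up spread over many small leaves, while sparse regions stay in a few large ones, which is the density-adaptive behaviour the paper relies on.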
Polarimetric PatchMatch Multi-View Stereo
PatchMatch Multi-View Stereo (PatchMatch MVS) is one of the popular MVS
approaches, owing to its balanced accuracy and efficiency. In this paper, we
propose Polarimetric PatchMatch Multi-View Stereo (PolarPMS), the first
method to exploit polarization cues in PatchMatch MVS. The key to
PatchMatch MVS is to generate depth and normal hypotheses, which form local 3D
planes and slanted stereo matching windows, and efficiently search for the best
hypothesis based on the consistency among multi-view images. In addition to
standard photometric consistency, our PolarPMS evaluates polarimetric
consistency to assess the validity of each depth and normal hypothesis, motivated
by the physical property that the polarimetric information is related to the
object's surface normal. Experimental results demonstrate that our PolarPMS can
improve the accuracy and the completeness of reconstructed 3D models,
especially for texture-less surfaces, compared with state-of-the-art PatchMatch
MVS methods.
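The combined consistency idea can be sketched as below: the azimuth of a hypothesized surface normal should agree with the observed angle of linear polarization up to the usual pi/2 diffuse/specular ambiguity, and that agreement is added to the photometric cost. This is a toy illustration of the physical cue, not PolarPMS's exact cost; the weight `lam` is an assumed parameter.

```python
import math

def ang_diff(a, b, period=math.pi):
    """Circular distance between two angles defined modulo `period`."""
    d = (a - b) % period
    return min(d, period - d)

def polar_cost(normal, aolp):
    """Disagreement between the normal's azimuth and the observed angle of
    linear polarization (AoLP), allowing the pi/2 ambiguity between the
    diffuse and specular interpretations."""
    azimuth = math.atan2(normal[1], normal[0]) % math.pi
    return min(ang_diff(azimuth, aolp), ang_diff(azimuth, aolp + math.pi / 2))

def combined_cost(photo_cost, normal, aolp, lam=0.5):
    """Photometric cost plus a weighted polarimetric consistency term."""
    return photo_cost + lam * polar_cost(normal, aolp)
```

On texture-less surfaces the photometric term is nearly flat, so the polarimetric term is what discriminates between normal hypotheses.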
Planar Prior Assisted PatchMatch Multi-View Stereo
The completeness of 3D models is still a challenging problem in multi-view
stereo (MVS) due to the unreliable photometric consistency in low-textured
areas. Since low-textured areas usually exhibit strong planarity, planar models
are advantageous to the depth estimation of low-textured areas. On the other
hand, PatchMatch multi-view stereo is very efficient for its sampling and
propagation scheme. By taking advantage of planar models and PatchMatch
multi-view stereo, we propose a planar prior assisted PatchMatch multi-view
stereo framework in this paper. In detail, we utilize a probabilistic graphical
model to embed planar models into PatchMatch multi-view stereo and contribute a
novel multi-view aggregated matching cost. This novel cost takes both
photometric consistency and planar compatibility into consideration, making it
suited for the depth estimation of both non-planar and planar regions.
Experimental results demonstrate that our method can efficiently recover the
depth information of extremely low-textured areas, thus obtaining highly
complete 3D models and achieving state-of-the-art performance.
Comment: Accepted by AAAI-2020.
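A minimal sketch of such an aggregated cost, under assumed parameters: the photometric term dominates where matching is reliable, while a planar-compatibility term penalizes hypotheses that stray from the prior plane's depth. The blending weight `gamma` and bandwidth `sigma` are illustrative, not the paper's actual formulation.

```python
import math

def aggregated_cost(photo_cost, depth, prior_depth, gamma=0.5, sigma=0.1):
    """Blend photometric consistency with planar compatibility: the planar
    term grows as the hypothesized depth departs from the prior plane's
    depth, pulling low-textured pixels toward the planar model."""
    planar_term = 1.0 - math.exp(-((depth - prior_depth) ** 2) / (2 * sigma ** 2))
    return (1.0 - gamma) * photo_cost + gamma * planar_term
```

With an uninformative photometric cost (as in a low-textured region), the hypothesis matching the planar prior wins; in textured, non-planar regions the photometric term can still override the prior.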
Fast and Accurate Depth Estimation from Sparse Light Fields
We present a fast and accurate method for dense depth reconstruction from
sparsely sampled light fields obtained using a synchronized camera array. In
our method, the source images are over-segmented into non-overlapping compact
superpixels that are used as basic data units for depth estimation and
refinement. Superpixel representation provides a desirable reduction in the
computational cost while preserving the image geometry with respect to the
object contours. Each superpixel is modeled as a plane in the image space,
allowing depth values to vary smoothly within the superpixel area. Initial
depth maps, which are obtained by plane sweeping, are iteratively refined by
propagating good correspondences within an image. To ensure the fast
convergence of the iterative optimization process, we employ a highly parallel
propagation scheme that operates on all the superpixels of all the images at
once, making full use of the parallel graphics hardware. A few optimization
iterations on an energy function incorporating superpixel-wise smoothness and
geometric consistency constraints allow depth to be recovered with high accuracy in
textured and textureless regions as well as areas with occlusions, producing
dense globally consistent depth maps. We demonstrate that while the depth
reconstruction takes about a second per full high-definition view, the accuracy
of the obtained depth maps is comparable with state-of-the-art results.
Comment: 15 pages, 15 figures.
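The two ingredients above, plane-modeled superpixels and parallel propagation, can be sketched as follows. Each superpixel carries a plane in disparity space, and one propagation iteration lets every superpixel test its neighbors' planes and keep the cheapest; the data structures and cost interface here are illustrative, not the paper's.

```python
def plane_depth(plane, x, y):
    """Per-superpixel plane model: disparity varies linearly within the
    superpixel, d(x, y) = a*x + b*y + c."""
    a, b, c = plane
    return a * x + b * y + c

def propagate(planes, neighbors, cost_fn):
    """One propagation iteration (simplified): every superpixel evaluates
    its own plane and its neighbors' planes, keeping the best hypothesis.
    All superpixels update from the same snapshot, which is what makes the
    scheme trivially parallel on graphics hardware."""
    new = {}
    for s, plane in planes.items():
        candidates = [plane] + [planes[n] for n in neighbors.get(s, [])]
        new[s] = min(candidates, key=lambda p: cost_fn(s, p))
    return new
```

Because every superpixel reads only the previous iteration's planes, the loop body is independent per superpixel and maps directly onto a GPU kernel.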
A Knowledge Integration Framework for 3D Shape Reconstruction
The modern emergence of automation in many industries has given impetus to extensive research into mobile robotics. Novel perception technologies now enable cars to drive autonomously, tractors to till a field automatically and underwater robots to construct pipelines. An essential requirement for both perception and autonomous navigation is the analysis of the 3D environment using sensors like laser scanners or stereo cameras. 3D sensors generate a very large number of 3D data points when sampling object shapes within an environment, but crucially do not provide any intrinsic information about the environment in which the robots operate. This means unstructured 3D samples must be processed by application-specific models to enable a robot, for instance, to detect and identify objects and infer the scene geometry for path-planning more efficiently than by using raw 3D data.

This thesis focuses on the fundamental task of 3D shape reconstruction and modelling by presenting a new knowledge integration framework for unstructured 3D samples. The novelty lies in the representation of surfaces by algebraic functions with limited support, which enables the extraction of smooth, consistent shapes from noisy samples with heterogeneous density. Moreover, many surfaces in urban environments can reasonably be assumed to be planar, and the framework exploits this knowledge to enable effective noise suppression without loss of detail. This is achieved by a convex optimization technique with linear computational complexity, which is thus much more efficient than existing solutions. The new framework has been validated by critical experimental analysis and evaluation and has been shown to increase the accuracy of the reconstructed shape significantly compared to state-of-the-art methods.
Applying this new knowledge integration framework means that less accurate, low-cost 3D sensors can be employed without sacrificing the high demands that 3D perception must achieve. This links well into the area of robotic inspection, for example regarding small drones that use inaccurate and lightweight image sensors.
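As a rough illustration of the planarity assumption the framework exploits, the sketch below fits a plane to noisy samples by total least squares: the normal is the singular vector of the centred point matrix with the smallest singular value. This is a standard building block for plane-based noise suppression, not the thesis's convex formulation; the function name and tolerances are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane fit. Returns (unit normal, centroid).
    The normal is the right singular vector corresponding to the smallest
    singular value of the centred point matrix."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centred points; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid
```

Once a planar region is detected, projecting its noisy samples onto the fitted plane suppresses sensor noise along the normal direction without discarding the in-plane detail.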