SurfNet: Generating 3D shape surfaces using deep residual networks
3D shape models are naturally parameterized using vertices and faces, i.e.,
composed of polygons forming a surface. However, current 3D learning paradigms
for predictive and generative tasks using convolutional neural networks focus
on a voxelized representation of the object. Lifting convolution operators from
the traditional 2D to 3D results in high computational overhead with little
additional benefit as most of the geometry information is contained on the
surface boundary. Here we study the problem of directly generating the 3D shape
surface of rigid and non-rigid shapes using deep convolutional neural networks.
We develop a procedure to create consistent 'geometry images' representing the
shape surface of a category of 3D objects. We then use this consistent
representation for category-specific shape surface generation from a parametric
representation or an image by developing novel extensions of deep residual
networks for the task of geometry image generation. Our experiments indicate
that our network learns a meaningful representation of shape surfaces allowing
it to interpolate between shape orientations and poses, invent new shape
surfaces, and reconstruct 3D shape surfaces from previously unseen images.
Comment: CVPR 2017 paper
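The core idea of a geometry image, remeshing a surface onto a regular grid so that its (x, y, z) coordinates become the channels of an ordinary image, can be illustrated with a toy example. The sketch below builds such an image for a unit sphere from a regular (u, v) parameterisation; the paper's procedure for parameterising arbitrary category-specific meshes is substantially more involved, so this is illustrative only, not the authors' method.

```python
import numpy as np

def sphere_geometry_image(n=64):
    """Toy 'geometry image': encode a surface's (x, y, z) coordinates as an
    n x n x 3 array that standard 2D CNNs can consume. Here the surface is
    a unit sphere sampled on a regular (u, v) grid."""
    u = np.linspace(0.0, 2.0 * np.pi, n)   # azimuth samples
    v = np.linspace(0.0, np.pi, n)         # inclination samples
    uu, vv = np.meshgrid(u, v)
    return np.stack([np.sin(vv) * np.cos(uu),   # x channel
                     np.sin(vv) * np.sin(uu),   # y channel
                     np.cos(vv)], axis=-1)      # z channel

img = sphere_geometry_image(64)
print(img.shape)  # (64, 64, 3)
```

Because the result is just a three-channel image, generation reduces to an image-to-image regression problem, which is what makes residual networks applicable.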
A Combinatorial Solution to Non-Rigid 3D Shape-to-Image Matching
We propose a combinatorial solution for the problem of non-rigidly matching a
3D shape to 3D image data. To this end, we model the shape as a triangular mesh
and allow each triangle of this mesh to be rigidly transformed to achieve a
suitable matching to the image. By penalising the distance and the relative
rotation between neighbouring triangles our matching compromises between image
and shape information. In this paper, we resolve two major challenges: Firstly,
we address the resulting large and NP-hard combinatorial problem with a
suitable graph-theoretic approach. Secondly, we propose an efficient
discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge,
this is the first combinatorial formulation for non-rigid 3D shape-to-image
matching. In contrast to existing local (gradient descent) optimisation
methods, we obtain solutions that do not require a good initialisation and that
are within a bound of the optimal solution. We evaluate the proposed method on
the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image
registration, and demonstrate that it provides promising results.
Comment: 10 pages, 7 figures
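The pairwise cost described above, penalising the distance and the relative rotation between neighbouring triangles, can be sketched as follows. The weights `lam_t` and `lam_r` and the exact functional form are hypothetical stand-ins; the paper's actual energy and its SE(3) discretisation are more involved.

```python
import numpy as np

def pairwise_smoothness(Ri, ti, Rj, tj, lam_t=1.0, lam_r=1.0):
    """Smoothness term between two neighbouring triangles, each rigidly
    transformed by (R, t): penalise the translation distance plus the
    relative rotation angle. lam_t / lam_r are illustrative weights,
    not values from the paper."""
    trans_cost = np.linalg.norm(ti - tj)
    Rrel = Ri.T @ Rj
    cos_angle = np.clip((np.trace(Rrel) - 1.0) / 2.0, -1.0, 1.0)
    rot_cost = np.arccos(cos_angle)  # geodesic distance on SO(3)
    return lam_t * trans_cost + lam_r * rot_cost

I = np.eye(3)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])  # 90-degree rotation about z
print(pairwise_smoothness(I, np.zeros(3), Rz, np.zeros(3)))  # pi/2
```

Summing such terms over all neighbouring triangle pairs, plus a per-triangle data term measuring fit to the image, yields the kind of discrete labelling problem the combinatorial solver addresses.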
Temporally coherent 4D reconstruction of complex dynamic scenes
This paper presents an approach for reconstruction of 4D temporally coherent
models of complex dynamic scenes. No prior knowledge is required of scene
structure or camera calibration allowing reconstruction from multiple moving
cameras. Sparse-to-dense temporal correspondence is integrated with joint
multi-view segmentation and reconstruction to obtain a complete 4D
representation of static and dynamic objects. Temporal coherence is exploited
to overcome visual ambiguities resulting in improved reconstruction of complex
scenes. Robust joint segmentation and reconstruction of dynamic objects is
achieved by introducing a geodesic star convexity constraint. Comparative
evaluation is performed on a variety of unstructured indoor and outdoor dynamic
scenes with hand-held cameras and multiple people. This demonstrates
reconstruction of complete temporally coherent 4D scene models with improved
non-rigid object segmentation and shape reconstruction.
Comment: To appear in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Video available at: https://www.youtube.com/watch?v=bm_P13_-Ds
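Star convexity is the geometric prior behind the constraint mentioned above: every point of a shape must see a chosen centre along a path lying inside the shape. The paper uses the geodesic generalisation inside its segmentation energy; the minimal Euclidean version below, on a binary mask with straight-line visibility, only illustrates the basic idea.

```python
import numpy as np

def is_star_convex(mask, center):
    """Check Euclidean star convexity of a binary mask: every foreground
    pixel must see the centre along a straight line of foreground pixels.
    (The paper's constraint uses geodesic, not straight-line, paths.)"""
    cy, cx = center
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        steps = max(abs(y - cy), abs(x - cx), 1)
        for t in np.linspace(0.0, 1.0, steps + 1):
            py = int(round(cy + t * (y - cy)))
            px = int(round(cx + t * (x - cx)))
            if not mask[py, px]:
                return False
    return True

disk = np.zeros((21, 21), dtype=bool)
yy, xx = np.ogrid[:21, :21]
disk[(yy - 10) ** 2 + (xx - 10) ** 2 <= 64] = True  # filled disk
print(is_star_convex(disk, (10, 10)))  # True
```

A filled disk passes the check from its centre, while a ring fails, since lines to the centre cross the hole; enforcing such a constraint during segmentation rules out fragmented, implausible object masks.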
Combinatorial Continuous Maximal Flows
Maximum flow (and minimum cut) algorithms have had a strong impact on
computer vision. In particular, graph cuts algorithms provide a mechanism for
the discrete optimization of an energy functional which has been used in a
variety of applications such as image segmentation, stereo, image stitching and
texture synthesis. Algorithms based on the classical formulation of max-flow
defined on a graph are known to exhibit metrication artefacts in the solution.
Therefore, a recent trend has been to instead employ a spatially continuous
maximum flow (or the dual min-cut problem) in these same applications to
produce solutions with no metrication errors. However, known fast continuous
max-flow algorithms have no stopping criteria or have not been proved to
converge. In this work, we revisit the continuous max-flow problem and show
that the analogous discrete formulation is different from the classical
max-flow problem. We then apply an appropriate combinatorial optimization
technique to this combinatorial continuous max-flow CCMF problem to find a
null-divergence solution that exhibits no metrication artefacts and may be
solved exactly by a fast, efficient algorithm with provable convergence.
Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the
fact, already proved by Nozawa in the continuous setting, that the max-flow and
the total variation problems are not always equivalent.
Comment: 26 pages
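For contrast with the continuous and CCMF formulations, the classical graph max-flow that the abstract starts from can be computed with a short Edmonds-Karp routine. This is the standard textbook algorithm on a dense capacity matrix, not the paper's CCMF solver; it is the formulation whose cuts exhibit the metrication artefacts discussed above.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix: repeatedly find a
    shortest augmenting path by BFS in the residual graph and saturate it."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left
            return total
        v, bottleneck = t, float("inf")
        while v != s:                # find bottleneck along the path
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                # push flow (and residual back-flow)
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# tiny 4-node example: source 0, sink 3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```

In graph-cut vision applications, the min cut dual to this flow gives the segmentation boundary; CCMF instead solves a combinatorial analogue of the continuous problem, whose null-divergence solutions avoid the grid-induced artefacts of cuts like this one.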
General Dynamic Scene Reconstruction from Multiple View Video
This paper introduces a general approach to dynamic scene reconstruction from
multiple moving cameras without prior knowledge or limiting constraints on the
scene structure, appearance, or illumination. Existing techniques for dynamic
scene reconstruction from multiple wide-baseline camera views primarily focus
on accurate reconstruction in controlled environments, where the cameras are
fixed and calibrated and background is known. These approaches are not robust
for general dynamic scenes captured with sparse moving cameras. Previous
approaches for outdoor dynamic scene reconstruction assume prior knowledge of
the static background appearance and structure. The primary contributions of
this paper are twofold: an automatic method for initial coarse dynamic scene
segmentation and reconstruction without prior knowledge of background
appearance or structure; and a general robust approach for joint segmentation
refinement and dense reconstruction of dynamic scenes from multiple
wide-baseline static or moving cameras. Evaluation is performed on a variety of
indoor and outdoor scenes with cluttered backgrounds and multiple dynamic
non-rigid objects such as people. Comparison with state-of-the-art approaches
demonstrates improved accuracy in both multiple view segmentation and dense
reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion methods has been one of the most fruitful outcomes of
computer vision. These advances, combined with the growing popularity of Micro
Aerial Vehicles (MAVs) as an autonomous imaging platform, have made 3D vision
tools ubiquitous for a large number of Architecture, Engineering and
Construction applications, mostly among audiences unskilled in computer vision. However, to
obtain high-resolution and accurate reconstructions from a large-scale object
using SfM, there are many critical constraints on the quality of image data,
which often become sources of inaccuracy because current 3D reconstruction
pipelines do not help users assess the fidelity of the input data during image
acquisition. In this paper, we present and advocate a closed-loop interactive
approach that performs incremental reconstruction in real time and gives users
online feedback on quality parameters such as Ground Sampling Distance (GSD)
and image redundancy, visualised on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against the state-of-the-art methods.
Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
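Ground Sampling Distance, one of the feedback metrics named above, has a standard photogrammetric definition: the ground footprint of a single pixel, determined by sensor size, focal length, flying height, and image resolution. A minimal sketch with illustrative camera numbers (not taken from the paper):

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """Standard photogrammetric GSD in metres per pixel: the width of
    ground covered by one image, divided by the pixel count. The example
    parameters below are illustrative, not from the paper's setup."""
    ground_width_m = sensor_width_mm * altitude_m / focal_length_mm
    return ground_width_m / image_width_px

# e.g. a 13.2 mm wide sensor, 8.8 mm lens, 4000 px images, flying at 30 m
gsd = ground_sampling_distance(13.2, 8.8, 30.0, 4000)
print(f"{gsd * 100:.2f} cm/px")  # 1.12 cm/px
```

Reporting this value live during acquisition lets an operator raise or lower the MAV until the per-pixel ground resolution matches the accuracy the reconstruction task requires.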