SHREC'16: partial matching of deformable shapes
Matching deformable 3D shapes under partiality transformations is a challenging problem that has received limited focus in the computer vision and graphics communities. With this benchmark, we explore and thoroughly investigate the robustness of existing matching methods in this challenging task. Participants are asked to provide a point-to-point correspondence (either sparse or dense) between deformable shapes undergoing different kinds of partiality transformations, resulting in a total of 400 matching problems to be solved for each method, making this benchmark the biggest and most challenging of its kind. Five matching algorithms were evaluated in the contest; this paper presents the details of the dataset and the adopted evaluation measures, and shows thorough comparisons among all competing methods.
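The kind of evaluation such a benchmark performs can be sketched as follows: for each point, measure the distance between the predicted match and the ground-truth match on the target shape, then summarize the error distribution. This is a minimal illustration with synthetic stand-in data (Euclidean distance stands in for the geodesic error typically used), not the benchmark's actual protocol.

```python
import numpy as np

# Synthetic stand-in: a target shape, a ground-truth correspondence,
# and a predicted correspondence with 10% of its matches corrupted.
rng = np.random.default_rng(0)
n = 200
target = rng.standard_normal((n, 3))   # target shape vertices
gt = np.arange(n)                      # ground-truth point-to-point map
pred = gt.copy()
pred[:20] = rng.integers(0, n, 20)     # corrupt 10% of the matches

# Per-point correspondence error: distance between the predicted and
# ground-truth match (Euclidean surrogate for geodesic distance).
err = np.linalg.norm(target[pred] - target[gt], axis=1)
accuracy_at_0 = (err == 0).mean()      # fraction of exact matches
```

In practice one plots the cumulative fraction of points whose error falls below a growing threshold, which is how methods are typically compared on such benchmarks.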
Deformable Prototypes for Encoding Shape Categories in Image Databases
We describe a method for shape-based image database search that uses deformable prototypes to represent categories. Rather than directly comparing a candidate shape with all shape entries in the database, shapes are compared in terms of the types of nonrigid deformations (differences) that relate them to a small subset of representative prototypes. To solve the shape correspondence and alignment problem, we employ the technique of modal matching, an information-preserving shape decomposition for matching, describing, and comparing shapes despite sensor variations and nonrigid deformations. In modal matching, shape is decomposed into an ordered basis of orthogonal principal components. We demonstrate the utility of this approach for shape comparison in 2-D image databases.
Office of Naval Research (Young Investigator Award N00014-06-1-0661)
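The idea of decomposing a shape into an ordered basis of orthogonal components can be illustrated with plain PCA on a 2-D point set. This is only a stand-in for the finite-element modal analysis the abstract refers to; the data and variable names here are illustrative assumptions.

```python
import numpy as np

# Synthetic elongated 2-D point cloud standing in for a shape.
rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 2)) * np.array([3.0, 1.0])
centered = pts - pts.mean(axis=0)

# Eigendecomposition of the covariance yields orthogonal components;
# sorting by eigenvalue gives an ordered basis, largest variance first.
evals, evecs = np.linalg.eigh(centered.T @ centered / len(pts))
order = np.argsort(evals)[::-1]
modes = evecs[:, order]        # columns: ordered orthogonal basis
coeffs = centered @ modes      # shape coordinates in that basis
```

Comparing two shapes then amounts to comparing their coordinates in such a basis rather than their raw points, which is what makes the representation robust to many deformations.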
Regularized pointwise map recovery from functional correspondence
The concept of using functional maps for representing dense correspondences between deformable shapes has proven to be extremely effective in many applications. However, despite the impact of this framework, the problem of recovering the point-to-point correspondence from a given functional map has received surprisingly little interest. In this paper, we analyse the aforementioned problem and propose a novel method for reconstructing pointwise correspondences from a given functional map. The proposed algorithm phrases the matching problem as a regularized alignment problem of the spectral embeddings of the two shapes. Unlike established methods, our approach does not require the input shapes to be nearly-isometric, and easily extends to recovering the point-to-point correspondence in part-to-whole shape matching problems. Our numerical experiments demonstrate that the proposed approach leads to a significant improvement in accuracy in several challenging cases.
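The baseline this work improves upon can be sketched as follows: a functional map aligns the spectral embedding of one shape into the other's eigenbasis, and a pointwise map is read off by nearest-neighbour search between the embeddings. The bases and map below are random stand-ins, not the paper's regularized formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 10
phi_M = rng.standard_normal((n, k))  # stand-in eigenbasis of source shape M
phi_N = rng.standard_normal((n, k))  # stand-in eigenbasis of target shape N
C = rng.standard_normal((k, k))      # functional map from M's basis to N's

# Align M's spectral embedding into N's basis via C, then match each
# target point to its nearest aligned source point.
aligned = phi_M @ C.T                # rows: source points expressed in N's basis
d = ((phi_N[:, None, :] - aligned[None, :, :]) ** 2).sum(-1)
p2p = d.argmin(axis=1)               # p2p[j] = source point matched to target j
```

The paper's contribution, as the abstract states, is to replace this plain nearest-neighbour conversion with a regularized alignment of the two spectral embeddings.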
Deep Functional Maps: Structured Prediction for Dense Shape Correspondence
We introduce a new framework for learning dense correspondence between deformable 3D shapes. Existing learning-based approaches model shape correspondence as a labelling problem, where each point of a query shape receives a label identifying a point on some reference domain; the correspondence is then constructed a posteriori by composing the label predictions of two input shapes. We propose a paradigm shift and design a structured prediction model in the space of functional maps, linear operators that provide a compact representation of the correspondence. We model the learning process via a deep residual network which takes dense descriptor fields defined on two shapes as input, and outputs a soft map between the two given objects. The resulting correspondence is shown to be accurate on several challenging benchmarks comprising multiple categories, synthetic models, real scans with acquisition artifacts, topological noise, and partiality.
Comment: Accepted for publication at ICCV 2017
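The notion of a "soft map" produced from dense descriptor fields can be illustrated with a row-stochastic matrix built from descriptor similarities. Random descriptors stand in for the learned fields; this is an assumption-laden sketch, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 16
FX = rng.standard_normal((n, d))  # stand-in descriptor field on shape X
FY = rng.standard_normal((n, d))  # stand-in descriptor field on shape Y

# Soft map: entry (i, j) is the probability that point i on X
# corresponds to point j on Y, via a softmax over similarities.
logits = FX @ FY.T
logits -= logits.max(axis=1, keepdims=True)  # numerical stability
P = np.exp(logits)
P /= P.sum(axis=1, keepdims=True)            # each row sums to 1
hard = P.argmax(axis=1)                       # hardened correspondence
```

A soft map of this form is differentiable in the descriptors, which is what lets correspondence quality be used directly as a training signal.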
Deformable Shape Completion with Graph Convolutional Autoencoders
The availability of affordable and portable depth sensors has made scanning objects and people simpler than ever. However, dealing with occlusions and missing parts is still a significant challenge. The problem of reconstructing a (possibly non-rigidly moving) 3D object from a single or multiple partial scans has received increasing attention in recent years. In this work, we propose a novel learning-based method for the completion of partial shapes. Unlike the majority of existing approaches, our method focuses on objects that can undergo non-rigid deformations. The core of our method is a variational autoencoder with graph convolutional operations that learns a latent space for complete realistic shapes. At inference, we optimize to find the representation in this latent space that best fits the generated shape to the known partial input. The completed shape exhibits a realistic appearance on the unknown part. We show promising results towards the completion of synthetic and real scans of human body and face meshes exhibiting different styles of articulation and partiality.
Comment: CVPR 2018
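The inference step described above, optimizing a latent code so the decoder's output fits a known partial observation, can be sketched with a toy linear decoder in place of the graph-convolutional one. Everything here (the decoder `D`, the mask, the learning rate) is an illustrative assumption.

```python
import numpy as np

# Toy linear "decoder" mapping an 8-D latent code to 100 shape values.
rng = np.random.default_rng(1)
d_lat, n_pts = 8, 100
D = rng.standard_normal((n_pts, d_lat))
z_true = rng.standard_normal(d_lat)
x_full = D @ z_true                 # the complete shape we hope to recover

mask = np.zeros(n_pts, dtype=bool)
mask[:60] = True                    # only the first 60 values are observed

# Gradient descent on the latent code, fitting only the observed part.
z = np.zeros(d_lat)
lr = 0.01
for _ in range(500):
    residual = D[mask] @ z - x_full[mask]
    z -= lr * (D[mask].T @ residual)

x_completed = D @ z                 # decoder fills in the unobserved part
```

Because the decoder is constrained to produce complete, plausible shapes, fitting it to the observed part also determines the missing part, which is the core idea of the completion scheme the abstract describes.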