288 research outputs found
Efficient Deformable Shape Correspondence via Kernel Matching
We present a method to match three dimensional shapes under non-isometric
deformations, topology changes and partiality. We formulate the problem as
matching between a set of pair-wise and point-wise descriptors, imposing a
continuity prior on the mapping, and propose a projected descent optimization
procedure inspired by difference of convex functions (DC) programming.
Surprisingly, in spite of the highly non-convex nature of the resulting
quadratic assignment problem, our method converges to a semantically meaningful
and continuous mapping in most of our experiments, and scales well. We provide
preliminary theoretical analysis and several interpretations of the method.
Comment: Accepted for oral presentation at 3DV 2017, including supplementary material
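As a rough illustration of the projected-descent idea above, the sketch below runs a linearize-then-project loop on a toy quadratic-assignment relaxation: each step takes an exponentiated-gradient step on the (non-convex) matching objective and projects back onto the doubly-stochastic set with Sinkhorn scaling. This is a minimal stand-in, not the paper's algorithm; all names, kernels, and parameters are illustrative.

```python
import numpy as np

def sinkhorn(M, iters=50):
    # KL-project a positive matrix onto the doubly-stochastic set by
    # alternating row and column normalization.
    P = M.copy()
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

def projected_descent_match(KA, KB, steps=100, lr=1.0):
    """Toy projected ascent on the matching objective tr(KB P KA P^T),
    where KA, KB are symmetric descriptor kernels of the two shapes.
    Each iteration linearizes the quadratic objective around the current
    iterate (the gradient is proportional to KB @ P @ KA) and projects
    back, loosely echoing the DC-programming flavor of the method."""
    n = KA.shape[0]
    P = np.full((n, n), 1.0 / n)  # start from the uniform coupling
    for _ in range(steps):
        grad = KB @ P @ KA
        P = sinkhorn(P * np.exp(lr * grad))
    return P
```

On small synthetic kernels the iterates stay (approximately) doubly stochastic while the objective increases monotonically, which is the behavior the linearize-and-project argument predicts.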
Spectral Generalized Multi-Dimensional Scaling
Multidimensional scaling (MDS) is a family of methods that embed a given set
of points into a simple, usually flat, domain. The points are assumed to be
sampled from some metric space, and the mapping attempts to preserve the
distances between each pair of points in the set. Distances in the target space
can be computed analytically in this setting. Generalized MDS is an extension
that allows mapping one metric space into another, that is, multidimensional
scaling into target spaces in which distances are evaluated numerically rather
than analytically. Here, we propose an efficient approach for computing such
mappings between surfaces based on their natural spectral decomposition, where
the surfaces are treated as sampled metric spaces. The resulting spectral-GMDS
procedure enables efficient embedding by implicitly incorporating smoothness of
the mapping into the problem, thereby substantially reducing the complexity
involved in its solution while practically overcoming its non-convex nature.
The method is compared to existing techniques that compute dense correspondence
between shapes. Numerical experiments demonstrate the efficiency and accuracy
of the proposed method compared to state-of-the-art approaches.
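The flat-target special case the abstract starts from can be made concrete: classical (Torgerson) MDS embeds a point set from its pairwise distances via the spectral decomposition of the double-centered squared-distance matrix. GMDS replaces the analytic target distances with numerically evaluated geodesics; the sketch below covers only the classical spectral case.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points with pairwise distance matrix D into
    R^dim. Double-center the squared distances to recover the Gram
    matrix of the centered points, then keep the top-dim eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered points
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # top-dim eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For distances that are exactly Euclidean in R^dim, the embedding reproduces them up to a rigid motion; for geodesic distances on a curved surface, the same construction gives the least-squares flat approximation.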
Learning shape correspondence with anisotropic convolutional neural networks
Establishing correspondence between shapes is a fundamental problem in
geometry processing, arising in a wide variety of applications. The problem is
especially difficult in the setting of non-isometric deformations, as well as
in the presence of topological noise and missing parts, mainly due to the
limited capability to model such deformations axiomatically. Several recent
works showed that invariance to complex shape transformations can be learned
from examples. In this paper, we introduce an intrinsic convolutional neural
network architecture based on anisotropic diffusion kernels, which we term
Anisotropic Convolutional Neural Network (ACNN). In our construction, we
generalize convolutions to non-Euclidean domains by constructing a set of
oriented anisotropic diffusion kernels, creating in this way a local intrinsic
polar representation of the data ('patch'), which is then correlated with a
filter. Several cascades of such filters, linear, and non-linear operators are
stacked to form a deep neural network whose parameters are learned by
minimizing a task-specific cost. We use ACNNs to effectively learn intrinsic
dense correspondences between deformable shapes in very challenging settings,
achieving state-of-the-art results on some of the most difficult recent
correspondence benchmarks.
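The oriented-kernel construction can be illustrated in a flat 2D stand-in: build a bank of anisotropic Gaussian kernels elongated along different orientations and correlate each with a local patch. On a surface, ACNN derives the analogous kernels from anisotropic heat diffusion; everything below (sizes, sigmas, the orientation count) is an illustrative choice, not the paper's construction.

```python
import numpy as np

def anisotropic_kernel(size=7, sigma_u=2.0, sigma_v=0.6, theta=0.0):
    # One oriented anisotropic Gaussian: elongated along direction
    # theta (scale sigma_u), narrow across it (scale sigma_v).
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    u = np.cos(theta) * x + np.sin(theta) * y    # along-orientation coord
    v = -np.sin(theta) * x + np.cos(theta) * y   # across-orientation coord
    K = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return K / K.sum()

def oriented_responses(patch, n_orient=4):
    # Correlate one patch with a bank of rotated kernels; in the network
    # a learned filter would then mix these oriented responses.
    thetas = np.pi * np.arange(n_orient) / n_orient
    return np.array([(anisotropic_kernel(patch.shape[0], theta=t) * patch).sum()
                     for t in thetas])
```

A horizontal edge responds most strongly to the kernel oriented along it, which is the directional selectivity the anisotropic construction is after.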
Neural Semantic Surface Maps
We present an automated technique for computing a map between two genus-zero
shapes, which matches semantically corresponding regions to one another. Lack
of annotated data prohibits direct inference of 3D semantic priors; instead,
current state-of-the-art methods predominantly optimize geometric properties or
require varying amounts of manual annotation. To overcome the lack of annotated
training data, we distill semantic matches from pre-trained vision models: our
method renders the pair of 3D shapes from multiple viewpoints; the resulting
renders are then fed into an off-the-shelf image-matching method which
leverages a pretrained visual model to produce feature points. This yields
semantic correspondences, which can be projected back to the 3D shapes,
producing a raw matching that is inaccurate and inconsistent between different
viewpoints. These correspondences are refined and distilled into an
inter-surface map by a dedicated optimization scheme, which promotes
bijectivity and continuity of the output map. We illustrate that our approach
can generate semantic surface-to-surface maps, eliminating manual annotations
or any 3D training data requirement. Furthermore, it proves effective in
scenarios with high semantic complexity, where objects are non-isometrically
related, as well as in situations where they are nearly isometric.
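One step of the pipeline above, lifting raw per-view pixel matches to the surfaces and aggregating them across viewpoints, can be sketched as a simple majority vote. This is a toy stand-in for the distillation stage only (the paper refines the result further with an optimization promoting bijectivity and continuity); the function and argument names here are hypothetical.

```python
from collections import Counter, defaultdict

def aggregate_matches(view_matches, pix2vertA, pix2vertB):
    """Lift pixel-level matches to vertex-level correspondences.
    view_matches: {view_id: [(pixel_on_A, pixel_on_B), ...]}
    pix2vertA/B:  {view_id: {pixel: vertex_id}} from each render.
    Matches consistent across viewpoints accumulate votes; each vertex
    of A keeps its most-voted partner on B."""
    votes = defaultdict(Counter)
    for view, matches in view_matches.items():
        for pa, pb in matches:
            va = pix2vertA[view].get(pa)
            vb = pix2vertB[view].get(pb)
            if va is not None and vb is not None:  # pixel hit the shape
                votes[va][vb] += 1
    return {va: counts.most_common(1)[0][0] for va, counts in votes.items()}
```

Voting suppresses the view-dependent inconsistency of the raw 2D matches before any continuous map is fit.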
- …