Multi-directional Geodesic Neural Networks via Equivariant Convolution
We propose a novel approach for performing convolution of signals on curved
surfaces and show its utility in a variety of geometric deep learning
applications. Key to our construction is the notion of directional functions
defined on the surface, which extend the classic real-valued signals and which
can be naturally convolved with real-valued template functions. As a
result, rather than trying to fix a canonical orientation or only keeping the
maximal response across all alignments of a 2D template at every point of the
surface, as done in previous works, we show how information from all
rotations can be retained across the different layers of the neural network. Our
construction, which we call multi-directional geodesic convolution, or
directional convolution for short, makes it possible, in particular, to propagate and
relate directional information across layers and thus different regions on the
shape. We first define directional convolution in the continuous setting, prove
its key properties and then show how it can be implemented in practice, for
shapes represented as triangle meshes. We evaluate directional convolution in a
wide variety of learning scenarios ranging from classification of signals on
surfaces, to shape segmentation and shape matching, where we show a significant
improvement over several baselines.
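To make the idea of keeping all rotation responses concrete, here is a minimal, self-contained sketch, not the authors' implementation: directional signals carry an explicit direction axis, a shared template is applied at every discrete rotation, and the output keeps one response per rotation so later layers can still relate directions. The layer name, the use of torch.roll over direction bins, and the omission of geodesic neighbourhood gathering are all simplifying assumptions.

```python
# Toy "multi-directional" layer: keeps responses for all R discrete rotations of
# a template instead of max-pooling over them. A real geodesic convolution would
# also gather values from neighbouring points along geodesic rays; here we only
# illustrate how the directional axis can be preserved from layer to layer.
import torch
import torch.nn as nn

class ToyDirectionalConv(nn.Module):
    def __init__(self, in_channels, out_channels, num_directions):
        super().__init__()
        self.R = num_directions
        # One shared template applied at every rotation of the directional signal.
        self.template = nn.Linear(in_channels * num_directions, out_channels)

    def forward(self, x):
        # x: (num_points, R, in_channels) -- a directional function per point.
        n, R, c = x.shape
        responses = []
        for r in range(R):
            rotated = torch.roll(x, shifts=r, dims=1)          # rotate the direction bins
            responses.append(self.template(rotated.reshape(n, R * c)))
        # Keep all rotation responses: the output is again directional, (n, R, out_channels).
        return torch.stack(responses, dim=1)

# Example: 1000 points, 8 directions, 16 -> 32 channels.
feats = torch.randn(1000, 8, 16)
layer = ToyDirectionalConv(16, 32, 8)
print(layer(feats).shape)  # torch.Size([1000, 8, 32])
```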
Smoothed Graph Contrastive Learning via Seamless Proximity Integration
Graph contrastive learning (GCL) aligns node representations by classifying
node pairs into positives and negatives using a selection process that
typically relies on establishing correspondences within two augmented graphs.
The conventional GCL approaches incorporate negative samples uniformly in the
contrastive loss, resulting in the equal treatment of negative nodes regardless
of their proximity to the true positive. In this paper, we present a Smoothed
Graph Contrastive Learning model (SGCL), which leverages the geometric
structure of augmented graphs to inject proximity information associated with
positive/negative pairs into the contrastive loss, thus significantly
regularizing the learning process. The proposed SGCL adjusts the penalties
associated with node pairs in the contrastive loss by incorporating three
distinct smoothing techniques that result in proximity-aware positives and
negatives. To enhance scalability for large-scale graphs, the proposed
framework incorporates a graph batch-generating strategy that partitions the
given graphs into multiple subgraphs, facilitating efficient training in
separate batches. Through extensive experimentation in the unsupervised setting
on various benchmarks, particularly those of large scale, we demonstrate the
superiority of our proposed framework over recent baselines.
Comment: 17 pages.
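The following is an illustrative sketch of the general idea of proximity-aware negatives in an InfoNCE-style loss. The function name, the specific (1 - proximity) weighting, and the way proximity scores would be obtained are assumptions made for illustration, not the paper's exact smoothing techniques.

```python
# Sketch: each negative pair is down-weighted according to its proximity to the
# anchor, rather than all negatives being treated equally in the denominator.
import torch
import torch.nn.functional as F

def smoothed_contrastive_loss(z1, z2, proximity, tau=0.5):
    # z1, z2: (N, d) embeddings of the same nodes in two augmented graphs.
    # proximity: (N, N) scores in [0, 1]; higher means a candidate negative is
    # closer to the anchor's positive, so its penalty is softened.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                           # cross-view similarities
    pos = torch.diag(sim)                             # positives: same node across views
    weights = (1.0 - proximity).fill_diagonal_(1.0)   # smooth the negatives' contribution
    denom = (weights * sim.exp()).sum(dim=1)
    return -(pos - denom.log()).mean()

# Example with random embeddings and proximity scores.
z1, z2 = torch.randn(64, 32), torch.randn(64, 32)
proximity = torch.rand(64, 64)
print(smoothed_contrastive_loss(z1, z2, proximity))
```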
NCP: Neural Correspondence Prior for Effective Unsupervised Shape Matching
We present Neural Correspondence Prior (NCP), a new paradigm for computing
correspondences between 3D shapes. Our approach is fully unsupervised and can
lead to high-quality correspondences even in challenging cases such as sparse
point clouds or non-isometric meshes, where current methods fail. Our first key
observation is that, in line with neural priors observed in other domains,
recent network architectures on 3D data, even without training, tend to produce
pointwise features that induce plausible maps between rigid or non-rigid
shapes. Secondly, we show that given a noisy map as input, training a feature
extraction network with the input map as supervision tends to remove artifacts
from the input and can act as a powerful correspondence denoising mechanism,
both between individual pairs and within a collection. With these observations
in hand, we propose a two-stage unsupervised paradigm for shape matching: (i)
adapting an existing unsupervised approach to obtain an
initial set of noisy matches, and (ii) using these matches to train a network
in a supervised manner. We demonstrate that this approach significantly
improves the accuracy of the maps, especially when trained within a collection.
We show that NCP is data-efficient, fast, and achieves state-of-the-art results
on many tasks. Our code can be found online: https://github.com/pvnieo/NCP.
Comment: NeurIPS 2022, 10 pages, 9 figures.
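A condensed sketch of stage (ii) under simplifying assumptions: noisy matches from stage (i) serve as pseudo-labels in a cross-entropy objective over feature similarities, and the final map is read off by nearest neighbours in feature space. The backbone, temperature, and helper names below are placeholders, not NCP's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def supervised_step(feat_net, shape_a, shape_b, noisy_map, optimizer, tau=0.07):
    # noisy_map[i] = index on shape_b matched to point i of shape_a (from stage (i)).
    fa = F.normalize(feat_net(shape_a), dim=1)   # (Na, d) per-point features
    fb = F.normalize(feat_net(shape_b), dim=1)   # (Nb, d)
    logits = fa @ fb.t() / tau                   # similarity of every cross-shape pair
    loss = F.cross_entropy(logits, noisy_map)    # pull noisily matched points together
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def extract_map(feat_net, shape_a, shape_b):
    # Final correspondences: nearest neighbour in the learned feature space.
    with torch.no_grad():
        fa, fb = feat_net(shape_a), feat_net(shape_b)
        return (fa @ fb.t()).argmax(dim=1)

# Toy usage with a point-wise MLP standing in for a real 3D backbone.
net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 32))
A, B = torch.randn(500, 3), torch.randn(500, 3)
noisy = torch.randint(0, 500, (500,))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
print(supervised_step(net, A, B, noisy, opt), extract_map(net, A, B).shape)
```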
Understanding and Improving Features Learned in Deep Functional Maps
Deep functional maps have recently emerged as a successful paradigm for
non-rigid 3D shape correspondence tasks. An essential step in this pipeline
consists in learning feature functions that are used as constraints to solve
for a functional map inside the network. However, the precise nature of the
information learned and stored in these functions is not yet well understood.
Specifically, a major question is whether these features can be used for any
other objective, apart from their purely algebraic role in solving for
functional map matrices. In this paper, we show that under some mild
conditions, the features learned within deep functional map approaches can be
used as point-wise descriptors and thus are directly comparable across
different shapes, even without the necessity of solving for a functional map at
test time. Furthermore, informed by our analysis, we propose effective
modifications to the standard deep functional map pipeline, which promote
structural properties of learned features, significantly improving the matching
results. Finally, we demonstrate that previously unsuccessful attempts at using
extrinsic architectures for deep functional map feature extraction can be
remedied via simple architectural changes, which encourage the theoretical
properties suggested by our analysis. We thus bridge the gap between intrinsic
and extrinsic surface-based learning, suggesting the necessary and sufficient
conditions for successful shape matching. Our code is available at
https://github.com/pvnieo/clover.
Comment: 16 pages, 8 figures, 8 tables, to be published in the 2023 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
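To spell out the contrast drawn in the abstract, here is a hedged sketch of the two routes from learned per-point features to a correspondence: solving for a functional map in a reduced spectral basis, versus comparing the features directly by nearest neighbour at test time. The least-squares formulation below is a generic textbook version, not necessarily the exact solver used in the paper.

```python
import torch

def functional_map_from_features(feat_a, feat_b, evecs_a, evecs_b):
    # The standard route: project descriptors into each shape's spectral basis
    # (k Laplace-Beltrami eigenfunctions) and solve C A ≈ B in the least-squares sense.
    A = torch.linalg.lstsq(evecs_a, feat_a).solution    # (k, d) coefficients on shape A
    B = torch.linalg.lstsq(evecs_b, feat_b).solution    # (k, d) coefficients on shape B
    C = torch.linalg.lstsq(A.t(), B.t()).solution.t()   # (k, k) functional map A -> B
    return C

def map_from_descriptors(feat_a, feat_b):
    # The direct route suggested by the analysis: nearest-neighbour matching of
    # the same learned features, with no functional map solved at test time.
    return torch.cdist(feat_a, feat_b).argmin(dim=1)    # for each point on A, its match on B

# Example with random stand-ins (1000/900 points, k = 30 basis functions, d = 128 features).
evecs_a, evecs_b = torch.randn(1000, 30), torch.randn(900, 30)
feat_a, feat_b = torch.randn(1000, 128), torch.randn(900, 128)
print(functional_map_from_features(feat_a, feat_b, evecs_a, evecs_b).shape)  # (30, 30)
print(map_from_descriptors(feat_a, feat_b).shape)                            # (1000,)
```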
Shape Non-rigid Kinematics (SNK): A Zero-Shot Method for Non-Rigid Shape Matching via Unsupervised Functional Map Regularized Reconstruction
We present Shape Non-rigid Kinematics (SNK), a novel zero-shot method for
non-rigid shape matching that eliminates the need for extensive training or
ground truth data. SNK operates on a single pair of shapes, and employs a
reconstruction-based strategy using an encoder-decoder architecture, which
deforms the source shape to closely match the target shape. During the process,
an unsupervised functional map is predicted and converted into a point-to-point
map, serving as a supervisory mechanism for the reconstruction. To aid in
training, we have designed a new decoder architecture that generates smooth,
realistic deformations. SNK demonstrates competitive results on traditional
benchmarks, simplifying the shape-matching process without compromising
accuracy. Our code can be found online: https://github.com/pvnieo/SNK
Comment: NeurIPS 2023, 10 pages, 9 figures.
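A heavily simplified sketch of a zero-shot loop in the spirit of SNK: per-point features yield a soft source-to-target map, a toy decoder deforms the source, and the reconstruction is supervised by that map. The encoder, decoder, global code, and soft-map construction are illustrative stand-ins; SNK's actual pipeline predicts a functional map and converts it into a point-to-point map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_shot_step(encoder, decoder, src, tgt, optimizer, tau=0.07):
    # src: (N, 3), tgt: (M, 3) -- the single shape pair the method operates on.
    f_src, f_tgt = encoder(src), encoder(tgt)              # per-point features
    soft_p2p = F.softmax(f_src @ f_tgt.t() / tau, dim=1)   # (N, M) soft map src -> tgt
    deformed = decoder(src, f_tgt.mean(dim=0))             # deform the source toward the target
    # Reconstruction supervised by the map: each deformed source point should
    # land on the target location that the (soft) map assigns to it.
    loss = F.mse_loss(deformed, soft_p2p @ tgt)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

class ToyDecoder(nn.Module):
    # Predicts a per-point displacement of the source, conditioned on a target code.
    def __init__(self, code_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + code_dim, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, pts, code):
        code = code.expand(pts.shape[0], -1)
        return pts + self.mlp(torch.cat([pts, code], dim=1))

encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 32))
decoder = ToyDecoder()
src, tgt = torch.randn(400, 3), torch.randn(450, 3)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
print(zero_shot_step(encoder, decoder, src, tgt, opt))
```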
Joint Symmetry Detection and Shape Matching for Non-Rigid Point Cloud
Despite the success of deep functional maps in non-rigid 3D shape matching,
there exists no learning framework that models both self-symmetry and shape
matching simultaneously, even though errors due to symmetry
mismatch are a major challenge in non-rigid shape matching. In this paper, we
propose a novel framework that simultaneously learns both a self-symmetry map and
a pairwise map between a pair of shapes. Our key idea is to couple the self-symmetry
map and the pairwise map through a regularization term that provides a
joint constraint on both of them, thereby leading to more accurate maps. We
validate our method on several benchmarks where it outperforms many competitive
baselines on both tasks.
Comment: Under review. arXiv admin note: substantial text overlap with arXiv:2110.0299
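One natural way to write such a coupling, shown here purely as a hedged illustration in functional-map notation, is to ask that mapping and then applying the target's self-symmetry agrees with applying the source's self-symmetry and then mapping; the regularization term actually used in the paper may differ.

```python
import torch

def symmetry_coupling_loss(C12, Csym1, Csym2):
    # All arguments are (k, k) functional-map matrices in reduced spectral bases:
    # C12 maps shape 1 to shape 2, Csym1/Csym2 are the self-symmetry maps.
    return torch.linalg.matrix_norm(C12 @ Csym1 - Csym2 @ C12) ** 2

# Example with random maps of spectral size k = 30.
k = 30
C12, Csym1, Csym2 = torch.randn(k, k), torch.randn(k, k), torch.randn(k, k)
print(symmetry_coupling_loss(C12, Csym1, Csym2))
```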
Generalizable Local Feature Pre-training for Deformable Shape Analysis
Transfer learning is fundamental for addressing problems in settings with
little training data. While several transfer learning approaches have been
proposed in 3D, unfortunately, these solutions typically operate on an entire
3D object or even at the scene level and thus, as we show, fail to generalize to new
classes, such as deformable organic shapes. In addition, there is currently a
lack of understanding of what makes pre-trained features transferable across
significantly different 3D shape categories. In this paper, we make a step
toward addressing these challenges. First, we analyze the link between feature
locality and transferability in tasks involving deformable 3D objects, while
also comparing different backbones and losses for local feature pre-training.
We observe that with proper training, learned features can be useful in such
tasks, but, crucially, only with an appropriate choice of the receptive field
size. We then propose a differentiable method for optimizing the receptive
field within 3D transfer learning. Jointly, this leads to the first learnable
features that can successfully generalize to unseen classes of 3D shapes such
as humans and animals. Our extensive experiments show that this approach leads
to state-of-the-art results on several downstream tasks such as segmentation,
shape correspondence, and classification. Our code is available at
https://github.com/pvnieo/vader.
Comment: 16 pages, 14 figures, 7 tables, to be published in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
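As a hedged sketch of what a differentiable receptive field can look like (the paper's exact mechanism may differ), the pooling below weights neighbours with a soft sigmoid cutoff around a learnable radius, so the radius itself receives gradients during pre-training. The class name and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class SoftRadiusPooling(nn.Module):
    def __init__(self, init_radius=0.1, sharpness=50.0):
        super().__init__()
        self.log_radius = nn.Parameter(torch.log(torch.tensor(init_radius)))
        self.sharpness = sharpness

    def forward(self, points, feats):
        # points: (N, 3), feats: (N, d). Aggregate features inside a soft ball
        # of learnable radius around every point.
        radius = self.log_radius.exp()
        dist = torch.cdist(points, points)                    # (N, N) pairwise distances
        w = torch.sigmoid(self.sharpness * (radius - dist))   # soft neighbourhood membership
        w = w / w.sum(dim=1, keepdim=True)
        return w @ feats                                      # (N, d) pooled features

pool = SoftRadiusPooling()
pts, feats = torch.rand(256, 3), torch.randn(256, 32)
out = pool(pts, feats)
out.sum().backward()
print(pool.log_radius.grad)   # the receptive-field size receives a gradient
```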
AtomSurf : Surface Representation for Learning on Protein Structures
Recent advancements in Cryo-EM and protein structure prediction algorithms
have made large-scale protein structures accessible, paving the way for machine
learning-based functional annotations. The field of geometric deep learning
focuses on creating methods that operate on geometric data. An essential aspect of
learning from protein structures is representing these structures as a
geometric object (be it a grid, graph, or surface) and applying a learning
method tailored to this representation. The performance of a given approach
will then depend on both the representation and its corresponding learning
method.
In this paper, we investigate representing proteins as surfaces and incorporate them into an established representation benchmark.
Our first finding is that despite promising preliminary results, the surface
representation alone does not seem competitive with 3D grids. Building on this,
we introduce a synergistic approach, combining surface representations with
graph-based methods, resulting in a general framework that incorporates both
representations in learning. We show that using this combination, we are able
to obtain state-of-the-art results across the benchmark tasks. Our code
and data can be found online: https://github.com/Vincentx15/atom2D
Comment: 10 pages.
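A schematic of the kind of surface-graph combination described above, not the released implementation: a graph branch over atoms/residues and a surface branch over mesh vertices exchange features by projecting each vertex onto its nearest atom. Plain MLPs stand in for the actual GNN and surface convolutions, and all names are illustrative.

```python
import torch
import torch.nn as nn

class ToySurfaceGraphBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.graph_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.surf_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, atom_xyz, atom_feat, vert_xyz, vert_feat):
        atom_feat = self.graph_mlp(atom_feat)    # stand-in for a GNN layer on the graph branch
        vert_feat = self.surf_mlp(vert_feat)     # stand-in for a surface convolution
        nearest = torch.cdist(vert_xyz, atom_xyz).argmin(dim=1)   # vertex -> nearest atom
        # Each surface vertex receives its nearest atom's features and both views
        # are fused; a symmetric surface -> atom pass could be added as well.
        fused = self.mix(torch.cat([vert_feat, atom_feat[nearest]], dim=1))
        return atom_feat, fused

block = ToySurfaceGraphBlock()
a_xyz, a_feat = torch.rand(120, 3), torch.randn(120, 64)
v_xyz, v_feat = torch.rand(2000, 3), torch.randn(2000, 64)
print([t.shape for t in block(a_xyz, a_feat, v_xyz, v_feat)])
```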
SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence
In this work, we present a novel learning-based framework that combines the
local accuracy of contrastive learning with the global consistency of geometric
approaches, for robust non-rigid matching. We first observe that while
contrastive learning can lead to powerful point-wise features, the learned
correspondences commonly lack smoothness and consistency, owing to the purely
combinatorial nature of the standard contrastive losses. To overcome this
limitation, we propose to boost contrastive feature learning with two types of
smoothness regularization that inject geometric information into correspondence
learning. With this novel combination in hand, the resulting features are both
highly discriminative across individual points, and, at the same time, lead to
robust and consistent correspondences, through simple proximity queries. Our
framework is general and is applicable to local feature learning in both the 3D
and 2D domains. We demonstrate the superiority of our approach through
extensive experiments on a wide range of challenging matching benchmarks,
including 3D non-rigid shape correspondence and 2D image keypoint matching.
Comment: 3DV 2022. Code and data: https://github.com/craigleili/SRFea
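A hedged sketch of the overall recipe: a point-wise contrastive term plus a smoothness penalty that injects geometric structure, here written as a simple Dirichlet energy with a precomputed Laplacian. The paper's two regularizers are more specific; the function names, weighting, and toy Laplacian below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def srfeat_style_loss(feat_a, feat_b, gt_corr, L_a, L_b, lam=1.0, tau=0.07):
    # feat_a: (Na, d), feat_b: (Nb, d); gt_corr[i] = index on B matched to point i of A.
    # L_a, L_b: (Na, Na) and (Nb, Nb) Laplacian matrices of the two shapes.
    fa, fb = F.normalize(feat_a, dim=1), F.normalize(feat_b, dim=1)
    contrastive = F.cross_entropy(fa @ fb.t() / tau, gt_corr)     # local, point-wise term
    dirichlet = torch.trace(feat_a.t() @ (L_a @ feat_a)) \
              + torch.trace(feat_b.t() @ (L_b @ feat_b))          # global smoothness term
    return contrastive + lam * dirichlet / feat_a.shape[0]

def toy_laplacian(n):
    # A random symmetric, diagonally dominant (hence PSD) stand-in for a mesh Laplacian.
    W = torch.rand(n, n)
    W = (W + W.t()) / 2
    return torch.diag(W.sum(dim=1)) - W

feat_a, feat_b = torch.randn(300, 64), torch.randn(300, 64)
gt_corr = torch.randint(0, 300, (300,))
print(srfeat_style_loss(feat_a, feat_b, gt_corr, toy_laplacian(300), toy_laplacian(300)))
```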