Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to
improve the mathematical understanding of convolutional neural networks.
Inspired by recent interest in geometric deep learning, which aims to
generalize convolutional neural networks to manifold and graph-structured
domains, we define a geometric scattering transform on manifolds. Similar to
the Euclidean scattering transform, the geometric scattering transform is based
on a cascade of wavelet filters and pointwise nonlinearities. It is invariant
to local isometries and stable to certain types of diffeomorphisms. Empirical
results demonstrate its utility on several geometric learning tasks. Our
results generalize the deformation stability and local translation invariance
of Euclidean scattering, and demonstrate the importance of linking the filter
structures used to the underlying geometry of the data.
Comment: 35 pages; 3 figures; 2 tables; v3: revisions based on reviewer comments
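The abstract does not spell out the cascade, but its structure mirrors Euclidean scattering. Below is a minimal sketch, assuming the manifold is discretized as a graph and using diffusion wavelets W_j = P^(2^(j-1)) - P^(2^j) built from a lazy random-walk operator P, one common construction; the paper defines its wavelets on the manifold itself, so all names and the cycle-graph example here are illustrative.

```python
# Minimal sketch of a wavelet scattering cascade on a discretized manifold.
# Wavelets are diffusion wavelets W_j = P^(2^(j-1)) - P^(2^j) built from a
# lazy diffusion operator P; the cascade alternates wavelet filtering with a
# pointwise modulus nonlinearity, as in the Euclidean scattering transform.
import numpy as np

def diffusion_operator(A):
    """Lazy diffusion operator P = (I + D^{-1} A) / 2."""
    d = A.sum(axis=1)
    return 0.5 * (np.eye(A.shape[0]) + A / d[:, None])

def diffusion_wavelets(P, J):
    """Wavelets W_j = P^(2^(j-1)) - P^(2^j) for j = 1..J."""
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    return [powers[j - 1] - powers[j] for j in range(1, J + 1)]

def scattering_coefficients(x, wavelets, depth=2):
    """Collect averaged coefficients from a depth-limited modulus cascade."""
    layers, coeffs = [x], [x.mean()]
    for _ in range(depth):
        layers = [np.abs(W @ u) for u in layers for W in wavelets]
        coeffs.extend(u.mean() for u in layers)
    return np.array(coeffs)

# Toy usage: a cycle graph (a crude discretization of the circle S^1).
n = 64
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
x = np.sin(2 * np.pi * np.arange(n) / n)
S = scattering_coefficients(x, diffusion_wavelets(diffusion_operator(A), J=3))
print(S.shape)  # (13,): 1 + 3 + 9 coefficients for depth 2, J = 3
```

The averaging in the last step is what yields the local invariance: the mean over the domain discards the precise placement of features, while the modulus cascade retains their multiscale energy distribution.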
Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs
This paper studies the relationship between a graph neural network (GNN) and
a manifold neural network (MNN) when the graph is constructed from a set of
points sampled from the manifold, thus encoding geometric information. We
consider convolutional MNNs and GNNs where the manifold and the graph
convolutions are respectively defined in terms of the Laplace-Beltrami operator
and the graph Laplacian. Using the appropriate kernels, we analyze both dense
and moderately sparse graphs. We prove non-asymptotic error bounds showing that
convolutional filters and neural networks on these graphs converge to
convolutional filters and neural networks on the continuous manifold. As a
byproduct of this analysis, we observe an important trade-off between the
discriminability of graph filters and their ability to approximate the desired
behavior of manifold filters. We then discuss how this trade-off is ameliorated
in neural networks due to the frequency mixing property of nonlinearities. We
further derive a transferability corollary for geometric graphs sampled from
the same manifold. We validate our results numerically on a navigation control
problem and a point cloud classification task.
Comment: 16 pages, 6 figures, 3 tables
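To make the setup concrete, here is a minimal sketch of the pipeline the paper analyzes: a graph built from manifold samples via a dense Gaussian kernel, and a polynomial graph convolution defined through the graph Laplacian as the discrete analogue of a Laplace-Beltrami filter. The bandwidth eps, the filter coefficients, and the circle example are illustrative assumptions, and the paper's convergence bounds also depend on a specific normalization of the Laplacian that this sketch omits.

```python
# Minimal sketch: geometric graph from manifold samples + polynomial filter.
import numpy as np

def geometric_graph_laplacian(points, eps):
    """Dense Gaussian-kernel graph Laplacian L = D - W on sampled points."""
    sq = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / eps)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

def graph_filter(L, x, coeffs):
    """Apply the graph convolution h(L) x = sum_k coeffs[k] * L^k x."""
    out, Lx = np.zeros_like(x), x.copy()
    for h_k in coeffs:
        out += h_k * Lx
        Lx = L @ Lx
    return out

# Toy usage: points sampled uniformly on S^1, a low-order polynomial filter.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
L = geometric_graph_laplacian(pts, eps=0.1)
x = np.sin(theta)                      # a manifold signal sampled at the points
y = graph_filter(L, x, coeffs=[1.0, -0.05, 0.001])
```

The trade-off the paper identifies shows up here: sharper polynomial filters separate nearby Laplacian eigenvalues better (discriminability) but amplify the sampling-induced eigenvalue perturbations, worsening the approximation of the manifold filter.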
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels
We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant
of deep neural networks for irregular structured and geometric input, e.g.,
graphs or meshes. Our main contribution is a novel convolution operator based
on B-splines that makes the computation time independent of the kernel size
due to the local support property of the B-spline basis functions. As a result,
we obtain a generalization of the traditional CNN convolution operator by using
continuous kernel functions parametrized by a fixed number of trainable
weights. In contrast to related approaches that filter in the spectral domain,
the proposed method aggregates features purely in the spatial domain. In
addition, SplineCNN allows fully end-to-end training of deep architectures,
using only the geometric structure as input, instead of handcrafted feature
descriptors. For validation, we apply our method on tasks from the fields of
image graph classification, shape correspondence and graph node classification,
and show that it outperforms or is on par with state-of-the-art approaches while
being significantly faster and having favorable properties such as domain independence.
Comment: Presented at CVPR 2018
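The local-support argument is easiest to see in one pseudo-coordinate dimension with degree-1 (linear) B-splines. The sketch below is a simplified reading of that idea, not the paper's implementation, which handles multiple pseudo-coordinate dimensions, higher spline degrees, and vector-valued features; all names and the scalar-feature setting are illustrative.

```python
# Minimal sketch of the SplineCNN idea in one pseudo-coordinate dimension.
# The kernel g(u) = sum_i w_i B_i(u) is a continuous function of the edge
# pseudo-coordinate u in [0, 1]; because each degree-1 B-spline has local
# support, at most two basis functions are nonzero at any u, so the cost per
# edge is independent of the kernel size m.
import numpy as np

def spline_weights(u, m):
    """Indices and values of the (at most two) nonzero linear B-splines at u."""
    t = u * (m - 1)                    # position in knot coordinates
    i = min(int(t), m - 2)             # left control point index
    frac = t - i
    return (i, i + 1), (1.0 - frac, frac)

def spline_conv_node(x, neighbors, pseudo, w):
    """Aggregate neighbor features weighted by the spline kernel g(u)."""
    m = w.shape[0]
    out = 0.0
    for j, u in zip(neighbors, pseudo):
        (i0, i1), (b0, b1) = spline_weights(u, m)
        out += (w[i0] * b0 + w[i1] * b1) * x[j]   # g(u) * x_j
    return out / max(len(neighbors), 1)

# Toy usage: one node with 3 neighbors, kernel size m = 5 trainable weights.
x = np.array([0.2, -1.0, 0.5, 0.8])          # scalar feature per node
w = np.random.default_rng(1).normal(size=5)  # trainable control weights
y = spline_conv_node(x, neighbors=[1, 2, 3], pseudo=[0.1, 0.5, 0.9], w=w)
```

Since g is differentiable in the control weights w, the whole operator trains by ordinary backpropagation, which is what permits the end-to-end training the abstract describes.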
Stability and Generalization Capabilities of Message Passing Graph Neural Networks
Message passing neural networks (MPNNs) have seen a steep rise in popularity
since their introduction as generalizations of convolutional neural networks to
graph structured data, and are now considered state-of-the-art tools for
solving a large variety of graph-focused problems. We study the generalization
capabilities of MPNNs in graph classification. We assume that graphs of
different classes are sampled from different random graph models. Based on this
data distribution, we derive a non-asymptotic bound on the generalization gap
between the empirical and statistical loss, that decreases to zero as the
graphs become larger. This is proven by showing that an MPNN applied to a
graph approximates the MPNN applied to the geometric model that the graph
discretizes.
Comment: 44 pages, typos corrected, preprint
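As a concrete reference point, here is a minimal sketch of one message passing layer of the kind studied here, with degree-normalized (mean) aggregation and a permutation-invariant readout for graph classification. The linear message and update maps, the ReLU choice, and all shapes are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of one MPNN layer: each node averages messages from its
# neighbors and updates its own state from the pair (own state, aggregate).
# Mean aggregation is a natural choice here, since the generalization argument
# rests on the MPNN output stabilizing as graphs sampled from the same random
# graph model grow.
import numpy as np

def mpnn_layer(H, A, W_msg, W_upd):
    """One layer: H' = relu([H | mean-aggregated messages] @ W_upd)."""
    M = np.maximum(H @ W_msg, 0.0)               # per-node messages
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    agg = (A @ M) / deg                          # mean over neighbors
    return np.maximum(np.concatenate([H, agg], axis=1) @ W_upd, 0.0)

# Toy usage: 4-node path graph, 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 2))
H1 = mpnn_layer(H, A, W_msg=rng.normal(size=(2, 8)),
                W_upd=rng.normal(size=(10, 8)))
# Graph-level readout for classification: a permutation-invariant mean.
graph_embedding = H1.mean(axis=0)
```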
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.