Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
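As a concrete illustration of the kind of operator this survey covers, a minimal graph-convolution layer can be sketched in a few lines of NumPy. The normalization scheme below is one common choice (a GCN-style layer with symmetric normalization), not a construction taken from this particular paper:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: add self-loops, symmetrically normalize
    the adjacency matrix, then propagate features and apply a linear map.
    A generic sketch of one operator from the geometric deep learning family."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)    # propagate + ReLU

# Tiny path graph 0-1-2 with 2-dimensional node features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3, 2)
W = np.eye(2)
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2)
```

Because the propagation is driven by the (normalized) adjacency matrix rather than a fixed pixel grid, the same layer applies to any graph.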
Geometric deep learning
The goal of these course notes is to describe the main mathematical ideas behind geometric deep learning and to provide implementation details for several applications in shape analysis and synthesis, computer vision and computer graphics. The text in the course materials is primarily based on previously published work. With these notes we gather and provide a clear picture of the key concepts and techniques that fall under the umbrella of geometric deep learning, and illustrate the applications they enable. We also aim to provide practical implementation details for the methods presented in these works, as well as suggest further readings and extensions of these ideas.
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels
We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant
of deep neural networks for irregular structured and geometric input, e.g.,
graphs or meshes. Our main contribution is a novel convolution operator based
on B-splines, that makes the computation time independent from the kernel size
due to the local support property of the B-spline basis functions. As a result,
we obtain a generalization of the traditional CNN convolution operator by using
continuous kernel functions parametrized by a fixed number of trainable
weights. In contrast to related approaches that filter in the spectral domain,
the proposed method aggregates features purely in the spatial domain. In
addition, SplineCNN allows complete end-to-end training of deep architectures,
using only the geometric structure as input, instead of handcrafted feature
descriptors. For validation, we apply our method on tasks from the fields of
image graph classification, shape correspondence and graph node classification,
and show that it matches or outperforms state-of-the-art approaches while being
significantly faster and having favorable properties such as domain independence.
Comment: Presented at CVPR 201
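The local-support argument can be illustrated with a 1-D sketch: a degree-1 B-spline basis over [0, 1] has at most two nonzero basis functions at any coordinate, so each neighbor touches a fixed number of weights no matter how large the kernel is. The function names, the 1-D pseudo-coordinates, and the weight-tensor shape below are simplifications for illustration, not the paper's implementation:

```python
import numpy as np

def linear_bspline_weights(u, kernel_size):
    """For a pseudo-coordinate u in [0, 1], return the indices and values of
    the (at most two) degree-1 B-spline basis functions nonzero at u.
    Only 2 of the kernel_size basis functions are ever active: this local
    support is what makes the cost independent of kernel size."""
    pos = u * (kernel_size - 1)        # position on the knot grid
    i = min(int(pos), kernel_size - 2)
    frac = pos - i
    return (i, i + 1), (1.0 - frac, frac)

def spline_conv_node(neighbor_feats, pseudo_coords, weights):
    """Aggregate neighbor features, weighting each by the continuous kernel
    evaluated at its pseudo-coordinate.
    weights: (kernel_size, in_dim, out_dim) trainable tensor (a simplified,
    hypothetical 1-D layout of the trainable weights)."""
    out = np.zeros(weights.shape[2])
    for x, u in zip(neighbor_feats, pseudo_coords):
        idx, vals = linear_bspline_weights(u, weights.shape[0])
        for i, b in zip(idx, vals):
            out += b * (x @ weights[i])   # basis value scales the linear map
    return out / max(len(neighbor_feats), 1)

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3, 4))            # 5 kernel partitions, 3 -> 4 dims
feats = [np.ones(3), np.zeros(3)]         # two neighbors
coords = [0.25, 0.8]                      # their pseudo-coordinates in [0, 1]
y = spline_conv_node(feats, coords, W)
print(y.shape)  # (4,)
```

The basis values at any coordinate sum to one (partition of unity), so the kernel varies smoothly with the pseudo-coordinate.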
Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs
We propose a simple and generic layer formulation that extends the properties
of convolutional layers to any domain that can be described by a graph. Namely,
we use the support of its adjacency matrix to design learnable weight sharing
filters able to exploit the underlying structure of signals in the same fashion
as for images. The proposed formulation makes it possible to learn the weights
of the filter as well as a scheme that controls how they are shared across the
graph. We perform validation experiments with image datasets and show that
these filters offer performance comparable with convolutional ones.
Comment: To appear in 2017 5th IEEE Global Conference on Signal and
Information Processing, 5 pages, 3 figures, 3 tables
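A rough sketch of the idea, assuming a soft per-edge assignment tensor S (a simplification of the learned sharing scheme described above): each edge in the support of the adjacency matrix picks its scalar weight as a learned mixture of k shared filter weights, mimicking how a convolution assigns kernel weights by relative pixel position:

```python
import numpy as np

def shared_weight_filter(A, X, w, S):
    """Graph filter with a learned weight-sharing scheme (simplified sketch).
    A : (n, n) adjacency matrix; its support says which pairs may interact.
    w : (k,) shared filter weights.
    S : (n, n, k) soft assignment of each edge to the k shared weights.
    Each edge (i, j) receives the scalar weight S[i, j] . w; masking by A
    restricts interactions to the support of the adjacency matrix."""
    W_edge = (S @ w) * A     # per-edge scalar weights, masked by the support
    return W_edge @ X        # weighted aggregation of neighboring signals

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # path graph 0-1-2
rng = np.random.default_rng(1)
S = rng.random((3, 3, 2))                # hypothetical soft assignments
w = np.array([0.5, -0.2])                # k = 2 shared weights
X = np.eye(3)                            # one-hot node signals
Y = shared_weight_filter(A, X, w, S)
print(Y.shape)  # (3, 3)
```

With one-hot inputs, entries outside the adjacency support stay exactly zero, showing that the filter only mixes signals along existing edges.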
CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters
The rise of graph-structured data such as social networks, regulatory
networks, citation graphs, and functional brain networks, in combination with
the resounding success of deep learning in various applications, has sparked
interest in generalizing deep learning models to non-Euclidean domains. In this
paper, we introduce a new spectral domain convolutional architecture for deep
learning on graphs. The core ingredient of our model is a new class of
parametric rational complex functions (Cayley polynomials) that allow spectral
filters on graphs to be computed efficiently and to specialize in frequency
bands of interest. Our model generates rich spectral filters that are localized
in space; it scales linearly with the size of the input data for
sparsely connected graphs and can handle different constructions of Laplacian
operators. Extensive experimental results show the superior performance of our
approach, in comparison to other spectral domain convolutional architectures,
on spectral image classification, community detection, vertex classification
and matrix completion tasks.
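The functional form of such a filter can be sketched directly on a set of Laplacian eigenvalues: g(lam) = c0 + 2 Re(sum_j c_j C(h*lam)^j), where C(x) = (x - i) / (x + i) is the Cayley transform and h is the spectral zoom parameter. The coefficient values below are hypothetical, and the paper evaluates such filters without an eigendecomposition (via Jacobi iterations); this direct evaluation only illustrates the shape of the response:

```python
import numpy as np

def cayley_filter_response(lam, c, h):
    """Evaluate a Cayley filter g(lam) = c0 + 2*Re(sum_j c_j * C(h*lam)^j)
    on an array of eigenvalues, with C(x) = (x - 1j) / (x + 1j).
    c: coefficients [c0, c1, ..., cr], c0 real and the rest complex.
    h: real spectral zoom parameter, which dilates the spectrum so the
    filter can concentrate on a frequency band of interest."""
    z = (h * lam - 1j) / (h * lam + 1j)   # Cayley transform, |z| = 1
    resp = np.full_like(lam, np.real(c[0]), dtype=float)
    zp = np.ones_like(z)
    for cj in c[1:]:
        zp = zp * z                       # z^j, computed incrementally
        resp += 2.0 * np.real(cj * zp)
    return resp

lam = np.linspace(0.0, 2.0, 5)    # e.g. normalized-Laplacian eigenvalues
c = [0.5, 0.3 + 0.1j]             # hypothetical learned coefficients
resp = cayley_filter_response(lam, c, h=1.0)
print(resp.shape)  # (5,)
```

Because the Cayley transform maps the real line onto the unit circle, the powers z^j stay bounded, and tuning h stretches the spectrum so the filter's energy lands on the desired frequency band.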