Convolutional Neural Network Architectures for Signals Supported on Graphs
Two architectures that generalize convolutional neural networks (CNNs) for
the processing of signals supported on graphs are introduced. We start with the
selection graph neural network (GNN), which replaces linear time invariant
filters with linear shift invariant graph filters to generate convolutional
features and reinterprets pooling as a possibly nonlinear subsampling stage
where nearby nodes pool their information in a set of preselected sample nodes.
A key component of the architecture is to remember the position of sampled
nodes to permit computation of convolutional features at deeper layers. The
second architecture, dubbed aggregation GNN, diffuses the signal through the
graph and stores the sequence of diffused components observed by a designated
node. This procedure effectively aggregates all components into a stream of
information having temporal structure to which the convolution and pooling
stages of regular CNNs can be applied. A multinode version of aggregation GNNs
is further introduced for operation in large scale graphs. An important
property of selection and aggregation GNNs is that they reduce to conventional
CNNs when particularized to time signals reinterpreted as graph signals in a
circulant graph. Comparative numerical analyses are performed in a source
localization application over synthetic and real-world networks. Performance is
also evaluated for an authorship attribution problem and text category
classification. Multinode aggregation GNNs are consistently the best performing
GNN architecture. Comment: Submitted to IEEE Transactions on Signal Processing.
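The linear shift-invariant graph filters that replace time-invariant filters in the selection GNN can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `S` stands for a graph shift operator (e.g., an adjacency matrix), `x` for a graph signal, and `h` for the filter taps, with output y = Σ_k h[k] S^k x. The function name and the cycle-graph example are assumptions for illustration.

```python
import numpy as np

def graph_filter(S, x, h):
    """Linear shift-invariant graph filter: y = sum_k h[k] * S^k @ x."""
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)      # S^0 x
    for hk in h:
        y += hk * Skx          # accumulate h_k S^k x
        Skx = S @ Skx          # advance to S^(k+1) x
    return y

# On a directed cycle (circulant graph) the shift operator is a cyclic
# permutation, and the graph filter reduces to ordinary circular
# convolution of time signals, as the abstract notes.
n = 4
S = np.roll(np.eye(n), 1, axis=0)   # cyclic shift: node i feeds node i+1
x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse at node 0
h = [1.0, 0.5]
y = graph_filter(S, x, h)           # impulse response of the filter
```

With an impulse input on the cycle, the output simply lays out the taps along consecutive nodes, which is exactly the behavior of a classical convolution.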
Convolutional Neural Networks Via Node-Varying Graph Filters
Convolutional neural networks (CNNs) are being applied to an increasing
number of problems and fields due to their superior performance in
classification and regression tasks. Since two of the key operations that CNNs
implement are convolution and pooling, this type of network is implicitly
designed to act on data described by regular structures such as images.
Motivated by the recent interest in processing signals defined in irregular
domains, we advocate a CNN architecture that operates on signals supported on
graphs. The proposed design replaces the classical convolution not with a
node-invariant graph filter (GF), which is the natural generalization of
convolution to graph domains, but with a node-varying GF. This filter extracts
different local features without increasing the output dimension of each layer
and, as a result, bypasses the need for a pooling stage while involving only
local operations. A second contribution is to replace the node-varying GF with
a hybrid node-varying GF, which is a new type of GF introduced in this paper.
While the alternative architecture can still be run locally without requiring a
pooling stage, the number of trainable parameters is smaller and can be
rendered independent of the data dimension. Tests are run on a synthetic source
localization problem and on the 20NEWS dataset. Comment: Submitted to DSW 2018 (IEEE Data Science Workshop).
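The node-varying graph filter described above can be sketched by letting each node apply its own tap at every shift, so y = Σ_k diag(H[k]) S^k x. This is a hedged sketch under assumed names and shapes, not the paper's exact formulation: `H` is a (K, n) array holding one coefficient per tap and per node.

```python
import numpy as np

def node_varying_filter(S, x, H):
    """Node-varying graph filter: y = sum_k diag(H[k]) @ S^k @ x.

    Unlike the node-invariant case, each node i has its own tap H[k, i],
    so different local features are extracted without changing the
    output dimension -- which is why no pooling stage is needed.
    """
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)
    for hk in H:               # hk is a length-n vector of per-node taps
        y += hk * Skx          # elementwise product = diag(hk) @ S^k x
        Skx = S @ Skx
    return y

n = 4
S = np.roll(np.eye(n), 1, axis=0)   # cyclic shift operator (toy graph)
x = np.array([1.0, 0.0, 0.0, 0.0])
H = np.array([[1.0, 1.0, 1.0, 1.0],  # tap k=0, identical across nodes
              [0.5, 0.2, 0.5, 0.2]]) # tap k=1 varies node by node
y = node_varying_filter(S, x, H)
```

Setting every row of `H` to a constant vector recovers the node-invariant filter, which makes the node-varying design a strict generalization.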
Geometric deep learning
The goal of these course notes is to describe the main mathematical ideas behind geometric deep learning and to provide implementation details for several applications in shape analysis and synthesis, computer vision and computer graphics. The text in the course materials is primarily based on previously published work. With these notes we gather and provide a clear picture of the key concepts and techniques that fall under the umbrella of geometric deep learning, and illustrate the applications they enable. We also aim to provide practical implementation details for the methods presented in these works, as well as suggest further readings and extensions of these ideas.
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.