Encoding Robust Representation for Graph Generation
Generative networks have made it possible to generate meaningful signals such
as images and texts from simple noise. Recently, generative methods based on
GAN and VAE were developed for graphs and graph signals. However, the
mathematical properties of these methods are unclear, and training good
generative models is difficult. This work proposes a graph generation model
that uses a recent adaptation of Mallat's scattering transform to graphs. The
proposed model is naturally composed of an encoder and a decoder. The encoder
is a Gaussianized graph scattering transform, which is robust to signal and
graph manipulation. The decoder is a simple fully connected network that is
adapted to specific tasks, such as link prediction, signal generation on
graphs, and full graph and signal generation. The training of our proposed
system is
efficient since it is only applied to the decoder and the hardware requirements
are moderate. Numerical results demonstrate state-of-the-art performance of the
proposed system for both link prediction and graph and signal generation.
Comment: 9 pages, 7 figures, 6 tables
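To make the encoder idea concrete, here is a minimal first-order graph scattering sketch in NumPy. This is an illustration only, not the paper's exact construction: it assumes lazy random-walk diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j) and simple moment aggregation, and the function names and choice of statistics are hypothetical.

```python
import numpy as np

def diffusion_wavelets(A, J):
    """Dyadic diffusion wavelets from a graph adjacency matrix A."""
    d = A.sum(axis=0)
    # Lazy random-walk matrix P = (I + A D^{-1}) / 2
    P = 0.5 * (np.eye(len(A)) + A / d)
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    # Band-pass wavelets Psi_j = P^{2^{j-1}} - P^{2^j}; P^{2^J} acts as low-pass
    psi = [powers[j - 1] - powers[j] for j in range(1, J + 1)]
    return psi, powers[J]

def scattering_encoder(A, x, J=3):
    """First-order scattering features: statistics of |Psi_j x| per scale."""
    psi, low_pass = diffusion_wavelets(A, J)
    feats = [np.mean(low_pass @ x)]          # zeroth-order (smoothed) term
    for Psi in psi:
        u = np.abs(Psi @ x)                  # modulus nonlinearity
        feats.extend([u.mean(), u.std()])    # permutation-invariant aggregation
    return np.array(feats)

# Toy example: alternating signal on a 4-node cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 1.0, -1.0])
z = scattering_encoder(A, x)
print(z.shape)  # (7,): 1 zeroth-order + 2 statistics x 3 scales
```

Because the encoder is a fixed (untrained) transform, only a downstream decoder network would require training, which matches the efficiency argument in the abstract.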
Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to
improve the mathematical understanding of convolutional neural networks.
Inspired by recent interest in geometric deep learning, which aims to
generalize convolutional neural networks to manifold and graph-structured
domains, we define a geometric scattering transform on manifolds. Similar to
the Euclidean scattering transform, the geometric scattering transform is based
on a cascade of wavelet filters and pointwise nonlinearities. It is invariant
to local isometries and stable to certain types of diffeomorphisms. Empirical
results demonstrate its utility on several geometric learning tasks. Our
results generalize the deformation stability and local translation invariance
of Euclidean scattering, and demonstrate the importance of linking the used
filter structures to the underlying geometry of the data.
Comment: 35 pages; 3 figures; 2 tables; v3: Revisions based on reviewer comments
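The cascade of wavelet filters and pointwise nonlinearities described above can be sketched in the simplest Euclidean setting, a 1D signal, as a stand-in for the manifold construction. This sketch assumes difference-of-Gaussian band-pass filters rather than the paper's geometric wavelets, and averages each path with a low-pass filter to obtain locally translation-invariant coefficients.

```python
import numpy as np

def gaussian_kernel(sigma, n):
    t = np.arange(n) - n // 2
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def scattering_1d(x, J=3):
    """Second-order scattering: wavelet filter, modulus, repeat, then average."""
    n = len(x)
    kernels = [gaussian_kernel(2.0 ** j, n) for j in range(J + 1)]
    phi = kernels[J]                                        # low-pass filter
    psi = [kernels[j] - kernels[j + 1] for j in range(J)]   # band-pass wavelets

    conv = lambda s, k: np.convolve(s, k, mode="same")
    S = [conv(x, phi).mean()]                   # zeroth order: averaged signal
    for j1 in range(J):
        u1 = np.abs(conv(x, psi[j1]))           # first order: |x * psi_{j1}|
        S.append(conv(u1, phi).mean())
        for j2 in range(j1 + 1, J):             # second order, coarser scales only
            u2 = np.abs(conv(u1, psi[j2]))
            S.append(conv(u2, phi).mean())
    return np.array(S)

x = np.sin(np.linspace(0, 8 * np.pi, 256))
S = scattering_1d(x)
print(len(S))  # 7 coefficients: 1 zeroth + 3 first + 3 second order
```

Restricting second-order paths to increasing scales (j2 > j1) follows the standard scattering convention, since the modulus pushes energy toward lower frequencies.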
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.