Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to
improve the mathematical understanding of convolutional neural networks.
Inspired by recent interest in geometric deep learning, which aims to
generalize convolutional neural networks to manifold and graph-structured
domains, we define a geometric scattering transform on manifolds. Similar to
the Euclidean scattering transform, the geometric scattering transform is based
on a cascade of wavelet filters and pointwise nonlinearities. It is invariant
to local isometries and stable to certain types of diffeomorphisms. Empirical
results demonstrate its utility on several geometric learning tasks. Our
results generalize the deformation stability and local translation invariance
of Euclidean scattering, and demonstrate the importance of linking the filter
structures used to the underlying geometry of the data.
Comment: 35 pages; 3 figures; 2 tables; v3: revisions based on reviewer comments
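The cascade of wavelet filters and pointwise nonlinearities described above can be sketched on a graph, the simplest discretized geometric domain. This is a generic illustration, not the paper's construction: the lazy-random-walk diffusion wavelets, the dyadic-style scales, and the mean aggregation are all illustrative assumptions.

```python
# A minimal sketch of a wavelet scattering cascade on a graph: diffusion
# wavelets built from a lazy random walk, alternated with pointwise
# absolute-value nonlinearities, then aggregated into invariant features.
# The toy graph and all parameter choices are illustrative, not the paper's.
import numpy as np

def diffusion_wavelets(A, scales=(1, 2, 4)):
    """Wavelet operators W_j = P^j - P^(2j) from a lazy random walk P."""
    d = A.sum(axis=1)
    P = 0.5 * (np.eye(len(A)) + A / d[:, None])   # lazy random-walk matrix
    return [np.linalg.matrix_power(P, j) - np.linalg.matrix_power(P, 2 * j)
            for j in scales]

def scattering(x, wavelets, depth=2):
    """Collect mean-aggregated coefficients of |W_j2 |W_j1 x|| cascades."""
    layers, feats = [x], []
    for _ in range(depth):
        nxt = []
        for u in layers:
            for W in wavelets:
                v = np.abs(W @ u)          # wavelet filter + nonlinearity
                feats.append(v.mean())     # permutation-invariant aggregation
                nxt.append(v)
        layers = nxt
    return np.array(feats)

# Toy example: a signal on a 4-cycle graph
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = np.array([1.0, 0.0, -1.0, 0.0])
S = scattering(x, diffusion_wavelets(A))   # 3 + 9 = 12 coefficients
```

The averaging step is what yields (local) invariance: each coefficient depends on the signal only through filter responses that are aggregated over the whole domain.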
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to provide an overview of different
examples of geometric deep learning problems and present available solutions,
key difficulties, applications, and future research directions in this nascent
field.
Manifold Filter-Combine Networks
We introduce a large class of manifold neural networks (MNNs) which we call
Manifold Filter-Combine Networks. This class includes, as special cases, the
MNNs considered in previous work by Wang, Ruiz, and Ribeiro, the manifold
scattering transform (a wavelet-based model of neural networks), and other
interesting examples not previously considered in the literature, such as the
manifold equivalent of Kipf and Welling's graph convolutional network. We then
consider a method, based on building a data-driven graph, for implementing such
networks when one does not have global knowledge of the manifold, but merely
has access to finitely many sample points. We provide sufficient conditions for
the network to provably converge to its continuum limit as the number of sample
points tends to infinity. Unlike previous work (which focused on specific MNN
architectures and graph constructions), our rate of convergence does not
explicitly depend on the number of filters used. Moreover, it exhibits linear
dependence on the depth of the network rather than the exponential dependence
obtained previously.
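One "filter-combine" layer on a data-driven graph can be sketched as follows. This is a hypothetical illustration of the general pattern, not the paper's implementation: the epsilon-neighborhood graph, the heat-kernel filters, and the random combine matrix are all illustrative assumptions.

```python
# Sketch of one filter-combine layer built from finitely many sample points:
# samples -> epsilon-neighborhood graph -> graph Laplacian -> several spectral
# filters per channel ("filter"), then a linear map across channels ("combine").
# All parameter choices here are illustrative assumptions.
import numpy as np

def epsilon_graph_laplacian(points, eps=0.8):
    """Unnormalized Laplacian of the epsilon-neighborhood graph on samples."""
    D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    A = ((D < eps) & (D > 0)).astype(float)
    return np.diag(A.sum(1)) - A

def filter_combine_layer(X, L, taus=(0.1, 1.0), W=None):
    """Apply heat-kernel filters exp(-tau * L) per channel, then combine."""
    lam, U = np.linalg.eigh(L)
    filtered = [U @ np.diag(np.exp(-t * lam)) @ U.T @ X for t in taus]
    Z = np.concatenate(filtered, axis=1)     # n x (channels * num_filters)
    if W is None:                            # combine matrix (random stand-in)
        rng = np.random.default_rng(0)
        W = rng.standard_normal((Z.shape[1], X.shape[1]))
    return np.tanh(Z @ W)                    # pointwise nonlinearity

# Points sampled from a circle (a simple 1-D manifold embedded in R^2)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]
X = np.c_[np.sin(3 * t)]                     # one input channel on the samples
Y = filter_combine_layer(X, epsilon_graph_laplacian(pts))
```

Separating the per-channel filtering from the cross-channel combination is what lets convergence bounds avoid explicit dependence on the number of filters: each filter acts independently through the same approximated operator.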
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application, and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
A Convergence Rate for Manifold Neural Networks
High-dimensional data arises in numerous applications, and the rapidly
developing field of geometric deep learning seeks to develop neural network
architectures to analyze such data in non-Euclidean domains, such as graphs and
manifolds. Recent work by Z. Wang, L. Ruiz, and A. Ribeiro has introduced a
method for constructing manifold neural networks using the spectral
decomposition of the Laplace-Beltrami operator. Moreover, in this work, the
authors provide a numerical scheme for implementing such neural networks when
the manifold is unknown and one only has access to finitely many sample points.
The authors show that this scheme, which relies upon building a data-driven
graph, converges to the continuum limit as the number of sample points tends to
infinity. Here, we build upon this result by establishing a rate of convergence
that depends on the intrinsic dimension of the manifold but is independent of
the ambient dimension. We also discuss how the rate of convergence depends on
the depth of the network and the number of filters used in each layer.
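The idea behind this kind of convergence result can be checked numerically in the simplest case. The snippet below is a generic illustration (not from the paper): for equispaced samples on the unit circle, the suitably rescaled graph Laplacian has eigenvalues approaching those of the circle's Laplace-Beltrami operator, namely 0, 1, 1, 4, 4, ...

```python
# Numerical illustration of graph-Laplacian-to-Laplace-Beltrami convergence
# on the unit circle: the rescaled cycle-graph Laplacian's spectrum approaches
# 0, 1, 1, 4, 4, ... (eigenvalues of -d^2/dtheta^2) as n grows.
import numpy as np

def circle_graph_laplacian(n):
    """Nearest-neighbor cycle graph on n equispaced circle samples."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = A[idx, (idx - 1) % n] = 1.0
    h = 2 * np.pi / n                        # arc-length spacing of samples
    return (np.diag(A.sum(1)) - A) / h**2    # second-difference scaling

for n in (50, 200, 800):
    lam = np.sort(np.linalg.eigvalsh(circle_graph_laplacian(n)))
    print(n, lam[:5].round(3))               # approaches [0, 1, 1, 4, 4]
```

The exact graph eigenvalues are (2 - 2cos(2*pi*k/n)) / h^2, which differ from k^2 by O(k^4 / n^2), so the error at a fixed eigenvalue shrinks as the number of sample points grows.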
Slepian Wavelets for the Analysis of Incomplete Data on Manifolds
Many fields in science and engineering measure data that inherently live on non-Euclidean geometries, such as the sphere. Techniques developed in the Euclidean setting must be extended to other geometries. Due to recent interest in geometric deep learning, analogues of Euclidean techniques must also handle general manifolds or graphs. Often, data are only observed over partial regions of manifolds, and thus standard whole-manifold techniques may not yield accurate predictions. In this thesis, a new wavelet basis is designed for datasets like these.
Although many definitions of spherical convolutions exist, none fully emulate the Euclidean definition. A novel spherical convolution is developed, designed to tackle the shortcomings of existing methods. The so-called sifting convolution exploits the sifting property of the Dirac delta and is defined as the inner product of one function with a translated version of another. This translation operator is analogous to Euclidean translation in harmonic space and exhibits some useful properties. In particular, the sifting convolution supports directional kernels; has an output that remains on the sphere; and is efficient to compute. The convolution is entirely generic and thus may be used with any set of basis functions. An application of the sifting convolution to a topographic map of the Earth demonstrates that it supports directional kernels for anisotropic filtering.
Slepian wavelets are built upon the eigenfunctions of the Slepian concentration problem of the manifold - a set of bandlimited functions that are maximally concentrated within a given region. Wavelets are constructed through a tiling of the Slepian harmonic line by leveraging the existing scale-discretised framework. A straightforward denoising formalism demonstrates a boost in signal-to-noise ratio for both a spherical and a general manifold example. Whilst these wavelets were inspired by spherical datasets, such as those in cosmology, the wavelet construction may be utilised for manifold or graph data.
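The Slepian concentration problem has a compact toy analogue on a discrete circle rather than a general manifold: among signals bandlimited to the lowest frequencies, find those maximally concentrated on a subregion. They are eigenvectors of the bandlimit-restrict-bandlimit operator, and each eigenvalue is the fraction of that eigenvector's energy inside the region. The signal length, bandwidth, and region below are illustrative assumptions, not from the thesis.

```python
# Toy discrete-circle version of the Slepian concentration problem: the
# optimally concentrated bandlimited signals are the top eigenvectors of
# B R B, where B is a bandlimiting projection and R restricts to the region.
# N, bandwidth, and the region are illustrative choices.
import numpy as np

N, bandwidth = 64, 9                 # signal length; number of kept frequencies
region = np.arange(16)               # subregion: first quarter of the samples

F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary DFT matrix
keep = np.zeros(N)
keep[: bandwidth // 2 + 1] = 1                     # frequencies 0..4
keep[-(bandwidth // 2):] = 1                       # frequencies -4..-1
B = ((F.conj().T * keep) @ F).real                 # symmetric mask -> real op
R = np.zeros((N, N))
R[region, region] = 1                              # restriction to the region

vals, vecs = np.linalg.eigh(B @ R @ B)             # concentration operator
order = np.argsort(vals)[::-1]
lam1, s1 = vals[order[0]], vecs[:, order[0]]
# lam1 is the fraction of the top Slepian function's energy inside the region
```

With a Shannon number (region size times bandwidth over N) of about 2.25 here, roughly the top two eigenfunctions are well concentrated; the eigenvalue spectrum then drops off sharply, which is what makes these bases effective for incomplete data.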
Graph Neural Networks on SPD Manifolds for Motor Imagery Classification: A Perspective from the Time-Frequency Analysis
Motor imagery (MI) classification is one of the most widely studied research
topics in electroencephalography (EEG)-based brain-computer interfaces (BCIs),
with extensive industrial value. The design of MI-EEG classifiers has changed
fundamentally over the past twenty years, and their performance has gradually
improved. In particular, owing to the need to characterize the signals'
inherently non-Euclidean structure, the first geometric deep learning (GDL)
framework, Tensor-CSPNet, has recently emerged in BCI research. In essence,
Tensor-CSPNet is a deep learning-based classifier operating on the second-order
statistics of EEG signals. In contrast to first-order statistics, these
second-order statistics are the classical treatment of EEG signals, and the
discriminative information they contain is adequate for MI-EEG classification.
In this study, we present another GDL classifier for MI-EEG classification,
called Graph-CSPNet, which uses graph-based techniques to simultaneously
characterize EEG signals in both the time and frequency domains. It is designed
from the perspective of time-frequency analysis, which profoundly influences
signal processing and BCI studies. Compared with Tensor-CSPNet, the
architecture of Graph-CSPNet is simpler and more flexible, coping with variable
time-frequency resolutions in signal segmentation to capture localized
fluctuations. In the experiments, Graph-CSPNet is evaluated in
subject-specific scenarios on two widely used MI-EEG datasets and produces
near-optimal classification accuracies.
Comment: 16 pages, 5 figures, 9 tables; This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice,
after which this version may no longer be accessible.
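The second-order statistics referred to above are spatial covariance matrices of EEG segments, which are symmetric positive definite (SPD) and therefore live on an SPD manifold. A minimal sketch, not Graph-CSPNet itself: the shrinkage regularization, toy channel count, and random data are illustrative assumptions, and the affine-invariant Riemannian metric shown is one standard choice of distance on the SPD manifold.

```python
# Generic sketch of SPD-manifold second-order statistics for EEG: compute a
# regularized spatial covariance per segment, then compare covariances with
# the affine-invariant Riemannian metric. Not the paper's architecture.
import numpy as np

def spatial_covariance(seg, shrink=1e-3):
    """SPD spatial covariance of a (channels x samples) EEG segment."""
    C = seg @ seg.T / seg.shape[1]
    # small shrinkage toward the identity keeps C safely positive definite
    return C + shrink * np.trace(C) / len(C) * np.eye(len(C))

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    # eigenvalues of A^{-1} B equal the generalized eigenvalues of (B, A);
    # they are real and positive for SPD inputs
    w = np.real(np.linalg.eigvals(np.linalg.solve(A, B)))
    return np.sqrt(np.sum(np.log(w) ** 2))

rng = np.random.default_rng(0)
seg1 = rng.standard_normal((8, 500))       # toy: 8 channels, 500 samples
seg2 = 2.0 * rng.standard_normal((8, 500))
C1, C2 = spatial_covariance(seg1), spatial_covariance(seg2)
d = airm_distance(C1, C2)
```

Working with such distances, rather than with the raw signal, is what "classifier on the second-order statistics" means in practice: class information sits in how the covariance geometry differs between imagery conditions.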