12,822 research outputs found
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
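To make "generalizing deep neural models to graphs" concrete, here is a minimal sketch of one graph-convolutional layer, a common building block of geometric deep learning on graphs. The function name and toy graph are illustrative, not from the paper; the propagation rule is the widely used symmetric normalization H = ReLU(D^{-1/2}(A+I)D^{-1/2} X W).

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolutional layer:
    H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # inverse sqrt of degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)         # propagate, mix features, ReLU

# toy path graph on 3 nodes with 2-dimensional node features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)                                      # identity weights for illustration
H = gcn_layer(A, X, W)
```

The key point is that the layer's output at a node depends only on the node and its neighbors, so the same weights W apply to graphs of any size or shape, which is how such models exploit non-Euclidean structure.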
Edge-Varying Fourier Graph Networks for Multivariate Time Series Forecasting
A key problem in multivariate time series (MTS) analysis and forecasting is
to disclose the underlying couplings between variables that drive their
co-movements. Many recent successful MTS methods are built with graph neural
networks (GNNs) owing to their capacity for relational modeling. However,
previous work has often modeled MTS with a static graph structure over the
time-series variables, failing to capture their ever-changing correlations
over time. To this end, a fully-connected supra-graph connecting any two
variables at any two timestamps is adaptively learned to capture the
high-resolution variable dependencies via an efficient graph convolutional
network. Specifically, we construct the Edge-Varying Fourier Graph Networks
(EV-FGN) equipped with Fourier Graph Shift Operator (FGSO) which efficiently
performs graph convolution in the frequency domain. As a result, a
high-efficiency scale-free parameter learning scheme is derived for MTS
analysis and forecasting according to the convolution theorem. Extensive
experiments show that EV-FGN outperforms state-of-the-art methods on seven
real-world MTS datasets.
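The abstract does not spell out the FGSO construction, but the efficiency argument rests on the classical convolution theorem it cites: a convolution in the original domain is an elementwise product in the frequency domain. A minimal illustration on an ordinary discrete signal (function names and signals are ours, not the paper's):

```python
import numpy as np

def freq_domain_conv(x, h):
    """Circular convolution computed via the convolution theorem:
    an elementwise product of spectra equals a convolution of signals."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5, 0.0, 0.0])   # 2-tap averaging filter

y_fft = freq_domain_conv(x, h)

# direct circular convolution for comparison
y_direct = np.array([sum(x[(n - k) % len(x)] * h[k] for k in range(len(x)))
                     for n in range(len(x))])
```

The FFT route costs O(n log n) versus O(n^2) for the direct sum, which is the kind of saving the paper exploits when performing graph convolution in the frequency domain.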
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent developments in basic GSP
tools, including methods for sampling, filtering, and graph learning.
Next, we review progress in several application areas using GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas.
Comment: To appear, Proceedings of the IEE
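The graph filtering the abstract mentions can be sketched in a few lines: a graph signal is expanded in the eigenbasis of the graph Laplacian (the graph Fourier transform), filtered by attenuating components at large eigenvalues (high graph frequencies), and transformed back. The function and toy graph below are illustrative assumptions, not from the paper:

```python
import numpy as np

def graph_lowpass(A, x, cutoff):
    """Low-pass filtering of a graph signal: project onto the Laplacian
    eigenvectors, zero out components at graph frequencies (eigenvalues)
    above `cutoff`, then transform back."""
    L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)            # graph frequencies and Fourier basis
    x_hat = U.T @ x                       # graph Fourier transform
    x_hat[lam > cutoff] = 0.0             # discard high-frequency content
    return U @ x_hat                      # inverse graph Fourier transform

# path graph on 4 nodes, ramp signal
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
x = np.array([0.0, 1.0, 2.0, 3.0])
y = graph_lowpass(A, x, cutoff=1.0)
```

Because the eigenvector at eigenvalue zero is constant, low-pass filtering preserves the mean of the signal, mirroring DC preservation in conventional DSP.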
Open two-species exclusion processes with integrable boundaries
We give a complete classification of integrable Markovian boundary conditions
for the asymmetric simple exclusion process with two species (or classes) of
particles. Some of these boundary conditions lead to non-vanishing particle
currents for each species. We explain how the stationary state of all these
models can be expressed in a matrix product form, starting from two key
components, the Zamolodchikov-Faddeev and Ghoshal-Zamolodchikov relations. This
statement is illustrated by studying in detail a specific example, for which
the matrix Ansatz (involving 9 generators) is explicitly constructed and
physical observables (such as currents, densities) calculated.
Comment: 19 pages; typos corrected, more details on the Matrix Ansatz algebr
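As a reminder of what a matrix product form means, in the simpler single-species open ASEP the stationary probability of a configuration $(\tau_1,\dots,\tau_L)$, $\tau_i\in\{0,1\}$, is written as

```latex
P(\tau_1,\dots,\tau_L) \;=\; \frac{1}{Z_L}\,
  \langle W \,\big|\, \prod_{i=1}^{L} \bigl(\tau_i D + (1-\tau_i) E\bigr) \,\big|\, V \rangle,
\qquad
Z_L \;=\; \langle W \,|\, (D+E)^L \,|\, V \rangle,
```

where the operators $D$, $E$ and the boundary vectors $\langle W|$, $|V\rangle$ satisfy algebraic relations fixed by the bulk and boundary rates (for the TASEP, $DE = D + E$, $\langle W|E = \alpha^{-1}\langle W|$, $D|V\rangle = \beta^{-1}|V\rangle$). The two-species construction in the paper generalizes this structure to a larger algebra (nine generators in the worked example), with the Zamolodchikov-Faddeev and Ghoshal-Zamolodchikov relations playing the role of the bulk and boundary relations above.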
A Multiscale Pyramid Transform for Graph Signals
Multiscale transforms designed to process analog and discrete-time signals
and images cannot be directly applied to analyze high-dimensional data residing
on the vertices of a weighted graph, as they do not capture the intrinsic
geometric structure of the underlying graph data domain. In this paper, we
adapt the Laplacian pyramid transform for signals on Euclidean domains so that
it can be used to analyze high-dimensional data residing on the vertices of a
weighted graph. Our approach is to study existing methods and develop new
methods for the four fundamental operations of graph downsampling, graph
reduction, and filtering and interpolation of signals on graphs. Equipped with
appropriate notions of these operations, we leverage the basic multiscale
constructs and intuitions from classical signal processing to generate a
transform that yields both a multiresolution of graphs and an associated
multiresolution of a graph signal on the underlying sequence of graphs.
Comment: 16 pages, 13 figure
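The classical Laplacian pyramid logic the paper adapts can be sketched for a graph signal: low-pass the signal, downsample it onto a subset of vertices, predict the full signal back from the coarse version, and keep the prediction residual as detail coefficients, so that coarse plus detail reconstructs the input exactly. The smoothing and interpolation below are deliberately crude placeholders (one diffusion step and zero-fill upsampling), not the paper's operators:

```python
import numpy as np

def pyramid_analysis(A, x, keep, alpha=0.3):
    """One analysis level of a (much simplified) Laplacian pyramid on a
    graph: low-pass, downsample onto `keep`, store the residual detail."""
    L = np.diag(A.sum(axis=1)) - A
    smooth = x - alpha * (L @ x)      # crude low-pass: one diffusion step
    coarse = smooth[keep]             # graph downsampling
    predicted = np.zeros_like(x)
    predicted[keep] = coarse          # zero-fill interpolation back to all vertices
    detail = x - predicted            # detail coefficients
    return coarse, detail

def pyramid_synthesis(coarse, detail, keep):
    """Invert the analysis step: x = prediction(coarse) + detail."""
    predicted = np.zeros_like(detail)
    predicted[keep] = coarse
    return predicted + detail

A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
x = np.array([1.0, 2.0, 0.0, 3.0])
keep = np.array([0, 2])               # vertices retained at the coarse level
coarse, detail = pyramid_analysis(A, x, keep)
x_rec = pyramid_synthesis(coarse, detail, keep)
```

Perfect reconstruction holds by construction, whatever the smoothing and downsampling operators; the design questions the paper addresses are which graph-aware choices of those operators give useful multiresolutions.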
Task-based adaptive multiresolution for time-space multi-scale reaction-diffusion systems on multi-core architectures
A new solver featuring time-space adaptation and error control has been
recently introduced to tackle the numerical solution of stiff
reaction-diffusion systems. Based on operator splitting, finite volume adaptive
multiresolution and high order time integrators with specific stability
properties for each operator, this strategy yields high computational
efficiency for large multidimensional computations on standard architectures
such as powerful workstations. However, the data structure of the original
implementation, based on trees of pointers, provides limited opportunities for
efficiency enhancements, while posing serious challenges in terms of parallel
programming and load balancing. The present contribution proposes a new
implementation of the whole set of numerical methods including Radau5 and
ROCK4, relying on a fully different data structure together with the use of a
specific library, TBB, for shared-memory, task-based parallelism with
work-stealing. The performance of our implementation is assessed in a series of
test cases of increasing difficulty in two and three dimensions on multi-core
and many-core architectures, demonstrating high scalability.
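The operator-splitting idea underlying the solver can be illustrated on a toy 1D reaction-diffusion equation u_t = D u_xx + f(u): advance diffusion for half a step, the reaction for a full step, then diffusion for another half step (Strang splitting). The explicit Euler substeps below are a deliberate simplification; the paper's implementation uses dedicated stiff integrators (Radau5, ROCK4) for each operator:

```python
import numpy as np

def strang_step(u, dt, dx, D, react):
    """One Strang operator-splitting step for u_t = D u_xx + f(u):
    half diffusion, full reaction, half diffusion (explicit toy substeps)."""
    def diffuse(v, h):
        lap = np.roll(v, 1) - 2.0 * v + np.roll(v, -1)   # periodic 1D Laplacian
        return v + h * D * lap / dx**2
    u = diffuse(u, dt / 2.0)
    u = u + dt * react(u)                                # reaction substep
    return diffuse(u, dt / 2.0)

# logistic (KPP-like) reaction f(u) = u(1 - u) on a periodic grid
n = 64
dx, dt, D = 1.0 / n, 1e-4, 1e-3
u = 0.5 + 0.1 * np.sin(2.0 * np.pi * np.arange(n) / n)
for _ in range(100):
    u = strang_step(u, dt, dx, D, lambda v: v * (1.0 - v))
```

Splitting decouples the stiff reaction from the diffusion, so each operator can be integrated with a method suited to its own stability constraints, which is the property the solver's time-space adaptation builds on.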