Multiresolution Equivariant Graph Variational Autoencoder
In this paper, we propose Multiresolution Equivariant Graph Variational Autoencoders (MGVAE), the first hierarchical generative model to learn and generate graphs in a multiresolution and equivariant manner. At each resolution level, MGVAE employs higher-order message passing to encode the graph while learning to partition it into mutually exclusive clusters and coarsening it into a lower resolution, eventually creating a hierarchy of latent distributions. MGVAE then constructs a hierarchical generative model to variationally decode into a hierarchy of coarsened graphs. Importantly, our proposed framework is end-to-end permutation equivariant with respect to node ordering. MGVAE achieves competitive results on several generative tasks, including general graph generation, molecular generation, unsupervised molecular representation learning for predicting molecular properties, link prediction on citation graphs, and graph-based image generation.
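As a rough illustration (not the authors' implementation; all module and variable names below are assumptions), a single learnable coarsening step of the kind the abstract describes could look like this in PyTorch: message passing produces node embeddings, a soft assignment partitions nodes into clusters, and the graph is pooled to a lower resolution.

```python
# Minimal sketch of one coarsening step in the spirit of MGVAE (illustrative only).
import torch
import torch.nn as nn

class CoarseningStep(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, n_clusters: int):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid_dim)         # message transform
        self.upd = nn.Linear(in_dim + hid_dim, hid_dim)
        self.assign = nn.Linear(hid_dim, n_clusters)  # soft cluster scores

    def forward(self, x, adj):
        # x: (n, in_dim) node features, adj: (n, n) dense adjacency
        m = adj @ self.msg(x)                          # aggregate neighbor messages
        h = torch.relu(self.upd(torch.cat([x, m], dim=-1)))
        s = torch.softmax(self.assign(h), dim=-1)      # (n, k) soft assignment
        x_coarse = s.t() @ h                           # (k, hid_dim) cluster features
        adj_coarse = s.t() @ adj @ s                   # (k, k) coarsened adjacency
        return h, s, x_coarse, adj_coarse

# Permuting the nodes (rows of x, rows/columns of adj) permutes h and s
# consistently and leaves x_coarse and adj_coarse unchanged, which is the
# permutation-equivariance property the abstract refers to.
```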
Graph Attention-based Deep Reinforcement Learning for solving the Chinese Postman Problem with Load-dependent costs
Recently, deep reinforcement learning (DRL) models have shown promising results in solving routing problems. However, most DRL solvers target node routing problems, such as the Traveling Salesman Problem (TSP). Meanwhile, there has been limited research on applying neural methods to arc routing problems, such as the Chinese Postman Problem (CPP), since these often feature irregular and complex solution spaces compared to TSP. To fill this gap, this paper proposes a novel DRL framework to address the CPP with load-dependent costs (CPP-LC) (Corberan et al., 2018), a complex arc routing problem with load constraints. The novelty of our method is two-fold. First, we formulate the CPP-LC as a Markov Decision Process (MDP), casting it as a sequential decision model. Subsequently, we introduce an autoregressive DRL model, namely Arc-DRL, consisting of an encoder and a decoder, to address the CPP-LC challenge effectively. Such a framework allows the DRL model to work efficiently and scale to arc routing problems. Furthermore, we propose a new bio-inspired meta-heuristic based on an Evolutionary Algorithm (EA) for CPP-LC. Extensive experiments show that Arc-DRL outperforms existing meta-heuristic methods such as Iterative Local Search (ILS) and Variable Neighborhood Search (VNS), proposed by Corberan et al. (2018), on large CPP-LC benchmark datasets in terms of both solution quality and running time, while the EA gives the best solution quality at the cost of much longer running time. We release our C++ implementations of the meta-heuristics (EA, ILS, and VNS), along with the code for data generation and our generated data, at https://github.com/HySonLab/Chinese_Postman_Proble
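A minimal sketch of how CPP-LC can be cast as a sequential decision process is given below; the concrete load-dependent cost form and the helper names are illustrative assumptions, not the formulation of Corberan et al. (2018) or the paper's Arc-DRL code.

```python
# Illustrative MDP step for an arc routing problem with load-dependent costs:
# the agent picks the next required edge to service; traversal cost grows with
# the vehicle's current load (the exact cost form here is an assumed example).
from dataclasses import dataclass, field

@dataclass
class State:
    node: int                                     # current vertex
    load: float                                   # remaining load on the vehicle
    remaining: set = field(default_factory=set)   # edges still to be serviced
    total_cost: float = 0.0

def step(state: State, edge, dist, demand, alpha=1.0):
    """Service edge = (u, v); cost depends on the current load."""
    u, v = edge
    # deadhead from the current node to the edge, then traverse it
    travel = dist[state.node][u] + dist[u][v]
    cost = travel * (1.0 + alpha * state.load)    # load-dependent cost (assumed form)
    state.total_cost += cost
    state.load -= demand[edge]                    # drop the demand served on this edge
    state.node = v
    state.remaining.discard(edge)
    return state
```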
Multimodal Graph Learning for Modeling Emerging Pandemics with Big Data
Accurate forecasting and analysis of emerging pandemics play a crucial role
in effective public health management and decision-making. Traditional
approaches primarily rely on epidemiological data, overlooking other valuable
sources of information that could act as sensors or indicators of pandemic
patterns. In this paper, we propose a novel framework called MGL4MEP that
integrates temporal graph neural networks and multi-modal data for learning and
forecasting. We incorporate big data sources, including social media content,
by utilizing specific pre-trained language models and discovering the
underlying graph structure among users. This integration provides rich
indicators of pandemic dynamics through learning with temporal graph neural
networks. Extensive experiments demonstrate the effectiveness of our framework
in pandemic forecasting and analysis, outperforming baseline methods across
different areas, pandemic situations, and prediction horizons. The fusion of
temporal graph learning and multi-modal data enables a comprehensive
understanding of the pandemic landscape with reduced time lag, lower cost, and a richer set of information indicators.
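For illustration, a minimal fusion module in the spirit of the abstract might combine epidemiological counts with pooled language-model embeddings per region and pass them through a simple temporal graph network; the names and the exact architecture below are assumptions, not the MGL4MEP implementation.

```python
# Sketch: fuse case counts and pre-trained LM embeddings of social-media posts,
# apply one graph-convolution step over a region graph, then a GRU over time.
import torch
import torch.nn as nn

class TemporalGraphForecaster(nn.Module):
    def __init__(self, epi_dim, text_dim, hid_dim, horizon):
        super().__init__()
        self.fuse = nn.Linear(epi_dim + text_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, horizon)

    def forward(self, epi, text, adj):
        # epi:  (n_regions, T, epi_dim)   case counts per time step
        # text: (n_regions, T, text_dim)  pooled LM embeddings of posts
        # adj:  (n_regions, n_regions)    estimated graph among regions/users
        h = torch.relu(self.fuse(torch.cat([epi, text], dim=-1)))
        h = torch.einsum('ij,jtd->itd', adj, h)   # one graph-convolution step
        out, _ = self.gru(h)                      # temporal encoding
        return self.head(out[:, -1])              # forecast the next `horizon` steps
```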
Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Hierarchical Structures
Contemporary graph learning algorithms are not well-defined for large
molecules since they do not consider the hierarchical interactions among the
atoms, which are essential to determine the molecular properties of
macromolecules. In this work, we propose Multiresolution Graph Transformers
(MGT), the first graph transformer architecture that can learn to represent
large molecules at multiple scales. MGT can learn to produce representations
for the atoms and group them into meaningful functional groups or repeating
units. We also introduce Wavelet Positional Encoding (WavePE), a new positional
encoding method that can guarantee localization in both spectral and spatial
domains. Our proposed model achieves competitive results on two macromolecule
datasets consisting of polymers and peptides, and one drug-like molecule
dataset. Importantly, our model outperforms other state-of-the-art methods and
achieves chemical accuracy in estimating molecular properties (e.g., GAP, HOMO, and LUMO) calculated by Density Functional Theory (DFT) on the polymers
dataset. Furthermore, the visualizations, including clustering results on
macromolecules and low-dimensional spaces of their representations, demonstrate
the capability of our methodology in learning to represent long-range and
hierarchical structures. Our PyTorch implementation is publicly available at
https://github.com/HySonLab/Multires-Graph-Transforme
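A rough sketch of how a wavelet-based positional encoding can be computed is shown below; it uses heat-kernel graph wavelets at several scales and is an assumption of the general idea rather than the exact WavePE recipe.

```python
# Sketch: wavelet operators psi_s = U exp(-s * Lambda) U^T from the normalized
# Laplacian; each node keeps its k largest-magnitude wavelet coefficients per
# scale as positional features (illustrative, not the paper's exact method).
import numpy as np

def wavelet_positional_encoding(adj: np.ndarray, scales=(0.5, 1.0, 2.0), k=8):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian
    evals, evecs = np.linalg.eigh(lap)
    pes = []
    for s in scales:
        psi = evecs @ np.diag(np.exp(-s * evals)) @ evecs.T  # wavelet operator at scale s
        # keep each node's k largest-magnitude coefficients as localization features
        pes.append(np.sort(np.abs(psi), axis=1)[:, -k:])
    return np.concatenate(pes, axis=1)   # (n_nodes, k * len(scales))
```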
Symmetry-preserving graph attention network to solve routing problems at multiple resolutions
Machine learning (ML) methods have brought reasonable improvements in accuracy and computation time for Travelling Salesperson Problems (TSPs) and Vehicle Routing Problems (VRPs). However, none of the previous works completely respects the symmetries arising from TSPs and VRPs, including rotation, translation, permutation, and scaling. In this work, we introduce the first completely equivariant model and training pipeline for solving such combinatorial problems. Furthermore, it is essential to capture the multiscale structure (i.e., from local to global information) of the input graph, especially for large and long-range graphs, while previous methods are limited to extracting only local information, which can lead to a local or sub-optimal solution. To tackle this limitation, we propose a Multiresolution scheme in combination with an Equivariant Graph Attention network (mEGAT) architecture, which can learn the optimal route based on low-level and high-level graph resolutions in an efficient way. In particular, our approach constructs a hierarchy of coarse-grained graphs from the input graph, in which we solve the routing problems on simple low-level graphs first, then utilize that knowledge for the more complex high-level graphs. Experimentally, we show that our model outperforms existing baselines and demonstrate that symmetry preservation and multiresolution are important ingredients for solving combinatorial problems in a data-driven manner. Our source code is publicly available at https://github.com/HySonLab/Multires-NP-har
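The coarse-to-fine idea can be illustrated with a toy, non-learned sketch: cluster the cities, route the cluster centroids first, then expand each cluster in that order. This stands in for the learned mEGAT pipeline and does not reproduce it.

```python
# Toy coarse-to-fine TSP heuristic (illustration of the multiresolution idea only).
import numpy as np
from sklearn.cluster import KMeans

def nearest_neighbor_tour(points, start=0):
    unvisited = list(range(len(points)))
    tour = [unvisited.pop(start)]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def coarse_to_fine_tsp(points, n_clusters=4):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(points)
    centroids = np.stack([points[labels == c].mean(axis=0) for c in range(n_clusters)])
    coarse_order = nearest_neighbor_tour(centroids)       # low-resolution route
    tour = []
    for c in coarse_order:                                 # refine within each cluster
        idx = np.where(labels == c)[0]
        local = nearest_neighbor_tour(points[idx])
        tour.extend(idx[local].tolist())
    return tour
```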
On the Connection Between MPNN and Graph Transformer
Graph Transformer (GT) has recently emerged as a new paradigm of graph
learning algorithms, outperforming the previously popular Message Passing
Neural Network (MPNN) on multiple benchmarks. Previous work (Kim et al., 2022)
shows that with proper position embedding, GT can approximate MPNN arbitrarily
well, implying that GT is at least as powerful as MPNN. In this paper, we study
the inverse connection and show that MPNN with virtual node (VN), a commonly
used heuristic with little theoretical understanding, is powerful enough to
arbitrarily approximate the self-attention layer of GT.
In particular, we first show that if we consider one type of linear
transformer, the so-called Performer/Linear Transformer (Choromanski et al.,
2020; Katharopoulos et al., 2020), then MPNN + VN with only O(1) depth and O(1)
width can approximate a self-attention layer in Performer/Linear Transformer.
Next, via a connection between MPNN + VN and DeepSets, we prove that MPNN + VN
with O(n^d) width and O(1) depth can approximate the self-attention layer
arbitrarily well, where d is the input feature dimension. Lastly, under some
assumptions, we provide an explicit construction of MPNN + VN with O(1) width
and O(n) depth approximating the self-attention layer in GT arbitrarily well.
On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly
strong baseline, outperforming GT on the recently proposed Long Range Graph Benchmark (LRGB) dataset, 2) our MPNN + VN improves over early implementations on a wide range of OGB datasets, and 3) MPNN + VN outperforms the Linear Transformer and MPNN on the climate modeling task.
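A minimal sketch of an MPNN layer augmented with a virtual node is given below; module names are assumptions, but it shows the mechanism the paper analyzes: the virtual node performs a global readout and broadcast, mimicking (linear) attention-style global mixing.

```python
# Sketch of one MPNN + virtual-node layer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class MPNNWithVirtualNode(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.local = nn.Linear(dim, dim)     # neighbor-aggregation transform
        self.to_vn = nn.Linear(dim, dim)     # node -> virtual-node messages
        self.from_vn = nn.Linear(dim, dim)   # virtual-node -> node broadcast
        self.update = nn.GRUCell(dim, dim)

    def forward(self, x, adj, vn):
        # x: (n, dim) node features, adj: (n, n) adjacency, vn: (dim,) VN state
        vn = vn + self.to_vn(x).mean(dim=0)          # global readout into the VN
        local = adj @ self.local(x)                  # ordinary message passing
        h = torch.relu(local + self.from_vn(vn))     # broadcast global context back
        x = self.update(h, x)                        # gated node update
        return x, vn
```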
Sparsity exploitation via discovering graphical models in multi-variate time-series forecasting
Graph neural networks (GNNs) have been widely applied in multi-variate
time-series forecasting (MTSF) tasks because of their capability in capturing
the correlations among different time-series. These graph-based learning
approaches improve the forecasting performance by discovering and understanding
the underlying graph structures, which represent the data correlation. When the
explicit prior graph structures are not available, most existing works cannot guarantee the sparsity of the generated graphs, which makes the overall model computationally expensive and less interpretable. In this work, we propose a decoupled training method consisting of a graph-generating module and a GNN forecasting module. First, we use Graphical Lasso (or GraphLASSO) to directly exploit the sparsity pattern of the data and build graph structures in both static and time-varying cases. Second, we feed these graph structures and the input data into a Graph Convolutional Recurrent Network (GCRN) to train a forecasting
model. The experimental results on three real-world datasets show that our
novel approach has competitive performance against existing state-of-the-art
forecasting algorithms while providing sparse, meaningful and explainable graph
structures and reducing training time by approximately 40%. Our PyTorch
implementation is publicly available at https://github.com/HySonLab/GraphLASS
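The first stage of the decoupled pipeline can be sketched with scikit-learn's GraphicalLasso: estimate a sparse precision matrix from the series and use its non-zero pattern as the graph. Everything beyond the scikit-learn call is an assumption about how that graph would be consumed downstream.

```python
# Sketch: sparse graph discovery from multivariate time series via GraphicalLasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

def sparse_graph_from_series(series: np.ndarray, alpha: float = 0.05):
    # series: (T, n_series) multivariate time series, one column per variable
    model = GraphicalLasso(alpha=alpha).fit(series)
    precision = model.precision_                      # sparse inverse covariance
    adj = (np.abs(precision) > 1e-8).astype(float)    # non-zeros define edges
    np.fill_diagonal(adj, 0.0)                        # drop self-loops
    return adj

# The resulting adjacency can then be passed, together with the raw series,
# to a graph convolutional recurrent network for forecasting.
```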
Fast Temporal Wavelet Graph Neural Networks
Spatio-temporal signal forecasting plays an important role in numerous domains, especially neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics, of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose Fast Temporal Wavelet Graph Neural Networks (FTWGNN), which is both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, building on multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al.,
2014) to factorize the highly dense graph structure and compute the
corresponding sparse wavelet basis that allows us to construct fast wavelet
convolution as the backbone of our novel architecture. Experimental results on
the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at
https://github.com/HySonLab/TWGNN
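A minimal sketch of a wavelet convolution layer in this spirit is shown below; computing the sparse basis with MMF is outside the sketch, and the wavelet_basis argument and module names are assumptions rather than the FTWGNN implementation.

```python
# Sketch: project a graph signal onto a given wavelet basis, apply a learnable
# per-coefficient filter, and map back to the node domain.
import torch
import torch.nn as nn

class WaveletConv(nn.Module):
    def __init__(self, wavelet_basis: torch.Tensor, in_dim: int, out_dim: int):
        super().__init__()
        # wavelet_basis: (n_nodes, n_nodes), ideally sparse and orthogonal
        self.register_buffer("W", wavelet_basis)
        self.filter = nn.Parameter(torch.ones(wavelet_basis.shape[0], 1))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: (n_nodes, in_dim) signal on the graph at one time step
        coeffs = self.W @ x                  # analysis: project onto wavelets
        coeffs = self.filter * coeffs        # learnable diagonal spectral filter
        x_hat = self.W.t() @ coeffs          # synthesis: back to the node domain
        return self.lin(x_hat)
```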