Natural Graph Wavelet Packet Dictionaries
We introduce a set of novel multiscale basis transforms for signals on graphs
that utilize their "dual" domains by incorporating the "natural" distances
between graph Laplacian eigenvectors, rather than simply using the eigenvalue
ordering. These basis dictionaries can be seen as generalizations of the
classical Shannon wavelet packet dictionary to arbitrary graphs, and do not
rely on the frequency interpretation of Laplacian eigenvalues. We describe the
algorithms (involving either vector rotations or orthogonalizations) to
construct these basis dictionaries, use them to efficiently approximate graph
signals through the best basis search, and demonstrate the strengths of these
basis dictionaries for graph signals measured on sunflower graphs and street
networks.
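The key ingredient above is a notion of distance between graph Laplacian eigenvectors that goes beyond eigenvalue ordering. As a hedged illustration (this is not the paper's actual metric, just a simple stand-in comparing where each eigenvector's energy lives on the graph), one can sketch the idea as:

```python
import numpy as np

# Path graph on n nodes: combinatorial Laplacian L = D - A.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Eigendecomposition; eigh returns eigenvalues in ascending order,
# which is the "eigenvalue ordering" the abstract moves away from.
evals, evecs = np.linalg.eigh(L)

def eig_distance(phi, psi):
    # Illustrative distance: compare the energy distributions
    # |phi|^2 and |psi|^2 over the nodes, so eigenvectors that
    # "live" in different regions of the graph are far apart.
    return np.linalg.norm(phi**2 - psi**2, ord=1)

# Pairwise distance matrix between all eigenvectors.
D = np.array([[eig_distance(evecs[:, i], evecs[:, j])
               for j in range(n)] for i in range(n)])
```

A hierarchical clustering of the eigenvectors under such a distance is what organizes the "dual" domain into the multiscale packet structure.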
Fast Temporal Wavelet Graph Neural Networks
Spatio-temporal signal forecasting plays an important role in numerous
domains, especially neuroscience and transportation. The task is challenging
due to the highly intricate spatial structure and the non-linear temporal
dynamics of the network. To enable reliable and timely forecasts for human
brain and traffic networks, we propose the Fast Temporal Wavelet Graph Neural
Network (FTWGNN), which is both time- and memory-efficient for learning tasks
on time-series data with an underlying graph structure, building on
multiresolution analysis and wavelet theory on discrete spaces. We employ
Multiresolution Matrix Factorization (MMF) (Kondor et al.,
2014) to factorize the highly dense graph structure and compute the
corresponding sparse wavelet basis that allows us to construct fast wavelet
convolution as the backbone of our novel architecture. Experimental results on
the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG
dataset show that FTWGNN is competitive with the state of the art while
maintaining a low computational footprint. Our PyTorch implementation is
publicly available at https://github.com/HySonLab/TWGNN.
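The core operation is a wavelet convolution in a sparse orthogonal basis: the signal is transformed into the wavelet domain, filtered coefficient-wise, and transformed back. A minimal sketch of that pattern, using an orthonormal Haar basis as a stand-in for the MMF-derived sparse basis (the MMF computation itself is not shown, and the filter values are hypothetical):

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar wavelet basis on n = 2^k points; its rows are
    # sparse at fine scales, mimicking the sparsity MMF would give.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2)            # averages
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2)  # details
    return np.vstack([top, bot])

def wavelet_conv(x, g, W):
    # Filter coefficient-wise in the wavelet domain:
    # y = W^T diag(g) W x, with W having orthonormal rows.
    return W.T @ (g * (W @ x))

n = 8
W = haar_matrix(n)
x = np.random.default_rng(0).normal(size=n)
g = np.ones(n)                 # identity filter: y reproduces x
y = wavelet_conv(x, g, W)
```

When W is sparse, both transforms cost far less than a dense spectral filter built from the full Laplacian eigenbasis, which is the source of the time and memory savings claimed above.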
Reduction of dynamical biochemical reaction networks in computational biology
Biochemical networks are used in computational biology to model the static
and dynamical details of systems involved in cell signaling, metabolism, and
regulation of gene expression. Parametric and structural uncertainty, as well
as combinatorial explosion, are strong obstacles to analyzing the dynamics
of large models of this type. Multiscaleness is another property of these
networks, one that can be exploited to overcome some of these obstacles.
Networks with many well-separated time scales can be reduced to simpler
networks in a way that depends only on the orders of magnitude, not on the
exact values, of the kinetic parameters. The main idea behind such robust
simplifications of
networks is the concept of dominance among model elements, allowing
hierarchical organization of these elements according to their effects on the
network dynamics. This concept finds a natural formulation in tropical
geometry. We revisit, in the light of these new ideas, the main approaches to
model reduction of reaction networks, such as quasi-steady state and
quasi-equilibrium approximations, and provide practical recipes for model
reduction of linear and nonlinear networks. We also discuss the application of
model reduction to backward pruning machine learning techniques.
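The quasi-steady-state approximation mentioned above can be illustrated on the classic Michaelis–Menten mechanism E + S <-> ES -> E + P: when total enzyme is scarce, the complex concentration is slaved to the substrate, and the full two-variable system collapses to one ODE. A minimal numerical sketch with hypothetical rate constants (forward-Euler, pure Python):

```python
# Hypothetical rate constants; binding/unbinding fast, enzyme scarce,
# which is the regime where the quasi-steady-state reduction is valid.
k1, km1, k2 = 100.0, 1.0, 1.0    # binding, unbinding, catalysis
e0, s0 = 0.01, 1.0               # total enzyme << substrate
Km = (km1 + k2) / k1             # Michaelis constant

dt, T = 1e-3, 50.0
steps = int(T / dt)

# Full system: substrate s and enzyme-substrate complex c.
s, c = s0, 0.0
for _ in range(steps):
    ds = -k1 * (e0 - c) * s + km1 * c
    dc = k1 * (e0 - c) * s - (km1 + k2) * c
    s, c = s + dt * ds, c + dt * dc

# Reduced system: c is eliminated, ds/dt = -k2*e0*s / (Km + s).
sr = s0
for _ in range(steps):
    sr += dt * (-k2 * e0 * sr / (Km + sr))
```

The reduced trajectory tracks the full one up to an error of order e0, and, in the spirit of the abstract, depends on the rate constants only through the lumped quantities Km and k2*e0.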
Asymptotology of Chemical Reaction Networks
The concept of the limiting step is extended to the asymptotology of
multiscale reaction networks. A complete theory for linear networks with
well-separated reaction rate constants is developed. We present algorithms for
explicit approximation of the eigenvalues and eigenvectors of the kinetic
matrix. The accuracy of the estimates is proven, and the performance of the
algorithms is demonstrated on simple examples. The application of the
algorithms to nonlinear systems is discussed.
Comment: 23 pages, 8 figures, 84 refs, corrected journal version
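The flavor of these eigenvalue approximations can be seen on a tiny example. For a linear network A <-> B -> C with well-separated constants (hypothetical values below, not taken from the paper), the eigenvalues of the kinetic matrix are well approximated by simple combinations of the dominant rate constants:

```python
import numpy as np

# Well-separated rate constants: fast exchange A <-> B, slow leak B -> C.
k1, km1, k2 = 100.0, 1.0, 0.01   # A->B, B->A, B->C

# Kinetic matrix for concentrations (A, B); C is an absorbing product.
K = np.array([[-k1,          km1],
              [ k1, -(km1 + k2)]])

exact = np.sort(np.linalg.eigvals(K).real)   # [fast, slow], both < 0

# Order-of-magnitude estimates in the spirit of the asymptotology:
# the fast eigenvalue is set by the fast exchange, the slow one by the
# leak B -> C weighted by the quasi-equilibrium fraction of mass on B.
fast_est = -(k1 + km1)
slow_est = -k2 * k1 / (k1 + km1)
```

The estimates depend only on which constants dominate, not on their exact values — the robustness property emphasized throughout this line of work.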