Graph Neural Stochastic Differential Equations
We present a novel model, Graph Neural Stochastic Differential Equations
(Graph Neural SDEs). This technique extends Graph Neural Ordinary
Differential Equations (Graph Neural ODEs) by embedding randomness into the
data representation via Brownian motion. This addition enables the assessment
of prediction uncertainty, a crucial aspect frequently missed in current
models. In our framework, we spotlight the Latent Graph Neural SDE
variant, demonstrating its effectiveness. Through empirical studies, we find
that Latent Graph Neural SDEs surpass conventional models like Graph
Convolutional Networks and Graph Neural ODEs, especially in confidence
prediction, making them better suited to out-of-distribution detection
across both static and spatio-temporal contexts.
Comment: 9 main pages, 6 of appendix (15 in total), submitted for the Learning on Graphs (LoG) conference
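The core mechanism, Brownian motion injected into the latent dynamics so that repeated samples yield an uncertainty estimate, can be pictured with a scalar Euler-Maruyama integrator. This is a minimal sketch in plain Python; the drift and diffusion functions stand in for learned networks and are not the authors' code:

```python
import math
import random
import statistics

def euler_maruyama(x0, drift, diffusion, t1, n_steps, rng):
    """Integrate dX = drift(X) dt + diffusion(X) dW by Euler-Maruyama."""
    dt = t1 / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x += drift(x) * dt + diffusion(x) * dw
    return x

# Toy scalar "latent dynamics"; learned networks would replace these.
drift = lambda x: -0.5 * x   # pulls the latent state toward 0
diffusion = lambda x: 0.3    # constant noise scale

rng = random.Random(0)
# Sampling many trajectories turns the injected noise into uncertainty:
samples = [euler_maruyama(1.0, drift, diffusion, 1.0, 100, rng)
           for _ in range(2000)]
mean = statistics.fmean(samples)  # point prediction
std = statistics.stdev(samples)   # predictive spread; zero for a plain ODE
```

A deterministic ODE would collapse all samples onto one trajectory; the non-zero spread across SDE samples is what supports confidence prediction and out-of-distribution detection.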
Neural Ordinary Differential Equation Control of Dynamics on Graphs
We study the ability of neural networks to calculate feedback control signals
that steer trajectories of continuous time non-linear dynamical systems on
graphs, which we represent with neural ordinary differential equations (neural
ODEs). To do so, we present a neural-ODE control (NODEC) framework and find
that it can learn feedback control signals that drive graph dynamical systems
into desired target states. While we use loss functions that do not constrain
the control energy, our results show, in accordance with related work, that
NODEC produces low energy control signals. Finally, we evaluate the performance
and versatility of NODEC against well-known feedback controllers and deep
reinforcement learning. We use NODEC to generate feedback controls for systems
of more than one thousand coupled, non-linear ODEs that represent epidemic
processes and coupled oscillators.
Comment: Fifth version improves and clarifies notation
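The setup, a feedback control signal added to coupled node dynamics and judged by a target-state loss, can be sketched in a few lines. In NODEC the controller is a neural network trained by backpropagating that loss through the ODE solver; here a hand-written proportional rule stands in for it, and the graph and gains are illustrative only:

```python
# Toy graph: a 4-node ring (adjacency list).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
target = [1.0, -1.0, 1.0, -1.0]

def controller(x):
    """Stand-in feedback control signal: push each node toward its target.
    In NODEC this map would be a trained neural network."""
    return [10.0 * (target[i] - x[i]) for i in range(len(x))]

def step(x, dt):
    """One Euler step of controlled diffusion dynamics on the graph."""
    u = controller(x)
    return [x[i] + dt * (sum(x[j] - x[i] for j in neighbors[i]) + u[i])
            for i in range(len(x))]

x = [0.0] * 4
for _ in range(2000):          # integrate to t = 20
    x = step(x, 0.01)

# Target-state loss that training would minimise:
loss = sum((x[i] - target[i]) ** 2 for i in range(4))
```

Even this fixed controller drives the system most of the way to the alternating target; the learned controller in the paper replaces the proportional rule and scales to thousands of coupled ODEs.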
Universal Graph Random Features
We propose a novel random walk-based algorithm for unbiased estimation of
arbitrary functions of a weighted adjacency matrix, coined universal graph
random features (u-GRFs). This includes many of the most popular examples of
kernels defined on the nodes of a graph. Our algorithm enjoys subquadratic time
complexity with respect to the number of nodes, overcoming the notoriously
prohibitive cubic scaling of exact graph kernel evaluation. It can also be
trivially distributed across machines, permitting learning on much larger
networks. At the heart of the algorithm is a modulation function which
upweights or downweights the contribution from different random walks depending
on their lengths. We show that by parameterising it with a neural network we
can obtain u-GRFs that give higher-quality kernel estimates or perform
efficient, scalable kernel learning. We provide robust theoretical analysis and
support our findings with experiments including pointwise estimation of fixed
graph kernels, solving non-homogeneous graph ordinary differential equations,
node clustering, and kernel regression on triangular meshes.
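The length-dependent reweighting described above can be illustrated with a minimal random-walk estimator (plain Python, not the authors' implementation). Here the per-length coefficients play the role of the modulation function, and the estimator is unbiased for the power series sum_k coeffs[k] * (A^k)_{ij}:

```python
import random

def exact_kernel(adj, coeffs, i, j):
    """Ground truth: sum_k coeffs[k] * (A^k)_{ij}, by repeated mat-vec."""
    n = len(adj)
    v = [1.0 if t == i else 0.0 for t in range(n)]
    total = coeffs[0] * v[j]
    for k in range(1, len(coeffs)):
        v = [sum(v[s] for s in range(n) if adj[s][t]) for t in range(n)]
        total += coeffs[k] * v[j]
    return total

def grf_estimate(adj, coeffs, i, j, p_halt, n_walks, rng):
    """Unbiased random-walk estimate of sum_k coeffs[k] * (A^k)_{ij}.

    The per-length weights coeffs[k] play the role of the modulation
    function; in u-GRFs they would be parameterised by a neural network.
    Assumes every node has at least one neighbour.
    """
    n = len(adj)
    nbrs = [[t for t in range(n) if adj[s][t]] for s in range(n)]
    acc = 0.0
    for _ in range(n_walks):
        v, k, w = i, 0, 1.0
        while k < len(coeffs):         # longer walks have zero coefficient
            if rng.random() < p_halt:                # geometric halting
                if v == j:
                    acc += w * coeffs[k] / p_halt    # importance-weighted hit
                break
            w *= len(nbrs[v]) / (1.0 - p_halt)       # corrects 1/deg steps
            v = rng.choice(nbrs[v])
            k += 1
    return acc / n_walks

# Triangle graph; coefficients of a short truncated power series:
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
coeffs = [1.0, 0.5, 0.25]
exact = exact_kernel(adj, coeffs, 0, 0)
approx = grf_estimate(adj, coeffs, 0, 0, 0.5, 20000, random.Random(0))
```

Each walk costs time independent of the node count, which is the source of the subquadratic scaling; exact evaluation, by contrast, requires dense matrix powers.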
MTP-GO: Graph-Based Probabilistic Multi-Agent Trajectory Prediction with Neural ODEs
Enabling resilient autonomous motion planning requires robust predictions of
surrounding road users' future behavior. In response to this need and the
associated challenges, we introduce MTP-GO. The model encodes
the scene using temporal graph neural networks to produce the inputs to an
underlying motion model. The motion model is implemented using neural ordinary
differential equations where the state-transition functions are learned with
the rest of the model. Multimodal probabilistic predictions are obtained by
combining the concept of mixture density networks and Kalman filtering. The
results illustrate the predictive capabilities of the proposed model across
various data sets, outperforming several state-of-the-art methods on a number
of metrics.
Comment: Code: https://github.com/westny/mtp-g
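The multimodal output can be pictured as a Gaussian mixture whose per-mode means and variances a Kalman-style recursion would supply. The sketch below evaluates such a predictive density; the mode names and numbers are illustrative, not MTP-GO's code:

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a one-dimensional Gaussian."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def mixture_pdf(x, weights, means, variances):
    """Multimodal predictive density: a weighted sum of Gaussian modes."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))

# Two hypotheses about a road user's future lateral position, e.g.
# "keep lane" vs "change lane". In MTP-GO the mixture weights come from
# the network and each mode's mean/variance from a Kalman-style
# recursion driven by the learned motion model; numbers here are toy.
weights = [0.7, 0.3]
means = [0.0, 3.5]      # metres
variances = [0.25, 1.0]

density_keep = mixture_pdf(0.0, weights, means, variances)
density_change = mixture_pdf(3.5, weights, means, variances)
```

A single-Gaussian predictor would have to average the two hypotheses; the mixture keeps both maneuvers as distinct, weighted modes.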
Generalizing Graph ODE for Learning Complex System Dynamics across Environments
Learning multi-agent system dynamics has been extensively studied for various
real-world applications, such as molecular dynamics in biology. Most of the
existing models are built to learn single system dynamics from observed
historical data and predict the future trajectory. In practice, however, we
might observe multiple systems that are generated across different
environments, which differ in latent exogenous factors such as temperature and
gravity. One simple solution is to learn a separate model per environment,
but this fails to exploit the commonalities among the dynamics across
environments and yields poor predictions when per-environment data is
sparse or limited. Here, we present GG-ODE (Generalized Graph Ordinary
Differential Equations), a machine learning framework for learning continuous
multi-agent system dynamics across environments. Our model learns system
dynamics using neural ordinary differential equations (ODEs) parameterized by
Graph Neural Networks (GNNs) to capture the continuous interaction among
agents. We achieve the model generalization by assuming the dynamics across
different environments are governed by common physics laws that can be captured
via learning a shared ODE function. The distinct latent exogenous factors
learned for each environment are incorporated into the ODE function to account
for their differences. To improve model performance, we additionally design two
regularization losses to (1) enforce the orthogonality between the learned
initial states and exogenous factors via mutual information minimization; and
(2) reduce the temporal variance of learned exogenous factors within the same
system via contrastive learning. Experiments over various physical simulations
show that our model can accurately predict system dynamics, especially over
long horizons, and can generalize well to new systems with few observations.
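The central idea, one ODE function shared across environments plus a per-environment exogenous latent that shifts its behaviour, can be sketched in a few lines. A hand-written vector field stands in for the GNN-parameterised one, and all names are illustrative:

```python
def shared_ode(x, v, c_env):
    """Shared vector field reused across environments.

    In GG-ODE this would be a GNN over interacting agents; here a damped
    oscillator stands in, with the exogenous latent c_env acting like an
    environment-specific constant force (illustrative only).
    """
    return v, -x - 0.1 * v + c_env   # (dx/dt, dv/dt)

def rollout(x0, v0, c_env, dt=0.01, n_steps=1000):
    """Euler rollout of the shared dynamics under one environment latent."""
    x, v = x0, v0
    for _ in range(n_steps):
        dx, dv = shared_ode(x, v, c_env)
        x, v = x + dt * dx, v + dt * dv
    return x

# Same initial condition and the same shared ODE function, but two
# different exogenous latents give two different trajectories:
x_env_a = rollout(1.0, 0.0, c_env=0.0)
x_env_b = rollout(1.0, 0.0, c_env=0.5)
```

Because the vector field is shared, data from every environment trains the same function, while only the low-dimensional latent must be inferred per environment; this is what lets the model generalize to new systems with few observations.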