Towards Complex Dynamic Physics System Simulation with Graph Neural ODEs
The strong learning ability of deep learning models helps us comprehend the
real physical world, making learning to simulate complicated particle systems a
promising endeavour. However, the complex laws of the physical world pose
significant challenges to learning-based simulation: both the spatial
dependencies between interacting particles and the temporal dependencies
between particle-system states at different time stamps vary, and together they
dominate the particles' interaction behaviour and the physical systems'
evolution patterns. Existing learning-based simulation methods fail to fully
account for these complexities and therefore cannot yield satisfactory
simulations. To better capture the complex physical laws, this paper proposes a
novel learning-based simulation model, Graph Networks with Spatial-Temporal
neural Ordinary Differential Equations (GNSTODE), that characterizes the
varying spatial and temporal dependencies in particle systems within a unified
end-to-end framework. Trained on real-world particle-particle interaction
observations, GNSTODE can simulate a wide range of particle systems with high
precision. We empirically evaluate GNSTODE's simulation performance on two
real-world particle systems, Gravity and Coulomb, with varying levels of
spatial and temporal dependencies. The results show that the proposed GNSTODE
yields significantly better simulations than state-of-the-art learning-based
simulation methods, demonstrating that GNSTODE can serve as an effective
solution for particle simulation in real-world applications.
Comment: 12 pages, 5 figures, 6 tables, 49 references
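The coupling of message passing and continuous-time integration that GNSTODE builds on can be illustrated with a toy sketch: a pairwise message function stands in for a trained GNN, and explicit Euler steps stand in for a neural ODE solver. All weights, dimensions, and the fixed step size here are invented for illustration; the actual model learns its dynamics end to end.

```python
import numpy as np

def pairwise_messages(x, w):
    """Message for each particle: sum of a nonlinear function of
    (receiver state, sender state) pairs -- a stand-in for a GNN
    message-passing layer.

    x: (n, d) particle states; w: (2*d, d) toy message weights.
    """
    n, d = x.shape
    msgs = np.zeros((n, d))
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.concatenate([x[i], x[j]])  # receiver/sender states
                msgs[i] += np.tanh(pair @ w)         # aggregate by summation
    return msgs

def integrate_graph_ode(x0, w, dt=0.01, steps=100):
    """Roll the system forward with explicit Euler: dx/dt = f_GNN(x)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * pairwise_messages(x, w)
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 3))        # 4 particles with a 3-dim state each
w = rng.normal(size=(6, 3)) * 0.1
x_T = integrate_graph_ode(x0, w)
print(x_T.shape)                    # (4, 3)
```

In practice a neural ODE would use an adaptive solver rather than fixed Euler steps, but the structure, a graph-defined derivative integrated over time, is the same.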
Multi-modal Multi-kernel Graph Learning for Autism Prediction and Biomarker Discovery
Graph-learning-based multi-modal integration and classification is one of the
most challenging problems in disease prediction because of the complexity
involved. To mitigate the negative interactions between modalities during
multi-modal integration and to extract heterogeneous information from graphs,
we propose a novel method called MMKGL (Multi-modal Multi-Kernel Graph
Learning). To address the negative interactions between modalities, we propose
a multi-modal graph embedding module that constructs a multi-modal graph.
Different from
conventional methods that manually construct static graphs for all modalities,
each modality generates a separate graph by adaptive learning, where a function
graph and a supervision graph are introduced for optimization during the
multi-graph fusion embedding process. We then propose a multi-kernel graph
learning module to extract heterogeneous information from the multi-modal
graph. The information in the multi-modal graph at different levels is
aggregated by convolutional kernels with different receptive field sizes,
followed by generating a cross-kernel discovery tensor for disease prediction.
Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange
(ABIDE) dataset and outperforms the state-of-the-art methods. In addition,
discriminative brain regions associated with autism are identified by our
model, providing guidance for the study of autism pathology.
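The idea of aggregating graph information at several receptive-field sizes can be approximated with k-hop propagation over a normalized adjacency matrix. This is only a rough analogue of the paper's multi-kernel convolutions, with hypothetical hop choices and identity node features.

```python
import numpy as np

def normalize_adj(a):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 with self-loops."""
    a = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_kernel_features(a, x, hops=(1, 2, 3)):
    """Aggregate node features at several receptive-field sizes
    (k-hop propagation as a stand-in for kernels of different size),
    then stack them into a cross-kernel tensor."""
    a_hat = normalize_adj(a)
    outs, h = [], x
    for k in range(1, max(hops) + 1):
        h = a_hat @ h                     # one more hop of propagation
        if k in hops:
            outs.append(h)
    return np.stack(outs, axis=0)         # (n_kernels, n_nodes, d)

a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.eye(4)                             # toy one-hot node features
t = multi_kernel_features(a, x)
print(t.shape)                            # (3, 4, 4)
```

The stacked tensor plays the role of the cross-kernel discovery tensor: a downstream classifier can weigh how much each receptive-field size contributes per node.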
Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs
Multivariate time series forecasting has long received significant attention
in real-world applications, such as energy consumption and traffic prediction.
While recent methods demonstrate good forecasting abilities, they have three
fundamental limitations. (i) Discrete neural architectures: Interlacing
individually parameterized spatial and temporal blocks to encode rich
underlying patterns leads to discontinuous latent state trajectories and
larger numerical errors in forecasting. (ii) High complexity: Discrete approaches
complicate models with dedicated designs and redundant parameters, leading to
higher computational and memory overheads. (iii) Reliance on graph priors:
Relying on predefined static graph structures limits their effectiveness and
practicability in real-world applications. In this paper, we address all the
above limitations by proposing a continuous model to forecast
Multivariate Time series with dynamic Graph neural Ordinary Differential
Equations (MTGODE). Specifically, we first abstract multivariate time series
into dynamic graphs with time-evolving node features and unknown graph
structures. Then, we design and solve a neural ODE to complement missing graph
topologies and unify both spatial and temporal message passing, allowing deeper
graph propagation and fine-grained temporal information aggregation to
characterize stable and precise latent spatial-temporal dynamics. Our
experiments demonstrate the superiority of MTGODE from various
perspectives on five time series benchmark datasets.
Comment: 14 pages, 6 figures, 5 tables
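One common way to drop the reliance on graph priors, in the spirit of this approach, is to infer the adjacency from learnable node embeddings. The similarity-plus-top-k construction below is a generic sketch, not the paper's exact parameterization; the embedding size and neighbour count are invented.

```python
import numpy as np

def infer_adjacency(emb, k=2):
    """Derive a graph from node embeddings: pairwise similarity,
    sparsified to each node's top-k neighbours, then row-normalized
    so it can be used directly for message passing."""
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)            # no self-edges
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nbrs = np.argsort(sim[i])[-k:]        # top-k most similar nodes
        adj[i, nbrs] = np.exp(sim[i, nbrs])
    return adj / adj.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
emb = rng.normal(size=(5, 8))                 # 5 series, 8-dim embeddings
a = infer_adjacency(emb)
print(a.shape)                                # (5, 5)
```

In a trained model the embeddings are parameters updated by backpropagation, so the inferred topology evolves with training instead of being fixed up front.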
A Survey on Fairness-aware Recommender Systems
As information filtering services, recommender systems have extremely
enriched our daily life by providing personalized suggestions and facilitating
people in decision-making, which makes them vital and indispensable to human
society in the information era. However, as people become more dependent on
them, recent studies show that recommender systems can have unintended
impacts on society and individuals because of their unfairness
(e.g., gender discrimination in job recommendations). To develop trustworthy
services, it is crucial to devise fairness-aware recommender systems that can
mitigate these bias issues. In this survey, we summarise existing methodologies
and practices of fairness in recommender systems. Firstly, we present concepts
of fairness in different recommendation scenarios, comprehensively categorize
current advances, and introduce typical methods to promote fairness in
different stages of recommender systems. Next, after introducing datasets and
evaluation metrics applied to assess the fairness of recommender systems, we
delve into the significant influence that fairness-aware recommender
systems exert on real-world industrial applications. Subsequently, we highlight
the connection between fairness and other principles of trustworthy recommender
systems, aiming to consider trustworthiness principles holistically while
advocating for fairness. Finally, we summarize this review, spotlighting
promising opportunities in comprehending concepts, frameworks, the balance
between accuracy and fairness, and the ties with trustworthiness, with the
ultimate goal of fostering the development of fairness-aware recommender
systems.
Comment: 27 pages, 9 figures
Graph Neural Networks for Graphs with Heterophily: A Survey
Recent years have witnessed fast developments of graph neural networks (GNNs)
that have benefited myriads of graph analytic tasks and applications. In
general, most GNNs depend on the homophily assumption that nodes belonging to
the same class are more likely to be connected. However, as a ubiquitous graph
property in numerous real-world scenarios, heterophily, i.e., nodes with
different labels tend to be linked, significantly limits the performance of
tailor-made homophilic GNNs. Hence, GNNs for heterophilic graphs are gaining
increasing research attention to enhance graph learning with heterophily. In
this paper, we provide a comprehensive review of GNNs for heterophilic graphs.
Specifically, we propose a systematic taxonomy that organizes
existing heterophilic GNN models, along with a general summary and detailed
analysis. Furthermore, we discuss the correlation between graph heterophily and
various graph research domains, aiming to facilitate the development of more
effective GNNs across a spectrum of practical applications and learning tasks
in the graph research community. In the end, we point out the potential
directions to advance and stimulate more future research and applications on
heterophilic graph learning with GNNs.
Comment: 22 pages
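The homophily assumption discussed here is often quantified with the edge homophily ratio: the fraction of edges whose endpoints share a label, with values near 1 indicating homophily and values near 0 strong heterophily. A minimal computation on a toy graph:

```python
def edge_homophily(edges, labels):
    """Edge homophily ratio: fraction of edges whose two endpoints
    carry the same class label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

labels = [0, 0, 1, 1]                      # toy node labels
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # toy undirected edge list
print(edge_homophily(edges, labels))       # 0.5
```

Surveys in this area typically report such ratios per benchmark dataset to position it on the homophily-heterophily spectrum.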
GOODAT: Towards Test-time Graph Out-of-Distribution Detection
Graph neural networks (GNNs) have found widespread application in modeling
graph data across diverse domains. While GNNs excel in scenarios where the
testing data shares the distribution of their training counterparts (in
distribution, ID), they often exhibit incorrect predictions when confronted
with samples from an unfamiliar distribution (out-of-distribution, OOD). To
identify and reject OOD samples with GNNs, recent studies have explored graph
OOD detection, often focusing on training a specific model or modifying the
data on top of a well-trained GNN. Despite their effectiveness, these methods
come with heavy training resources and costs, as they need to optimize the
GNN-based models on training data. Moreover, their reliance on modifying the
original GNNs and accessing training data further restricts their universality.
To this end, this paper introduces a method to detect Graph Out-of-Distribution
At Test-time (namely GOODAT), a data-centric, unsupervised, and plug-and-play
solution that operates independently of training data and modifications of GNN
architecture. With a lightweight graph masker, GOODAT can learn informative
subgraphs from test samples, enabling the capture of distinct graph patterns
between OOD and ID samples. To optimize the graph masker, we meticulously
design three unsupervised objective functions based on the graph information
bottleneck principle, motivating the masker to capture compact yet informative
subgraphs for OOD detection. Comprehensive evaluations confirm that our GOODAT
method outperforms state-of-the-art methods across a variety of real-world
datasets. The code is available on GitHub: https://github.com/Ee1s/GOODAT
Comment: 9 pages, 5 figures
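A toy sketch of the graph-information-bottleneck trade-off behind the masker: reward a soft edge mask for preserving the graph signal while penalizing its size. The two objective terms and the `beta` weight are illustrative assumptions, not GOODAT's actual three unsupervised losses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ib_objective(mask_logits, adj, feat, beta=0.1):
    """Information-bottleneck-style score for a soft edge mask:
    keep the masked subgraph informative (preserve the propagated
    signal A @ X of the full graph) while penalizing mask size so
    the retained subgraph stays compact."""
    m = sigmoid(mask_logits) * adj                              # soft-masked adjacency
    informativeness = -np.linalg.norm(m @ feat - adj @ feat)    # higher is better
    compactness = -beta * m.sum()                               # penalize large masks
    return informativeness + compactness

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
feat = np.eye(3)
dense = ib_objective(np.full((3, 3), 4.0), adj, feat)    # keep nearly everything
sparse = ib_objective(np.full((3, 3), -4.0), adj, feat)  # drop nearly everything
print(dense > sparse)
```

Optimizing the mask logits by gradient ascent on such an objective would, at test time, isolate a compact subgraph whose patterns can then separate ID from OOD samples.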
A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection
Time series are the primary data type used to record dynamic system
measurements and generated in great volume by both physical sensors and online
processes (virtual sensors). Time series analytics is therefore crucial to
unlocking the wealth of information implicit in available data. With the recent
advancements in graph neural networks (GNNs), there has been a surge in
GNN-based approaches for time series analysis, which can explicitly model
inter-temporal and inter-variable relationships, which traditional and other
deep neural network-based methods struggle to do. In this survey, we provide a
comprehensive review of graph neural networks for time series analysis
(GNN4TS), encompassing four fundamental dimensions: forecasting,
classification, anomaly detection, and imputation. Our aim is to guide
designers and practitioners to understand, build applications, and advance
research on GNN4TS. First, we provide a comprehensive task-oriented taxonomy
of GNN4TS. Then, we present and discuss representative research works and,
finally, discuss mainstream applications of GNN4TS. A comprehensive discussion
of potential future research directions completes the survey. This survey, for
the first time, brings together a vast array of knowledge on GNN-based time
series research, highlighting the foundations, practical applications, and
opportunities of graph neural networks for time series analysis.
Comment: 27 pages, 6 figures, 5 tables
Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values
The detection of anomalies in multivariate time series data is crucial for
various practical applications, including smart power grids, traffic flow
forecasting, and industrial process control. However, real-world time series
data is usually not well-structured, posing significant challenges to existing
approaches: (1) The existence of missing values in multivariate time series
data along variable and time dimensions hinders the effective modeling of
interwoven spatial and temporal dependencies, resulting in important patterns
being overlooked during model training; (2) Anomaly scoring with
irregularly-sampled observations is less explored, making it difficult to use
existing detectors for multivariate series without fully-observed values. In
this work, we introduce a novel framework called GST-Pro, which utilizes a
graph spatiotemporal process and anomaly scorer to tackle the aforementioned
challenges in detecting anomalies on irregularly-sampled multivariate time
series. Our approach comprises two main components. First, we propose a graph
spatiotemporal process based on neural controlled differential equations. This
process enables effective modeling of multivariate time series from both
spatial and temporal perspectives, even when the data contains missing values.
Second, we present a novel distribution-based anomaly scoring mechanism that
alleviates the reliance on complete uniform observations. By analyzing the
predictions of the graph spatiotemporal process, our approach allows anomalies
to be easily detected. Our experimental results show that the GST-Pro method
can effectively detect anomalies in time series data and outperforms
state-of-the-art methods, regardless of whether there are missing values
present in the data. Our code is available at https://github.com/huankoh/GST-Pro.
Comment: Accepted by Information Fusion
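The distribution-based scoring idea, tolerating missing values by scoring only observed entries, can be sketched as a masked z-score over prediction residuals. The residual statistics `mu` and `sigma` are assumed to come from training data and are invented here, as are the toy predictions.

```python
import numpy as np

def anomaly_scores(pred, obs, mask, mu, sigma):
    """Score each time step by z-scoring its observed prediction
    residuals against assumed training residual statistics, averaging
    over observed variables only so missing values carry no weight."""
    resid = np.abs(pred - obs)
    z = (resid - mu) / sigma
    z = np.where(mask, z, 0.0)                       # ignore missing entries
    return z.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)

pred = np.array([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])   # model forecasts
obs  = np.array([[1.1, 2.0], [5.0, 2.1], [1.0, np.nan]])  # NaN = missing
mask = ~np.isnan(obs)
obs0 = np.nan_to_num(obs)
mu, sigma = 0.1, 0.2                                 # assumed training stats
s = anomaly_scores(pred, obs0, mask, mu, sigma)
print(s[1] > s[0])   # the spike at t=1 scores highest
```

Because the score is a per-step average over observed variables, it remains comparable across steps with different numbers of missing values.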
Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
Time series forecasting holds significant importance in many real-world
dynamic systems and has been extensively studied. Unlike natural language
processing (NLP) and computer vision (CV), where a single large model can tackle
multiple tasks, models for time series forecasting are often specialized,
necessitating distinct designs for different tasks and applications. While
pre-trained foundation models have made impressive strides in NLP and CV, their
development in time series domains has been constrained by data sparsity.
Recent studies have revealed that large language models (LLMs) possess robust
pattern recognition and reasoning abilities over complex sequences of tokens.
However, the challenge remains in effectively aligning the modalities of time
series data and natural language to leverage these capabilities. In this work,
we present Time-LLM, a reprogramming framework to repurpose LLMs for general
time series forecasting with the backbone language models kept intact. We begin
by reprogramming the input time series with text prototypes before feeding it
into the frozen LLM to align the two modalities. To augment the LLM's ability
to reason with time series data, we propose Prompt-as-Prefix (PaP), which
enriches the input context and directs the transformation of reprogrammed input
patches. The transformed time series patches from the LLM are finally projected
to obtain the forecasts. Our comprehensive evaluations demonstrate that
Time-LLM is a powerful time series learner that outperforms state-of-the-art,
specialized forecasting models. Moreover, Time-LLM excels in both few-shot and
zero-shot learning scenarios.
Comment: Accepted by the 12th International Conference on Learning Representations (ICLR 2024)
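The reprogramming step can be sketched as cross-attention from time-series patch queries to a small bank of text-prototype keys and values, so the frozen LLM only ever receives text-like embeddings. All dimensions, the single-head form, and the random weights are simplifying assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reprogram_patches(patches, prototypes, wq, wk, wv):
    """Map time-series patches into the language model's input space:
    patch queries attend over text-prototype keys, and the output is a
    mixture of prototype values living in the LLM's embedding space."""
    q = patches @ wq                          # (n_patches, d_model)
    k = prototypes @ wk                       # (n_proto, d_model)
    v = prototypes @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]))
    return attn @ v                           # (n_patches, d_model)

rng = np.random.default_rng(2)
patches = rng.normal(size=(6, 16))            # 6 patches of length 16
prototypes = rng.normal(size=(10, 32))        # 10 text prototypes (toy dims)
wq = rng.normal(size=(16, 32))
wk = rng.normal(size=(32, 32))
wv = rng.normal(size=(32, 32))
out = reprogram_patches(patches, prototypes, wq, wk, wv)
print(out.shape)                              # (6, 32)
```

Only the projection matrices would be trained in such a setup; the backbone language model that consumes `out` stays frozen, which is the point of reprogramming.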