Explainable Spatio-Temporal Graph Neural Networks
Spatio-temporal graph neural networks (STGNNs) have gained popularity as a
powerful tool for effectively modeling spatio-temporal dependencies in diverse
real-world urban applications, including intelligent transportation and public
safety. However, the black-box nature of STGNNs limits their interpretability,
hindering their application in scenarios related to urban resource allocation
and policy formulation. To bridge this gap, we propose an Explainable
Spatio-Temporal Graph Neural Network (STExplainer) framework that enhances
STGNNs with inherent explainability, enabling them to provide accurate
predictions and faithful explanations simultaneously. Our framework integrates
a unified spatio-temporal graph attention network with a positional information
fusion layer as the STG encoder and decoder, respectively. Furthermore, we
propose a structure distillation approach based on the Graph Information
Bottleneck (GIB) principle with an explainable objective, which is instantiated
by the STG encoder and decoder. Through extensive experiments, we demonstrate
that our STExplainer outperforms state-of-the-art baselines in terms of
predictive accuracy and explainability metrics (i.e., sparsity and fidelity) on
traffic and crime prediction tasks. Furthermore, our model exhibits superior
representation ability in alleviating missing-data and sparsity issues. The
implementation code is available at: https://github.com/HKUDS/STExplainer.
Comment: 32nd ACM International Conference on Information and Knowledge Management (CIKM '23)
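To make the GIB-based structure distillation idea concrete, here is a minimal, hypothetical sketch (not the released STExplainer code; names and the Gumbel relaxation are assumptions): edges of the spatio-temporal graph are scored, a sparse sub-structure is sampled differentiably, and training trades prediction loss against the information kept in the retained edges.

```python
# Hypothetical sketch of GIB-style structure distillation (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GIBEdgeMask(nn.Module):
    """Scores each edge and samples a sparse keep-mask via a Gumbel relaxation."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(2 * dim, 2)  # logits for keep / drop per edge

    def forward(self, h, edge_index, tau=0.5):
        src, dst = edge_index                               # (2, E) node indices
        logits = self.scorer(torch.cat([h[src], h[dst]], dim=-1))
        keep_prob = F.gumbel_softmax(logits, tau=tau, hard=False)[:, 0]
        return keep_prob                                    # (E,) soft edge mask

def gib_loss(pred_loss, keep_prob, beta=0.01, r=0.5):
    """Prediction loss plus a KL penalty pushing edge-keep probabilities toward
    a target sparsity level r (the information-bottleneck trade-off)."""
    kl = (keep_prob * torch.log(keep_prob / r + 1e-8)
          + (1 - keep_prob) * torch.log((1 - keep_prob) / (1 - r) + 1e-8)).mean()
    return pred_loss + beta * kl
```

In such a setup, beta and the target sparsity r control how aggressively the structure is distilled, and the learned edge-keep probabilities are what sparsity- and fidelity-style explanation metrics would evaluate.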
Classification of Interacting Dirac Semimetals
Topological band theory predicts a classification of
three-dimensional (3D) Dirac semimetals (DSMs) at the single-particle level.
Namely, an arbitrary number of identical bulk Dirac nodes will always remain
locally stable and gapless in the single-particle band spectrum, as long as the
protecting symmetry is preserved. In this work, we find that this
single-particle classification for such symmetry-protected DSMs will break down
to a reduced classification in the presence of symmetry-preserving electron
interactions. Our theory is based on a dimensional reduction strategy which
reduces 3D Dirac fermions to 1D building blocks, i.e., vortex-line modes, while
respecting all the key symmetries. Using bosonization techniques, we find that
there exists a minimal number of DSM copies whose collection of vortex-line
modes can be symmetrically
eliminated via four-fermion interactions. While this gapping mechanism does not
have any free-fermion counterpart, it yields an intuitive "electron-trion
coupling" picture. By developing a topological field theory for DSMs and
further checking the anomaly-free condition, we independently arrive at the
same classification results. Our theory paves the way for understanding
topological crystalline semimetallic phases in the strongly correlated regime.
Comment: 5+7 pages, 1 table, 1 figure
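As a generic, textbook-style reminder of how bosonization lets a four-fermion interaction gap modes that no symmetric bilinear mass could (conventions and the specific field combination below are illustrative, not taken from the paper):

```latex
% Generic sketch only; conventions for the dual boson fields vary and the
% field combination inside the cosine is illustrative.
\psi_{R/L}(x) \;\sim\; \frac{1}{\sqrt{2\pi a}}\, e^{\, i\,[\theta(x)\pm\phi(x)]}
\quad\Longrightarrow\quad
H_{\text{int}} \;\sim\; g\!\int\! dx\,
  \cos\!\big(\phi_1+\phi_2-\phi_3-\phi_4\big).
% When mutually commuting cosines of this type pin their arguments, the
% participating 1D modes are gapped without any fermion-bilinear mass term.
```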
Effective field theories of topological crystalline insulators and topological crystals
We present a general approach to obtain effective field theories for
topological crystalline insulators whose low-energy theories are described by
massive Dirac fermions. We show that these phases are characterized by the
responses to spatially dependent mass parameters with interfaces. These mass
interfaces implement the dimensional reduction procedure such that the state of
interest is smoothly deformed into a topological crystal, which serves as a
representative state of a phase in the general classification. Effective field
theories are obtained by integrating out the massive Dirac fermions, and
various quantized topological terms are uncovered. Our approach can be
generalized to other crystalline symmetry protected topological phases and
provides a general strategy to derive effective field theories for such
crystalline topological phases.
Comment: 20 pages, 10 figures, 1 table. Published version with minor changes
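A standard example of the kind of quantized topological term such a procedure produces, not specific to this work: integrating out a single massive Dirac fermion coupled to a U(1) gauge field in (3+1) dimensions generates a theta term whose coefficient shifts by pi when the Dirac mass changes sign, which is how spatially dependent mass parameters with interfaces encode quantized responses.

```latex
% Standard illustration (overall offset of theta is regularization dependent):
% the theta term generated by one massive (3+1)D Dirac fermion.
S_\theta \;=\; \frac{\theta}{32\pi^2}\int d^4x\;
  \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma},
\qquad
\Delta\theta \;=\; \pi \quad \text{when } \operatorname{sgn}(m) \text{ flips},
% so a mass interface (domain wall) carries the half-quantized Hall response
% implied by the jump of theta across it.
```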
Spatio-Temporal Meta Contrastive Learning
Spatio-temporal prediction is crucial in numerous real-world applications,
including traffic forecasting and crime prediction, which aim to improve public
transportation and safety management. Many state-of-the-art models demonstrate
the strong capability of spatio-temporal graph neural networks (STGNN) to
capture complex spatio-temporal correlations. However, despite their
effectiveness, existing approaches do not adequately address several key
challenges. Data quality issues, such as data scarcity and sparsity, lead to
data noise and a lack of supervised signals, which significantly limit the
performance of STGNN. Although recent STGNN models with contrastive learning
aim to address these challenges, most of them use pre-defined augmentation
strategies that heavily depend on manual design and cannot be customized for
different Spatio-Temporal Graph (STG) scenarios. To tackle these challenges, we
propose a new spatio-temporal contrastive learning (CL4ST) framework to encode
robust and generalizable STG representations via the STG augmentation paradigm.
Specifically, we design the meta view generator to automatically construct node
and edge augmentation views for each disentangled spatial and temporal graph in
a data-driven manner. The meta view generator employs meta networks with a
parameterized generative model to customize the augmentations for each input.
This personalizes the augmentation strategies for every STG and endows the
learning framework with spatio-temporal-aware information. Additionally, we
integrate a unified spatio-temporal graph attention network with the proposed
meta view generator and two-branch graph contrastive learning paradigms.
Extensive experiments demonstrate that our CL4ST significantly improves
performance over various state-of-the-art baselines in traffic and crime
prediction.
Comment: 32nd ACM International Conference on Information and Knowledge Management (CIKM '23)
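As a rough sketch of what a data-driven view generator can look like (illustrative only; the names and the relaxed-Bernoulli sampling below are assumptions, not the CL4ST implementation): the generator scores the nodes and edges of each input graph and emits differentiable masks that define a customized augmented view.

```python
# Hypothetical sketch of a per-input view generator for graph contrastive learning.
import torch
import torch.nn as nn

def _logistic_noise(x):
    """Logistic noise for a relaxed-Bernoulli (binary concrete) sample."""
    u = torch.rand_like(x).clamp(1e-6, 1 - 1e-6)
    return torch.log(u) - torch.log(1 - u)

class MetaViewGenerator(nn.Module):
    """Predicts per-node and per-edge keep probabilities from the input graph,
    so the augmentation is customized for each spatio-temporal graph instance."""
    def __init__(self, dim):
        super().__init__()
        self.node_scorer = nn.Linear(dim, 1)
        self.edge_scorer = nn.Linear(2 * dim, 1)

    def forward(self, h, edge_index, tau=0.5):
        src, dst = edge_index
        node_logits = self.node_scorer(h).squeeze(-1)
        edge_logits = self.edge_scorer(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)
        # Relaxed sampling keeps the masks differentiable end to end.
        node_mask = torch.sigmoid((node_logits + _logistic_noise(node_logits)) / tau)
        edge_mask = torch.sigmoid((edge_logits + _logistic_noise(edge_logits)) / tau)
        return h * node_mask.unsqueeze(-1), edge_mask   # augmented features, edge weights
```

Two such generated views would then feed the two branches of the graph contrastive objective.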
Learning Efficient Convolutional Networks through Irregular Convolutional Kernels
As deep neural networks are increasingly used in applications suited for
low-power devices, a fundamental dilemma becomes apparent: the trend is to grow
models to absorb ever-increasing data, which makes them memory intensive, yet
low-power devices are designed with very limited memory that cannot store
large models. Parameter pruning is therefore critical for deep model deployment on
low-power devices. Existing efforts mainly focus on designing highly efficient
structures or pruning redundant connections for networks. They are usually
sensitive to the task or rely on dedicated and expensive hashing storage
strategies. In this work, we introduce a novel approach for achieving a
lightweight model from the perspectives of reconstructing the structure of
convolutional kernels and enabling efficient storage. Our approach transforms a
traditional square convolution kernel into line segments and automatically learns
a proper strategy for arranging these line segments to model diverse features.
The experimental results indicate that our approach can massively reduce the
number of parameters (pruned 69% on DenseNet-40) and calculations (pruned 59%
on DenseNet-40) while maintaining acceptable performance (losing less than
2% accuracy).
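One simple way to realize "line segment" kernels, shown purely for illustration (the paper's exact kernel construction and learned placement strategy are not reproduced here), is to replace a k x k convolution with a horizontal and a vertical 1D convolution:

```python
# Illustrative factorized ("line segment") convolution; not the paper's exact design.
import torch
import torch.nn as nn

class LineSegmentConv(nn.Module):
    """Approximates a k x k convolution with a 1 x k and a k x 1 convolution,
    so each layer stores two length-k line segments instead of a k x k square."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.horizontal = nn.Conv2d(in_ch, out_ch, kernel_size=(1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(out_ch, out_ch, kernel_size=(k, 1), padding=(k // 2, 0))

    def forward(self, x):
        return self.vertical(self.horizontal(x))

# Quick shape check on a dummy input.
if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)
    y = LineSegmentConv(16, 32)(x)
    print(y.shape)  # torch.Size([1, 32, 32, 32])
```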
PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning
Multimedia online platforms (e.g., Amazon, TikTok) have greatly benefited
from the incorporation of multimedia (e.g., visual, textual, and acoustic)
content into their personal recommender systems. These modalities provide
intuitive semantics that facilitate modality-aware user preference modeling.
However, two key challenges in multi-modal recommenders remain unresolved: i)
The introduction of multi-modal encoders with a large number of additional
parameters causes overfitting, given high-dimensional multi-modal features
provided by extractors (e.g., ViT, BERT). ii) Side information inevitably
introduces inaccuracies and redundancies, which skew the modeled modality-interaction
dependencies away from true user preferences. To tackle these problems, we
propose to simplify and empower recommenders through Multi-modal Knowledge
Distillation (PromptMM) with the prompt-tuning that enables adaptive quality
distillation. Specifically, PromptMM conducts model compression by distilling
user-item (u-i) edge relationships and multi-modal node content from cumbersome
teachers, relieving students of the additional feature-reduction parameters.
To bridge the semantic gap between multi-modal context and collaborative
signals and to empower the overfitting teacher, soft prompt-tuning is
introduced to make the distillation adaptive to the student task. Additionally,
to adjust for the impact of inaccuracies in multimedia data, a disentangled
multi-modal list-wise distillation is developed with a modality-aware
re-weighting mechanism.
Experiments on real-world data demonstrate PromptMM's superiority over existing
techniques. Ablation tests confirm the effectiveness of key components.
Additional tests show the efficiency and effectiveness.
Comment: WWW 202
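The list-wise distillation with modality-aware re-weighting can be pictured with the following hypothetical sketch (function and tensor names are assumptions, not PromptMM's code): each modality's teacher ranking distribution over candidate items is matched by the student via a KL term, and per-modality weights scale how much each modality's possibly noisy signal contributes.

```python
# Hypothetical sketch of list-wise KD with per-modality re-weighting.
import torch
import torch.nn.functional as F

def listwise_kd_loss(student_scores, teacher_scores, modality_weights, tau=2.0):
    """KL divergence between teacher and student ranking distributions over a
    candidate-item list, combined across modalities with learned weights.

    student_scores, teacher_scores: (M, B, L) per-modality scores for B users
        over L candidate items (M = number of modalities).
    modality_weights: (M,) weights, e.g. a softmax over learnable logits, that
        down-weight modalities whose teacher signal is noisy.
    """
    losses = []
    for m in range(student_scores.size(0)):
        p_teacher = F.softmax(teacher_scores[m] / tau, dim=-1)        # target list distribution
        log_p_student = F.log_softmax(student_scores[m] / tau, dim=-1)
        losses.append(F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau * tau)
    return (torch.stack(losses) * modality_weights).sum()

# Example: three modalities, batch of 8 users, 100 candidate items each.
s = torch.randn(3, 8, 100)
t = torch.randn(3, 8, 100)
w = F.softmax(torch.zeros(3), dim=0)   # uniform weights here; learnable in practice
print(listwise_kd_loss(s, t, w))
```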