Provably expressive temporal graph networks
Temporal graph networks (TGNs) have gained prominence as models for embedding
dynamic interactions, but little is known about their theoretical
underpinnings. We establish fundamental results about the representational
power and limits of the two main categories of TGNs: those that aggregate
temporal walks (WA-TGNs), and those that augment local message passing with
recurrent memory modules (MP-TGNs). Specifically, novel constructions reveal
the inadequacy of MP-TGNs and WA-TGNs, proving that neither category subsumes
the other. We extend the 1-WL (Weisfeiler-Leman) test to temporal graphs and
show that the most powerful MP-TGNs should use injective updates, since with
injective updates they become as expressive as the temporal WL test. We also
show that sufficiently deep MP-TGNs cannot benefit from memory, and that
MP/WA-TGNs fail to compute graph properties such as girth.
These theoretical insights lead us to PINT -- a novel architecture that
leverages injective temporal message passing and relative positional features.
Importantly, PINT is provably more expressive than both MP-TGNs and WA-TGNs.
PINT significantly outperforms existing TGNs on several real-world benchmarks.
Comment: Accepted to NeurIPS 2022
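To make the aggregation idea concrete, here is a minimal PyTorch sketch of injective temporal message passing in the GIN style: sum-pooling over MLP-encoded (neighbor state, time delta) pairs, so distinct multisets of timed messages map to distinct aggregates. All names are illustrative, and the sketch omits PINT's relative positional features; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class InjectiveTemporalAggregation(nn.Module):
    """Sketch: sum-pool MLP-encoded (neighbor state, time delta) pairs.

    Summing injectively encoded multiset elements is the standard device
    (cf. GIN) for approximating injective multiset functions; PINT also
    uses relative positional features, which are omitted here.
    """

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.update = nn.Sequential(
            nn.Linear(dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, h_self, h_neigh, dt):
        # h_self: (dim,); h_neigh: (k, dim); dt: (k,) elapsed event times.
        msgs = self.encode(torch.cat([h_neigh, dt.unsqueeze(-1)], dim=-1))
        agg = msgs.sum(dim=0)  # sum keeps distinct multisets distinguishable
        return self.update(torch.cat([h_self, agg], dim=-1))

# Example: aggregate three timed neighbor states into a 32-d embedding.
layer = InjectiveTemporalAggregation(dim=32)
h = layer(torch.zeros(32), torch.randn(3, 32), torch.tensor([0.5, 1.0, 2.0]))
```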
Expressive Sign Equivariant Networks for Spectral Geometric Learning
Recent work has shown the utility of developing machine learning models that
respect the structure and symmetries of eigenvectors. These works promote sign
invariance, since for any eigenvector v the negation -v is also an eigenvector.
However, we show that sign invariance is theoretically limited for tasks such
as building orthogonally equivariant models and learning node positional
encodings for link prediction in graphs. In this work, we demonstrate the
benefits of sign equivariance for these tasks. To obtain these benefits, we
develop novel sign equivariant neural network architectures. Our models are
based on a new analytic characterization of sign equivariant polynomials and
thus inherit provable expressiveness properties. Controlled synthetic
experiments show that our networks can achieve the theoretically predicted
benefits of sign equivariant models. Code is available at
https://github.com/cptq/Sign-Equivariant-Nets.
Comment: NeurIPS 2023 Spotlight
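To illustrate the symmetry in question: a map f is sign equivariant if f(V · diag(s)) = f(V) · diag(s) for any column-wise signs s ∈ {±1}^k. The toy PyTorch layer below satisfies this by gating each eigenvector column with features of V², which are invariant to sign flips. This is one simple equivariant construction for illustration only, not the paper's polynomial characterization.

```python
import torch
import torch.nn as nn

class SignEquivariantGate(nn.Module):
    """Toy sign equivariant map on eigenvector columns V in R^{n x k}."""

    def __init__(self, k: int, hidden: int = 64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(k, hidden), nn.ReLU(), nn.Linear(hidden, k)
        )

    def forward(self, V):
        # V**2 is invariant to flipping the sign of any column, so the
        # gate is sign invariant and V * gate(V**2) is sign equivariant:
        # f(V * s) = (V * s) * gate(V**2) = f(V) * s.
        return V * self.gate(V ** 2)

# Sanity check: flipping the sign of a column flips the output column.
V = torch.randn(5, 4)
f = SignEquivariantGate(k=4)
s = torch.tensor([1.0, -1.0, 1.0, -1.0])
assert torch.allclose(f(V * s), f(V) * s)
```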
On Positional and Structural Node Features for Graph Neural Networks on Non-attributed Graphs
Graph neural networks (GNNs) have been widely used in various graph-related
problems such as node classification and graph classification, where the
superior performance is mainly established when natural node features are
available. However, it is not well understood how GNNs work without natural
node features, especially regarding the various ways to construct artificial
ones. In this paper, we distinguish two types of artificial node features,
i.e., positional and structural node features, and provide insights into why
each of them is more appropriate for certain tasks, i.e., positional node
classification, structural node classification, and graph classification.
Extensive experimental results on 10 benchmark datasets validate our insights,
thus leading to a practical guideline on the choices between different
artificial node features for GNNs on non-attributed graphs. The code is
available at https://github.com/zjzijielu/gnn-exp/.
Comment: This paper has been accepted to the Sixth International Workshop on Deep Learning on Graphs (DLG-KDD'21), co-located with KDD'21.
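As a concrete illustration of the two feature families, a minimal PyTorch sketch follows, assuming a COO edge_index with both edge directions present. The helper names are ours; the paper's experiments cover a broader set of constructions (e.g., eigenvector-based positional features), so this is a sketch rather than the exact setup.

```python
import torch
import torch.nn.functional as F

def structural_features(edge_index, num_nodes, max_degree=32):
    # One-hot node degree: equal for structurally equivalent nodes,
    # independent of node identity (a structural feature).
    deg = torch.zeros(num_nodes, dtype=torch.long)
    deg.scatter_add_(0, edge_index[0], torch.ones_like(edge_index[0]))
    return F.one_hot(deg.clamp(max=max_degree - 1), max_degree).float()

def positional_features(num_nodes, dim=32):
    # Random node identifiers: distinguish nodes regardless of local
    # structure (a simple positional feature).
    return torch.randn(num_nodes, dim)

# Example: a 4-cycle given as a directed edge list in both directions.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 0],
                           [1, 0, 2, 1, 3, 2, 0, 3]])
x_struct = structural_features(edge_index, num_nodes=4)  # identical rows
x_pos = positional_features(num_nodes=4)                 # distinct rows
```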