On the equivalence between graph isomorphism testing and function approximation with GNNs
Graph neural networks (GNNs) have achieved considerable success on
graph-structured data. In light of this, there has been increasing interest
in studying their representation power. One line of work focuses on the
universal approximation of permutation-invariant functions by certain classes
of GNNs, while another demonstrates the limitations of GNNs via graph
isomorphism tests.
Our work connects these two perspectives and proves their equivalence. We
further develop a framework of the representation power of GNNs with the
language of sigma-algebra, which incorporates both viewpoints. Using this
framework, we compare the expressive power of different classes of GNNs as well
as other methods on graphs. In particular, we prove that order-2 Graph
G-invariant networks fail to distinguish non-isomorphic regular graphs with the
same degree. We then extend them to a new architecture, Ring-GNNs, which
succeeds in distinguishing these graphs and yields improvements on real-world
social network datasets.
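To make the graph-isomorphism viewpoint concrete: the sketch below (our own illustrative code and names, not the paper's) runs 1-WL colour refinement, the classical test that upper-bounds standard message-passing GNNs, and shows it failing on two non-isomorphic 2-regular graphs, mirroring the failure mode on same-degree regular graphs described above.

```python
from collections import Counter

def wl_colors(adj, num_iters=3):
    """1-WL colour refinement: iteratively hash each node's colour
    together with the multiset of its neighbours' colours."""
    n = len(adj)
    colors = [0] * n  # start from a uniform colouring
    for _ in range(num_iters):
        colors = [hash((colors[i], tuple(sorted(colors[j] for j in adj[i]))))
                  for i in range(n)]
    return Counter(colors)

def maybe_isomorphic(adj_a, adj_b, num_iters=3):
    """Differing colour histograms certify non-isomorphism; equal
    histograms mean 1-WL (and hence a standard GNN) cannot tell
    the graphs apart."""
    return wl_colors(adj_a, num_iters) == wl_colors(adj_b, num_iters)

# Two non-isomorphic 2-regular graphs: a 6-cycle vs. two triangles.
cycle6 = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
print(maybe_isomorphic(cycle6, two_triangles))  # True: 1-WL fails here
```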
On the Expressive Power of Geometric Graph Neural Networks
The expressive power of Graph Neural Networks (GNNs) has been studied
extensively through the Weisfeiler-Leman (WL) graph isomorphism test. However,
standard GNNs and the WL framework are inapplicable for geometric graphs
embedded in Euclidean space, such as biomolecules, materials, and other
physical systems. In this work, we propose a geometric version of the WL test
(GWL) for discriminating geometric graphs while respecting the underlying
physical symmetries: permutations, rotation, reflection, and translation. We
use GWL to characterise the expressive power of geometric GNNs that are
invariant or equivariant to physical symmetries in terms of distinguishing
geometric graphs. GWL unpacks how key design choices influence geometric GNN
expressivity: (1) Invariant layers have limited expressivity as they cannot
distinguish one-hop identical geometric graphs; (2) Equivariant layers
distinguish a larger class of graphs by propagating geometric information
beyond local neighbourhoods; (3) Higher order tensors and scalarisation enable
maximally powerful geometric GNNs; and (4) GWL's discrimination-based
perspective is equivalent to universal approximation. Synthetic experiments
supplementing our results are available at
https://github.com/chaitjo/geometric-gnn-dojo
Comment: NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations
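As a rough illustration of the invariant-versus-equivariant distinction drawn in points (1) and (2), here is a minimal sketch (hypothetical layer names, not the authors' code): the invariant layer aggregates only scalar neighbour distances, discarding directional information within one hop, while the equivariant layer also propagates relative position vectors, so rotating the input rotates its output accordingly.

```python
import numpy as np

def invariant_layer(pos, adj, feats):
    """E(3)-invariant update: each node aggregates only scalar
    distances to its neighbours, so all directional information
    is discarded within a single hop."""
    out = []
    for i, nbrs in enumerate(adj):
        dists = [np.linalg.norm(pos[j] - pos[i]) for j in nbrs]
        out.append(feats[i] + sum(dists))
    return np.array(out)

def equivariant_layer(pos, adj, vec_feats):
    """E(3)-equivariant update: each node also aggregates relative
    position vectors, letting geometric information propagate
    beyond local neighbourhoods over multiple layers."""
    out = []
    for i, nbrs in enumerate(adj):
        rel = sum(pos[j] - pos[i] for j in nbrs)
        out.append(vec_feats[i] + rel)
    return np.array(out)
```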
Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to
improve the mathematical understanding of convolutional neural networks.
Inspired by recent interest in geometric deep learning, which aims to
generalize convolutional neural networks to manifold and graph-structured
domains, we define a geometric scattering transform on manifolds. Similar to
the Euclidean scattering transform, the geometric scattering transform is based
on a cascade of wavelet filters and pointwise nonlinearities. It is invariant
to local isometries and stable to certain types of diffeomorphisms. Empirical
results demonstrate its utility on several geometric learning tasks. Our
results generalize the deformation stability and local translation invariance
of Euclidean scattering, and demonstrate the importance of linking the filter
structures used to the underlying geometry of the data.
Comment: 35 pages; 3 figures; 2 tables; v3: revisions based on reviewer comments
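The cascade structure described above can be sketched most simply on graphs, which the geometric deep learning literature treats analogously to manifolds. The code below is our own illustrative sketch (diffusion wavelets standing in for the paper's manifold wavelets, all names ours): it applies wavelet filters and pointwise moduli in a cascade and reads out averages as invariant coefficients.

```python
import numpy as np

def diffusion_wavelets(adj_matrix, scales=(1, 2, 4)):
    """Wavelets Psi_j = P^j - P^(2j) built from a lazy random walk P;
    a graph analogue of manifold wavelet filters. Assumes no
    isolated nodes."""
    deg = adj_matrix.sum(axis=1)
    P = 0.5 * (np.eye(len(deg)) + adj_matrix / deg[:, None])  # lazy walk
    return [np.linalg.matrix_power(P, j) - np.linalg.matrix_power(P, 2 * j)
            for j in scales]

def scattering(x, wavelets, depth=2):
    """Cascade of wavelet filters and pointwise nonlinearities:
    at each layer, filter every signal with every wavelet, take the
    modulus, and record the mean as an invariant coefficient."""
    layers, coeffs = [x], []
    for _ in range(depth):
        nxt = []
        for u in layers:
            for Psi in wavelets:
                v = np.abs(Psi @ u)      # wavelet filter + modulus
                coeffs.append(v.mean())  # invariant readout
                nxt.append(v)
        layers = nxt
    return np.array(coeffs)

A = np.roll(np.eye(6), 1, axis=0) + np.roll(np.eye(6), -1, axis=0)  # 6-cycle
x = np.arange(6, dtype=float)
print(scattering(x, diffusion_wavelets(A)))
```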
Expressive Sign Equivariant Networks for Spectral Geometric Learning
Recent work has shown the utility of developing machine learning models that
respect the structure and symmetries of eigenvectors. These works promote sign
invariance, since for any eigenvector v the negation -v is also an eigenvector.
However, we show that sign invariance is theoretically limited for tasks such
as building orthogonally equivariant models and learning node positional
encodings for link prediction in graphs. In this work, we demonstrate the
benefits of sign equivariance for these tasks. To obtain these benefits, we
develop novel sign equivariant neural network architectures. Our models are
based on a new analytic characterization of sign equivariant polynomials and
thus inherit provable expressiveness properties. Controlled synthetic
experiments show that our networks can achieve the theoretically predicted
benefits of sign equivariant models. Code is available at
https://github.com/cptq/Sign-Equivariant-Nets
Comment: NeurIPS 2023 Spotlight
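In the spirit of the paper's characterisation of sign equivariant polynomials, a minimal sketch (our own illustrative construction, not the released code) scales each eigenvector elementwise by a sign-invariant function, which makes the map odd in each eigenvector column; the final assertion checks the equivariance numerically.

```python
import numpy as np

def sign_invariant_features(V):
    """Features unchanged under v -> -v in any column; elementwise
    absolute value is one simple choice among many."""
    return np.abs(V)

def sign_equivariant(V, W):
    """Odd map: negating column k of V negates column k of the
    output, since each column is the input eigenvector scaled
    elementwise by a sign-invariant quantity."""
    return V * (sign_invariant_features(V) @ W)

rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))    # 5 nodes, 3 eigenvectors
W = rng.normal(size=(3, 3))
flip = np.array([1, -1, 1])    # negate the second eigenvector
out1 = sign_equivariant(V, W)
out2 = sign_equivariant(V * flip, W)
assert np.allclose(out2, out1 * flip)  # sign equivariance holds
```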