Gradual Weisfeiler-Leman: Slow and Steady Wins the Race
The classical Weisfeiler-Leman algorithm, also known as color refinement, is
fundamental to graph learning and central to successful graph kernels and graph
neural networks. Originally developed for graph isomorphism testing, the algorithm
iteratively refines vertex colors. On many datasets, the stable coloring is
reached after a few iterations and the optimal number of iterations for machine
learning tasks is typically even lower. This suggests that the colors diverge
too fast, defining a similarity that is too coarse. We generalize the concept
of color refinement and propose a framework for gradual neighborhood
refinement, which allows a slower convergence to the stable coloring and thus
provides a more fine-grained refinement hierarchy and vertex similarity. We
assign new colors by clustering vertex neighborhoods, replacing the original
injective color assignment function. Our approach is used to derive new
variants of existing graph kernels and to approximate the graph edit distance
via optimal assignments regarding vertex similarity. We show that in both
tasks, our method outperforms the original color refinement with only a
moderate increase in running time, advancing the state of the art.
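The iterative refinement the abstract describes can be sketched as classic 1-WL color refinement: each round, a vertex's new color is an injective relabeling of its current color together with the multiset of its neighbors' colors. This is a minimal sketch of the baseline algorithm, not the paper's gradual clustering-based variant; the function name and graph representation are illustrative choices.

```python
from collections import Counter

def wl_refine(adj, num_iters=10):
    """Classic 1-WL color refinement (sketch of the baseline algorithm).

    adj: dict mapping each vertex to a list of its neighbors.
    Returns the vertex coloring after convergence or num_iters rounds.
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(num_iters):
        # Signature = own color plus sorted multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Injective relabeling: each distinct signature gets a fresh color.
        relabel = {}
        new_colors = {}
        for v, sig in signatures.items():
            if sig not in relabel:
                relabel[sig] = len(relabel)
            new_colors[v] = relabel[sig]
        if new_colors == colors:  # stable coloring reached
            break
        colors = new_colors
    return colors
```

On a path graph 0-1-2, the endpoints receive the same color and the middle vertex a different one after a single round; the paper's gradual variant would replace the injective relabeling step with a clustering of neighborhood signatures so that colors split more slowly.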
Weisfeiler and Leman go Hyperbolic: Learning Distance Preserving Node Representations
In recent years, graph neural networks (GNNs) have emerged as a promising
tool for solving machine learning problems on graphs. Most GNNs are members of
the family of message passing neural networks (MPNNs). There is a close
connection between these models and the Weisfeiler-Leman (WL) test of
isomorphism, an algorithm that can successfully test isomorphism for a broad
class of graphs. Recently, much research has focused on measuring the
expressive power of GNNs. For instance, it has been shown that standard MPNNs
are at most as powerful as WL in terms of distinguishing non-isomorphic graphs.
However, these studies have largely ignored the distances between the
representations of nodes/graphs which are of paramount importance for learning
tasks. In this paper, we define a distance function between nodes which is
based on the hierarchy produced by the WL algorithm, and propose a model that
learns representations which preserve those distances between nodes. Since the
emerging hierarchy corresponds to a tree, to learn these representations, we
capitalize on recent advances in the field of hyperbolic neural networks. We
empirically evaluate the proposed model on standard node and graph
classification datasets, where it achieves competitive performance with
state-of-the-art models.
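Tree-like hierarchies, such as the one the WL refinement produces, embed with low distortion in hyperbolic space because hyperbolic volume grows exponentially with radius. A standard closed form is the distance in the Poincaré ball model; whether the paper uses this exact model is an assumption, and the formula below is the textbook one:

```python
import math

def poincare_distance(x, y):
    """Distance between two points inside the unit ball in the
    Poincaré ball model of hyperbolic space (standard formula;
    the paper's precise model choice is an assumption)."""
    sq = lambda v: sum(c * c for c in v)          # squared Euclidean norm
    diff = sq([a - b for a, b in zip(x, y)])      # ||x - y||^2
    denom = (1.0 - sq(x)) * (1.0 - sq(y))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

Distances blow up as points approach the boundary of the ball, which is what lets an exponentially branching WL tree be embedded with near-isometric pairwise distances in only a few dimensions.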
- …