A Survey on Graph Kernels
Graph kernels have become an established and widely-used technique for
solving classification tasks on graphs. This survey gives a comprehensive
overview of techniques for kernel-based graph classification developed in the
past 15 years. We describe and categorize graph kernels based on properties
inherent to their design, such as the nature of their extracted graph features,
their method of computation and their applicability to problems in practice. In
an extensive experimental evaluation, we study the classification accuracy of a
large suite of graph kernels on established benchmarks as well as new datasets.
We compare the performance of popular kernels with several baseline methods and
study the effect of applying a Gaussian RBF kernel to the metric induced by a
graph kernel. In doing so, we find that simple baselines become competitive
after this transformation on some datasets. Moreover, we study the extent to
which existing graph kernels agree in their predictions (and prediction errors)
and obtain a data-driven categorization of kernels as a result. Finally, based on
our experimental results, we derive a practitioner's guide to kernel-based
graph classification.
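The RBF transformation studied in the survey can be sketched as follows: a positive semi-definite kernel matrix K induces the squared distance d(i, j)^2 = K[i, i] + K[j, j] - 2 K[i, j], to which a Gaussian RBF kernel is then applied. This is a minimal illustration of that transformation, not the survey's implementation; the `gamma` bandwidth parameter is an assumption.

```python
import numpy as np

def rbf_from_graph_kernel(K, gamma=1.0):
    """Apply a Gaussian RBF kernel to the metric induced by a graph kernel.

    K is a precomputed positive semi-definite graph-kernel matrix. It
    induces the squared distance d(i, j)^2 = K[i,i] + K[j,j] - 2*K[i,j];
    the transformed kernel is exp(-gamma * d(i, j)^2). `gamma` is a
    hypothetical bandwidth parameter, chosen here for illustration.
    """
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K  # squared induced distance
    d2 = np.maximum(d2, 0.0)                      # guard against round-off
    return np.exp(-gamma * d2)
```

The result is again a valid kernel matrix with unit diagonal, so it can be plugged into any kernel-based classifier in place of the original graph kernel.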
Gradual Weisfeiler-Leman: Slow and Steady Wins the Race
The classical Weisfeiler-Leman algorithm, also known as color refinement, is fundamental
for graph learning and central for successful graph kernels and graph neural
networks. Originally developed for graph isomorphism testing, the algorithm
iteratively refines vertex colors. On many datasets, the stable coloring is
reached after a few iterations and the optimal number of iterations for machine
learning tasks is typically even lower. This suggests that the colors diverge
too fast, defining a similarity that is too coarse. We generalize the concept
of color refinement and propose a framework for gradual neighborhood
refinement, which allows a slower convergence to the stable coloring and thus
provides a more fine-grained refinement hierarchy and vertex similarity. We
assign new colors by clustering vertex neighborhoods, replacing the original
injective color assignment function. Our approach is used to derive new
variants of existing graph kernels and to approximate the graph edit distance
via optimal assignments regarding vertex similarity. We show that in both
tasks, our method outperforms the original color refinement with only a
moderate increase in running time, advancing the state of the art.
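For reference, the classical color refinement that this work generalizes can be sketched in a few lines: each round, a vertex's new color is an injective relabeling of its own color together with the multiset of its neighbors' colors. This is a sketch of the baseline, not the paper's gradual variant, which would replace the injective relabeling below with a clustering of vertex neighborhoods to slow convergence.

```python
def wl_refine(adjacency, colors, iterations=2):
    """Classical Weisfeiler-Leman color refinement (sketch of the baseline).

    adjacency: list of neighbor lists, one per vertex.
    colors: initial color per vertex.
    Each round, a vertex's signature is (own color, sorted multiset of
    neighbor colors); identical signatures receive identical new colors
    via an injective relabeling.
    """
    colors = list(colors)
    for _ in range(iterations):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
            for v in range(len(adjacency))
        ]
        # injective relabeling: one fresh color per distinct signature
        palette = {}
        colors = [palette.setdefault(sig, len(palette)) for sig in signatures]
    return colors
```

On a path graph with uniform initial colors, one round already separates the endpoints from the middle vertex, illustrating how quickly the classical refinement diverges.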
Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels
Current neural architecture search (NAS) strategies focus only on finding a
single, good, architecture. They offer little insight into why a specific
network is performing well, or how we should modify the architecture if we want
further improvements. We propose a Bayesian optimisation (BO) approach for NAS
that combines the Weisfeiler-Lehman graph kernel with a Gaussian process
surrogate. Our method optimises the architecture in a highly data-efficient
manner: it is capable of capturing the topological structures of the
architectures and is scalable to large graphs, thus making the high-dimensional
and graph-like search spaces amenable to BO. More importantly, our method
affords interpretability by discovering useful network features and their
corresponding impact on the network performance. Indeed, we demonstrate
empirically that our surrogate model is capable of identifying useful motifs
which can guide the generation of new architectures. We finally show that our
method outperforms existing NAS approaches to achieve the state of the art on
both closed- and open-domain search spaces.
Comment: ICLR 2021. 9 pages, 5 figures, 1 table (23 pages, 14 figures and 3 tables including references and appendices).
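The Weisfeiler-Lehman kernel underlying such a surrogate can be sketched as a dot product of subtree-feature histograms: each graph is summarized by counts of the colors produced over h refinement rounds, and the kernel of two graphs is the inner product of their count vectors, which a Gaussian process can use as a covariance over architectures. This is a minimal, edge-label-free sketch under those assumptions, not the paper's implementation.

```python
from collections import Counter

def wl_kernel(adj_a, labels_a, adj_b, labels_b, h=2):
    """Weisfeiler-Lehman subtree kernel between two labeled graphs (sketch).

    Each graph is mapped to a histogram of the colors seen over h
    refinement rounds; the kernel is the dot product of the histograms.
    Edge labels and normalization are omitted for brevity.
    """
    def histogram(adj, labels):
        colors = [("init", l) for l in labels]
        counts = Counter(colors)
        for it in range(h):
            # signature = (round, own color, sorted neighbor colors);
            # the signature itself serves as the injectively relabeled color
            colors = [
                (it, colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in range(len(adj))
            ]
            counts.update(colors)
        return counts

    hist_a = histogram(adj_a, labels_a)
    hist_b = histogram(adj_b, labels_b)
    return sum(hist_a[c] * hist_b[c] for c in hist_a)
```

A Gaussian process surrogate would use such pairwise kernel values as its covariance matrix over candidate architectures, with the shared WL features doubling as interpretable motifs.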