Benchmarking Graph Neural Networks
Graph neural networks (GNNs) have become the standard toolkit for analyzing
and learning from data on graphs. As the field grows, it becomes critical to
identify key architectures and validate new ideas that generalize to larger,
more complex datasets. Unfortunately, it has been increasingly difficult to
gauge the effectiveness of new models in the absence of a standardized
benchmark with consistent experimental settings. In this paper, we introduce a
reproducible GNN benchmarking framework, with the facility for researchers to
add new models conveniently for arbitrary datasets. We demonstrate the
usefulness of our framework by presenting a principled investigation into the
recent Weisfeiler-Lehman GNNs (WL-GNNs) compared to message passing-based graph
convolutional networks (GCNs) for a variety of graph tasks, i.e. graph
regression/classification and node/link prediction, with medium-scale datasets.
Comment: Benchmarking framework on GitHub at
https://github.com/graphdeeplearning/benchmarking-gnn
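The message passing-based GCNs that the abstract contrasts with WL-GNNs can be illustrated with a single graph convolution layer. This is a minimal NumPy sketch of the standard symmetric-normalized GCN update (Kipf and Welling style), not the benchmarking framework's actual code; all names here are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One message-passing GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) learnable weights.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops
    d = A_hat.sum(axis=1)                      # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(0.0, A_norm @ H @ W)     # aggregate neighbors, then ReLU

# Tiny 3-node path graph with 2-dimensional features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 2))
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Stacking such layers lets each node aggregate information from progressively larger neighborhoods, which is the message-passing behavior the benchmark evaluates across graph regression/classification and node/link prediction tasks.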
Explanation Uncertainty with Decision Boundary Awareness
Post-hoc explanation methods have become increasingly depended upon for
understanding black-box classifiers in high-stakes applications, precipitating
a need for reliable explanations. While numerous explanation methods have been
proposed, recent works have shown that many existing methods can be
inconsistent or unstable. In addition, high-performing classifiers are often
highly nonlinear and can exhibit complex behavior around the decision boundary,
leading to brittle or misleading local explanations. Therefore, there is an
impending need to quantify the uncertainty of such explanation methods in order
to understand when explanations are trustworthy. We introduce a novel
uncertainty quantification method parameterized by a Gaussian Process model,
which combines the uncertainty approximation of existing methods with a novel
geodesic-based similarity which captures the complexity of the target black-box
decision boundary. The proposed framework is highly flexible; it can be used
with any black-box classifier and feature attribution method to amortize
uncertainty estimates for explanations. We show theoretically that our proposed
geodesic-based kernel similarity increases with the complexity of the decision
boundary. Empirical results on multiple tabular and image datasets show that
our decision boundary-aware uncertainty estimate improves understanding of
explanations compared to existing methods.
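The core idea of a GP-based uncertainty estimate for explanations can be sketched as follows: fit a Gaussian Process to (input, attribution) pairs and read the predictive standard deviation as explanation uncertainty. This is an illustrative NumPy sketch under simplifying assumptions, with an RBF kernel standing in for the paper's geodesic-based similarity; the function and variable names are hypothetical.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Stand-in for the paper's geodesic-based kernel; a plain RBF for illustration.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_explanation_uncertainty(X_train, attr_train, X_query, noise=1e-2):
    """GP posterior over a scalar feature attribution.

    X_train: (n, d) inputs with precomputed attributions attr_train: (n,).
    Returns predictive mean and std at X_query; the std quantifies how
    uncertain the (amortized) explanation is at each query point.
    """
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)
    Kss = rbf_kernel(X_query, X_query)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ attr_train
    cov = Kss - Ks @ K_inv @ Ks.T
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
attr = np.sin(X[:, 0])                  # toy attribution values
Xq = rng.normal(size=(5, 2))
mean, std = gp_explanation_uncertainty(X, attr, Xq)
print(std.shape)  # (5,)
```

In the paper's framework the kernel would instead grow with the geodesic complexity of the black-box decision boundary, so uncertainty is inflated precisely where local explanations are most brittle; the RBF here only captures plain input-space proximity.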