Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit
notable dissimilarity compared to the majority in a collection. However,
existing works primarily focus on evaluating graph-level abnormality while
failing to provide meaningful explanations for their predictions, which largely
limits their reliability and application scope. In this paper, we investigate a
new challenging problem, explainable GLAD, where the learning objective is to
predict the abnormality of each graph sample with corresponding explanations,
i.e., the vital subgraph that leads to each prediction. To address this
problem, we propose a Self-Interpretable Graph aNomaly dETection model
(SIGNET for short) that detects anomalous graphs while simultaneously
generating informative explanations. Specifically, we first introduce the
multi-view subgraph information bottleneck (MSIB) framework, serving as the
design basis of our self-interpretable GLAD approach. Under this framework,
SIGNET is able not only to measure the abnormality of each graph based on
cross-view mutual information but also to provide informative graph rationales by extracting
bottleneck subgraphs from the input graph and its dual hypergraph in a
self-supervised way. Extensive experiments on 16 datasets demonstrate the
anomaly detection capability and self-interpretability of SIGNET.
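For intuition, a multi-view subgraph information bottleneck objective of the kind described above can be sketched in standard information-bottleneck notation; the symbols below (bottleneck subgraphs $G_s^{(1)}, G_s^{(2)}$ and trade-off weight $\beta$) are illustrative assumptions rather than the paper's exact formulation:

\[
\min_{G_s^{(1)},\, G_s^{(2)}} \; -\, I\!\left(G_s^{(1)};\, G_s^{(2)}\right)
\;+\; \beta \left[\, I\!\left(G_s^{(1)};\, G^{(1)}\right) + I\!\left(G_s^{(2)};\, G^{(2)}\right) \right],
\]

where $G^{(1)}$ denotes the input graph and $G^{(2)}$ its dual hypergraph: the first term encourages the two bottleneck subgraphs to share information across views, while the second penalizes redundancy with their source views. Under this reading, a natural anomaly score for a test graph is the negative cross-view term $-I(G_s^{(1)}; G_s^{(2)})$, consistent with the abstract's claim that abnormality is measured via cross-view mutual information.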