Recently, Graph Neural Networks (GNNs) have significantly advanced the
performance of machine learning tasks on graphs. However, this breakthrough
raises a natural question: how does a GNN make its decisions, and can we trust
its predictions? In critical
fields, such as biomedicine, where making wrong decisions can have severe
consequences, it is crucial to interpret the inner workings of GNNs
before applying them. In this paper, we propose GNNInterpreter, a
model-agnostic, model-level explanation method for GNNs that follow the
message-passing scheme, which explains the high-level decision-making process
of the GNN model. More specifically, GNNInterpreter learns a probabilistic generative
graph distribution that produces the most discriminative graph pattern the GNN
tries to detect when making a certain prediction, by optimizing a novel
objective function designed specifically for model-level explanation of
GNNs. Compared with existing work, GNNInterpreter is more computationally
efficient and more flexible in generating explanation graphs with different
types of node and edge features, without introducing another black box
to explain the GNN and without requiring manually specified domain-specific
knowledge. Additionally, experimental studies conducted on four datasets
demonstrate that the explanation graphs generated by GNNInterpreter match the
desired graph pattern when the model is ideal and reveal potential model
pitfalls when they exist.
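
To make the core idea concrete, below is a minimal sketch of this style of optimization, not the paper's actual implementation: it assumes a toy stand-in classifier (ToyGNN) and a Bernoulli edge distribution relaxed with the binary concrete (Gumbel-Softmax) trick so that the target-class logit becomes differentiable with respect to the distribution parameters; all names and hyperparameters here (explain_class, tau, sparsity) are illustrative assumptions.

```python
# Minimal sketch: learn a generative graph distribution whose samples
# maximize a GNN's logit for one class. Assumptions (not from the paper):
# ToyGNN is a stand-in message-passing classifier, and edges follow
# independent Bernoulli distributions relaxed via the binary concrete trick.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGNN(nn.Module):
    """Stand-in GNN: two rounds of mean-aggregation message passing."""
    def __init__(self, in_dim=8, hid=16, n_classes=2):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid)
        self.lin2 = nn.Linear(hid, hid)
        self.readout = nn.Linear(hid, n_classes)

    def forward(self, x, adj):
        # adj may be a soft (relaxed) adjacency matrix during explanation.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin1(adj @ x / deg))
        h = F.relu(self.lin2(adj @ h / deg))
        return self.readout(h.mean(0))  # graph-level class logits

def explain_class(gnn, target_class, n_nodes=10, in_dim=8,
                  steps=500, lr=0.1, tau=0.5, sparsity=0.05):
    """Learn edge logits whose samples maximize the target-class logit."""
    edge_logits = torch.zeros(n_nodes, n_nodes, requires_grad=True)
    node_feats = torch.randn(n_nodes, in_dim, requires_grad=True)
    opt = torch.optim.Adam([edge_logits, node_feats], lr=lr)
    mask = 1.0 - torch.eye(n_nodes)  # forbid self-loops
    for _ in range(steps):
        # Differentiable sample of a symmetric adjacency matrix
        # (binary concrete relaxation of independent Bernoulli edges).
        u = torch.rand_like(edge_logits).clamp(1e-6, 1 - 1e-6)
        logistic = torch.log(u) - torch.log1p(-u)
        adj = torch.sigmoid((edge_logits + logistic) / tau)
        adj = (adj + adj.T) / 2 * mask
        logits = gnn(node_feats, adj)
        # Maximize the target-class logit; penalize overly dense graphs.
        loss = -logits[target_class] + sparsity * adj.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Discretize the learned distribution into one explanation graph.
    return (torch.sigmoid(edge_logits) > 0.5).float(), node_feats.detach()

gnn = ToyGNN()  # in practice, a pretrained GNN to be explained
explanation_adj, explanation_feats = explain_class(gnn, target_class=1)
```

The sparsity penalty above is only an illustrative regularizer; the essential design point is that relaxing the discrete graph distribution makes the GNN's class logit differentiable with respect to the distribution parameters, so the most discriminative pattern can be sought by gradient ascent.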