Graph Neural Networks (GNNs) are widely used in many modern applications,
necessitating explanations for their decisions. However, the complexity of GNNs
makes it difficult to explain their predictions. Although several methods have
been proposed recently, they provide only simple, static explanations, which
are difficult for users to understand in many scenarios. Therefore, we
introduce INGREX, an interactive explanation framework for GNNs designed to aid
users in comprehending model predictions. Our framework is implemented based on
multiple explanation algorithms and advanced libraries. We demonstrate our
framework in three scenarios covering common demands for GNN explanations,
illustrating its effectiveness and helpfulness.

Comment: 4 pages, 5 figures. This paper is under review for IEEE ICDE 202