Explainability in Graph Neural Networks: A Taxonomic Survey
Deep learning methods are achieving ever-increasing performance on many
artificial intelligence tasks. A major limitation of deep models is that they
are not readily interpretable. This limitation can be circumvented by
developing post hoc techniques to explain the predictions, giving rise to the
area of explainability. Recently, explainability of deep models on images and
texts has achieved significant progress. In the area of graph data, graph
neural networks (GNNs) and their explainability are experiencing rapid
developments. However, there is neither a unified treatment of GNN
explainability methods, nor a standard benchmark and testbed for evaluations.
In this survey, we provide a unified and taxonomic view of current GNN
explainability methods. Our unified and taxonomic treatment of this subject
sheds light on the commonalities and differences of existing methods and sets
the stage for further methodological developments. To facilitate evaluations,
we generate a set of benchmark graph datasets specifically for GNN
explainability. We summarize current datasets and metrics for evaluating GNN
explainability. Altogether, this work provides a unified methodological
treatment of GNN explainability and a standardized testbed for evaluations.
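As a purely illustrative sketch of the kind of post hoc, gradient-based explanation methods the survey categorizes (not a method taken from the survey itself), the following PyTorch snippet scores edge importance for a toy one-layer GCN; the model, graph, and names such as TinyGCN and edge_mask are assumptions made for illustration.

```python
# Hypothetical sketch: gradient-based (saliency-style) edge importance for a toy GNN.
# Not the survey's method; names and the toy graph are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    """One-hidden-layer GCN over a dense adjacency matrix (toy model)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Normalized neighborhood aggregation: A_hat = D^-1 (A + I)
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True)
        h = F.relu(self.lin1(a @ x))
        h = a @ self.lin2(h)
        return h.mean(dim=0)                   # graph-level logits via mean pooling

# Toy graph: 5 nodes with 8 features each, symmetric binary adjacency.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()

model = TinyGCN(8, 16, 3)
edge_mask = adj.clone().requires_grad_(True)   # explain by differentiating w.r.t. edges
logits = model(x, edge_mask)
logits[logits.argmax()].backward()             # backprop the predicted class score

edge_importance = (edge_mask.grad * adj).abs() # saliency restricted to existing edges
print(edge_importance)
```

Edges with larger saliency values are the ones the toy model's prediction is most sensitive to; surveyed methods differ mainly in how they compute or optimize such importance scores.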
Interpretable Convolutional Neural Networks
This paper proposes a method to modify traditional convolutional neural
networks (CNNs) into interpretable CNNs, in order to clarify knowledge
representations in high conv-layers of CNNs. In an interpretable CNN, each
filter in a high conv-layer represents a certain object part. We do not need
any annotations of object parts or textures to supervise the learning process.
Instead, the interpretable CNN automatically assigns each filter in a high
conv-layer with an object part during the learning process. Our method can be
applied to different types of CNNs with different structures. The clear
knowledge representation in an interpretable CNN can help people understand the
logic inside a CNN, i.e., based on which patterns the CNN makes its decisions.
Experiments showed that filters in an interpretable CNN were more semantically
meaningful than those in traditional CNNs.
Comment: In this version, we release the website of the code. Compared to the
previous version, we have corrected all values of location instability in
Tables 3-6 by dividing them by sqrt(2), i.e., a = a/sqrt(2). Such
revisions do NOT decrease the significance of the superior performance of our
method, because we make the same correction to the location-instability values of
all baselines.
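As a hedged illustration of the part-level filter semantics this paper targets (not the paper's learning procedure), the sketch below inspects where one high-layer filter of an ordinary PyTorch CNN responds most strongly; for an interpretable filter, that peak should stay on the same object part across images, which is what the location-instability values in the comment above measure. The network, image, and filter index are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's method): locate the peak response of one
# high-layer conv filter, the kind of part-level signal an interpretable CNN enforces.
import torch
import torch.nn as nn

conv_net = nn.Sequential(                      # stand-in for "high conv-layers"
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)

image = torch.randn(1, 3, 64, 64)              # toy input image
feature_maps = conv_net(image)                 # shape (1, 32, 64, 64)

filter_idx = 7                                 # arbitrary filter to inspect
fmap = feature_maps[0, filter_idx]             # its spatial activation map

# Peak response location: an interpretable filter should keep this peak on the
# same object part across images; an ordinary filter's peak often drifts.
peak = torch.nonzero(fmap == fmap.max())[0]
print(f"filter {filter_idx} peaks at (row={peak[0].item()}, col={peak[1].item()})")
```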
Feature Selection for Big Visual Data: Overview and Challenges
International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal