Unifying Model Explainability and Robustness via Machine-Checkable Concepts
As deep neural networks (DNNs) get adopted in an ever-increasing number of
applications, explainability has emerged as a crucial desideratum for these
models. In many real-world tasks, one of the principal reasons for requiring
explainability is, in turn, to assess prediction robustness: predictions
(i.e., class labels) that do not conform to their respective explanations
(e.g., the presence or absence of a concept in the input) are deemed
unreliable. However, most, if not all, prior methods for checking
explanation-conformity (e.g., LIME, TCAV, saliency maps) require significant
manual intervention, which hinders their large-scale deployability. In this
paper, we propose a robustness-assessment framework, at the core of which is
the idea of using machine-checkable concepts. Our framework defines a large
number of concepts that the DNN explanations could be based on and performs the
explanation-conformity check at test time to assess prediction robustness. Both
steps are executed in an automated manner without requiring any human
intervention and are easily scaled to datasets with a very large number of
classes. Experiments on real-world datasets and human surveys show that our
framework significantly enhances prediction robustness: predictions marked as
robust by our framework have significantly higher accuracy and are more robust
to adversarial perturbations.

Comments: 22 pages, 12 figures, 11 tables
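To make the test-time explanation-conformity check concrete, here is a minimal sketch of the general idea, assuming hypothetical names (`CLASS_CONCEPTS`, `concept_scores`, `is_robust`) and a simple class-to-concept mapping; it illustrates the mechanism described in the abstract, not the paper's actual implementation.

```python
# Sketch of a test-time explanation-conformity check (hypothetical, not the
# paper's implementation): a prediction is marked robust only if the concepts
# expected for the predicted class are detected in the input's representation.
import numpy as np

# Hypothetical mapping: class label -> concept indices expected to be present.
CLASS_CONCEPTS = {
    0: {0, 2},
    1: {1, 3},
}

def concept_scores(features: np.ndarray, concept_vectors: np.ndarray) -> np.ndarray:
    """Score each concept by projecting features onto learned concept directions."""
    return features @ concept_vectors.T

def is_robust(pred_class: int, features: np.ndarray,
              concept_vectors: np.ndarray, threshold: float = 0.5) -> bool:
    """Conformity check: all concepts expected for the predicted class must
    score above the threshold for the prediction to be marked robust."""
    scores = concept_scores(features, concept_vectors)
    expected = CLASS_CONCEPTS.get(pred_class, set())
    return all(scores[c] > threshold for c in expected)

# Usage with placeholder data: features would come from the DNN's penultimate
# layer, and concept_vectors would be learned automatically, one per concept.
rng = np.random.default_rng(0)
features = rng.normal(size=16)
concept_vectors = rng.normal(size=(4, 16))
print(is_robust(pred_class=0, features=features, concept_vectors=concept_vectors))
```

Because both the concept definitions and the conformity check are computed automatically, a check of this form can be run for every test input without human intervention, which is what allows the approach to scale to datasets with many classes.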