5 research outputs found
Test Automation with Grad-CAM Heatmaps -- A Future Pipe Segment in MLOps for Vision AI?
Machine Learning (ML) is a fundamental part of modern perception systems. In
the last decade, the performance of computer vision using trained deep neural
networks has outperformed previous approaches based on careful feature
engineering. However, the opaqueness of large ML models is a substantial
impediment for critical applications such as in the automotive context. As a
remedy, Gradient-weighted Class Activation Mapping (Grad-CAM) has been proposed
to provide visual explanations of model internals. In this paper, we
demonstrate how Grad-CAM heatmaps can be used to increase the explainability of
an image recognition model trained on images from a pedestrian underpass. We
argue that the heatmaps support compliance with the EU's seven key requirements
for Trustworthy AI. Finally, we propose adding automated heatmap analysis as a
pipe segment in an MLOps pipeline. We believe that such a building block can be
used to automatically detect whether a trained ML model is activated by invalid
pixels in test images, suggesting a biased model.

Comment: Accepted for publication in the Proc. of the 1st International
Workshop on DevOps Testing for Cyber-Physical System
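The automated heatmap check proposed above could be sketched as a small pipe-segment function: given a Grad-CAM heatmap and a binary mask marking the valid pixels for the task, it measures how much activation mass falls outside the valid region and flags the model if that fraction exceeds a threshold. This is a minimal illustrative sketch, not the paper's implementation; the function names, the mask convention, and the 0.5 threshold are all assumptions.

```python
import numpy as np

def heatmap_outside_fraction(heatmap: np.ndarray, valid_mask: np.ndarray) -> float:
    """Fraction of total heatmap activation mass falling on invalid pixels.

    `heatmap` is a 2D array of Grad-CAM activations; `valid_mask` is a
    same-shaped boolean array, True where pixels are valid for the task.
    """
    heatmap = np.clip(heatmap, 0.0, None)  # Grad-CAM activations are ReLU'd
    total = heatmap.sum()
    if total == 0.0:
        return 0.0  # no activation at all: nothing outside the valid region
    return float(heatmap[~valid_mask].sum() / total)

def flag_biased_model(heatmap: np.ndarray, valid_mask: np.ndarray,
                      threshold: float = 0.5) -> bool:
    """Pipe-segment check: True if most activation lies on invalid pixels."""
    return heatmap_outside_fraction(heatmap, valid_mask) > threshold

# Toy example: 4x4 heatmap whose activation sits entirely in the right half,
# while only the left half is marked valid -> the check flags the model.
heatmap = np.array([[0., 0., 1., 1.],
                    [0., 0., 1., 1.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.]])
valid = np.zeros((4, 4), dtype=bool)
valid[:, :2] = True  # left half is the valid region
print(flag_biased_model(heatmap, valid))  # → True
```

In a CI/CD setting this check would run over a batch of test images after training, with the pipeline stage failing (or raising a review flag) when the flagged fraction of images is too high.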
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
As two sides of the same coin, causality and explainable artificial
intelligence (xAI) were initially proposed and developed with different goals.
However, the latter can only be complete when seen through the lens of the
causality framework. As such, we propose a novel causality-inspired framework
for xAI that creates an environment for the development of xAI approaches. To
show its applicability, biometrics was used as a case study. For this, we
analysed 81 research papers spanning a myriad of biometric modalities and
tasks. We categorised each of these methods according to our novel xAI
Ladder and discussed future directions for the field.