Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support
Computer Vision, and hence Artificial Intelligence-based extraction of
information from images, has received increasing attention in recent years,
for instance in medical diagnostics. While the complexity of the underlying
algorithms is a reason for their improved performance, it also leads to the
"black box" problem and consequently decreases trust in AI. In this regard,
"Explainable Artificial Intelligence" (XAI) makes it possible to open that
black box and to increase the transparency of AI. In this paper, we first
discuss the theoretical impact of explainability on trust in AI, and then
showcase what the use of XAI in a health-related setting can look like. More
specifically, we show how XAI can be applied to understand why deep
learning-based Computer Vision did or did not detect a disease (malaria) on
image data (thin blood smear slide images). Furthermore, we investigate how
XAI can be used to compare the detection strategies of two deep learning
models often used for Computer Vision: a Convolutional Neural Network and a
Multi-Layer Perceptron.
Our empirical results show that i) the AI sometimes relied on questionable or
irrelevant image features to detect malaria (even when the prediction was
correct), and ii) there may be significant discrepancies in how different
deep learning models explain the same prediction. Our theoretical discussion
highlights that XAI can support trust in Computer Vision systems, and in AI
systems in general, especially by increasing their understandability and
predictability.
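The abstract does not specify which model-agnostic explanation method the paper uses, so as a minimal illustration of the general idea, the sketch below implements occlusion sensitivity: a model-agnostic technique that masks image patches and records how much the model's score drops, producing a heatmap of which regions the model relied on. All names here (`occlusion_importance`, `toy_model`) are hypothetical stand-ins, and the toy scoring function merely substitutes for a trained CNN or MLP classifier.

```python
import numpy as np

def occlusion_importance(predict_fn, image, patch=4, baseline=0.0):
    """Model-agnostic occlusion map: slide a patch over the image,
    replace it with `baseline`, and record the drop in the model's score.
    Large drops mark regions the model depends on."""
    h, w = image.shape
    heat = np.zeros((h, w))
    base_score = predict_fn(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i:i + patch, j:j + patch] = base_score - predict_fn(occluded)
    return heat

# Hypothetical toy "model": scores an image by the mean intensity of its
# top-left quadrant, standing in for a trained classifier.
def toy_model(img):
    return float(img[:8, :8].mean())

rng = np.random.default_rng(0)
img = rng.random((16, 16))
heat = occlusion_importance(toy_model, img)
# Importance should concentrate where the model actually "looks":
print(heat[:8, :8].sum() > heat[8:, 8:].sum())  # True
```

Because the procedure only needs `predict_fn(image) -> score`, the same map can be computed for a CNN and an MLP and the two heatmaps compared directly, which mirrors the kind of cross-model comparison described above.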