Visual Representation of Explainable Artificial Intelligence Methods: Design and Empirical Studies

Abstract

Explainability is increasingly considered a critical component of artificial intelligence (AI) systems, especially in high-stakes domains where AI systems’ decisions can significantly impact individuals. As a result, there has been a surge of interest in explainable artificial intelligence (XAI) to increase the transparency of AI systems by explaining their decisions to end-users. In particular, extensive research has focused on developing “local model-agnostic” explainable methods that generate explanations of individual predictions for any predictive model. While these explanations can support end-users in the use of AI systems through increased transparency, three significant challenges have hindered their design, implementation, and large-scale adoption in real applications. First, there is a lack of understanding of how end-users evaluate explanations. Explanations are frequently criticized for being designed based on researchers’ intuition rather than end-users’ needs. Furthermore, there is insufficient evidence on whether end-users understand these explanations or trust XAI systems. Second, it is unclear what effect explanations have on trust when they disclose biases in AI systems’ decisions. Prior research investigating biased decisions has found conflicting evidence on explanations’ effects: explanations can either increase trust through perceived transparency or decrease trust as end-users perceive the system as biased. Moreover, it is unclear how contingency factors influence these opposing effects. Third, most XAI methods deliver static explanations that offer end-users limited information, resulting in an insufficient understanding of how AI systems make decisions and, in turn, lower trust. Furthermore, research has found that end-users perceive static explanations as not transparent enough, as these do not allow them to investigate the factors that influence a given decision. This dissertation addresses these challenges across three studies by focusing on the overarching research question of how to design visual representations of local model-agnostic XAI methods to increase end-users’ understanding and trust. The first challenge is addressed through an iterative design process that refines the visual representations of explanations from four well-established model-agnostic XAI methods, followed by an evaluation with end-users using eye-tracking technology and interviews. The second challenge is addressed by a research study that draws on psychological contract violation (PCV) theory and social identity theory to investigate the contingency factors underlying the opposing effects of explanations on end-users’ trust. Specifically, this study investigates how end-users evaluate explanations of a gender-biased AI system while controlling for their awareness of gender discrimination in society. Finally, the third challenge is addressed through a design science research project that designs an interactive XAI system for end-users to increase their understanding and trust. This dissertation makes several contributions to the ongoing research on improving the transparency of AI systems by explicitly emphasizing the end-user perspective on XAI. First, it contributes to practice by providing insights that help to improve the design of explanations of AI systems’ decisions.
Additionally, this dissertation makes significant theoretical contributions by contextualizing PCV theory to gender-biased XAI systems and identifying the contingency factors that determine whether end-users experience a PCV. Moreover, it provides insights into how end-users cognitively evaluate explanations and extends the current understanding of the impact of explanations on trust. Finally, this dissertation contributes to the design knowledge of XAI systems by proposing guidelines for designing interactive XAI systems that give end-users more control over the information they receive, helping them better understand how AI systems make decisions.
