33 research outputs found
Requirements engineering for explainable systems
Information systems are ubiquitous in modern life and are powered by ever more complex algorithms that are often difficult to understand. Moreover, since such systems are part of almost every aspect of human life, the quality of interaction and communication between humans and machines has become increasingly important. Explainability has therefore emerged as an essential element of human-machine communication and as an important quality requirement for modern information systems.
However, dealing with quality requirements has never been a trivial task. To develop quality systems, software professionals have to understand how to transform abstract quality goals into real-world information system solutions. Requirements engineering provides a structured approach that aids software professionals in comprehending, evaluating, and operationalizing quality requirements. Explainability has recently regained prominence and has been acknowledged and established as a quality requirement; however, there are currently no requirements engineering recommendations specifically focused on explainable systems.
To fill this gap, this thesis investigates explainability as a quality requirement and how it relates to the information systems context, with an emphasis on requirements engineering. To this end, the thesis proposes two theories that delineate the role of explainability and establish guidelines for the requirements engineering process of explainable systems. These theories are modeled and shaped through five artifacts. Together, the theories and artifacts should help software professionals 1) communicate and achieve a shared understanding of the concept of explainability; 2) comprehend how explainability affects system quality and what role it plays; 3) translate abstract quality goals into design and evaluation strategies; and 4) shape the software development process for the development of explainable systems.
The theories and artifacts were built and evaluated through literature studies, workshops, interviews, and a case study. The findings show that the knowledge made available helps practitioners better understand the concept of explainability, facilitating the creation of explainable systems. These results suggest that the proposed theories and artifacts are plausible and practical, and that they serve as a strong starting point for further extensions and improvements in the search for high-quality explainable systems.
Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection
Machine Learning (ML) has been increasingly used to aid humans in making high-stakes decisions in a wide range of areas, from public policy and criminal justice to education, healthcare, and financial services. However, it is very hard for humans to grasp the rationale behind every ML model's prediction, which hinders trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to research and develop methods to make those "black boxes" more interpretable, but there has been no major breakthrough yet. Additionally, the most popular explanation methods, LIME and SHAP, produce very low-level feature attribution explanations, which are of limited usefulness to personas without ML knowledge.
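To make "low-level feature attribution" concrete, here is a minimal hypothetical sketch (the model, data, and feature indices are illustrative assumptions, not artifacts from the thesis) of the kind of output SHAP yields: one number per raw input feature, which presupposes ML literacy to interpret.

```python
# Minimal sketch, assuming a toy tabular setup: SHAP attributions on a toy model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # hypothetical transaction features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hypothetical fraud label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes a single prediction to the raw input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# The result reads like "feature_2 contributed +0.31": meaningful to an ML
# practitioner, opaque to a fraud analyst who reasons in domain concepts
# such as "suspicious merchant" or "unusual amount".
print(shap_values)
```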
This work was developed at Feedzai, a fintech company that uses ML to prevent financial crime. One of Feedzai's main products is a case management application used by fraud analysts to review suspicious financial transactions flagged by the ML models. Fraud analysts are domain experts trained to look for suspicious evidence in transactions, but they do not have ML knowledge; consequently, current XAI methods do not suit their information needs. To address this, we present JOEL, a neural network-based framework to jointly learn a decision-making task and associated domain knowledge explanations. JOEL is tailored to human-in-the-loop domain experts who lack deep technical ML knowledge, providing high-level insights about the model's predictions that closely resemble the experts' own reasoning. Moreover, by collecting domain feedback from a pool of certified experts (human teaching), we promote seamless and better-quality explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud detection dataset at Feedzai. We show that JOEL can generalize the explanations from the bootstrap dataset, and the obtained results indicate that human teaching is able to further improve the quality of the predicted explanations.
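The joint-learning idea generalizes a standard multi-task pattern. The sketch below is a hypothetical PyTorch illustration of that pattern, not Feedzai's actual JOEL architecture: a shared encoder feeds both a decision head and a multi-label head over domain-taxonomy concepts, so one representation must support the decision and its human-level explanation.

```python
# Hedged sketch of jointly learning a decision task and concept explanations.
# All layer sizes, names, and labels are illustrative assumptions.
import torch
import torch.nn as nn

class JointDecisionExplainer(nn.Module):
    def __init__(self, n_features: int, n_concepts: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.decision_head = nn.Linear(64, 1)          # fraud / not fraud
        self.concept_head = nn.Linear(64, n_concepts)  # taxonomy concepts

    def forward(self, x):
        h = self.encoder(x)
        return self.decision_head(h), self.concept_head(h)

model = JointDecisionExplainer(n_features=10, n_concepts=5)
x = torch.randn(8, 10)
decision_logit, concept_logits = model(x)

# Joint loss: in the human-teaching setting, experts would supply (or
# correct) the concept labels y_concepts over time.
y_decision = torch.randint(0, 2, (8, 1)).float()
y_concepts = torch.randint(0, 2, (8, 5)).float()
bce = nn.BCEWithLogitsLoss()
loss = bce(decision_logit, y_decision) + bce(concept_logits, y_concepts)
loss.backward()
```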
Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. It is usually essential to understand the reasoning behind an AI model's decision-making; thus, the need for eXplainable AI (XAI) methods to improve trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but no reviews have examined the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, together with a case study example. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed XAI techniques for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics, open-source packages, and datasets, along with future research directions. The significance of explainability in terms of legal demands, user viewpoints, and application orientation is then outlined, termed XAI concerns. This paper advocates tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by reviewing 410 critical articles, published between January 2016 and October 2022 in reputed journals, using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198), by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821), and by the AI Platform to Fully Adapt and Reflect Privacy-Policy Changes project (No. 2022-0-00688).
A Survey of Explainable AI and Proposal for a Discipline of Explanation Engineering
In this survey paper, we deep dive into the field of Explainable Artificial Intelligence (XAI). After introducing the scope of this paper, we start by discussing what an "explanation" really is. We then move on to discuss some of the existing approaches to XAI and build a taxonomy of the most popular methods. Next, we also look at a few applications of these and other XAI techniques in four primary domains: finance, autonomous driving, healthcare, and manufacturing. We end by introducing a promising discipline, "Explanation Engineering," which includes a systematic approach for designing explainability into AI systems.
An Exploration of Visual Analytic Techniques for XAI: Applications in Clinical Decision Support
Artificial Intelligence (AI) systems exhibit considerable potential in providing decision support across various domains. In this context, the methodology of eXplainable AI (XAI) becomes crucial, as it aims to enhance the transparency and comprehensibility of AI models' decision-making processes. However, a review of XAI methods and their application in clinical decision support reveals notable gaps within the XAI methodology, particularly concerning the effective communication of explanations to users.
This thesis aims to bridge these existing gaps by presenting in Chapter 3 a framework designed to communicate AI-generated explanations effectively to end-users. This is particularly pertinent in fields like healthcare, where the successful implementation of AI decision support hinges on the ability to convey actionable insights to medical professionals.
Building upon this framework, subsequent chapters illustrate how visualization and visual analytics can be used with XAI in the context of clinical decision support. Chapter 4 introduces a visual analytic tool designed for ranking and triaging patients in the intensive care unit (ICU). Leveraging various XAI methods, the tool enables healthcare professionals to understand how the ranking model functions and how individual patients are prioritized. Through interactivity, users can explore influencing factors, evaluate alternate scenarios, and make informed decisions for optimal patient care.
The pivotal role of transparency and comprehensibility within machine learning models is explored in Chapter 5. Leveraging the power of explainable AI techniques and visualization, it investigates the factors contributing to model performance and errors. Furthermore, it examines scenarios in which the model performs particularly well, ultimately fostering user trust by shedding light on the model's strengths and capabilities.
Recognizing the ethical concerns associated with predictive models in health, Chapter 6 addresses potential bias and discrimination in ranking systems. By using the proposed visual analytic tool, users can assess the fairness and equity of the system, promoting equal treatment. This research emphasizes the need for unbiased decision-making in healthcare.
Having developed the framework and illustrated ways of combining XAI with visual analytics in the service of clinical decision support, the thesis concludes by identifying important future directions of research in this area.
xxAI - Beyond Explainable AI
This is an open access book.
Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have developed better predictivity, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans.
Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed.
After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
Visual Representation of Explainable Artificial Intelligence Methods: Design and Empirical Studies
Explainability is increasingly considered a critical component of artificial intelligence (AI) systems, especially in high-stakes domains where AI systems' decisions can significantly impact individuals. As a result, there has been a surge of interest in explainable artificial intelligence (XAI) to increase the transparency of AI systems by explaining their decisions to end-users. In particular, extensive research has focused on developing "local model-agnostic" explainable methods that generate explanations of individual predictions for any predictive model. While these explanations can support end-users in the use of AI systems through increased transparency, three significant challenges have hindered their design, implementation, and large-scale adoption in real applications.
First, there is a lack of understanding of how end-users evaluate explanations. Many critics argue that explanations are based on researchers' intuition rather than end-users' needs, and there is insufficient evidence on whether end-users understand these explanations or trust XAI systems. Second, it is unclear what effect explanations have on trust when they disclose biases in AI systems' decisions. Prior research investigating biased decisions has found conflicting evidence on explanations' effects: explanations can either increase trust through perceived transparency or decrease trust as end-users perceive the system as biased. Moreover, it is unclear how contingency factors influence these opposing effects. Third, most XAI methods deliver static explanations that offer end-users limited information, resulting in an insufficient understanding of how AI systems make decisions and, in turn, lower trust. Furthermore, research has found that end-users perceive static explanations as not transparent enough, as these do not allow them to investigate the factors that influence a given decision; the sketch below illustrates the kind of probing that static explanations preclude.
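As a hypothetical illustration (the model, data, and feature names are assumptions, not the dissertation's artifacts), the following sketch contrasts a single static prediction with an interactive what-if probe in which the end-user varies one factor and observes the model's response.

```python
# Minimal what-if sketch on a toy model: static output vs. interactive probing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))              # hypothetical features
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # hypothetical outcome
model = LogisticRegression().fit(X, y)

instance = X[0].copy()
print("static prediction:", model.predict_proba([instance])[0, 1])

# Interactive probing: sweep one factor and watch the prediction respond,
# the kind of exploration a static explanation does not support.
for value in np.linspace(-2, 2, 5):
    probe = instance.copy()
    probe[0] = value
    print(f"feature_0 = {value:+.1f} ->",
          round(model.predict_proba([probe])[0, 1], 3))
```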
This dissertation addresses these challenges across three studies by focusing on the overarching research question of how to design visual representations of local model-agnostic XAI methods to increase end-users' understanding and trust. The first challenge is addressed through an iterative design process that refines the representations of explanations from four well-established model-agnostic XAI methods, followed by an evaluation with end-users using eye-tracking technology and interviews. The second challenge is then addressed by a research study that takes a psychological contract violation (PCV) theory and social identity theory perspective to investigate the contingency factors behind the opposing effects of explanations on end-users' trust. Specifically, this study investigates how end-users evaluate explanations of a gender-biased AI system while controlling for their awareness of gender discrimination in society. Finally, the third challenge is addressed through a design science research project to design an interactive XAI system for end-users to increase their understanding and trust.
This dissertation makes several contributions to the ongoing research on improving the transparency of AI systems by explicitly emphasizing the end-user perspective on XAI. First, it contributes to practice by providing insights that help to improve the design of explanations of AI systems' decisions. Additionally, it provides significant theoretical contributions by contextualizing PCV theory to gender-biased XAI systems and the contingency factors that determine whether end-users experience a PCV. Moreover, it provides insights into how end-users cognitively evaluate explanations and extends the current understanding of the impact of explanations on trust. Finally, this dissertation contributes to the design knowledge of XAI systems by proposing guidelines for designing interactive XAI systems that give end-users more control over the information they receive, helping them better understand how AI systems make decisions.