7,334 research outputs found
Foundations of Explainable Knowledge-Enabled Systems
Explainability has been an important goal since the early days of Artificial
Intelligence. Several approaches for producing explanations have been
developed. However, many of these approaches were tightly coupled with the
capabilities of the artificial intelligence systems at the time. With the
proliferation of AI-enabled systems in sometimes critical settings, there is a
need for them to be explainable to end-users and decision-makers. We present a
historical overview of explainable artificial intelligence systems, with a
focus on knowledge-enabled systems, spanning the expert systems, cognitive
assistants, semantic applications, and machine learning domains. Additionally,
borrowing from the strengths of past approaches and identifying gaps needed to
make explanations user- and context-focused, we propose new definitions for
explanations and explainable knowledge-enabled systems.
Comment: S. Chari, D. Gruen, O. Seneviratne, D. L. McGuinness, "Foundations of
Explainable Knowledge-Enabled Systems". In: Ilaria Tiddi, Freddy Lecue,
Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI -- Foundations,
Applications and Challenges. Studies on the Semantic Web, IOS Press,
Amsterdam, 2020, to appear.
Directions for Explainable Knowledge-Enabled Systems
Interest in the field of Explainable Artificial Intelligence has been growing
for decades and has accelerated recently. As Artificial Intelligence models
have become more complex, and often more opaque, with the incorporation of
complex machine learning techniques, explainability has become more critical.
Recently, researchers have been investigating and tackling explainability with
a user-centric focus, looking for explanations to consider trustworthiness,
comprehensibility, explicit provenance, and context-awareness. In this chapter,
we leverage our survey of explanation literature in Artificial Intelligence and
closely related fields and use these past efforts to generate a set of
explanation types that we feel reflect the expanded needs of explanation for
today's artificial intelligence applications. We define each type and provide
an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in
their generation and prioritization of requirements and further help generate
explanations that are better aligned to users' and situational needs.
Comment: S. Chari, D. M. Gruen, O. Seneviratne, D. L. McGuinness, "Directions
for Explainable Knowledge-Enabled Systems". In: Ilaria Tiddi, Freddy Lecue,
Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI -- Foundations,
Applications and Challenges. Studies on the Semantic Web, IOS Press,
Amsterdam, 2020, to appear.
Explanation in Artificial Intelligence: Insights from the Social Sciences
There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a 'good' explanation. There exist
vast and valuable bodies of research in philosophy, psychology, and cognitive
science on how people define, generate, select, evaluate, and present
explanations, which argue that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused into
work on explainable artificial intelligence.
Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
This is an integrative review that addresses the question, "What makes for a
good explanation?" with reference to AI systems. Pertinent literatures are
vast. Thus, this review is necessarily selective. That said, most of the key
concepts and issues are expressed in this Report. The Report encapsulates the
history of computer science efforts to create systems that explain and instruct
(intelligent tutoring systems and expert systems). The Report expresses the
explainability issues and challenges in modern AI, and presents capsule views
of the leading psychological theories of explanation. Certain articles stand
out by virtue of their particular relevance to XAI, and their methods, results,
and key points are highlighted. It is recommended that AI/XAI researchers be
encouraged to include in their research reports fuller details on their
empirical or experimental methods, in the fashion of experimental psychology
research reports: details on Participants, Instructions, Procedures, Tasks,
Dependent Variables (operational definitions of the measures and metrics),
Independent Variables (conditions), and Control Conditions.
Explanation Ontology: A Model of Explanations for User-Centered AI
Explainability has been a goal for Artificial Intelligence (AI) systems since
their conception, with the need for explainability growing as more complex AI
models are increasingly used in critical, high-stakes settings such as
healthcare. Explanations have often been added to an AI system in a non-principled,
post-hoc manner. With greater adoption of these systems and emphasis on
user-centric explainability, there is a need for a structured representation
that treats explainability as a primary consideration, mapping end user needs
to specific explanation types and the system's AI capabilities. We design an
explanation ontology to model both the role of explanations, accounting for the
system and user attributes in the process, and the range of different
literature-derived explanation types. We indicate how the ontology can support
user requirements for explanations in the domain of healthcare. We evaluate our
ontology with a set of competency questions geared towards a system designer
who might use our ontology to decide which explanation types to include, given
a combination of users' needs and a system's capabilities, both in system
design settings and in real-time operations. Through the use of this ontology,
system designers will be able to make informed choices on which explanations AI
systems can and should provide.
Comment: 16 pages (but 1 reference over on arxiv), 5 tables, 3 code listings,
1 figure.
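To make the competency-question idea concrete, here is a minimal sketch of how a system designer might encode and query such an ontology in Python with rdflib. The class and property names (ExplanationType, addressesQuestion, requiresCapability) and the capability values are illustrative assumptions, not the terms of the published Explanation Ontology.

```python
# A sketch only: the ontology terms below are hypothetical stand-ins.
from rdflib import Graph, Literal, Namespace, RDF

EO = Namespace("http://example.org/explanation-ontology#")
g = Graph()
g.bind("eo", EO)

# Two literature-derived explanation types and the system capability each needs.
g.add((EO.ContrastiveExplanation, RDF.type, EO.ExplanationType))
g.add((EO.ContrastiveExplanation, EO.addressesQuestion,
       Literal("Why this treatment and not that one?")))
g.add((EO.ContrastiveExplanation, EO.requiresCapability, EO.CounterfactualReasoning))

g.add((EO.TraceBasedExplanation, RDF.type, EO.ExplanationType))
g.add((EO.TraceBasedExplanation, EO.addressesQuestion,
       Literal("Which steps led to this recommendation?")))
g.add((EO.TraceBasedExplanation, EO.requiresCapability, EO.ProvenanceTracking))

# Competency-question style query: which explanation types can a system that
# offers provenance tracking (but no counterfactual reasoning) support?
q = """
SELECT ?etype ?question WHERE {
  ?etype a eo:ExplanationType ;
         eo:requiresCapability eo:ProvenanceTracking ;
         eo:addressesQuestion ?question .
}
"""
for etype, question in g.query(q, initNs={"eo": EO}):
    print(etype, "->", question)
```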
Hows and Whys of Artificial Intelligence for Public Sector Decisions: Explanation and Evaluation
Evaluation has always been a key challenge in the development of artificial
intelligence (AI) based software, due to the technical complexity of the
software artifact and, often, its embedding in complex sociotechnical
processes. Recent advances in machine learning (ML) enabled by deep neural
networks have exacerbated the challenge of evaluating such software due to the
opaque nature of these ML-based artifacts. A key related issue is the
(in)ability of such systems to generate useful explanations of their outputs,
and we argue that the explanation and evaluation problems are closely linked.
The paper models the elements of a ML-based AI system in the context of public
sector decision (PSD) applications involving both artificial and human
intelligence, and maps these elements against issues in both evaluation and
explanation, showing how the two are related. We consider a number of common
PSD application patterns in the light of our model, and identify a set of key
issues connected to explanation and evaluation in each case. Finally, we
propose multiple strategies to promote wider adoption of AI/ML technologies in
PSD, where each is distinguished by a focus on different elements of our model,
allowing PSD policy makers to adopt an approach that best fits their context
and concerns.
Comment: Presented at AAAI FSS-18: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA; corrected typos in this version.
How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games
How should an AI-based explanation system explain an agent's complex behavior
to ordinary end users who have no background in AI? Answering this question is
an active research area, for if an AI-based explanation system could
effectively explain intelligent agents' behavior, it could enable the end users
to understand, assess, and appropriately trust (or distrust) the agents
attempting to help them. To provide insights into this question, we turned to
human expert explainers in the real-time strategy domain, "shoutcasters", to
understand (1) how they foraged in an evolving strategy game in real time, (2)
how they assessed the players' behaviors, and (3) how they constructed
pertinent and timely explanations out of their insights and delivered them to
their audience. The results provided insights into shoutcasters' foraging
strategies for gleaning information necessary to assess and explain the
players; a characterization of the types of implicit questions shoutcasters
answered; and implications for creating explanations by using these patterns.
Comment: 12 pages, 11 figures, submitted to CHI 201
Contrastive Fairness in Machine Learning
Was it fair that Harry was hired but not Barry? Was it fair that Pam was
fired instead of Sam? How can one ensure fairness when an intelligent algorithm
takes these decisions instead of a human? How can one ensure that the decisions
were taken based on merit and not on protected attributes like race or sex?
These are the questions that must be answered now that many decisions in real
life can be made through machine learning. However, research in fairness of
algorithms has focused on the counterfactual questions "what if?" or "why?",
whereas in real life most subjective questions of consequence are contrastive:
"why this but not that?". We introduce concepts and mathematical tools using
causal inference to address contrastive fairness in algorithmic decision-making
with illustrative examples.
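As a rough illustration of the contrastive question ("why Harry and not Barry?"), the sketch below checks whether a pairwise hiring contrast survives swapping the two candidates' protected attributes. The toy scoring function, attribute encoding, and numbers are assumptions for demonstration; this is one simple reading of contrastiveness, not the paper's causal-inference formalism.

```python
# Toy check: does the contrast (Harry over Barry) persist when the candidates'
# protected attributes are swapped? All names, attributes, and scores are
# illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass
class Applicant:
    name: str
    merit: float       # aggregate qualification score (hypothetical)
    protected: str     # a protected attribute, e.g. sex (hypothetical encoding)

def hire_score(a: Applicant, bias: float) -> float:
    """Toy decision score: merit plus an (ideally zero) protected-attribute bonus."""
    return a.merit + (bias if a.protected == "M" else 0.0)

def contrast_explained_by_merit(winner: Applicant, loser: Applicant, bias: float) -> bool:
    """True if the contrast survives swapping protected attributes;
    False means the protected attribute, not merit, drives the contrast."""
    actual = hire_score(winner, bias) > hire_score(loser, bias)
    swapped = (hire_score(replace(winner, protected=loser.protected), bias)
               > hire_score(replace(loser, protected=winner.protected), bias))
    return actual == swapped

harry = Applicant("Harry", merit=0.70, protected="M")
barry = Applicant("Barry", merit=0.68, protected="F")

print(contrast_explained_by_merit(harry, barry, bias=0.0))   # True: merit alone explains it
print(contrast_explained_by_merit(harry, barry, bias=0.05))  # False: swapping attributes flips the decision
```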
Toward Personalized XAI: A Case Study in Intelligent Tutoring Systems
Our research is a step toward ascertaining the need for personalization in
explainable AI (XAI), and we do so in the context of investigating whether
explanations of AI-driven hints and feedback are useful in Intelligent
Tutoring Systems (ITS).
We added an explanation functionality for the adaptive hints provided by the
Adaptive CSP (ACSP) applet, an interactive simulation that helps students learn
an algorithm for constraint satisfaction problems by providing AI-driven hints
adapted to their predicted level of learning. We present the design of the
explanation functionality and the results of a controlled study to evaluate its
impact on students' learning and perception of the ACSP hints. The study
includes an analysis of how these outcomes are modulated by several user
characteristics, such as personality traits and cognitive abilities, to assess
whether explanations should be personalized to these characteristics. Our
results indicate that providing explanations increases students' trust in the
ACSP hints, perceived usefulness of the hints, and intention to use them again.
In addition, we show that students' access to the explanations and their
learning gains are modulated by user characteristics, providing insights toward
designing personalized XAI for ITS.
"Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users
Although interactive learning puts the user into the loop, the learner
remains mostly a black box for the user. Understanding the reasons behind
queries and predictions is important when assessing how the learner works and,
in turn, whether to trust it. Consequently, we propose the novel framework of explanatory
interactive learning: in each step, the learner explains its interactive query
to the user, and she, in turn, can query any active classifier for visualizing
explanations of the corresponding predictions. We demonstrate that this can
boost the predictive and explanatory powers of, and the trust in, the learned
model, using text (e.g. SVMs) and image classification (e.g. neural networks)
experiments as well as a user study.
Comment: Submitted to NIPS 201
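The following is a minimal sketch of the explanatory interactive learning loop described above, assuming a linear text classifier (as the SVM experiments suggest): the learner queries its least-certain document, shows the features that drive its prediction as a simple explanation, and retrains on the user's answer. The toy dataset, the feature-contribution explanation, and the hard-coded user response are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an explanatory interactive learning round with a linear classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["cheap meds online", "meeting at noon", "win money now", "project report attached"]
labels = np.array([1, 0, 1, 0])   # 1 = spam in this toy seed set
pool = ["free money offer", "lunch with the team", "urgent prize claim"]

vec = TfidfVectorizer()
clf = LinearSVC().fit(vec.fit_transform(texts), labels)

for _ in range(2):  # two interaction rounds
    # 1. Query: the pool document the classifier is least certain about.
    margins = np.abs(clf.decision_function(vec.transform(pool)))
    i = int(np.argmin(margins))

    # 2. Explain the query: features of this document that push its score most.
    x = vec.transform([pool[i]]).toarray().ravel()
    contrib = x * clf.coef_.ravel()
    feats = vec.get_feature_names_out()
    top = [feats[j] for j in np.argsort(-np.abs(contrib))[:3] if contrib[j] != 0]
    print(f"Query: {pool[i]!r} | influential features: {top}")

    # 3. User feedback: a label (and, in the full framework, corrections to the
    #    explanation itself); here a fixed stand-in answer.
    user_label = 1
    texts.append(pool.pop(i))
    labels = np.append(labels, user_label)

    # 4. Retrain on the augmented data.
    clf = LinearSVC().fit(vec.fit_transform(texts), labels)
```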