Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning
Using machine learning in high-stakes applications often requires predictions
to be accompanied by explanations comprehensible to the domain user, who has
ultimate responsibility for decisions and outcomes. Recently, a new framework called TED has been proposed to provide meaningful explanations for predictions. This framework augments training data to include
explanations elicited from domain users, in addition to features and labels.
This approach ensures that explanations for predictions are tailored to the
complexity expectations and domain knowledge of the consumer. In this paper, we build on this foundational work by exploring more sophisticated instantiations of the TED framework and empirically evaluating their effectiveness in two
diverse domains, chemical odor and skin cancer prediction. Results demonstrate
that meaningful explanations can be reliably taught to machine learning algorithms and, in some cases, can also improve modeling accuracy.

Comment: presented at the 2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA. arXiv admin note: substantial text overlap with arXiv:1805.1164
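As a rough illustration of the TED baseline this abstract builds on (not the embedding or multi-task variants the paper studies), the sketch below encodes each (label, explanation) pair as a single combined class so that any off-the-shelf multiclass classifier can predict both jointly. The classifier choice and the function names fit_ted/predict_ted are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the TED baseline: each (label, explanation) pair becomes
# one combined class, so a standard multiclass classifier predicts both jointly.
from sklearn.linear_model import LogisticRegression

def fit_ted(X, y, e, clf=None):
    """Train a classifier on combined (label, explanation) targets."""
    combos = sorted(set(zip(y, e)))                  # observed (y, e) pairs
    to_id = {c: i for i, c in enumerate(combos)}     # pair -> combined class id
    ye = [to_id[(yi, ei)] for yi, ei in zip(y, e)]   # combined training targets
    clf = clf if clf is not None else LogisticRegression(max_iter=1000)
    clf.fit(X, ye)
    return clf, combos

def predict_ted(clf, combos, X):
    """Decode predicted combined classes back into (label, explanation) pairs."""
    return [combos[c] for c in clf.predict(X)]
```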
Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness
Many proposed methods for explaining machine learning predictions are in fact difficult for nontechnical consumers to understand. This paper builds upon an
alternative consumer-driven approach called TED that asks for explanations to
be provided in training data, along with target labels. Using semi-synthetic
data from credit approval and employee retention applications, experiments are
conducted to investigate some practical considerations with TED, including its
performance with different classification algorithms, varying numbers of
explanations, and variability in explanations. A new algorithm is proposed to
handle the case where some training examples do not have explanations. Our
results show that TED is robust to increasing numbers of explanations, noisy
explanations, and large fractions of missing explanations, thus making advances
toward its practical deployment.
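The abstract does not describe the proposed algorithm for handling missing explanations, so the sketch below is purely illustrative and not the paper's method. It shows one generic self-training style approach, reusing the hypothetical fit_ted/predict_ted helpers from the sketch above: fit TED on the examples that do have explanations, then impute explanations for the rest, keeping an imputation only when the predicted label agrees with the known label.

```python
# Illustrative only; NOT the algorithm proposed in the paper. Generic
# self-training imputation of missing explanations using the TED baseline.
def impute_missing_explanations(X, y, e, clf=None):
    have = [i for i, ei in enumerate(e) if ei is not None]   # have explanations
    miss = [i for i, ei in enumerate(e) if ei is None]       # missing explanations
    model, combos = fit_ted([X[i] for i in have],
                            [y[i] for i in have],
                            [e[i] for i in have], clf)
    e_full = list(e)
    preds = predict_ted(model, combos, [X[i] for i in miss]) if miss else []
    for i, (y_hat, e_hat) in zip(miss, preds):
        if y_hat == y[i]:          # trust the imputation only when labels agree
            e_full[i] = e_hat
    return model, combos, e_full
```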
Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Explainable machine learning and artificial intelligence models have been
used to justify a model's decision-making process. This added transparency aims
to help improve user performance and understanding of the underlying model.
However, in practice, explainable systems face many open questions and
challenges. Specifically, designers might reduce the complexity of deep
learning models in order to provide interpretability. The explanations generated by these simplified models, however, may not faithfully reflect the behavior of the original model. This can further confuse users, who may not find the explanations meaningful with respect to the model's predictions. Understanding how these explanations affect user behavior is an
ongoing challenge. In this paper, we explore how explanation veracity affects
user performance and agreement in intelligent systems. Through a controlled
user study with an explainable activity recognition system, we compare
variations in explanation veracity for a video review and querying task. The
results suggest that low veracity explanations significantly decrease user
performance and agreement compared to both accurate explanations and a system
without explanations. These findings demonstrate the importance of accurate and
understandable explanations and caution that poor explanations can sometimes be
worse than no explanations with respect to their effect on user performance and
reliance on an AI system.