bLIMEy: Surrogate Prediction Explanations Beyond LIME
Surrogate explainers of black-box machine learning predictions are of
paramount importance in the field of eXplainable Artificial Intelligence since
they can be applied to any type of data (images, text and tabular), are
model-agnostic and are post-hoc (i.e., can be retrofitted). The Local
Interpretable Model-agnostic Explanations (LIME) algorithm is often mistakenly conflated with the more general framework of surrogate explainers, which may lead
to a belief that it is the solution to surrogate explainability. In this paper
we empower the community to "build LIME yourself" (bLIMEy) by proposing a
principled algorithmic framework for building custom local surrogate explainers
of black-box model predictions, including LIME itself. To this end, we
demonstrate how to decompose the family of surrogate explainers into
algorithmically independent and interoperable modules and discuss the influence
of these component choices on the functional capabilities of the resulting
explainer, using the example of LIME.Comment: 2019 Workshop on Human-Centric Machine Learning (HCML 2019); 33rd
Conference on Neural Information Processing Systems (NeurIPS 2019),
Vancouver, Canad
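To make the decomposition concrete, the following is a minimal sketch of a LIME-style local surrogate explainer built from interchangeable modules (a data sampler, an interpretable representation, a locality weighting and a surrogate model). The function names and the Gaussian sampling choice are illustrative assumptions, not the bLIMEy authors' API.

```python
# Minimal sketch of a modular local surrogate explainer: each component
# below is independent and swappable, as in the decomposition the paper
# describes. All names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge


def gaussian_sampler(instance, n_samples=500, scale=0.5):
    """Data sampler: perturb the explained instance with Gaussian noise."""
    noise = np.random.normal(0.0, scale, size=(n_samples, instance.size))
    return instance + noise


def identity_representation(samples, instance):
    """Interpretable representation: here simply the raw features; a
    discretisation or super-pixel mapping would slot in instead."""
    return samples


def proximity_weights(samples, instance, kernel_width=1.0):
    """Locality weighting: samples closer to the instance count more."""
    distances = np.linalg.norm(samples - instance, axis=1)
    return np.exp(-(distances ** 2) / kernel_width ** 2)


def explain(black_box_predict, instance):
    """Explanation generator: fit a weighted local linear surrogate and
    read feature importances off its coefficients."""
    samples = gaussian_sampler(instance)
    targets = black_box_predict(samples)           # query the black box
    interp = identity_representation(samples, instance)
    weights = proximity_weights(samples, instance)
    return Ridge().fit(interp, targets, sample_weight=weights).coef_
```

Swapping any one module (for example, replacing the Gaussian sampler with a mixup-style sampler, or the ridge surrogate with a decision tree) changes the explainer's properties without touching the other components, which is the point of the decomposition.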
CoMEt: x86 Cost Model Explanation Framework
ML-based program cost models have been shown to yield highly accurate
predictions. They could replace the heavily engineered analytical program cost models in mainstream compilers, but their black-box nature
discourages their adoption. In this work, we propose the first method for
obtaining faithful and intuitive explanations for the throughput predictions
made by ML-based cost models. We demonstrate our explanations for the
state-of-the-art ML-based cost model, Ithemal. We compare the explanations for
Ithemal with the explanations for a hand-crafted, accurate analytical model,
uiCA. Our empirical findings show that high similarity between explanations for
Ithemal and uiCA usually corresponds to high similarity between their
predictions.
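As one way to picture what an explanation for a throughput prediction might look like, here is a generic leave-one-instruction-out attribution sketch. This is not necessarily CoMEt's actual algorithm, and `predict_throughput` is an assumed stand-in for a model such as Ithemal.

```python
# Illustrative leave-one-instruction-out attribution for a basic-block
# throughput predictor: score each instruction by how much the predicted
# throughput changes when that instruction is removed. A generic
# perturbation technique, not necessarily CoMEt's exact method.
def attribute_throughput(predict_throughput, basic_block):
    """Return (instruction, contribution) pairs for a basic block,
    given a callable that predicts throughput for a list of instructions."""
    base = predict_throughput(basic_block)
    scores = []
    for i, instruction in enumerate(basic_block):
        ablated = basic_block[:i] + basic_block[i + 1:]
        scores.append((instruction, base - predict_throughput(ablated)))
    return scores
```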
SpArX: Sparse Argumentative Explanations for Neural Networks
Neural networks (NNs) have various applications in AI, but explaining their
decision process remains challenging. Existing approaches often focus on
explaining how changing individual inputs affects NNs' outputs. However, an
explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to its actual mechanics. In this paper, we exploit
relationships between multi-layer perceptrons (MLPs) and quantitative
argumentation frameworks (QAFs) to create argumentative explanations for the
mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining
as much of the original mechanics as possible. It then translates the sparse
MLP into an equivalent QAF to shed light on the underlying decision process of
the MLP, producing global and/or local explanations. We demonstrate
experimentally that SpArX can give more faithful explanations than existing
approaches, while simultaneously providing deeper insights into the actual
reasoning process of MLPs.
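A minimal sketch of the translation idea follows, assuming that positive weights map to supports and negative weights to attacks in the QAF; the magnitude-based pruning used here is a simple stand-in for SpArX's clustering-based sparsification, and all names are illustrative.

```python
# Sketch of the MLP-to-QAF translation: neurons become arguments,
# positive weights become supports, negative weights become attacks,
# with |weight| as edge strength. Magnitude pruning is used as a simple
# stand-in for the paper's clustering-based sparsification step.
import numpy as np


def mlp_to_qaf(layer_weights, keep_fraction=0.2):
    """layer_weights: list of (in_dim x out_dim) arrays, one per MLP layer.
    Returns supports and attacks as (source, target, strength) triples,
    where a neuron is identified by (layer_index, unit_index)."""
    supports, attacks = [], []
    for layer, w in enumerate(layer_weights):
        # Sparsification: keep only the strongest connections per layer.
        threshold = np.quantile(np.abs(w), 1.0 - keep_fraction)
        rows, cols = np.nonzero(np.abs(w) >= threshold)
        for i, j in zip(rows, cols):
            edge = ((layer, int(i)), (layer + 1, int(j)), float(abs(w[i, j])))
            (supports if w[i, j] > 0 else attacks).append(edge)
    return supports, attacks
```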
AI for Explaining Decisions in Multi-Agent Environments
Explanation is necessary for humans to understand and accept decisions made
by an AI system when the system's goal is known. It is even more important when
the AI system makes decisions in multi-agent environments where the human does not know the system's goals, since these may depend on other agents' preferences.
In such situations, explanations should aim to increase user satisfaction,
taking into account the system's decision, the user's and the other agents'
preferences, the environment settings and properties such as fairness, envy and
privacy. Generating explanations that will increase user satisfaction is very challenging; to address this challenge, we propose a new research direction: xMASE. We then
review the state of the art and discuss research directions towards efficient
methodologies and algorithms for generating explanations that will increase
users' satisfaction with AI systems' decisions in multi-agent environments.

Comment: This paper has been submitted to the Blue Sky Track of the AAAI 2020 conference. At the time of submission, it is under review. The tentative notification date is November 10, 2019. Current version: name of first author has been added in metadata.
Conceptual challenges for interpretable machine learning
As machine learning has gradually entered ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are largely overlooked by authors in this area. I argue that the vast majority of IML algorithms are plagued by (1) ambiguity with respect to their true target; (2) a disregard for error rates and severe testing; and (3) an emphasis on product over process. Each point is developed at length, drawing on relevant debates in epistemology and philosophy of science. Examples and counterexamples from IML are considered, demonstrating how failure to acknowledge these problems can result in counterintuitive and potentially misleading explanations. Without greater care for the conceptual foundations of IML, future work in this area is doomed to repeat the same mistakes.
Exploring Interpretability for Predictive Process Analytics
Modern predictive analytics, underpinned by machine learning techniques, has become a key enabler of automated, data-driven decision making. In the
context of business process management, predictive analytics has been applied
to making predictions about the future state of an ongoing business process
instance, for example, when will the process instance complete and what will be
the outcome upon completion. The underlying predictive models are built by training machine learning models on event log data that records historical process executions. Multiple techniques have been proposed so far that encode
the information available in an event log and construct input features required
to train a predictive model. While accuracy has been a dominant criterion in
the choice of various techniques, these techniques are often applied as black boxes when building predictive models. In this paper, we derive explanations using
interpretable machine learning techniques to compare and contrast the
suitability of multiple predictive models of high accuracy. The explanations
allow us to gain an understanding of the underlying reasons for a prediction
and highlight scenarios where accuracy alone may not be sufficient in assessing
the suitability of the techniques used to encode event log data into the features used by
a predictive model. Findings from this study motivate the need to incorporate interpretability into predictive process analytics.

Comment: 15 pages, 7 figures.
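A hedged sketch of this kind of pipeline follows: encode event-log prefixes into feature vectors, train a predictive model, then probe it with a model-agnostic interpretability technique. The frequency encoding, the field names and the use of permutation importance are illustrative choices, not necessarily those of the paper.

```python
# Sketch: aggregation-encode event-log prefixes, train an outcome
# classifier, then rank activities by permutation importance (one
# concrete model-agnostic interpretability technique). The encoding
# and the `event["activity"]` field name are illustrative assumptions.
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance


def frequency_encode(prefix, activity_vocab):
    """Aggregation encoding: count how often each activity occurs in the
    running prefix of a process instance."""
    counts = Counter(event["activity"] for event in prefix)
    return [counts.get(a, 0) for a in activity_vocab]


def explain_outcome_model(prefixes, outcomes, activity_vocab):
    """Train an outcome classifier on encoded prefixes and rank activities
    by how much shuffling each feature degrades predictive accuracy."""
    X = np.array([frequency_encode(p, activity_vocab) for p in prefixes])
    y = np.array(outcomes)
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10)
    return sorted(zip(activity_vocab, result.importances_mean),
                  key=lambda pair: -pair[1])
```

Comparing such rankings across two equally accurate models is one way to surface the kind of disagreement the abstract describes, where accuracy alone cannot arbitrate between encoding techniques.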