On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
Interpretable and explainable machine learning has seen a recent surge of
interest. We focus on safety as a key motivation behind the surge and make the
relationship between interpretability and safety more quantitative. Toward
assessing safety, we introduce the concept of maximum deviation via an
optimization problem to find the largest deviation of a supervised learning
model from a reference model regarded as safe. We then show how
interpretability facilitates this safety assessment. For models including
decision trees, generalized linear and additive models, the maximum deviation
can be computed exactly and efficiently. For tree ensembles, which are not
regarded as interpretable, discrete optimization techniques can still provide
informative bounds. For a broader class of piecewise Lipschitz functions, we
leverage the multi-armed bandit literature to show that interpretability
produces tighter (regret) bounds on the maximum deviation. We present case
studies, including one on mortgage approval, to illustrate our methods and the
insights about models that may be obtained from deviation maximization.
Comment: Published at NeurIPS 2022
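To make the deviation notion concrete, the following is a minimal illustrative sketch (not the authors' code) of the exactly solvable linear case: for two linear models on a box-shaped domain, the difference is itself linear, so its maximum absolute value is attained at a corner chosen coordinate-wise by the sign of each difference coefficient. The function name and example numbers are placeholders.

```python
import numpy as np

def max_deviation_linear(w_model, b_model, w_ref, b_ref, lower, upper):
    """Exact maximum absolute deviation between two linear models
    f(x) = w_model . x + b_model and g(x) = w_ref . x + b_ref
    over the box domain lower <= x <= upper.

    The difference d(x) = (w_model - w_ref) . x + (b_model - b_ref) is linear,
    so |d(x)| is maximised at a corner of the box picked coordinate-wise.
    """
    delta_w = np.asarray(w_model, float) - np.asarray(w_ref, float)
    delta_b = float(b_model) - float(b_ref)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)

    # Maximise d(x): take the upper bound where delta_w > 0, else the lower bound.
    d_max = delta_b + np.where(delta_w > 0, delta_w * upper, delta_w * lower).sum()
    # Minimise d(x): the opposite corner.
    d_min = delta_b + np.where(delta_w > 0, delta_w * lower, delta_w * upper).sum()
    return max(abs(d_max), abs(d_min))

# Example: candidate model vs. a reference model on the unit box [0, 1]^2.
print(max_deviation_linear([2.0, -1.0], 0.5, [1.5, -0.5], 0.0, [0.0, 0.0], [1.0, 1.0]))
# -> 1.0, attained at x = (1, 0)
```

The paper describes analogous exact computations for decision trees and additive models, and discrete-optimization bounds for tree ensembles; the sketch above only covers the linear case.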
Local Rule-Based Explanations of Black Box Decision Systems
The recent years have witnessed the rise of accurate but obscure decision
systems which hide the logic of their internal decision processes to the users.
The lack of explanations for the decisions of black box systems is a key
ethical issue, and a limitation to the adoption of machine learning components
in socially sensitive and safety-critical contexts. Therefore, we need
explanations that reveal the reasons why a predictor takes a certain decision.
In this paper we focus on the problem of black box outcome explanation, i.e.,
explaining the reasons of the decision taken on a specific instance. We propose
LORE, an agnostic method able to provide interpretable and faithful
explanations. LORE first learns a local interpretable predictor on a synthetic
neighborhood generated by a genetic algorithm. Then it derives from the logic
of the local interpretable predictor a meaningful explanation consisting of: a
decision rule, which explains the reasons of the decision; and a set of
counterfactual rules, suggesting the changes in the instance's features that
lead to a different outcome. Extensive experiments show that LORE outperforms
existing methods and baselines both in the quality of the explanations and in
the accuracy of mimicking the black box.
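As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below fits a shallow decision tree on a perturbation neighbourhood labelled by the black box and reads a decision rule off the instance's path. Plain Gaussian perturbation stands in for LORE's genetic neighbourhood generation, a scikit-learn-style black box is assumed, and all names are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_rule_explanation(black_box, instance, feature_names,
                           n_samples=1000, scale=0.3, seed=0):
    """LORE-style sketch: fit an interpretable surrogate on a synthetic
    neighbourhood of `instance` (a 1-D numpy array), then read a decision
    rule off the surrogate tree. Gaussian noise replaces the genetic
    neighbourhood generation used by LORE."""
    rng = np.random.default_rng(seed)
    X_local = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    y_local = black_box.predict(X_local)          # label the neighbourhood with the black box
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X_local, y_local)

    # Conjunction of split conditions along the instance's decision path.
    tree = surrogate.tree_
    path_nodes = surrogate.decision_path(instance.reshape(1, -1)).indices
    rule = []
    for node in path_nodes:
        if tree.children_left[node] == -1:        # leaf node: stop
            break
        feat, thr = tree.feature[node], tree.threshold[node]
        op = "<=" if instance[feat] <= thr else ">"
        rule.append(f"{feature_names[feat]} {op} {thr:.2f}")
    return " AND ".join(rule), surrogate.predict(instance.reshape(1, -1))[0]
```

Counterfactual rules could then be obtained by traversing paths to leaves with a different predicted label, which the sketch omits.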
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed
appropriately, may deliver the best of expectations over many application sectors across the field. For this
to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability,
an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural
Networks) that were not present in the last hype of AI (namely, expert systems and rule based models).
Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely
acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in
this article examines the existing literature and contributions already made in the field of XAI, including a
prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define
explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that
covers such prior conceptual propositions with a major focus on the audience for which the explainability
is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions
related to the explainability of different Machine Learning models, including those aimed at explaining
Deep Learning methods for which a second dedicated taxonomy is built and examined in detail. This
critical literature analysis serves as the motivating background for a series of challenges faced by XAI,
such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept
of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI
methods in real organizations with fairness, model explainability and accountability at its core. Our
ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve
as reference material in order to stimulate future research advances, but also to encourage experts and
professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any
prior bias for its lack of interpretability.
LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees
Systems based on artificial intelligence and machine learning models should
be transparent, in the sense of being capable of explaining their decisions to
gain humans' approval and trust. While there are a number of explainability
techniques that can be used to this end, many of them are only capable of
outputting a single one-size-fits-all explanation that simply cannot address
all of the explainees' diverse needs. In this work we introduce a
model-agnostic and post-hoc local explainability technique for black-box
predictions called LIMEtree, which employs surrogate multi-output regression
trees. We validate our algorithm on a deep neural network trained for object
detection in images and compare it against Local Interpretable Model-agnostic
Explanations (LIME). Our method comes with local fidelity guarantees and can
produce a range of diverse explanation types, including contrastive and
counterfactual explanations praised in the literature. Some of these
explanations can be interactively personalised to create bespoke, meaningful
and actionable insights into the model's behaviour. While other methods may
give an illusion of customisability by wrapping otherwise static explanations
in an interactive interface, our explanations are truly interactive, in the
sense of allowing the user to "interrogate" a black-box model. LIMEtree can
therefore produce consistent explanations on which an interactive exploratory
process can be built.
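A minimal sketch of the underlying idea, under simplifying assumptions (tabular Gaussian perturbation instead of the image-based sampling used in the paper, placeholder names throughout): a single multi-output regression tree is fit to the black box's class probabilities in a local neighbourhood, so that explanations for different classes, including contrastive ones, come from one consistent surrogate.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def multi_output_surrogate(predict_proba, instance, n_samples=2000, scale=0.25, seed=0):
    """LIMEtree-style sketch: one multi-output regression tree approximates the
    black box's probabilities for all classes at once, so contrastive questions
    ("why class A rather than B?") can be answered from a single surrogate."""
    rng = np.random.default_rng(seed)
    X_local = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    P_local = predict_proba(X_local)              # shape (n_samples, n_classes)
    return DecisionTreeRegressor(max_depth=4).fit(X_local, P_local)

# Example of a contrastive query against the surrogate (model and x are placeholders):
# probs = multi_output_surrogate(model.predict_proba, x).predict(x.reshape(1, -1))[0]
# print("class 0 vs class 2:", probs[0] - probs[2])
```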
A Survey Of Methods For Explaining Black Box Models
In recent years, many accurate decision support systems have been
constructed as black boxes, that is, as systems that hide their internal logic
from the user. This lack of explanation constitutes both a practical and an
ethical issue. The literature reports many approaches aimed at overcoming this
crucial weakness, sometimes at the cost of sacrificing accuracy for
interpretability. The applications in which black box decision systems can be
used are varied, and each approach is typically developed to provide a
solution for a specific problem; as a consequence, it delineates, explicitly
or implicitly, its own definition of interpretability and explanation. The aim
of this paper is to provide a classification of the main problems addressed in
the literature with respect to the notion of explanation and the type of black
box system. Given a problem definition, a black box type, and a desired
explanation, this survey should help researchers find the proposals most
useful for their own work. The proposed classification of approaches to open
black box models should also be useful for putting the many research open
questions in perspective.
Comment: This work is currently under review at an international journal.
Article Search Tool and Topic Classifier
This thesis focuses on three main tasks related to document recommendations. The first approach applies existing techniques to document recommendation using Doc2Vec.
A robustness study of this representation is presented to understand how noise induced in the embedding space affects the recommendation predictions. The next phase focuses on improving these recommendations with a topic classifier; a Hierarchical Attention Network is employed for this purpose. To increase prediction accuracy, this work relates performance to the embedding size of the words in the article. In the last phase, model-agnostic Explainable AI (XAI) techniques are applied to validate the findings of this thesis. XAI techniques are also employed to show how hyper-parameters of a black-box model can be fine-tuned.
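For illustration only, a minimal Doc2Vec recommendation sketch in the spirit of the first phase, using gensim 4.x; the toy corpus, document ids, and parameters are placeholders rather than the thesis setup.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Placeholder corpus: each article becomes a TaggedDocument keyed by its id.
articles = {
    "a1": "interpretable machine learning for credit scoring",
    "a2": "deep neural networks for image classification",
    "a3": "rule based explanations of black box models",
}
corpus = [TaggedDocument(words=text.split(), tags=[doc_id])
          for doc_id, text in articles.items()]

# Small Doc2Vec model; vector_size corresponds to the embedding size discussed above.
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Recommend articles for a free-text query by embedding it into the same
# space and ranking stored article vectors by cosine similarity.
query_vec = model.infer_vector("explaining black box classifiers".split())
print(model.dv.most_similar([query_vec], topn=2))
```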