Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
In the last few years, Artificial Intelligence (AI) has gained notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to occur soon in Machine Learning, the whole community stands before the barrier of explainability, a problem inherent to the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that was not present in the previous hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
XNAP: Making LSTM-based Next Activity Predictions Explainable by Using LRP
Predictive business process monitoring (PBPM) is a class of techniques designed to predict behaviour, such as next activities, in running traces. PBPM techniques aim to improve process performance by providing predictions to process analysts, supporting them in their decision making. However, the limited predictive quality of PBPM techniques has been considered the essential obstacle to establishing such techniques in practice. With the use of deep neural networks (DNNs), predictive quality could be improved for tasks like next activity prediction. While DNNs achieve a promising predictive quality, they still lack comprehensibility due to their hierarchical approach to learning representations. Nevertheless, process analysts need to comprehend the cause of a prediction to identify intervention mechanisms that might affect the decision making to secure process performance. In this paper, we propose XNAP, the first explainable, DNN-based PBPM technique for next activity prediction. XNAP integrates a layer-wise relevance propagation method from the field of explainable artificial intelligence to make the predictions of a long short-term memory DNN explainable by providing relevance values for activities. We show the benefit of our approach through two real-life event logs.
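
A rough illustration of the idea behind XNAP follows. The paper's own method propagates relevance through the LSTM with layer-wise relevance propagation (LRP); the sketch below substitutes a simpler gradient-times-input attribution as a stand-in, since faithful LRP for LSTMs needs dedicated propagation rules per gate. The model, vocabulary size and trace are toy placeholders, not the authors' setup.

import torch
import torch.nn as nn

NUM_ACTIVITIES = 6  # hypothetical activity vocabulary size

class NextActivityLSTM(nn.Module):
    def __init__(self, num_activities, emb_dim=16, hidden_dim=32):
        super().__init__()
        self.emb = nn.Embedding(num_activities, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_activities)

    def forward(self, trace):
        e = self.emb(trace)              # (batch, seq_len, emb_dim)
        h, _ = self.lstm(e)              # hidden state after each event
        return self.out(h[:, -1, :]), e  # logits for the next activity

model = NextActivityLSTM(NUM_ACTIVITIES)   # untrained toy model
trace = torch.tensor([[1, 3, 2, 4]])       # a running trace of activity indices

logits, emb = model(trace)
emb.retain_grad()                          # keep gradients on the embeddings
pred = logits.argmax(dim=1).item()
logits[0, pred].backward()                 # gradient of the predicted class score

# Relevance per event: gradient * input, summed over the embedding dimension
# (a stand-in for the per-activity relevance values LRP would provide).
relevance = (emb.grad * emb).sum(dim=-1).squeeze(0)
for pos, (act, rel) in enumerate(zip(trace[0].tolist(), relevance.tolist())):
    print(f"event {pos}: activity {act} relevance {rel:+.4f}")

Positive relevance marks events in the running trace that pushed the model toward the predicted next activity; negative relevance marks events that spoke against it.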
A Taxonomy of Explainable Bayesian Networks
Artificial Intelligence (AI), and in particular the explainability thereof, has gained phenomenal attention over the last few years. Whilst we usually do not question the decision-making process of these systems in situations where only the outcome is of interest, we do pay close attention when these systems are applied in areas where decisions directly influence the lives of humans. Noisy and uncertain observations close to the decision boundary, in particular, result in predictions that cannot readily be explained and may foster mistrust among end-users. This drew attention to AI methods for which the outcomes can be explained. Bayesian networks are probabilistic graphical models that can be used as a tool to manage uncertainty. The probabilistic framework of a Bayesian network allows for explainability in the model, reasoning and evidence. The use of these methods is mostly ad hoc and not as well organised as explainability methods in the wider AI research field. As such, we introduce a taxonomy of explainability in Bayesian networks. We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions. The explanations obtained from the explainability methods are illustrated by means of a simple medical diagnostic scenario. The taxonomy introduced in this paper has the potential not only to encourage end-users to efficiently communicate the outcomes obtained, but also to support their understanding of how and, more importantly, why certain predictions were made.
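
To make the kind of "simple medical diagnostic scenario" mentioned above concrete, the hypothetical two-node sketch below shows the sort of reasoning a Bayesian network exposes: a prior belief about a disease is updated by test evidence, and every probability driving the update is visible. The variable names and numbers are invented for illustration and are not taken from the paper.

# Two-node network Disease -> Test, with hypothetical probabilities.
P_DISEASE = 0.01                  # prior P(Disease = true)
P_POS_GIVEN_DISEASE = 0.95        # test sensitivity, P(Test=+ | Disease)
P_POS_GIVEN_HEALTHY = 0.05        # false-positive rate, P(Test=+ | no Disease)

def posterior_disease_given_positive():
    """P(Disease | Test = positive) by Bayes' rule."""
    joint_pos_disease = P_POS_GIVEN_DISEASE * P_DISEASE
    joint_pos_healthy = P_POS_GIVEN_HEALTHY * (1.0 - P_DISEASE)
    evidence = joint_pos_disease + joint_pos_healthy   # P(Test = positive)
    return joint_pos_disease / evidence

# Explanation of evidence: the positive test raises the belief in the disease
# from the 1% prior to roughly 16%, and the probabilities behind that update
# can be read directly from the model rather than inferred from a black box.
print(f"P(Disease | Test=+) = {posterior_disease_given_positive():.3f}")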