On the Design, Implementation and Application of Novel Multi-disciplinary Techniques for explaining Artificial Intelligence Models
284 p. Artificial Intelligence is a relentless field of research that has experienced remarkable growth over the last decades. Some of the reasons for this apparently exponential growth are the improvements in computational power, sensing capabilities and data storage, which result in a huge increase in data availability. However, this growth has been led mostly by a performance-based mindset that has pushed models towards a black-box nature. The performance prowess of these methods, along with the rising demand for their implementation, has triggered the birth of a new research field: Explainable Artificial Intelligence (XAI). As any new field, XAI falls short in cohesiveness. Added to the consequences of dealing with concepts that do not come from the natural sciences (explanations), the tumultuous scene is palpable. This thesis contributes to the field from two different perspectives: a theoretical one and a practical one. The former is based on a profound literature review that resulted in two main contributions: 1) the proposition of a new definition for Explainable Artificial Intelligence and 2) the creation of a new taxonomy for the field. The latter is composed of two XAI frameworks that address some of the glaring gaps found in the field, namely: 1) an XAI framework for Echo State Networks and 2) an XAI framework for the generation of counterfactuals. The first addresses the gap concerning randomized neural networks, since they have never been considered within the field of XAI. Unfortunately, choosing the right parameters to initialize these reservoirs relies more on luck and the past experience of the scientist than on sound reasoning. The current approach for assessing whether a reservoir is suited for a particular task is to observe whether it yields accurate results, either by handcrafting the values of the reservoir parameters or by automating their configuration via an external optimizer. All in all, this poses tough questions when developing an ESN for a certain application, since knowing whether the created structure is optimal for the problem at hand is not possible without actually training it. Moreover, one of the main concerns holding back their application is the mistrust generated by their black-box nature. The second framework presents a new paradigm for counterfactual generation. Among the alternatives for reaching a universal understanding of model explanations, counterfactual examples are arguably the one that best conforms to human understanding principles when faced with unknown phenomena. Indeed, discerning what would happen should the initial conditions differ in a plausible fashion is a mechanism often adopted by humans when attempting to understand any unknown. The search for counterfactuals proposed in this thesis is governed by three different objectives. As opposed to the classical approach, in which counterfactuals are generated simply by following a minimum-distance criterion of some kind, this framework allows for an in-depth analysis of a target model by means of counterfactuals responding to: Adversarial Power, Plausibility and Change Intensity.
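To make the reservoir-initialization concern above concrete, the following minimal Echo State Network sketch (illustrative only; the hyperparameter names are the standard ESN ones, not settings taken from the thesis) shows the parameters that are usually hand-tuned or handed to an external optimizer:

```python
# Minimal Echo State Network reservoir sketch (illustrative only; standard ESN
# hyperparameter names, not values from the thesis).
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 200
spectral_radius = 0.9      # typically chosen < 1 to encourage the echo state property
input_scaling = 0.5
leak_rate = 0.3

# Random input and recurrent weights; the recurrent matrix is rescaled so that
# its largest eigenvalue magnitude equals the chosen spectral radius.
W_in = rng.uniform(-input_scaling, input_scaling, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# The readout is usually fit by ridge regression on the collected states; whether
# this particular reservoir is suited to a task is only known after that training
# step, which is exactly the opacity the thesis points at.
states = run_reservoir(rng.standard_normal((100, n_inputs)))
print(states.shape)  # (100, 200)
```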
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
The last decade has witnessed the proliferation of Deep Learning models in
many applications, achieving unrivaled levels of predictive performance.
Unfortunately, the black-box nature of Deep Learning models has posed
unanswered questions about what they learn from data. Certain application
scenarios have highlighted the importance of assessing the bounds under which
Deep Learning models operate, a problem addressed by using assorted approaches
aimed at audiences from different domains. However, as the focus of the
application shifts towards non-expert users, it becomes mandatory to provide
them with the means to trust the model, just like a human gets familiar with
a system or process: by understanding the hypothetical circumstances under
which it fails. This is indeed the cornerstone of this research work: to
undertake an adversarial analysis of a Deep Learning model. The proposed
framework constructs counterfactual examples by ensuring their plausibility,
i.e. there is a reasonable probability that a human could have generated them
without resorting to a computer program. Therefore, this work must be regarded
as a valuable auditing exercise of the usable bounds a certain model is constrained
within, thereby allowing for a much greater understanding of the capabilities
and pitfalls of a model used in a real application. To this end, a Generative
Adversarial Network (GAN) and multi-objective heuristics are used to furnish a
plausible attack on the audited model, efficiently trading off the confusion of
this model against the intensity and plausibility of the generated
counterfactual. Its utility is showcased within a human face classification
task, unveiling the enormous potential of the proposed framework.
Comment: 7 pages, 5 figures. Accepted for its presentation at WCCI 202
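A minimal sketch of the trade-off the framework navigates is given below; the generator, discriminator and classifier are toy stand-ins rather than the GAN and audited model of the paper, and plain random search stands in for its multi-objective heuristics:

```python
# Illustrative sketch of the confusion / change-intensity / plausibility trade-off
# behind the counterfactual search. All models below are toy placeholders.
import numpy as np

rng = np.random.default_rng(1)
D = 16  # toy "image" dimensionality

def generator(z):      return np.tanh(z)                           # placeholder GAN generator
def discriminator(x):  return float(1 / (1 + np.exp(-x.mean())))   # placeholder realism score
def classifier(x):                                                  # placeholder audited model
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

x_orig, target = generator(rng.standard_normal(D)), 1

def objectives(z):
    x_cf = generator(z)
    confusion    = classifier(x_cf)[target]        # adversarial power
    intensity    = np.linalg.norm(x_cf - x_orig)   # change intensity w.r.t. the original
    plausibility = discriminator(x_cf)             # critic's estimate of realism
    return confusion, intensity, plausibility

# Random search stands in for the multi-objective heuristic; candidates that are
# not dominated in (max confusion, min intensity, max plausibility) form the
# front an auditor would inspect.
cands = [objectives(rng.standard_normal(D)) for _ in range(200)]
front = [a for a in cands
         if not any(b[0] >= a[0] and b[1] <= a[1] and b[2] >= a[2] and b != a for b in cands)]
print(len(front), "non-dominated counterfactual candidates")
```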
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed
appropriately, may deliver the best of expectations over many application sectors across the field. For this
to occur in the near future in Machine Learning, the entire community stands in front of the
barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism
(e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely,
expert systems and rule-based models).
Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely
acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in
this article examines the existing literature and contributions already made in the field of XAI, including a
prospect toward what is yet to be reached. For this purpose, we summarize previous efforts made to define
explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that
covers such prior conceptual propositions with a major focus on the audience for which the explainability
is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions
related to the explainability of different Machine Learning models, including those aimed at explaining
Deep Learning methods for which a second dedicated taxonomy is built and examined in detail. This
critical literature analysis serves as the motivating background for a series of challenges faced by XAI,
such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept
of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI
methods in real organizations with fairness, model explainability and accountability at its core. Our
ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve
as reference material in order to stimulate future research advances, but also to encourage experts and
professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any
prior bias for its lack of interpretability.
Funding: Basque Government; Consolidated Research Group MATHMODE - Department of Education of the Basque Government (IT1294-19); Spanish Government; European Commission (TIN2017-89517-P); BBVA Foundation through its Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica 2018 call (DeepSCOP project); European Commission (82561)
A Taxonomy of Explainable Bayesian Networks
Artificial Intelligence (AI), and in particular, the explainability thereof,
has gained phenomenal attention over the last few years. Whilst we usually do
not question the decision-making process of these systems in situations where
only the outcome is of interest, we do however pay close attention when these
systems are applied in areas where the decisions directly influence the lives
of humans. In particular, noisy and uncertain observations close to the
decision boundary result in predictions that cannot necessarily be
explained, which may foster mistrust among end-users. This drew attention to AI
methods for which the outcomes can be explained. Bayesian networks are
probabilistic graphical models that can be used as a tool to manage
uncertainty. The probabilistic framework of a Bayesian network allows for
explainability in the model, reasoning and evidence. The use of these methods
is mostly ad hoc and not as well organised as explainability methods in the
wider AI research field. As such, we introduce a taxonomy of explainability in
Bayesian networks. We extend the existing categorisation of explainability in
the model, reasoning or evidence to include explanation of decisions. The
explanations obtained from the explainability methods are illustrated by means
of a simple medical diagnostic scenario. The taxonomy introduced in this paper
has the potential not only to encourage end-users to efficiently communicate
the outcomes obtained, but also to support their understanding of how and, more
importantly, why certain predictions were made.
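As a concrete flavour of the medical diagnostic scenario mentioned above, the following toy network (hypothetical variables and probabilities, assuming a recent version of pgmpy) shows the kind of evidence-driven reasoning whose explanation such a taxonomy categorises:

```python
# Toy diagnostic Bayesian network; names and probabilities are invented for
# illustration and are not those of the paper's scenario.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Disease", "Test")])

# P(Disease): 1% prior prevalence; P(Test | Disease): 90% sensitivity, 95% specificity.
cpd_disease = TabularCPD("Disease", 2, [[0.99], [0.01]])
cpd_test = TabularCPD("Test", 2,
                      [[0.95, 0.10],   # Test = negative
                       [0.05, 0.90]],  # Test = positive
                      evidence=["Disease"], evidence_card=[2])
model.add_cpds(cpd_disease, cpd_test)
assert model.check_model()

# "Explanation of evidence": how does a positive test change the belief in Disease?
posterior = VariableElimination(model).query(["Disease"], evidence={"Test": 1})
print(posterior)  # the posterior is the explainable artefact an end-user inspects
```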
XNAP: Making LSTM-based Next Activity Predictions Explainable by Using LRP
Predictive business process monitoring (PBPM) is a class of techniques
designed to predict behaviour, such as next activities, in running traces. PBPM
techniques aim to improve process performance by providing predictions to
process analysts, supporting them in their decision making. However, the PBPM
techniques' limited predictive quality has been considered the essential obstacle
to establishing such techniques in practice. With the use of deep neural
networks (DNNs), the techniques' predictive quality could be improved for tasks
like the next activity prediction. While DNNs achieve a promising predictive
quality, they still lack comprehensibility due to their hierarchical approach
of learning representations. Nevertheless, process analysts need to comprehend
the cause of a prediction to identify intervention mechanisms that might affect
the decision making to secure process performance. In this paper, we propose
XNAP, the first explainable, DNN-based PBPM technique for the next activity
prediction. XNAP integrates a layer-wise relevance propagation method from the
field of explainable artificial intelligence to make predictions of a long
short-term memory DNN explainable by providing relevance values for activities.
We show the benefit of our approach through two real-life event logs.
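To give a flavour of how relevance values are obtained, the sketch below implements the epsilon-variant LRP rule for a single dense layer in NumPy; the paper itself propagates relevance through an LSTM, so this is only a simplified illustration of the core redistribution rule, with all names and shapes invented:

```python
# Simplified epsilon-LRP for one dense layer, showing how relevance is pushed
# back from a prediction to its inputs. Not the paper's full LSTM-LRP variant.
import numpy as np

def lrp_dense(x, W, b, relevance_out, eps=1e-6):
    """Redistribute output relevance onto the inputs x (z-rule with epsilon stabiliser)."""
    z = W @ x + b                      # forward pre-activations
    z = z + eps * np.sign(z)           # stabiliser to avoid division by zero
    s = relevance_out / z              # relevance per unit of pre-activation
    return x * (W.T @ s)               # relevance attributed to each input

rng = np.random.default_rng(0)
x = rng.standard_normal(8)             # e.g. encoded activity features of a trace
W, b = rng.standard_normal((3, 8)), np.zeros(3)
R_out = np.eye(3)[1] * (W @ x + b)[1]  # relevance of the predicted class only
print(lrp_dense(x, W, b, R_out))       # per-input (per-activity) relevance values
```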
Training deep learning algorithms for object detection with the Cityscapes database
Due to society's need for transportation, there are more vehicles on the road every day,
which leads to an increase in the danger of these roads. For this reason the need arises for
ADAS (Advanced Driver Assistance Systems). In this project a converter has been developed
that is able to convert the Cityscapes labelled image library to the KITTI database format,
so that a larger database has been obtained. This database has made it possible to train a
deep learning algorithm that is able to detect and differentiate objects on the urban road.
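A minimal sketch of the core conversion step such a converter performs is given below: turning the polygons in a Cityscapes *_gtFine_polygons.json file into 2D bounding boxes written in the KITTI label format. The label map and file paths are hypothetical examples, and the 3D fields are filled with placeholders since Cityscapes polygons carry no 3D information:

```python
# Illustrative Cityscapes-polygon to KITTI-label conversion step (not the
# project's actual converter; LABEL_MAP and paths are example values).
import json

LABEL_MAP = {"car": "Car", "person": "Pedestrian", "bicycle": "Cyclist"}  # example mapping

def cityscapes_to_kitti(json_path, out_path):
    with open(json_path) as f:
        ann = json.load(f)
    lines = []
    for obj in ann["objects"]:
        kitti_type = LABEL_MAP.get(obj["label"])
        if kitti_type is None:
            continue  # skip classes the detector is not trained on
        xs = [p[0] for p in obj["polygon"]]
        ys = [p[1] for p in obj["polygon"]]
        left, top, right, bottom = min(xs), min(ys), max(xs), max(ys)
        # KITTI: type, truncated, occluded, alpha, bbox(4), dims(3), location(3), rotation_y
        lines.append(f"{kitti_type} 0.0 0 -10 {left:.2f} {top:.2f} {right:.2f} {bottom:.2f} "
                     "-1 -1 -1 -1000 -1000 -1000 -10")
    with open(out_path, "w") as f:
        f.write("\n".join(lines))

# cityscapes_to_kitti("aachen_000000_000019_gtFine_polygons.json", "000000.txt")
```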
What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting
Traffic flow forecasting is widely regarded as an essential gear in the complex machinery underneath Intelligent Transport Systems, being a critical component of avant-garde Automated Traffic Management Systems. Research in this area has stimulated a vibrant activity, yielding a plethora of new forecasting methods contributed to the community on a yearly basis. Efforts in this domain are mainly oriented to the development of prediction models featuring ever-growing levels of performance and/or computational efficiency. After the swerve towards Artificial Intelligence that gradually took place in the modeling sphere of traffic forecasting, predictive schemes have ever since adopted all the benefits of applied machine learning, but have also incurred some caveats. The adoption of highly complex, black-box models has subtracted comprehensibility from forecasts: even though they perform better, they are more obscure to ITS practitioners, which hinders their practicality. In this paper we propose the adoption of explainable Artificial Intelligence (xAI) tools that are currently being used in other domains, in order to extract further knowledge from black-box traffic forecasting models. In particular we showcase the utility of xAI to unveil the knowledge extracted by Random Forests and Recurrent Neural Networks when predicting real traffic. The obtained results are insightful and suggest that traffic forecasting models should be analyzed from more points of view beyond prediction accuracy or any other regression score alike, due to the treatment each algorithm gives to input variables: even with the same nominal score value, some methods can take advantage of inner knowledge that others instead disregard. The authors would like to thank the Basque Government for its support through the EMAITEK program.
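As a rough illustration of the kind of post-hoc inspection advocated above, the following sketch (synthetic data and hypothetical lag features, not the paper's actual setup) trains a Random Forest on lagged traffic-flow readings and asks, via permutation importance, which lags the model actually relies on:

```python
# Minimal post-hoc inspection of a traffic-flow regressor: which lagged inputs
# does the model actually use? Data here is synthetic; the paper uses real traffic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
flow = np.sin(np.arange(2000) * 2 * np.pi / 96) + 0.1 * rng.standard_normal(2000)

lags = 8
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])  # lags t-8 ... t-1
y = flow[lags:]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:1500], y[:1500])

# Permutation importance on held-out data: how much does the error grow when a
# given lag is shuffled? This probes the knowledge the model has extracted.
imp = permutation_importance(model, X[1500:], y[1500:], n_repeats=10, random_state=0)
for col, score in enumerate(imp.importances_mean):
    print(f"lag t-{lags - col}: {score:.3f}")
```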