
    The coming of age of interpretable and explainable machine learning models

    Machine learning-based systems are now part of a wide array of real-world applications, seamlessly embedded in the social realm. In the wake of this realisation, strict legal regulations for these systems are currently being developed, addressing some of the risks they may pose. This is the coming of age of the interpretability and explainability problems in machine learning-based data analysis, which can no longer be seen as just an academic research problem. In this tutorial, associated with the ESANN 2021 special session on “Interpretable Models in Machine Learning and Explainable Artificial Intelligence”, we discuss explainable and interpretable machine learning as post-hoc and ante-hoc strategies to address these problems and highlight several aspects related to them, including their assessment. The contributions accepted for the session are then presented in this context.

    Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198), by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821), and by the AI Platform to Fully Adapt and Reflect Privacy-Policy Changes (No. 2022-0-00688). Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model's decision-making. Thus, the need for eXplainable AI (XAI) methods to improve trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but there have not been any reviews that have looked at the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, with a case study example. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed techniques in XAI for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets, with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns. This paper advocates for tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by looking at 410 critical articles, published between January 2016 and October 2022 in reputed journals, using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
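    The survey above refers to post-hoc explainability methods only in general terms. As a minimal, hedged illustration of what a model-agnostic post-hoc explanation looks like in practice (not drawn from the paper; the dataset, model, and library choices below are editorial assumptions), permutation feature importance probes a trained black-box model by measuring how much held-out performance drops when a feature is shuffled:

        # Minimal sketch of a model-agnostic post-hoc explanation (editorial
        # illustration, not from the paper): permutation feature importance
        # computed on a black-box classifier.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Shuffle each feature on the held-out set and measure the drop in accuracy:
        # the larger the drop, the more the model relies on that feature.
        result = permutation_importance(model, X_test, y_test,
                                        n_repeats=10, random_state=0)
        ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
        for name, score in ranked[:5]:
            print(f"{name}: {score:.3f}")

    Post-hoc methods of this kind treat the trained model as a black box; the other axes of the survey's taxonomy instead explain the data or the model structure itself.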

    Explainable-by-Design Deep Learning

    Machine learning, and more specifically deep learning, has attracted the attention of the media and the broader public in the last decade due to its potential to revolutionize industries, public services, and society. Deep learning has achieved or even surpassed the accuracy of human experts on challenging problems such as image recognition, speech, and language translation. However, deep learning models are often characterized as a “black box”, as these models are composed of many millions of parameters, which are extremely difficult for specialists to interpret. Complex “black box” models can easily fool users who are unable to inspect the algorithm's decision, which can lead to dangerous or catastrophic events. Therefore, auditable, explainable AI approaches are crucial for developing safe systems, complying with regulations, and gaining acceptance for this new technology within society. This thesis tries to answer the following research question: is it possible to provide an approach whose performance is comparable to that of deep learning while at the same time having a transparent (non-black-box) structure? To this end, it introduces a novel framework of explainable-by-design deep learning architectures that offers transparency and high accuracy, helping humans understand why a particular machine decision has been reached and whether or not it is trustworthy. Moreover, the proposed prototype-based framework has a flexible structure that allows the unsupervised detection of new classes and situations. The approaches proposed in this thesis have been applied to multiple use cases, including image classification, fairness, deep recursive learning interpretation, and novelty detection.
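    To make the prototype-based idea concrete, the following is a minimal sketch of the general mechanism (an editorial assumption, not the thesis's actual architecture): decisions are anchored to class prototypes, so every prediction can be explained by pointing at the prototype it is closest to.

        # Minimal sketch of a prototype-based, explainable-by-design classifier
        # (illustrative only; the framework proposed in the thesis is far richer).
        import numpy as np

        class NearestPrototypeClassifier:
            def fit(self, X, y):
                self.classes_ = np.unique(y)
                # One prototype per class: the mean of that class in feature space.
                self.prototypes_ = np.array([X[y == c].mean(axis=0)
                                             for c in self.classes_])
                return self

            def predict(self, X):
                # Distance of every sample to every prototype; pick the nearest one.
                d = np.linalg.norm(X[:, None, :] - self.prototypes_[None, :, :], axis=2)
                return self.classes_[d.argmin(axis=1)]

            def explain(self, x):
                # The "explanation" is the prototype the decision is anchored to.
                d = np.linalg.norm(self.prototypes_ - x, axis=1)
                c = int(d.argmin())
                return (f"predicted class {self.classes_[c]}: "
                        f"nearest prototype at distance {d[c]:.2f}")

    A sample that lies far from every prototype can be flagged as a potential new class, which is the intuition behind the unsupervised novelty detection mentioned above.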

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability. Funding: Basque Government, Consolidated Research Group MATHMODE (Department of Education of the Basque Government, IT1294-19); Spanish Government and European Commission (TIN2017-89517-P); BBVA Foundation through its Ayudas Fundación BBVA a Equipos de Investigación Científica 2018 call (DeepSCOP project); European Commission (82561).

    On the Design, Implementation and Application of Novel Multi-disciplinary Techniques for explaining Artificial Intelligence Models

    Artificial Intelligence is a non-stopping field of research that has experienced incredible growth in the last decades. Some of the reasons for this apparently exponential growth are the improvements in computational power, sensing capabilities and data storage, which result in a huge increase in data availability. However, this growth has mostly been led by a performance-based mindset that has pushed models towards a black-box nature. The performance prowess of these methods, along with the rising demand for their implementation, has triggered the birth of a new research field: Explainable Artificial Intelligence (XAI). As any new field, XAI falls short in cohesiveness. Added to the consequences of dealing with concepts that are not from the natural sciences (explanations), the tumultuous scene is palpable. This thesis contributes to the field from two different perspectives: a theoretical one and a practical one. The former is based on a profound literature review that resulted in two main contributions: 1) the proposition of a new definition for Explainable Artificial Intelligence and 2) the creation of a new taxonomy for the field. The latter is composed of two XAI frameworks that address some of the gaps found in the field, namely: 1) an XAI framework for Echo State Networks and 2) an XAI framework for the generation of counterfactuals. The first accounts for the gap concerning randomized neural networks, since they have never been considered within the field of XAI. Unfortunately, choosing the right parameters to initialize these reservoirs falls a bit on the side of luck and the past experience of the scientist, and less on that of sound reasoning. The current approach for assessing whether a reservoir is suited for a particular task is to observe if it yields accurate results, either by handcrafting the values of the reservoir parameters or by automating their configuration via an external optimizer. All in all, this poses tough questions to address when developing an ESN for a certain application, since knowing whether the created structure is optimal for the problem at hand is not possible without actually training it. However, some of the main concerns for not pursuing their application are related to the mistrust generated by their “black-box” nature. The second framework presents a new paradigm for counterfactual generation. Among the alternatives to reach a universal understanding of model explanations, counterfactual examples are arguably the one that best conforms to human understanding principles when faced with unknown phenomena. Indeed, discerning what would happen should the initial conditions differ in a plausible fashion is a mechanism often adopted by humans when attempting to understand any unknown. The search for counterfactuals proposed in this thesis is governed by three different objectives. As opposed to the classical approach in which counterfactuals are just generated following a minimum-distance approach of some type, this framework allows for an in-depth analysis of a target model by means of counterfactuals responding to: Adversarial Power, Plausibility and Change Intensity.
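    As a point of reference for the counterfactual discussion above, the following is a minimal sketch of the classical minimum-distance style of counterfactual search that the thesis departs from (an editorial illustration under assumed inputs: a binary classifier with a scikit-learn-style predict method, an instance x, and a matrix of reference samples X_ref used for a crude plausibility term); the thesis's own framework, which optimizes Adversarial Power, Plausibility and Change Intensity, is considerably richer.

        # Minimal counterfactual-search sketch (editorial illustration, not the
        # thesis's method): random search for a nearby input that flips a binary
        # classifier's decision, trading change intensity against a crude
        # plausibility term.
        import numpy as np

        def find_counterfactual(model, x, X_ref, n_trials=5000, scale=0.5, seed=0):
            rng = np.random.default_rng(seed)
            original = model.predict(x.reshape(1, -1))[0]
            best, best_cost = None, np.inf
            for _ in range(n_trials):
                candidate = x + rng.normal(0.0, scale, size=x.shape)
                if model.predict(candidate.reshape(1, -1))[0] == original:
                    continue  # decision did not flip; not a counterfactual
                change = np.linalg.norm(candidate - x)                          # change intensity
                plausibility = np.linalg.norm(X_ref - candidate, axis=1).min()  # distance to real data
                cost = change + plausibility
                if cost < best_cost:
                    best, best_cost = candidate, cost
            return best  # None if no decision-flipping candidate was found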