
    Explanation by automated reasoning using the Isabelle Infrastructure framework

    In this paper, we propose the use of interactive theorem proving for explainable machine learning. After presenting our proposition, we illustrate it on the dedicated application of explaining security attacks using the Isabelle Infrastructure framework and its process of dependability engineering. This formal framework and process provide the logics for specification and modeling. Attacks on the security of the system are explained by specification and proofs in the Isabelle Infrastructure framework. Existing case studies of dependability engineering in Isabelle are used as feasibility studies to illustrate how different aspects of explanations are covered by the Isabelle Infrastructure framework.
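
    As a rough, informal illustration of the idea that an attack is explained by a specification together with a proof of a policy violation, the following Python sketch models a toy infrastructure state, states a security policy as a predicate over states, and reports the transition in an attack trace at which the policy first fails. The names State, policy_holds and explain_attack are hypothetical; the sketch only mimics, in a far weaker setting, what the Isabelle Infrastructure framework expresses with formal specifications and machine-checked proofs.

    # Toy analogue of specification-based attack explanation (not Isabelle itself):
    # a state records who is where and who holds credentials, a policy is a
    # predicate over states, and the "explanation" of an attack is the first
    # transition in a trace that breaks the policy.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        location_of: tuple        # pairs (actor, location)
        credentials: frozenset    # actors holding a valid credential

    def policy_holds(state: State) -> bool:
        """Security policy: only credentialed actors may be in the server room."""
        return all(actor in state.credentials
                   for actor, loc in state.location_of if loc == "server_room")

    def explain_attack(trace):
        """Return the first transition whose target state violates the policy."""
        for before, after in zip(trace, trace[1:]):
            if policy_holds(before) and not policy_holds(after):
                return before, after
        return None

    # Three-step trace in which the uncredentialed actor 'eve' enters the server room.
    trace = [
        State((("alice", "office"), ("eve", "lobby")), frozenset({"alice"})),
        State((("alice", "server_room"), ("eve", "lobby")), frozenset({"alice"})),
        State((("alice", "server_room"), ("eve", "server_room")), frozenset({"alice"})),
    ]

    print("Policy violated at transition:", explain_attack(trace))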

    Notions of explainability and evaluation approaches for explainable artificial intelligence

    Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering all the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability and the evaluation approaches for XAI methods. The structure of this hierarchy builds on top of an exhaustive analysis of existing taxonomies and peer-reviewed scientific material. Findings suggest that scholars have identified numerous notions and requirements that an explanation should meet in order to be easily understandable by end-users and to provide actionable information that can inform decision making. They have also suggested various approaches to assess to what degree machine-generated explanations meet these demands. Overall, these approaches can be clustered into human-centred evaluations and evaluations with more objective metrics. However, despite the vast body of knowledge developed around the concept of explainability, there is no general consensus among scholars on how an explanation should be defined, and how its validity and reliability should be assessed. Finally, this review concludes by critically discussing these gaps and limitations, and it defines future research directions with explainability as the starting component of any artificially intelligent system.
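
    To make the contrast between human-centred evaluations and more objective metrics concrete, the following sketch implements one common style of objective check, a deletion-style fidelity test: features ranked as important by an explanation are removed first, and a faithful explanation should make the model's score drop quickly. The linear scorer and the helper deletion_fidelity are illustrative assumptions, not methods prescribed by the review.

    # Illustrative deletion-style fidelity metric for a feature-attribution explanation.
    # Assumption: higher absolute attribution = more important; we zero out features
    # in that order and track how fast the model's score decreases.

    import numpy as np

    def deletion_fidelity(predict_fn, x, attributions, baseline=0.0):
        """Return model scores as the most-attributed features are removed one by one."""
        order = np.argsort(-np.abs(attributions))   # most important first
        x_step = x.copy()
        scores = [predict_fn(x_step)]
        for idx in order:
            x_step[idx] = baseline                  # "delete" the feature
            scores.append(predict_fn(x_step))
        return np.array(scores)                     # steep early drop ~ faithful explanation

    # Tiny example: a linear scorer, with its own weighted inputs used as the explanation.
    weights = np.array([3.0, -0.5, 1.5, 0.1])
    x = np.array([1.0, 2.0, 1.0, 4.0])
    predict = lambda v: float(weights @ v)

    curve = deletion_fidelity(predict, x, attributions=weights * x)
    print(curve)   # the score shrinks fastest when the truly influential features go first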

    Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions

    Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps on growing, the Internet of Things (IoT) market needs to be trustworthy for the end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we address XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future works. Moreover, we present the cutting-edge development in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases. Comment: 29 pages, 7 figures, 2 tables. IEEE Open Journal of the Communications Society (2022).
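
    As one concrete, hypothetical example of the kind of XAI service such a survey covers, the sketch below applies a model-agnostic explanation (scikit-learn's permutation importance) to a synthetic classifier standing in for an IoT intrusion detector; the sensor feature names and the overall setup are assumptions for illustration only.

    # Sketch: a model-agnostic explanation (permutation importance) on a synthetic
    # "IoT sensor" classifier. The feature names and the intrusion-detection framing
    # are assumptions; the survey itself does not prescribe this setup.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    feature_names = ["packet_rate", "payload_size", "cpu_temp", "battery_level"]
    X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                               n_redundant=0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure the drop in accuracy: the larger the drop,
    # the more the model relied on it, which is the explanation shown to the operator.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda p: -p[1]):
        print(f"{name:>14}: {score:.3f}")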

    Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability

    Artificial Intelligence (AI) is rapidly integrating into various aspects of our daily lives, influencing decision-making processes in areas such as targeted advertising and matchmaking algorithms. As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial. Functional transparency is a fundamental aspect of algorithmic decision-making systems, allowing stakeholders to comprehend the inner workings of these systems and enabling them to evaluate their fairness and accuracy. However, achieving functional transparency poses significant challenges that need to be addressed. In this paper, we propose a design for user-centered compliant-by-design transparency in transparent systems. We emphasize that the development of transparent and explainable AI systems is a complex and multidisciplinary endeavor, necessitating collaboration among researchers from diverse fields such as computer science, artificial intelligence, ethics, law, and social science. By providing a comprehensive understanding of the challenges associated with transparency in AI systems and proposing a user-centered design framework, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values. Comment: Hosain, M. T., Anik, M. H., Rafi, S., Tabassum, R., Insia, K. & Sıddıky, M. M. (). Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability. Journal of Metaverse, 3(2), 166-180. DOI: 10.57019/jmv.130668

    Software Engineering meets Artificial Intelligence

    With the increasing use of AI in classic software systems, two worlds that were previously rather alien to each other are coming closer and closer together, namely the established discipline of software engineering and the world of AI. On the one hand, there are the data scientists, who try to extract as many insights as possible from the data using various tools, a lot of freedom, and creativity. On the other hand, there are the software engineers, who have learned over years and decades to deliver the highest quality software possible and to manage release statuses. When developing software systems that include AI components, these worlds collide. This article shows which aspects come into play here, which problems can occur, and what solutions to these problems might look like. Beyond that, software engineering itself can benefit from the use of AI methods. Thus, we will also look at the emerging research area of AI for software engineering.

    Rethinking AI Explainability and Plausibility

    Setting proper evaluation objectives for explainable artificial intelligence (XAI) is vital for making XAI algorithms follow human communication norms, support human reasoning processes, and fulfill human needs for AI explanations. In this article, we examine explanation plausibility, which is the most pervasive human-grounded concept in XAI evaluation. Plausibility measures how reasonable the machine explanation is compared to the human explanation. Plausibility has conventionally been formulated as an important evaluation objective for AI explainability tasks. We argue against this idea and show how optimizing and evaluating XAI for plausibility is sometimes harmful, and always ineffective, for achieving model understandability, transparency, and trustworthiness. Specifically, evaluating XAI algorithms for plausibility regularizes the machine explanation to express exactly the same content as the human explanation, which deviates from the fundamental motivation for humans to explain: expressing similar or alternative reasoning trajectories while conforming to understandable forms or language. Optimizing XAI for plausibility regardless of the correctness of the model's decision also jeopardizes model trustworthiness, as doing so breaks an important assumption in human-human explanation, namely that plausible explanations typically imply correct decisions, and violating this assumption eventually leads to either undertrust or overtrust of AI models. Instead of being the end goal in XAI evaluation, plausibility can serve as an intermediate computational proxy for the human process of interpreting explanations to optimize the utility of XAI. We further highlight the importance of explainability-specific evaluation objectives by differentiating the AI explanation task from the object localization task.
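
    For a concrete reading of plausibility as agreement with a human explanation, the sketch below computes a simple intersection-over-union score between the top-ranked features of a machine attribution and a human-annotated relevance mask. This is one illustrative formulation under assumed inputs (the helper plausibility_iou and the keep_top threshold), not the specific metric the article analyses; note that a high score says nothing about faithfulness to the model, which is exactly the article's point.

    # Illustrative plausibility score: IoU between the features a machine explanation
    # highlights (after thresholding) and the features a human marked as relevant.
    # A high score only means the explanation matches human expectations, not that it
    # is faithful to the model.

    import numpy as np

    def plausibility_iou(attributions, human_mask, keep_top=0.2):
        """IoU between the top keep_top fraction of attributed features and the human mask."""
        k = max(1, int(keep_top * attributions.size))
        top_idx = np.argsort(-np.abs(attributions))[:k]
        machine_mask = np.zeros_like(human_mask, dtype=bool)
        machine_mask[top_idx] = True
        human_mask = human_mask.astype(bool)
        union = np.logical_or(machine_mask, human_mask).sum()
        return np.logical_and(machine_mask, human_mask).sum() / union if union else 0.0

    attributions = np.array([0.9, 0.1, -0.7, 0.05, 0.02])   # machine explanation
    human_mask   = np.array([1,   0,    1,    0,    0])     # human-annotated relevance
    print(plausibility_iou(attributions, human_mask, keep_top=0.4))   # -> 1.0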