2,200 research outputs found

    Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research

    The final search query for the Systematic Literature Review (SLR) was conducted on 15 July 2022. Initially, we extracted 1,707 journal and conference articles from the Scopus and Web of Science databases. Inclusion and exclusion criteria were then applied, and 58 articles were selected for the SLR. The findings show four dimensions that shape an AI explanation: format (the explanation's representation format), completeness (the explanation should contain all required information, including supplementary information), accuracy (information regarding the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, beyond the automatically presented explanation, users can request additional information if needed. We also found five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. In addition, we investigated current knowledge from the selected articles to problematize future research agendas as research questions along with possible research paths. Consequently, a comprehensive framework of XAI and its possible effects on user behavior has been developed.
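    As a reading aid only, here is a minimal Python sketch of how the four explanation dimensions and five effect dimensions reported in this SLR could be captured as a data structure; the class and field names are illustrative assumptions, not artefacts of the paper.

```python
from dataclasses import dataclass, field
from enum import Enum


class XAIEffect(Enum):
    # The five effect dimensions identified in the review
    TRUST = "trust"
    TRANSPARENCY = "transparency"
    UNDERSTANDABILITY = "understandability"
    USABILITY = "usability"
    FAIRNESS = "fairness"


@dataclass
class AIExplanation:
    # The four dimensions that shape an AI explanation (field names are hypothetical)
    format: str                 # representation format, e.g. "text" or "visual"
    complete: bool              # contains all required and supplementary information
    accuracy_info: str          # information about the accuracy of the explanation
    last_updated: str           # currency: the explanation reflects recent information
    supplementary: list[str] = field(default_factory=list)

    def request_supplementary(self, detail: str) -> None:
        """Users can ask for information beyond the automatically presented explanation."""
        self.supplementary.append(detail)
```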

    Human-centric explanation facilities


    Scenario-based requirements elicitation for user-centric explainable AI

    Explainable Artificial Intelligence (XAI) develops technical explanation methods and enables interpretability for human stakeholders regarding why Artificial Intelligence (AI) and machine learning (ML) models provide certain predictions. However, these stakeholders' trust in AI models and explanations remains an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research states that it is essential to understand stakeholders' requirements in order to provide explanations tailored to their needs and to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of a stakeholder early, before the adoption of an information system, by identifying their real problems and practices and generating user requirements. Nevertheless, the adoption of scenarios in XAI is still rarely explored, especially in the domain of fraud detection, to support experts who are about to work with AI models. We demonstrate the use of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how these scenarios can be adopted to identify user or expert requirements for appropriate explanations in their daily operations and for making decisions when reviewing fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.

    Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI

    While research on explainable AI (XAI) is booming and explanation techniques have proven promising in many application domains, standardised human-centred evaluation procedures are still missing. In addition, current evaluation procedures do not assess XAI methods holistically, in the sense that they do not treat explanations' effects on humans as a complex user experience. To tackle this challenge, we propose to adapt the User-Centric Evaluation Framework used in recommender systems: we integrate explanation aspects, summarise explanation properties, indicate relations between them, and categorise metrics that measure these properties. With this comprehensive evaluation framework, we hope to contribute to the human-centred standardisation of XAI evaluation. Comment: This preprint has not undergone any post-submission improvements or corrections. This work was an accepted contribution at the XAI World Conference 202
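    A minimal sketch, assuming a simple property-to-metric mapping in the spirit of the adapted framework; the property names, relations, and metrics below are illustrative assumptions rather than the paper's actual catalogue.

```python
from dataclasses import dataclass


@dataclass
class ExplanationProperty:
    name: str              # an explanation property, e.g. perceived transparency
    influences: list[str]  # properties it is hypothesised to relate to
    metrics: list[str]     # metrics categorised as measuring this property


# Hypothetical entries for illustration only
framework = [
    ExplanationProperty(
        name="perceived transparency",
        influences=["trust"],
        metrics=["post-task questionnaire item", "think-aloud coding"],
    ),
    ExplanationProperty(
        name="trust",
        influences=["intention to use"],
        metrics=["trust scale", "observed reliance on system output"],
    ),
]
```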

    NOTION OF EXPLAINABLE ARTIFICIAL INTELLIGENCE - AN EMPIRICAL INVESTIGATION FROM A USER'S PERSPECTIVE

    The growing attention on artificial intelligence-based decision-making has led to research interest in the explainability and interpretability of machine learning models, algorithmic transparency, and comprehensibility. This renewed attention on XAI advocates the need to investigate end user-centric explainable AI, due to the universal adoption of AI-based systems at the root level. Therefore, this paper investigates user-centric explainable AI in a recommendation systems context. We conducted focus group interviews to collect qualitative data on the recommendation system. We asked participants about end users' comprehension of a recommended item, its probable explanation, and their opinion on making a recommendation explainable. Our findings reveal that end users want a non-technical, tailor-made explanation with on-demand supplementary information. Moreover, we observed that users would like explanations about personal data usage, detailed user feedback, and authentic, reliable explanations. Finally, we propose a synthesized framework that includes end users in the XAI development process.

    On the Multiple Roles of Ontologies in Explainable AI

    This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in Explainable AI and in the development of human-centric explainable systems and intelligible explanations. We consider three main perspectives in which ontologies can contribute significantly, namely reference modelling, common-sense reasoning, and knowledge refinement and complexity management. We give an overview of some of the existing approaches in the literature and position them according to these three proposed perspectives. The paper concludes by discussing what challenges still need to be addressed to enable ontology-based approaches to explanation and to evaluate their human-understandability and effectiveness.

    An HCAI Methodological Framework: Putting It Into Action to Enable Human-Centered AI

    Human-centered AI (HCAI), as a design philosophy, advocates prioritizing humans in designing, developing, and deploying intelligent systems, aiming to maximize the benefits of AI technology to humans and avoid its potential adverse effects. While HCAI has gained momentum, the lack of methodological guidance for its implementation makes its adoption challenging. After assessing the need for a methodological framework for HCAI, this paper first proposes a comprehensive and interdisciplinary HCAI methodological framework integrating seven components: design goals, design principles, implementation approaches, design paradigms, interdisciplinary teams, methods, and processes. The implications of the framework are also discussed. This paper also presents a "three-layer" approach to facilitate the implementation of the framework. We believe the proposed framework is systematic and executable, and that it can overcome the weaknesses of current frameworks and the challenges currently faced in implementing HCAI. Thus, the framework can help put HCAI into action to develop, transfer, and implement it in practice, eventually enabling the design, development, and deployment of HCAI-based intelligent systems.
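    A minimal sketch of how the seven named components of the proposed framework could be tracked as a project checklist; everything beyond the component names themselves is an illustrative assumption.

```python
from dataclasses import dataclass, field


@dataclass
class HCAIMethodology:
    # The seven components integrated by the framework
    design_goals: list[str] = field(default_factory=list)
    design_principles: list[str] = field(default_factory=list)
    implementation_approaches: list[str] = field(default_factory=list)
    design_paradigms: list[str] = field(default_factory=list)
    interdisciplinary_teams: list[str] = field(default_factory=list)
    methods: list[str] = field(default_factory=list)
    processes: list[str] = field(default_factory=list)

    def missing_components(self) -> list[str]:
        """Components a project has not yet addressed."""
        return [name for name, items in vars(self).items() if not items]
```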

    A User-Centric Approach to Explainable AI in Corporate Performance Management

    Machine learning (ML) applications have surged in popularity in industry; however, the lack of transparency of ML models often impedes the usability of ML in practice. Especially in the corporate performance management (CPM) domain, transparency is crucial to support corporate decision-making processes. To address this challenge, approaches from explainable artificial intelligence (XAI) provide solutions for reducing the opacity of ML-based systems. This design science study builds on prior user experience (UX) and user interface (UI) focused XAI research to develop a user-centric approach to XAI for the CPM field. As key results, we identify design principles in three decomposition layers, including ten explainability UI elements that we developed and evaluated through seven interviews. These results complement prior research by focusing it on the CPM domain and provide practitioners with concrete guidelines to foster ML adoption in the CPM field.