
    To whom to explain and what? : Systematic literature review on empirical studies on Explainable Artificial Intelligence (XAI)

    Expectations towards artificial intelligence (AI) have risen continuously as machine learning models have evolved. However, the models’ decisions are often not intuitively understandable. For this reason, the field of Explainable AI (XAI) has emerged, which develops techniques to help users understand AI better. As AI’s use spreads more broadly in society, it becomes like a co-worker that people need to understand. Consequently, human-AI interaction is of broad and current research interest. This thesis outlines the themes of the current empirical XAI research literature from the human-computer interaction (HCI) perspective. The study’s method is an explorative, systematic literature review carried out following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. In total, 29 articles that reported an empirical study of XAI from the HCI perspective were included in the review. The material was collected through database searches and snowball sampling. The articles were analyzed based on their descriptive statistics, stakeholder groups, research questions, and theoretical approaches. This study aims to determine what factors made users consider XAI transparent, explainable, or trustworthy, and for whom the XAI research was intended. Based on the analysis, three stakeholder groups at whom the current XAI literature is aimed emerged: end-users, domain experts, and developers. The findings show that domain experts’ needs towards XAI vary greatly between domains, whereas developers need better tools to create XAI systems. End-users, for their part, considered case-based explanations unfair and wanted explanations that “speak their language”. The results also indicate that the effect of current XAI solutions on users’ trust towards AI systems is relatively small or even non-existent. Both the studies’ direct theoretical contributions and the number of theoretical lenses used were found to be relatively limited. This thesis’s main contribution is a synthesis of the extant empirical XAI literature from the HCI perspective, a body of work that previous studies have rarely brought together. Building on this thesis, researchers can further investigate research avenues such as explanation quality methodologies, algorithm auditing methods, users’ mental models, and prior conceptions about AI.

    Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research

    The final search query for the Systematic Literature Review (SLR) was conducted on 15th July 2022. Initially, we extracted 1707 journal and conference articles from the Scopus and Web of Science databases. Inclusion and exclusion criteria were then applied, and 58 articles were selected for the SLR. The findings show four dimensions that shape the AI explanation: format (the representation format of the explanation), completeness (the explanation should contain all required information, including supplementary information), accuracy (information regarding the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, along with the automatically presented explanation, users can request additional information if needed. We also found five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. In addition, we drew on the selected articles to problematize future research agendas as research questions, along with possible research paths. Consequently, a comprehensive framework of XAI and its possible effects on user behavior has been developed.
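
    The four explanation dimensions and the on-demand detail described above are conceptual, not a concrete schema; the sketch below is a minimal, hypothetical way to model them as a data structure (all class and field names are illustrative, not taken from the reviewed framework).

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Explanation:
    """Hypothetical container for the four explanation dimensions above."""
    format: str                         # representation format, e.g. "text", "chart", "rule list"
    content: str                        # completeness: all required and supplementary information
    accuracy_note: str                  # accuracy: how reliable the explanation itself is
    generated_on: date                  # currency: how recent the underlying information is
    extra_detail: Optional[str] = None  # supplied only when the user asks for more

    def request_more(self, detail: str) -> "Explanation":
        """Model the user asking for additional information on demand."""
        return Explanation(self.format, self.content, self.accuracy_note,
                           self.generated_on, extra_detail=detail)
```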

    Notions of explainability and evaluation approaches for explainable artificial intelligence

    Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability and the evaluation approaches for XAI methods. The structure of this hierarchy builds on an exhaustive analysis of existing taxonomies and peer-reviewed scientific material. Findings suggest that scholars have identified numerous notions and requirements that an explanation should meet in order to be easily understandable by end-users and to provide actionable information that can inform decision making. They have also suggested various approaches to assess the degree to which machine-generated explanations meet these demands. Overall, these approaches can be clustered into human-centred evaluations and evaluations with more objective metrics. However, despite the vast body of knowledge developed around the concept of explainability, there is no general consensus among scholars on how an explanation should be defined, or how its validity and reliability should be assessed. Finally, this review concludes by critically discussing these gaps and limitations and defines future research directions, with explainability as a starting component of any artificially intelligent system.
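
    As one concrete instance of the “more objective metrics” mentioned above, explanation fidelity measures how often a simple surrogate explanation reproduces the black-box model’s predictions. The snippet below is a minimal sketch of that idea using scikit-learn; the models and synthetic data are placeholders, not drawn from the reviewed studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for any tabular prediction task.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)

# Opaque model, plus an interpretable surrogate trained to mimic its predictions.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))

# Fidelity: the fraction of inputs on which the surrogate agrees with the black box.
fidelity = float((surrogate.predict(X) == black_box.predict(X)).mean())
print(f"Surrogate fidelity: {fidelity:.2f}")
```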

    Leveraging Rationales to Improve Human Task Performance

    Machine learning (ML) systems across many application areas are increasingly demonstrating performance that is beyond that of humans. In response to the proliferation of such models, the field of Explainable AI (XAI) has sought to develop techniques that enhance the transparency and interpretability of machine learning methods. In this work, we consider a question not previously explored within the XAI and ML communities: Given a computational system whose performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human? We study this question in the context of the game of Chess, for which computational game engines that surpass the performance of the average player are widely available. We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods, which we evaluate with a multi-day user study against two baselines. The results show that our approach produces rationales that lead to statistically significant improvement in human task performance, demonstrating that rationales automatically generated from an AI's internal task model can be used not only to explain what the system is doing, but also to instruct the user and ultimately improve their task performance.
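
    The paper defines its own Rationale-Generating Algorithm; the sketch below only illustrates the general shape such a technique might take for a utility-based method: contrast the utility of the chosen action with the best rejected alternative and phrase the difference as a rationale. The function, move names, and utility values are hypothetical.

```python
from typing import Dict, Hashable

def generate_rationale(utilities: Dict[Hashable, float], chosen: Hashable) -> str:
    """Hypothetical sketch: justify a utility-based choice by contrasting it
    with the strongest rejected alternative."""
    alternatives = {a: u for a, u in utilities.items() if a != chosen}
    runner_up, runner_up_utility = max(alternatives.items(), key=lambda kv: kv[1])
    margin = utilities[chosen] - runner_up_utility
    return (f"{chosen} was selected because its estimated utility "
            f"({utilities[chosen]:.2f}) exceeds the next best option "
            f"{runner_up} ({runner_up_utility:.2f}) by {margin:.2f}.")

# Made-up utilities a chess engine might assign to candidate moves.
print(generate_rationale({"Nf3": 0.41, "e4": 0.35, "d4": 0.38}, chosen="Nf3"))
```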

    A Systematic Review of User Mental Models on Applications Sustainability

    In Human-Computer Interaction (HCI), a user’s mental model affects application sustainability. This study's goal is to find and assess previous work on user mental models and how they relate to the sustainability of applications. Thus, a systematic review process was used to identify 641 initial articles, which were then screened based on inclusion and exclusion criteria. The review shows that a user's mental model has an impact on the creation of applications not only within HCI, but also in other domains such as Enterprise Innovation Ecology, Explainable Artificial Intelligence (XAI), Information Systems (IS), and various others. The examined articles discussed company managers' difficulties in prioritising innovation and ecology, and the need to understand users' mental models in order to build and evaluate intelligent systems. The reviewed articles mostly used experiments, questionnaires, observations, and interviews, applying qualitative, quantitative, or mixed-method methodologies. This study highlights the importance of user mental models in application sustainability: by understanding them, developers can create applications that suit user demands, fit with cognitive psychology principles, and improve human-AI collaboration. The study also emphasises the importance of user mental models in the long-term viability and sustainability of applications, and provides significant insights for application developers and researchers in building more user-centric and sustainable applications.

    Explanatory artificial intelligence (YAI): human-centered explanations of explainable AI and complex data

    In this paper we introduce a new class of software tools aimed at delivering successful explanations of complex processes on top of basic Explainable AI (XAI) software systems. These tools, which we collectively call Explanatory AI (YAI) systems, enhance the quality of the basic output of an XAI by adopting a user-centred approach to explanation that can cater to the individual needs of the explainees, with measurable improvements in usability. Our approach is based on Achinstein’s theory of explanations, where explaining is an illocutionary (i.e., broad yet pertinent and deliberate) act of pragmatically answering a question. Accordingly, user-centrality enters the equation by considering that the overall amount of information generated by answering all questions can rapidly become overwhelming and that individual users may perceive the need to explore just a few of them. In this paper, we give the theoretical foundations of YAI, formally defining a user-centred explanatory tool and the space of all possible explanations, or explanatory space, generated by it. To this end, we frame the explanatory space as a hypergraph of knowledge and identify a set of heuristics and properties that can help approximate its decomposition into a tree-like representation for efficient and user-centred explanation retrieval. Finally, we provide some old and new empirical results to support our theory, showing that explanations are more than textual or visual presentations of only the information provided by an XAI.
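
    To make the idea of an explanatory space and its tree-like decomposition more concrete, the sketch below represents question-answer relations as a plain graph and extracts a breadth-first tree rooted at the user's initial question. It is purely illustrative: the questions, edges, and the simple first-path heuristic are assumptions, not the heuristics proposed in the paper.

```python
from collections import deque

# Toy "explanatory space": each answered question raises follow-up questions.
explanatory_space = {
    "Why was my loan denied?": ["What is the credit model?", "Which features mattered most?"],
    "What is the credit model?": ["How accurate is it?", "Which features mattered most?"],
    "Which features mattered most?": ["How can I improve my score?"],
    "How accurate is it?": [],
    "How can I improve my score?": [],
}

def bfs_tree(space: dict, root: str) -> dict:
    """Approximate the space with a tree rooted at the user's question,
    keeping only the first path found to each follow-up question."""
    tree, seen, queue = {root: []}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        for follow_up in space.get(node, []):
            if follow_up not in seen:
                seen.add(follow_up)
                tree[node].append(follow_up)
                tree[follow_up] = []
                queue.append(follow_up)
    return tree

print(bfs_tree(explanatory_space, "Why was my loan denied?"))
```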