48 research outputs found

    The Grammar of Interactive Explanatory Model Analysis

    The growing need for in-depth analysis of predictive models has led to a series of new methods for explaining their local and global properties. Which of these methods is best? It turns out that this is an ill-posed question. A black-box machine learning model cannot be sufficiently explained by a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, the majority of methods developed for explainable machine learning focus on a single aspect of model behavior. In contrast, we frame explainability as an interactive and sequential analysis of a model. This paper presents how different Explanatory Model Analysis (EMA) methods complement each other and why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe potential human-model dialogues. IEMA is implemented in a human-centered framework that adopts interactivity, customizability, and automation as its main traits. Combined, these methods enhance a responsible approach to predictive modeling. (Comment: 17 pages, 10 figures, 3 tables)
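
    To make the idea of sequential, complementary explanation concrete, the following is a minimal Python sketch, not the authors' implementation: the dataset, model, and choice of methods are assumptions. It juxtaposes a global view (permutation importance), a complementary profile (partial dependence), and a local prediction, in the spirit of an IEMA-style dialogue with a model:

        # A minimal, illustrative sketch of sequential model analysis:
        # start from a global view, then drill down. Dataset, model, and
        # method choices are assumptions, not the authors' framework.
        import numpy as np
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance, partial_dependence

        X, y = load_diabetes(return_X_y=True, as_frame=True)
        model = RandomForestRegressor(random_state=0).fit(X, y)

        # Step 1: a global perspective -- which features matter overall?
        imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
        top = X.columns[np.argsort(imp.importances_mean)[::-1]][:3]
        print("Top features:", list(top))

        # Step 2: a complementary perspective -- how does the top feature act?
        pd_result = partial_dependence(model, X, features=[top[0]])
        print("Partial dependence of", top[0], ":", pd_result["average"][0][:5])

        # Step 3: a local perspective on one instance -- juxtaposing global
        # and local views guards against trusting a single isolated explanation.
        x0 = X.iloc[[0]]
        print("Prediction for instance 0:", model.predict(x0)[0])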

    Logic and Pragmatics in AI Explanation (Chapter)

    This paper reviews logical approaches to, and challenges raised by, explaining AI. We discuss the problem of presenting explanations as accurate computational models that users cannot understand or use. We then introduce pragmatic approaches that treat an explanation as a kind of speech act subject to felicity conditions, including intelligibility, trustworthiness, and usefulness to the users. We argue that Explainable AI (XAI) is more than a matter of accurate and complete computational explanation: it requires pragmatics to address the issues it seeks to solve. At the end of the paper, we draw a historical analogy to usability, a term that was also first understood logically and pragmatically but evolved empirically over time to become richer and more functional.

    Explaining recommendations in an interactive hybrid social recommender

    Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this line of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluate user behavior patterns and subjective feedback in a within-subject study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The results of the post-treatment survey indicate a significant improvement in the perception of explainability, but this improvement comes with a lower degree of perceived controllability.
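
    As a sketch of what user-driven fusion can look like under the hood, the following hypothetical Python snippet combines per-source relevance scores with user-adjustable weights (think sliders in the interface). The sources, items, and weights are illustrative assumptions, not the system evaluated in the study:

        # Illustrative sketch of user-driven fusion in a hybrid social
        # recommender: each source scores candidates, and user-adjustable
        # weights control the fusion. All names and values are assumptions.
        from typing import Dict

        def fuse_scores(source_scores: Dict[str, Dict[str, float]],
                        weights: Dict[str, float]) -> Dict[str, float]:
            """Weighted linear fusion of per-source relevance scores."""
            fused: Dict[str, float] = {}
            total = sum(weights.values()) or 1.0
            for source, scores in source_scores.items():
                w = weights.get(source, 0.0) / total
                for item, s in scores.items():
                    fused[item] = fused.get(item, 0.0) + w * s
            return fused

        # Example: three social relevance sources scoring candidate people.
        scores = {
            "co-authorship":    {"alice": 0.9, "bob": 0.2},
            "topic-similarity": {"alice": 0.4, "bob": 0.8},
            "social-ties":      {"alice": 0.1, "bob": 0.6},
        }
        weights = {"co-authorship": 1.0, "topic-similarity": 0.5, "social-ties": 0.5}
        ranked = sorted(fuse_scores(scores, weights).items(), key=lambda kv: -kv[1])
        print(ranked)  # per-source contributions can double as explanations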

    How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

    Explainability is becoming an important requirement for organizations that make use of automated decision-making, due to regulatory initiatives and a shift in public awareness. Various and significantly different algorithmic methods for providing this explainability have been introduced in the field, but the existing literature in the machine learning community has paid little attention to stakeholders, whose needs are instead studied in the human-computer interaction community. Organizations that want or need to provide this explainability are therefore confronted with selecting an appropriate method for their use case. In this paper, we argue that there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods. We present our ongoing work on creating this methodology to help data scientists provide explainability to stakeholders. In particular, our contributions include documents used to characterize XAI methods and user requirements (shown in the Appendix), on which our methodology builds.
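
    As an illustration of the matching problem such a methodology addresses, the following hypothetical Python sketch characterizes a few well-known XAI methods with simple attributes and ranks them against a stakeholder's requirements. The attributes are invented for illustration and are not the authors' actual characterization documents:

        # Hypothetical sketch: describe XAI methods and stakeholder needs
        # with the same attributes, then rank methods by requirements met.
        METHODS = {
            "LIME":            {"scope": "local",  "output": "feature-attribution"},
            "SHAP":            {"scope": "local",  "output": "feature-attribution"},
            "PDP":             {"scope": "global", "output": "plot"},
            "counterfactuals": {"scope": "local",  "output": "example"},
            "tree surrogate":  {"scope": "global", "output": "rules"},
        }

        def rank_methods(requirements: dict) -> list:
            """Rank methods by how many stakeholder requirements they meet."""
            def score(props: dict) -> int:
                return sum(props.get(k) == v for k, v in requirements.items())
            return sorted(METHODS, key=lambda m: -score(METHODS[m]))

        # Example: a loan applicant wants a local, example-based explanation.
        print(rank_methods({"scope": "local", "output": "example"}))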

    Designing for empowerment

    Technology bears the potential to empower people: to help them tackle challenges they would otherwise give up on or not even try, and to make possible experiences they did not have access to before. One type of such technology, and the application area of this thesis, is health and wellbeing technology (HWT), such as digital health records, physical activity trackers, and digital fitness coach applications. HWTs often claim to empower people to live healthier and happier lives. However, there is reason to challenge and critically reflect on these claims and their underlying assumptions, as more and more researchers are finding that technologies aiming or claiming to be empowering often turn out to be disempowering. This critical reflection is the starting point of this thesis: Can HWTs really empower people in their everyday lives? If so, how should we go about designing them to foster empowerment and avoid disempowerment? To this end, the thesis makes three main contributions. First, it presents a framework of empowering technologies that aims to introduce conceptual and terminological clarity about empowerment in the field of Human-Computer Interaction (HCI). As a literature review conducted for this thesis reveals, understandings of empowerment in HCI diverge substantially, rendering the term an umbrella for diverse research endeavors. The presented framework is informed by the results of the literature review as well as prior work on empowerment in the social sciences, psychology, and philosophy. It aims to help other researchers analyze conceptual differences between their own work and others' and to position their research projects; in the same way, this thesis uses the framework to analyze and reflect on the conducted case studies. Second, the thesis explores how HWT can empower people in a number of studies. The technologies investigated in these studies are divided into three interaction paradigms (derived from Beaudouin-Lafon's interaction paradigms): technologies that follow the computer-as-tool paradigm include patient-controlled electronic health records and physical activity trackers; technologies in the computer-as-partner paradigm include personalized digital fitness coaches; and technologies in the computer-as-intelligent-tool paradigm include transparently designed digital coaching technology. For each of these paradigms, I discuss benefits and shortcomings, as well as recommendations for future work. Third, I explore methods for designing and evaluating empowering technology. To that end, I analyze and discuss methods that were used in the different case studies to inform the design of empowering technologies, such as interviews, observations, personality tests, experience sampling, and the Theory of Planned Behavior. Further, I present the design and evaluation of two tools that aim to help researchers and designers evaluate empowering technologies by eliciting rich, contextualized feedback from users and fostering an empathic relationship between users and designers.
    I hope that my framework, design explorations, and evaluation tools will help research on empowering technologies in HCI develop a more grounded understanding and a clear research agenda, and will inspire the development of a new class of empowering HWTs.

    RIXA - Explaining Artificial Intelligence in Natural Language

    Natural language is the instinctive form of communication humans use among each other. Recently, large language models have improved drastically, making natural language interfaces viable for all kinds of applications. We argue that natural language is a great tool for making explainable artificial intelligence (XAI) accessible to end users. We present our concept and work-in-progress implementation of a new kind of XAI dashboard that uses a natural language chat. We specify five design goals for the dashboard and show the current state of our implementation. The natural language chat is the main form of interaction for the new dashboard; through it, the user should be able to control all of its important aspects. We also define the success metrics we want to use to evaluate our work. Most importantly, we want to conduct user studies, as we deem them the best evaluation method for end-user-centered applications.
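
    As a toy illustration of the core interaction loop (not RIXA's actual implementation, which the abstract leaves unspecified), the following Python sketch routes chat utterances to hypothetical explanation operations by keyword intent; a real system would presumably use an LLM for this step:

        # Toy sketch: map a user's chat message to an XAI operation on the
        # dashboard. The dispatcher and operations are illustrative assumptions.
        import re

        def explain_importance() -> str:
            return "Showing global feature importance..."

        def explain_instance(instance_id: int) -> str:
            return f"Showing a local explanation for instance {instance_id}..."

        def dispatch(utterance: str) -> str:
            """Route a chat message to an explanation via keyword intents."""
            text = utterance.lower()
            m = re.search(r"why .*?(\d+)", text)
            if m:  # e.g., "why was case 17 rejected?"
                return explain_instance(int(m.group(1)))
            if any(k in text for k in ("important", "matters", "influence")):
                return explain_importance()
            return "I can show feature importance or explain a single case."

        print(dispatch("Which features are most important?"))
        print(dispatch("Why was case 17 rejected?"))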

    Evaluating Visual Explanations for Similarity-Based Recommendations: User Perception and Performance

    Recommender systems help users reduce information overload. In recent years, enhancing explainability in recommender systems has drawn more and more attention in the field of Human-Computer Interaction (HCI). However, it is not clear whether a user-preferred explanation interface can maintain the same level of performance while users are exploring or comparing recommendations. In this paper, we introduce a participatory process for designing explanation interfaces with multiple explanatory goals for three similarity-based recommendation models. We investigate the relationship between user perception and performance in two user studies. In the first study (N=15), we conducted card sorting and semi-structured interviews to identify user-preferred interfaces. In the second study (N=18), we carried out a performance-focused evaluation of six explanation interfaces. The results suggest that the user-preferred interface may not guarantee the same level of performance.
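
    To illustrate the kind of similarity-based recommendation and explanation such interfaces present, here is a minimal Python sketch; the items, features, and explanation format are assumptions, not the interfaces evaluated in the paper:

        # Illustrative sketch: recommend the most similar item and explain
        # the recommendation by the shared attributes that drive similarity.
        import numpy as np

        FEATURES = ["machine learning", "HCI", "visualization", "privacy"]
        ITEMS = {
            "paper A": np.array([1.0, 0.0, 1.0, 0.0]),
            "paper B": np.array([0.0, 1.0, 1.0, 0.0]),
            "paper C": np.array([0.0, 0.0, 0.0, 1.0]),
        }

        def cosine(u: np.ndarray, v: np.ndarray) -> float:
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        def recommend_with_explanation(profile: np.ndarray) -> str:
            best = max(ITEMS, key=lambda k: cosine(profile, ITEMS[k]))
            shared = [f for f, p, q in zip(FEATURES, profile, ITEMS[best]) if p and q]
            return f"Recommended {best} because you share: {', '.join(shared)}"

        user = np.array([1.0, 0.0, 1.0, 0.0])  # interests: ML and visualization
        print(recommend_with_explanation(user))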