    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already concerned with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. For this, an understanding is needed of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the system's decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it is examined how the content, type, and interface of explanations of DNN (black box) and rule-based systems (white box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work regarding the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented and, based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users. For this purpose, two rule-based systems (i.e., white-box) and four systems based on DNN (i.e., black-box) are used. These are applied for three purposes: cooperation and collaboration, education, and medical decision support. Six user studies were conducted for this purpose, which differed in the interactivity of the XAI system used. The results show that end-users' trust in and mental models of AI depend strongly on the context of use and on the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies also show that end-users in different application contexts of XAI express a desire for interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics to integrate explanations into everyday AI systems and thus enable the comprehensible handling of AI for all people.

    A user perspective of quality of service in m-commerce

    In an m-commerce setting, the underlying communication system will have to provide a Quality of Service (QoS) in the presence of two competing factors: network bandwidth, and increasing data sizes as the pressure grows to add value to the business-to-consumer (B2C) shopping experience by integrating multimedia applications. In this paper, developments in the area of QoS-dependent multimedia perceptual quality are reviewed and integrated with recent work focusing on QoS for e-commerce. Based on previously identified user perceptual tolerance to varying multimedia QoS, we show that enhancing the m-commerce B2C user experience with multimedia, far from being an idealised scenario, is in fact feasible if perceptual considerations are employed.

    Robot Vulnerability and the Elicitation of User Empathy

    This paper describes a between-subjects Amazon Mechanical Turk study (n = 220) that investigated how a robot’s affective narrative influences its ability to elicit empathy in human observers. We first conducted a pilot study to develop and validate the robot’s affective narratives. Then, in the full study, the robot used one of three different affective narrative strategies (funny, sad, neutral) while becoming less functional at its shopping task over the course of the interaction. As the functionality of the robot degraded, participants were repeatedly asked if they were willing to help the robot. The results showed that conveying a sad narrative significantly influenced the participants’ willingness to help the robot throughout the interaction and determined whether participants felt empathetic toward the robot. Furthermore, greater past experience with robots also increased the participants’ willingness to help the robot. This work suggests that affective narratives can be useful in short-term interactions that benefit from emotional connections between humans and robots.

    The Effect of Intermediate Trust Ratings on Automation Reliance

    As automated systems are increasingly capable of augmenting human decision-makers, appropriate reliance on automation has the potential to increase safety and efficiency in several high-stakes domains. To that end, a solid understanding of how and under what conditions people rely on automation is needed to design decision aids that allow people to rely on them appropriately. Previous studies have used regular trust ratings during human–automation interactions to examine how trust develops and evolves, but such intermediate judgments might affect subsequent reliance decisions. This dissertation addresses that knowledge gap by empirically exploring how intermediate trust ratings affect automation reliance in human–automation interactions. A laboratory experiment was conducted in which 118 participants, supported by automated decision aids, identified UAVs in images, to determine whether trust rating frequency, automation reliability, and participant motivation affected reliance behavior. Findings show that intermediate trust ratings increased automation reliance and retrospective trust ratings but did not affect response time. This dissertation proposes an extended theoretical model that might help explain and predict automation reliance. Additionally, it suggests that intermediate trust ratings might be suitable for calibrating automation reliance but not for research that seeks to measure trust without influencing reliance behavior.

    The Role of Human-Automation Consensus in Multiple Unmanned Vehicle Scheduling

    Objective: This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Background: Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Method: Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Results: Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus about when to accept the automation’s suggested prompts for new plan consideration, as well as with negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. Conclusion: In decentralized unmanned vehicle networks, operators who ignore the automation’s requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. Application: These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems.

    On the Integration of Adaptive and Interactive Robotic Smart Spaces

    Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints: on the one hand, people-centred initiatives focus on improving the user’s acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that are able to pro-actively and autonomously adapt their operations to fit changing requirements and evolving users’ needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and user acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities.

    Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments

    As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. This thesis envisages a future where autonomous robots could be used as home companions to assist and collaborate with their human partners in unstructured environments without the support of any roboticist or expert. To realise such a vision, it is important to identify which factors (e.g. trust, participants’ personalities and background) influence people to accept robots as companions and to trust the robots to look after their well-being. I am particularly interested in the possibility of robots using social behaviours and natural communication as a repair mechanism to positively influence humans’ sense of trust and companionship towards the robots, mainly because trust can change over time due to different factors (e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines for a robot to regain human trust by adopting certain human-like behaviours. It can be expected that domestic robots will exhibit occasional mechanical, programming or functional errors, as occurs with any other consumer electrical device. For example, these might include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigation due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to behave acceptably when exhibiting and recovering from an error situation. In this context, several open questions need to be addressed regarding both individuals’ perceptions of the errors and robots, and the effects of these on people’s trust in robots. As a first step, I investigated how the severity of the consequences and the timing of a robot’s different types of erroneous behaviours during an interaction may have different impacts on users’ attitudes towards a domestic robot. I concluded that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human’s trust in the robot. In particular, people’s trust was strongly affected by robot errors that had severe consequences. This led me to investigate whether people’s awareness of a robot’s functionalities may affect their trust in it. I found that people’s acceptance of and trust in the robot may be affected by their knowledge of the robot’s capabilities and its limitations differently according to the participants’ age and the robot’s embodiment. In order to deploy robots in the wild, strategies for mitigating and regaining people’s trust in robots in case of errors need to be implemented. In the following three studies, I assessed whether a robot with awareness of human social conventions would increase people’s trust in the robot. My findings showed that people almost blindly trusted a social and a non-social robot in scenarios with non-severe error consequences. In contrast, people who interacted with a social robot did not trust its suggestions in a scenario with a higher-risk outcome. Finally, I investigated the effects of robots’ errors on people’s trust in a robot over time. The findings showed that participants’ judgement of a robot is formed during the first stage of their interaction; therefore, people are more inclined to lose trust in a robot if it makes big errors at the beginning of the interaction. The findings from the Human-Robot Interaction experiments presented in this thesis will contribute to an advanced understanding of the trust dynamics between humans and robots for a long-lasting and successful collaboration.