20 research outputs found

    Following wrong suggestions: self-blame in human and computer scenarios

    This paper investigates the specific experience of following a suggestion from an intelligent machine that leads to a wrong outcome, and the emotions people feel as a result. Adopting a task typically employed in studies on decision-making, we presented participants with two scenarios in which they follow a suggestion, offered by either an expert human or an intelligent machine, and obtain a wrong outcome. We found a significant decrease in perceived responsibility for the wrong choice when the machine offers the suggestion. To date, few studies have investigated the negative emotions that can arise from a bad outcome after following the suggestion of an intelligent system, or how to cope with the resulting distrust, which could affect long-term use of, and cooperation with, the system. This preliminary research has implications for the study of cooperation and decision making with intelligent machines. Further research may address how to offer suggestions so as to better cope with users' self-blame.
    Comment: To be published in the Proceedings of the IFIP Conference on Human-Computer Interaction (INTERACT) 201

    Do I trust a machine? Differences in user trust based on system performance

    Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs to users' decision-making processes. In this chapter, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system as a function of system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly. The results also show notable differences between two groups of users and indicate a possible threshold in the acceptance of the system. Designers of practical systems can leverage this finding to sustain the desired level of user trust.

    Digital Trust for AI-Based Human-Machine Interfaces

    For decades, people have been fascinated by the idea of creating machines with human consciousness. Artificial intelligence (AI) strikes exactly this nerve, yet for that very reason meets with distrust. On the one hand, the growing development of innovative AI-based products makes people's everyday lives easier and is revolutionizing the world of work. On the other hand, media reports about data misuse, eavesdropping, and AI as a threat to humanity generate distrust, which can result in a dismissive attitude toward technological progress. The aim of this article is to contribute to building justified digital trust. To this end, knowledge about human sources and patterns of trust is brought together, and a trust-journey approach to building trust is developed for marketing. Implications of a specific AI trust journey are illustrated using the example of voice user interfaces (VUI) such as Amazon Alexa, Alibaba's Tmall Genie, Alice, and Google Home. Finally, principles of digital trust building are recommended in order to strengthen brands and innovative digital products from a trust perspective and to highlight the essential role of marketing in human-centered digital product development.

    In AI we trust? Perceptions about automated decision-making by artificial intelligence

    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing on social science theories and on the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are, by and large, concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes shaped by individual characteristics. Interestingly, for specific decisions, automated decisions by AI were often evaluated on a par with, or even better than, those of human experts. Theoretical and societal implications of these findings are discussed.