
    Would You Help Me Voluntarily for the Next Two Years? Evaluating Psychological Persuasion Techniques in Human-Robot Interaction. First results of an empirical investigation of the door-in-the-face technique in human-robot interaction

    Human-robot communication scenarios are becoming increasingly important. In this paper, we investigate the differences between human-human and human-robot communication in the context of persuasive communication. We ran an experiment using the door-in-the-face technique in a human-robot context. In our experiment, participants communicated with a robot that performed the door-in-the-face technique, in which the communicating agent asks for an "extreme" favor first and then for a small favor shortly after to increase affirmative responses to the second request. Our results show a surprisingly high acceptance rate for the extreme request and a smaller acceptance rate for the small request compared to the original study of Cialdini et al., so our results differ from the classical human-human door-in-the-face experiments. This suggests that human-robot persuasive communication differs from human-human communication, which is surprising given related work. We discuss potential reasons for our observations and outline the next research steps to answer the question of whether the door-in-the-face and similar persuasive techniques would be effective if applied by robots. © 2023 Copyright for this paper by its authors
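
    To make the request sequencing concrete, here is a minimal, hypothetical Python sketch of the two-step door-in-the-face flow described in this abstract; the request wording and helper names (ask, get_reply) are illustrative assumptions, not the paper's experimental setup.

# Minimal, hypothetical sketch of the door-in-the-face (DITF) request sequence.
def ask(reply: str) -> bool:
    """Interpret a free-text yes/no reply from the participant."""
    return reply.strip().lower() in {"yes", "y", "sure", "ok"}

def door_in_the_face(get_reply) -> dict:
    # Step 1: the robot makes the extreme request first.
    extreme_accepted = ask(get_reply(
        "Would you volunteer to help me for two hours a week over the next two years?"))
    # Step 2: shortly after, the robot retreats to the small target request,
    # which the technique predicts is now more likely to be accepted.
    target_accepted = ask(get_reply(
        "Would you instead help me for a single two-hour session?"))
    return {"extreme_accepted": extreme_accepted, "target_accepted": target_accepted}

# Example: simulate a participant who refuses the extreme request but accepts the small one.
replies = iter(["no", "yes"])
print(door_in_the_face(lambda prompt: next(replies)))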

    Representations of the possible robot role in the social status above human (on the example of the debate game discourse)

    This study attempts to identify representations of the robot's possible role in debates, applying a comprehensive approach both to collecting the material (a respondent questionnaire with closed and open question types, analysis of questions about the robot during the first 20 minutes of interaction, and analysis of how the robot was addressed over several hours of communication) and to analysing the data: qualitative and quantitative content analysis, semantic analysis, and discourse analysis. The key category in discussions of the robot's potential role in the debates was its intellectual capabilities, yet this category was not invoked during direct interaction with the robot. Respondents' initial presuppositions about the robot mostly assigned it a position below that of a human. During the interaction, the students perceived the robot in the role of timekeeper (a position above them) as an equal, animate subject and included it in informal interaction

    Building a Stronger CASA: Extending the Computers Are Social Actors Paradigm

    The computers are social actors framework (CASA), derived from the media equation, explains how people communicate with media and machines demonstrating social potential. Many studies have challenged CASA, yet it has not been revised. We argue that CASA needs to be expanded because people have changed, technologies have changed, and the way people interact with technologies has changed. We discuss the implications of these changes and propose an extension of CASA. Whereas CASA suggests humans mindlessly apply human-human social scripts to interactions with media agents, we argue that humans may develop and apply human-media social scripts to these interactions. Our extension explains previous dissonant findings and expands scholarship regarding human-machine communication, human-computer interaction, human-robot interaction, human-agent interaction, artificial intelligence, and computer-mediated communication

    Health Psychol

    Objective: Mobile technologies allow for accessible and cost-effective health monitoring and intervention delivery. Despite these advantages, mobile health (mHealth) engagement is often insufficient. While monetary incentives may increase engagement, they can backfire, dampening intrinsic motivations and undermining intervention scalability. Theories from psychology and behavioral economics suggest useful non-monetary strategies for promoting engagement; however, examinations of the applicability of these strategies to mHealth engagement are lacking. This proof-of-concept study evaluates the translation of theoretically grounded engagement strategies into mHealth by testing their potential utility in promoting daily self-reporting. Methods: A micro-randomized trial (MRT) was conducted with adolescents and emerging adults with past-month substance use. Participants were randomized multiple times daily to receive theoretically grounded strategies, namely reciprocity (the delivery of an inspirational quote prior to the self-reporting window) and non-monetary reinforcers (e.g., the delivery of a meme/GIF following self-reporting completion), to improve proximal engagement in daily mHealth self-reporting. Results: Daily self-reporting rates (62.3%; n=68) were slightly lower than in prior literature, albeit with much lower financial incentives. The utility of specific strategies was found to depend on contextual factors pertaining to the individual's receptivity and risk of disengagement. For example, the effect of reciprocity varied significantly depending on whether this strategy was employed (vs. not employed) during the weekend. The non-monetary reinforcement strategy resulted in different outcomes when operationalized in various ways. Conclusions: While the results support the translation of the reciprocity strategy into this mHealth setting, the translation of non-monetary reinforcement requires further consideration prior to inclusion in a full-scale MRT.
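
    As a rough illustration of the micro-randomization described in the Methods, the following Python sketch randomizes the two strategies (reciprocity prompt and non-monetary reinforcer) at a single daily decision point; the 0.5 probabilities, function names, and message texts are assumptions, not the study's actual protocol.

import random

P_RECIPROCITY = 0.5   # probability of sending an inspirational quote before the self-report window
P_REINFORCER = 0.5    # probability of sending a meme/GIF after a completed report

def daily_decision_point(send_message, self_report_completed) -> dict:
    """Run one day's randomizations; return what was delivered and whether the report was completed."""
    sent_quote = random.random() < P_RECIPROCITY
    if sent_quote:
        send_message("Here is today's inspirational quote ...")    # reciprocity strategy
    completed = self_report_completed()                            # participant's daily self-report
    sent_reinforcer = completed and random.random() < P_REINFORCER
    if sent_reinforcer:
        send_message("Thanks for reporting! Here is a meme/GIF.")  # non-monetary reinforcer
    return {"completed": completed, "reciprocity": sent_quote, "reinforcer": sent_reinforcer}

# Example: a participant who always completes the report; messages are printed to the console.
print(daily_decision_point(print, lambda: True))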

    Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction

    The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, by activating schemas that are congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI’s inherent consciousness or moral status

    Human-Machine Communication: Complete Volume. Volume 1

    This is the complete volume of HMC Volume 1

    Designing social cues for effective persuasive robots


    A psychology and game theory approach to human–robot cooperation

    Social robots have great practical potential to be applied to, for example, education, autism therapy, and commercial settings. However, few commercially available social robots currently meet our expectations of 'social agents' due to their limited social skills and limited ability to maintain smooth and sophisticated real-life social interactions. Psychological and human-centred perspectives are therefore crucial to incorporate for a better understanding and development of social robots that can be deployed as assistants and companions to enhance human quality of life. In this thesis, I present a research approach that draws together psychological literature, Open Science initiatives, and game theory paradigms, aiming to systematically and structurally investigate the cooperative and social aspects of human–robot interactions. Chapter 1 illustrates the three components of this research approach, with the main focus on their relevance and value in researching human–robot interactions more rigorously. Chapters 2 to 4 describe the three empirical studies in which I adopted this research approach to examine the roles of contextual factors, personal factors, and robotic factors in human–robot interactions. Specifically, findings in Chapter 2 revealed that people's cooperative decisions in prisoner's dilemma games played with the embodied Cozmo robot were not influenced by the incentive structures of the games, contrary to evidence from interpersonal prisoner's dilemma games, but their decisions demonstrated a reciprocal (tit-for-tat) pattern in response to the robot opponent. In Chapter 3, we verified that the Cozmo robotic platform can display highly recognisable emotional expressions, and that people's affective empathy might be counterintuitively associated with the emotion contagion effects of Cozmo's emotional displays. Chapter 4 presents a study that examined the effects of Cozmo's negative emotional displays on shaping people's cooperative tendencies in prisoner's dilemma games. We did not find evidence supporting an interaction between the effects of the robots' emotions and people's cooperative predispositions, which was inconsistent with our predictions informed by psychological emotion theories. However, exploratory analyses suggested that people who correctly recognised the Cozmo robots' sad and angry expressions were less cooperative towards the robots in games. Across the two studies on prisoner's dilemma games played with the embodied Cozmo robots, we observed a consistent tendency for people's cooperative willingness to be highest at the start of games and to decrease gradually as more game rounds were played. In Chapter 5, I summarised the current findings, identified some limitations of these studies, and outlined future directions relating to these topics, including further investigation into the generalisability of different robotic platforms and the incorporation of neurocognitive and qualitative methods for an in-depth understanding of the mechanisms supporting people's cooperative willingness towards social robots. Social interactions with robots are highly dynamic and complex, which has brought unique challenges to robotic designers and researchers in the relevant fields. The thesis provides a point of departure for understanding cooperative willingness towards small-size social robots at a behavioural level.
    The research approach and empirical findings presented in the thesis could help enhance reproducibility in human–robot interaction research and, more importantly, have practical implications for real-life human–robot cooperation
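
    For readers unfamiliar with the paradigm, the following Python sketch shows an iterated prisoner's dilemma against a tit-for-tat opponent, the reciprocal pattern the thesis reports for games with the Cozmo robot; the payoff values and round count are illustrative assumptions, not the incentive structures manipulated in the studies.

PAYOFFS = {  # (human_move, robot_move) -> (human_points, robot_points); "C" = cooperate, "D" = defect
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(human_strategy, n_rounds: int = 10):
    """Play n_rounds where the robot plays tit-for-tat: cooperate first, then copy the human's last move."""
    history, robot_move = [], "C"
    for _ in range(n_rounds):
        human_move = human_strategy(history)
        history.append((human_move, robot_move, PAYOFFS[(human_move, robot_move)]))
        robot_move = human_move           # the robot reciprocates on the next round
    return history

# Example: a human who cooperates for three rounds and then defects.
print(play(lambda history: "C" if len(history) < 3 else "D", n_rounds=5))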

    The Role of Reciprocity in Verbally Persuasive Robots
