    The influence of social cues in persuasive social robots on psychological reactance and compliance

    People can react negatively to persuasive attempts, experiencing reactance, which gives rise to negative feelings and thoughts and may reduce compliance. This research examines social responses towards persuasive social agents. We present a laboratory experiment which assessed reactance and compliance to persuasive attempts delivered by an artificial (non-robotic) social agent, a social robot with minimal social cues (human-like face with speech output and blinking eyes), and a social robot with enhanced social cues (human-like face with head movement, facial expression, and affective intonation of speech output). Our results suggest that a social robot presenting more social cues causes higher reactance, and that this effect is stronger when the user feels involved in the task at hand.

    Understanding social responses to artificial agents: building blocks for persuasive technology


    Building Persuasive Robots with Social Power Strategies

    Can social power endow social robots with the capacity to persuade? This paper represents our recent endeavor to design persuasive social robots. We designed and ran three different user studies to investigate the effectiveness of different bases of social power (inspired by French and Raven's theory) on people's compliance with the requests of social robots. The results show that robotic persuaders that exert social power (specifically from the expert, reward, and coercion bases) demonstrate increased ability to influence humans. The first study provides a positive answer and shows that, under the same circumstances, people with different personalities prefer robots using a specific social power base. In addition, social rewards can be useful in persuading individuals. The second study suggests that by employing social power, social robots are capable of persuading people to select an objectively less desirable choice among others. Finally, the third study shows that the effect of power on persuasion does not decay over time and might strengthen under specific circumstances. Moreover, exerting stronger social power does not necessarily lead to higher persuasion. Overall, we argue that the results of these studies are relevant for designing human-robot interaction scenarios, especially those aiming at behavioral change.

    Applying Psychological Reactance Theory To Intercultural Communication In The Workplace: Dealing With Technological Change And Tolerance For Ambiguity

    Psychological reactance theory has yet to be applied to intercultural and cross-cultural communication, at least not to a sufficient extent. This study conducted a cross-cultural examination of psychological reactance in intercultural workplace communication situations. Using the theoretical framework of psychological reactance as well as the constructs of intercultural sensitivity and tolerance for ambiguity, this study expanded applications of PRT to technological change messages in the workplace. The present study extended previous applications of psychological reactance theory and found a significant cross-cultural variation for trait reactance. The results also revealed that tolerance for ambiguity was negatively related to trait reactance, but not related to intercultural sensitivity. Intercultural emotional sensitivity and tolerance for ambiguity both predicted intercultural state reactance. The intercultural and cross-cultural lenses of investigation extend PRT's applications to a context of organizational change management, thus merging otherwise disparate lines of inquiry. KEYWORDS: psychological reactance, intercultural communication, tolerance for ambiguity, intercultural sensitivity, organizational change, cross-cultural communication

    Would You Obey an Aggressive Robot: A Human-Robot Interaction Field Study

    © 2018 IEEE. Social robots have the potential to be of tremendous utility in healthcare, search and rescue, surveillance, transport, and military applications. In many of these applications, social robots need to advise and direct humans to follow important instructions. In this paper, we present the results of a human-robot interaction field experiment conducted using a PR2 robot to explore key factors involved in humans' obedience to social robots. This paper focuses on studying how the degree of human obedience to a robot's instructions is related to the perceived aggression and authority of the robot's behavior. We implemented several social cues to exhibit and convey both authority and aggressiveness in the robot's behavior. In addition, we analyzed the impact of other factors, such as the perceived anthropomorphism, safety, intelligence, and responsibility of the robot's behavior, on participants' compliance with the robot's instructions. The results suggest that the degree of aggression participants perceived in the robot's behavior did not have a significant impact on their decision to follow the robot's instructions. We provide possible explanations for our findings and identify new research questions that will help to understand the role of robot authority in human-robot interaction, and that can help to guide the design of robots that are required to provide advice and instructions.

    Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction

    The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while they played a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and elicits less psychological reactance than one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest that robots intended to influence human behavior should be designed to have facial characteristics we trust in humans, and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.

    Don’t Touch That Dial: Psychological Reactance, Transparency, And User Acceptance Of Smart Thermostat Setting Changes

    Automation inherently removes a certain amount of user control. If perceived as a loss of freedom, users may experience psychological reactance, a motivational state that can lead a person to engage in behaviors to reassert their freedom. In an online experiment, participants set up and communicated with a hypothetical smart thermostat. Participants read notifications about a change in the thermostat's setting. Phrasing of notifications was altered across three dimensions: strength of authoritative language, deviation of the temperature change from preferences, and whether or not the reason for the change was transparent. Authoritative language, temperatures outside the user's preferences, and lack of transparency induced significantly higher levels of reactance. However, when the system presented a temperature change outside of the user's preferences, reactance was mitigated and user acceptance was higher if the thermostat's operations were transparent. Providing justification may be less likely to induce psychological reactance and may increase user acceptance. This supports efforts to use behavioral approaches, such as demand response, to increase sustainability and limit the impacts of climate change.

    AI and Gender in Persuasion: Using Chatbots to Prevent Driving Under The Influence of Marijuana

    Will new media techniques, such as artificial intelligence (AI), help refresh public safety advertising campaigns, better target specific populations, and aid in persuasive, preventative marketing? This paper used hypocrisy induction as a persuasive tool for standalone artificial intelligence chatbots to test potential behavioral change in the context of marijuana. This research further tested whether the chatbots' gender and language styles impact how persuasive and effective the chat agents are perceived to be when using hypocrisy induction. An online experiment was conducted with 705 participants (Mage = 42.9, 392 women) in which participants interacted with a chatbot manipulated as male/female and using formal/casual language. Half of the participants received the hypocrisy induction manipulation. Hypocrisy induction was more effective when chatbot gender and linguistic styles were appropriately paired. Participants in the hypocrisy induction condition exhibited higher WTP than those in the non-hypocrisy induction condition when the chatbot they interacted with was female and used casual language. Likewise, hypocrisy induction increased WTP relative to the non-hypocrisy induction condition when the chatbot was male and used formal language. To the researchers' knowledge, this is among the first studies testing the persuasive power of hypocrisy induction on new media platforms in public safety and health advertising in marijuana studies. The findings not only shed light on the persuasiveness of gender and language in standalone chatbots but also provide practical implications for practitioners on the future usage of chatbots.

    The role of psychological reactance in smart home energy management systems

    “With an ever-growing demand for energy, our increasing consumption is producing more greenhouse gases and other pollutants, impacting climate change. One approach to reducing residential energy consumption is through the use of smart energy management systems. However, automation from smart technology inherently removes a certain amount of control from the user. If loss of control is perceived as a loss of freedom, this may lead users to experience psychological reactance when using these products. A set of experiments was conducted to assess how three features of a message notification from smart home energy management systems may induce reactance in users. In the context of a hypothetical smart thermostat, the participants responded to message notifications. The phrasing of the notification was altered depending on the assigned strength of language, type of temperature change, and justification given by the smart thermostat. Reactance was measured after exposure to the notification. Results indicated more authoritative language, temperatures outside the user’s comfort range, and a lack of justification from the thermostat had a significant effect on inducing reactance. Evidence suggested the presence of justification for the thermostat’s operations may have caused users to be more likely to accept the thermostat’s temperature change, even if that temperature was outside user preferences. This study has implications for designing smart home energy management systems to increase user acceptance and decrease potential frustrations”--Abstract, page iii