
    Assessing the effect of persuasive robots interactive social cues on users’ psychological reactance, liking, trusting beliefs and compliance

    Research in the field of social robotics suggests that enhancing social cues in robots can elicit more social responses in users. It is, however, not clear how users respond socially to persuasive social robots and whether such reactions become more pronounced when the robots feature more interactive social cues. In the current research, we examine social responses towards persuasive attempts by a robot featuring different numbers of interactive social cues. A laboratory experiment assessed participants’ psychological reactance, liking, trusting beliefs and compliance toward a persuasive robot that presented users with one of three conditions: no interactive social cues (random head movements and random social praise), a low number of interactive social cues (head mimicry), or a high number of interactive social cues (head mimicry and properly timed social praise). Results show that the persuasive robot with the highest number of interactive social cues invoked lower reactance and was liked more than the robots in the other two conditions. Furthermore, results suggest that trusting beliefs towards persuasive robots can be enhanced by praise, as presented by the robots in the no-cues and high-cues conditions. However, interactive social cues did not contribute to higher compliance.

    Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction

    The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study in which SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and elicits less psychological reactance compared to one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest that robots intended to influence human behavior should be designed with facial characteristics people tend to trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure that they will also trust the robot.
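    As a rough illustration of the mediation analysis mentioned above, the sketch below shows a regression-based (Baron & Kenny style) mediation test on hypothetical trial data; the file name, column names, and coding are assumptions for illustration only and are not taken from the study.

```python
# Minimal sketch of a regression-based mediation test, assuming a hypothetical
# dataset "robot_trials.csv" with one row per participant and columns:
#   face             0 = distrusted facial features, 1 = trusted facial features
#   liking           questionnaire score for liking of the robot (mediator)
#   trusting_beliefs questionnaire score for trusting beliefs (outcome)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("robot_trials.csv")

total = smf.ols("trusting_beliefs ~ face", data=df).fit()              # total effect (path c)
a_path = smf.ols("liking ~ face", data=df).fit()                       # path a
b_cprime = smf.ols("trusting_beliefs ~ face + liking", data=df).fit()  # paths b and c'

print("total effect  c :", total.params["face"])
print("indirect      ab:", a_path.params["face"] * b_cprime.params["liking"])
print("direct effect c':", b_cprime.params["face"])
# Full mediation is suggested when c' shrinks toward zero while the indirect
# effect a*b accounts for most of c; a bootstrap test of a*b would normally
# be used to judge significance.
```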

    Building Persuasive Robots with Social Power Strategies

    Can social power endow social robots with the capacity to persuade? This paper represents our recent endeavor to design persuasive social robots. We designed and ran three user studies to investigate the effectiveness of different bases of social power (inspired by French and Raven's theory) on people's compliance with the requests of social robots. The results show that robotic persuaders that exert social power (specifically from the expert, reward, and coercion bases) demonstrate an increased ability to influence humans. The first study provides a positive answer and shows that, under the same circumstances, people with different personalities prefer robots using a specific social power base. In addition, social rewards can be useful in persuading individuals. The second study suggests that by employing social power, social robots are capable of objectively persuading people to select a less desirable choice among others. Finally, the third study shows that the effect of power on persuasion does not decay over time and might strengthen under specific circumstances. Moreover, exerting stronger social power does not necessarily lead to higher persuasion. Overall, we argue that the results of these studies are relevant for designing human-robot interaction scenarios, especially those aiming at behavioral change.

    Trusting Intentions Towards Robots in Healthcare: A Theoretical Framework

    Within the next decade, robots (intelligent agents able to perform tasks normally requiring human intelligence) may become more common in delivering healthcare services to patients. The use of robots in this way may be daunting for some members of the public, who may not understand the technology and deem it untrustworthy. Others may be excited to use and trust robots to support their healthcare needs. It is argued that (1) context plays an integral role in Information Systems (IS) research and (2) technology demonstrating anthropomorphic or system-like features affects the extent to which an individual trusts that technology. Yet there is little research that integrates these two concepts within one study in healthcare. To address this gap, we develop a theoretical framework that considers trusting intentions towards robots based on the interaction of humans and robots within the contextual landscape of delivering healthcare services. This article presents a theory-based approach to developing effective, trustworthy intelligent agents at the intersection of IS and healthcare.

    Can social robots affect children's prosocial behavior? An experimental study on prosocial robot models

    The aim of this study was to investigate whether a social robot that models prosocial behavior (in terms of giving away stickers) influences the occurrence of prosocial behavior among children, as well as the extent to which children behave prosocially. Additionally, we investigated whether the occurrence and extent of children's prosocial behavior changed as the game was repeated and whether the behavior modeled by the robot affected children's norms of prosocial behavior. In a one-factorial experiment (weakly prosocial robot vs. strongly prosocial robot), 61 children aged 8 to 10 and a social robot alternately played four rounds of a game against a computer and, after each round, could decide to give away stickers. Children who saw a strongly prosocial robot gave away more stickers than children who saw a weakly prosocial robot. A strongly prosocial robot also increased children's perception of how many other children engage in prosocial behavior (i.e., descriptive norms). The strongly prosocial robot affected the occurrence of prosocial behavior only in the first round, whereas its effect on the extent of children's prosocial behavior was most distinct in the last round. Our study suggests that the principles of social learning also apply to whether children learn prosocial behavior from robots.

    Exploring Human Compliance Toward a Package Delivery Robot

    Human-Robot Interaction (HRI) research on combat robots and autonomous cars demonstrates that faulty robots significantly decrease trust. However, HRI studies consistently show that people overtrust domestic robots in households, emergency evacuation scenarios, and building security. This thesis presents how two theories, cognitive dissonance and selective attention, confound domestic HRI scenarios, and uses these theories to design a novel HRI scenario with a package delivery robot in a public setting. Over 40 undergraduates were recruited within a university library to follow a package delivery robot to three stops, under the guise of “testing its navigation around people.” The second delivery stop was an open office which appeared private. When the packages were unlabeled, only 2 individuals entered the room at the second stop across 15 trials, whereas pairs of participants were much more likely to enter the room. Labeling the packages significantly increased the likelihood that individuals would enter the office. The third stop was at the end of a long, isolated hallway blocked by a door marked “Emergency Exit Only. Alarm will Sound.” No one seriously considered opening the door. Nonverbal robot prods, such as waiting one minute or nudging the door, were perceived as malfunctioning behavior. To demonstrate selective attention, a second route led to an emergency exit door in a public computer lab, with the intended destination an office several feet away. When the robot communicated with beeps, only 45% of individuals noticed the emergency exit door. No one noticed the emergency exit door when the robot used speech commands, although its qualitative rating significantly improved. In conclusion, this thesis shows that robots must make explicit requests to generate overtrust. Explicit interactions increase participant engagement with the robot, which increases selective attention towards their environment.

    Expert-informed design and automation of persuasive, socially assistive robots

    Socially assistive robots primarily provide useful functionality through their social interactions with users. An example application, used to ground the work throughout this thesis, is using a social robot to guide users through exercise sessions. Initial works have demonstrated that interactions with a social robot can improve engagement with exercise, and that an embodied social robot is more effective for this than an equivalent virtual avatar. However, many questions remain regarding the design and automation of socially assistive robot behaviours for this purpose. This thesis identifies and practically works through a number of these questions in pursuit of one ultimate goal: the meaningful, real-world deployment of a fully autonomous, socially assistive robot. The work takes an expert-informed approach, looking to learn from human experts in socially assistive interactions and exploring how their expert knowledge can be reflected in the design and automation of social robot behaviours. Taking this approach leads to the notion that socially assistive robots need to be persuasive in order to be effective, but it also highlights the difficulty of automating such complex, socially intelligent behaviour. The ethical implications of designing persuasive robot behaviours are also practically considered, with reference to a published standard on ethical robot design. The work culminates in the use of a state-of-the-art interactive machine learning approach to have an expert fitness instructor train a robot ‘fitness coach’, deployed in a university gym, as it guides participants through an NHS exercise programme. After a total of 151 training sessions across 10 participants, the robot successfully ran 32 sessions autonomously. The results demonstrate that autonomous behaviour was generally comparable to that of the robot when controlled/supervised by the fitness instructor, and that, overall, the robot played an important role in keeping participants motivated throughout the exercise programme.
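    To make the interactive machine learning idea above concrete, the following is a deliberately simplified sketch of an instructor-in-the-loop training loop, in which an expert's chosen actions are logged against the current session state and a classifier is refit so the robot can later propose actions on its own. The state features, action labels, and classifier choice are illustrative assumptions, not the method implemented in the thesis.

```python
# Illustrative sketch of instructor-in-the-loop action learning (assumed setup,
# not the thesis's actual system). The robot observes a session state, the
# instructor picks an action, and a simple classifier is refit on the growing
# set of demonstrations so it can later suggest actions autonomously.
from sklearn.tree import DecisionTreeClassifier

states, actions = [], []                 # instructor-labelled demonstrations
policy = DecisionTreeClassifier(max_depth=4)

def log_instructor_action(state_features, chosen_action):
    """Record the instructor's decision for the current exercise state and refit."""
    states.append(state_features)
    actions.append(chosen_action)
    policy.fit(states, actions)

def suggest_action(state_features):
    """After some demonstrations, let the robot propose an action for this state."""
    return policy.predict([state_features])[0]

# Hypothetical state encoding: [reps_completed, heart_rate, seconds_since_last_prompt]
log_instructor_action([5, 110, 20], "encourage")
log_instructor_action([10, 128, 45], "suggest_rest")
print(suggest_action([8, 118, 30]))      # prints one of the demonstrated actions
```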