
    Ethical implications of artificial expression of emotion by social robots in assistive contexts

    This research investigated whether Artificial Expression of Emotion (AEE) by a social robot can lead to emotional deception and emotional attachment. The use of AEE can be beneficial, as it may encourage engagement and help build trust in the robot. However, it may also lead to misplaced trust and false expectations of the robot's abilities, which in turn could lead to mental or physical harm. Although the literature has raised ethical issues in the form of emotional deception and emotional attachment, research on the potential negative consequences of these concerns is limited. This research therefore examined the impact of AEE. Knowledge of potential negative consequences is essential, as social robots are likely to become increasingly prevalent in supporting assistive tasks for people who are vulnerable.

    The impact of AEE was investigated through surveys, lab-based experiments and longitudinal field studies, examining participants' opinions of the robot, their acceptance of and attachment to it, and their physiological responses to it. Findings indicate that emotional deception and emotional attachment may have occurred, although their impact on users was low. These findings contributed to the development of a framework that could help developers and producers of future social robots design robot behaviours with a view to limiting negative consequences where possible.

    This work contributes to the area of social robot ethics, highlighting issues for socially assistive robots based on findings from a range of user studies. Furthermore, a novel approach for determining people's attitudes and perspectives on ethical issues is presented, which allows people to make more informed decisions while completing surveys. In addition, this research provides rich insights into the user experience of socially assistive robots, contributing to our understanding of the impact that the use of AEE may have on society.

    Designing ethical social robots - A longitudinal field study with older adults

    Emotional deception and emotional attachment are regarded as ethical concerns in human–robot interaction. Considering these concerns is essential, particularly as little is known about the longitudinal effects of interactions with social robots. We ran a longitudinal user study with older adults in two retirement villages, where people interacted with a robot in a didactic setting for eight sessions over a period of four weeks. The robot showed either non-emotive or emotive behavior during these interactions in order to investigate emotional deception. Questionnaires were used to investigate participants' acceptance of the robot, their perception of the social interactions with it, and their attachment to it. Results show that the robot's behavior did not seem to influence participants' acceptance of the robot, perception of the interaction or attachment to the robot. Time did not appear to influence participants' level of attachment to the robot, which ranged from low to medium, while the perceived ease of using the robot significantly increased over time. These findings indicate that a robot showing emotions (and perhaps thereby deceiving users) in a didactic setting may not by default negatively influence participants' acceptance and perception of the robot, and that older adults may not become distressed if the robot were to break or be taken away from them, as attachment to the robot in this setting was not high. However, more research is required, as other factors may influence these ethical concerns, and measurements beyond questionnaires are needed before conclusions can be drawn regarding these concerns.

    Role-play as responsible robotics: The virtual witness testimony role-play interview for investigating hazardous human-robot interactions

    The development of responsible robotics requires paying attention to responsibility within the research process in addition to responsibility as an outcome of research. This paper describes the preparation and application of a novel method to explore hazardous human–robot interactions. The Virtual Witness Testimony role-play interview is an approach that enables participants to engage with scenarios in which a human being comes to physical harm while a robot that may have malfunctioned is present. Participants decide what actions they would take in the scenario and are encouraged to share their observations and speculations on what happened. Data collection takes place online, a format that provides convenience as well as a safe space for participants to role-play a hazardous encounter with minimal risk of suffering discomfort or distress. We provide a detailed account of how our initial set of Virtual Witness Testimony role-play interviews was conducted, and describe how the method proved to be an efficient approach that generated useful findings and upheld our project commitments to Responsible Research and Innovation. We argue that the Virtual Witness Testimony role-play interview is a flexible and fruitful method that can be adapted to benefit research in human–robot interaction and advance responsibility in robotics.

    A New Perspective on Robot Ethics through Investigating Human–Robot Interactions with Older Adults

    This work explored the use of human–robot interaction research to investigate robot ethics. A longitudinal human–robot interaction study was conducted with self-reported healthy older adults to determine whether the expression of artificial emotions by a social robot could result in emotional deception and emotional attachment. The findings from this study highlight that there currently appear to be no adequate tools, or means, to determine the ethical impact and concerns arising from long-term interactions between social robots and older adults. This raises the questions of whether we should continue the fundamental development of social robots if we cannot determine their potential negative impact, and whether we should shift our focus to the development of human–robot interaction assessment tools that provide more objective measures of ethical impact.

    An Ethical Black Box for Social Robots: a draft Open Standard

    This paper introduces a draft open standard for the robot equivalent of an aircraft flight data recorder, which we call an ethical black box. This is a device, or software module, capable of securely recording operational data (sensor and actuator data, and control decisions) for a social robot, in order to support the investigation of accidents or near-miss incidents. The open standard, presented as an annex to this paper, is offered as a first draft for discussion within the robot ethics community. Our intention is to publish further drafts following feedback, in the hope that the standard will become a useful reference for social robot designers, operators and robot accident/incident investigators. (Submitted to the International Conference on Robot Ethics and Standards, ICRES 2022.)
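    The abstract describes the ethical black box as a module that securely records timestamped operational data for later accident or incident investigation. As a rough illustration only, the Python sketch below logs hash-chained records so that later tampering with the log is detectable; the record fields and the chaining mechanism are assumptions made for this sketch, not the fields or format defined in the draft standard's annex.

    import hashlib
    import json
    import time
    from dataclasses import dataclass, asdict
    from typing import Any

    @dataclass
    class EBBRecord:
        """One timestamped snapshot of a robot's operational state.
        Field names are illustrative, not those of the draft standard."""
        timestamp: float
        sensors: dict[str, Any]    # e.g. {"sonar_m": 0.42, "bumper": False}
        actuators: dict[str, Any]  # e.g. {"left_wheel_rps": 1.0}
        decision: str              # the controller's chosen action

    class EthicalBlackBox:
        """Append-only log; each entry stores the hash of its predecessor,
        so any later edit or deletion breaks the chain and is detectable."""

        def __init__(self, path: str):
            self.path = path
            self.prev_hash = "0" * 64  # genesis value for an empty log

        def append(self, record: EBBRecord) -> None:
            entry = {"record": asdict(record), "prev_hash": self.prev_hash}
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            with open(self.path, "a") as f:
                f.write(json.dumps(entry, sort_keys=True) + "\n")
            self.prev_hash = entry["hash"]

    # Usage: record one control step.
    ebb = EthicalBlackBox("ebb_log.jsonl")
    ebb.append(EBBRecord(
        timestamp=time.time(),
        sensors={"sonar_m": 0.42, "bumper": False},
        actuators={"left_wheel_rps": 1.0, "right_wheel_rps": 1.0},
        decision="move_forward",
    ))

    An investigator can verify the log by recomputing each entry's hash from its record and the preceding entry's hash; any mismatch localises where the log was altered.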