
    Social robots: The influence of human and robot characteristics on acceptance

    Research in social robotics is focused on the development of robots that can provide physical and cognitive support in a socially interactive way. While some studies have previously investigated the importance of user characteristics (age, gender, education, robot familiarity, mood) in the acceptance of social robots, as well as the influence a robot's displayed emotion (positive, negative, neutral) has on the interaction, these two aspects are rarely combined. Therefore, this study attempts to highlight the need to consider the influence that both human and robot attributes can have on social robot acceptance. Eighty-six participants completed implicit and explicit measures of mood before viewing one of three video clips containing a positive, negative or neutral social robot (Pepper), followed by questionnaires on robot acceptance and perception. Gender and education were not associated with acceptance; however, several constructs of the acceptance questionnaire significantly correlated with age and mood. For example, those younger and those experiencing sadness or loneliness were more dependent on the opinions of others (as measured by the social influence construct of the acceptance questionnaire). This highlights the importance of mood in the introduction of social robots into vulnerable populations. Robot familiarity also correlated with robot acceptance, with those more familiar finding the robot less useful and less enjoyable; this is important as robots become more prominent in society. Displayed robot emotion significantly influenced acceptance and perception, with the positive robot appearing more childlike than the negative and neutral robots, and the neutral robot the least helpful. These findings emphasise the importance of both user and robot characteristics in the successful integration of social robots.

    Companion robots: the hallucinatory danger of human-robot interactions

    The advent of so-called Companion Robots is raising many ethical concerns among scholars and in public opinion. Focusing mainly on robots caring for the elderly, in this paper we analyze these concerns to distinguish which are directly ascribable to robotics and which are instead preexistent. One of these is the “deception objection”, namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behaviors. We argue that this charge, as currently formulated, is inconsistent. After that, we underline the risk that human-robot interaction becomes a hallucinatory relation in which the human subjectifies the robot in a dynamic of meaning-overload. Finally, we analyze the definition of “quasi-other” in relation to the notion of the “uncanny”. The goal of this paper is to argue that the main concern about Companion Robots is the simulation of a human-like interaction in the absence of an autonomous robotic horizon of meaning. In addition, that absence could lead the human to build a hallucinatory reality based on the relation with the robot.

    Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

    Social robots are now part of human society, destined for schools, hospitals, and homes to perform a variety of tasks. To engage their human users, social robots must be equipped with the essential social skill of facial expression communication. Yet even state-of-the-art social robots are limited in this ability because they often rely on a restricted set of facial expressions derived from theory, with well-known limitations such as lacking naturalistic dynamics. With no agreed methodology to objectively engineer a broader variance of more psychologically impactful facial expressions into social robots' repertoires, human-robot interactions remain restricted. Here, we address this generic challenge with new methodologies that can reverse-engineer dynamic facial expressions into a social robot head. Our data-driven, user-centered approach, which combines human perception with psychophysical methods, produced highly recognizable and human-like dynamic facial expressions of the six classic emotions that generally outperformed state-of-the-art social robot facial expressions. Our data demonstrate the feasibility of our method applied to social robotics and highlight the benefits of using a data-driven approach that puts human users at the center of deriving facial expressions for social robots. We also discuss future work to reverse-engineer a wider range of socially relevant facial expressions, including conversational messages (e.g., interest, confusion) and personality traits (e.g., trustworthiness, attractiveness). Together, our results highlight the key role that psychology must continue to play in the design of social robots.

    Non-human Intention and Meaning-Making: An Ecological Theory

    © Springer Nature Switzerland AG 2019. The final publication is available at Springer via https://doi.org/10.1007/978-3-319-97550-4_12. Social robots have the potential to problematize many attributes that have previously been considered, in philosophical discourse, to be unique to human beings. Thus, if one construes the explicit programming of robots as constituting specific objectives, and the overall design and structure of AI as having aims, in the sense of embedded directives, one might conclude that social robots are motivated to fulfil these objectives, and therefore act intentionally towards fulfilling those goals. The purpose of this paper is to consider the impact of this description of social robotics on traditional notions of intention and meaning-making, and, in particular, to link meaning-making to a social ecology that is being impacted by the presence of social robots. To the extent that intelligent non-human agents are occupying our world alongside us, this paper suggests that there is no benefit in differentiating them from human agents, because they are actively changing the context that we share with them, and therefore influencing our meaning-making like any other agent. This is not suggested as some kind of Turing Test, in which we can no longer differentiate between humans and robots, but rather to observe that the argument in which human agency is defined in terms of free will, motivation, and intention can equally be used as a description of the agency of social robots. Furthermore, all of this occurs within a shared context in which the actions of the human impinge upon the non-human, and vice versa, thereby problematising Anscombe's classic account of intention.

    Designing Virtuous Sex Robots

    We propose that virtue ethics can be used to address ethical issues central to discussions about sex robots. In particular, we argue that virtue ethics is well equipped to focus on the implications of sex robots for human moral character. Our evaluation develops in four steps. First, we present virtue ethics as a suitable framework for the evaluation of human–robot relationships. Second, we show the advantages of our virtue ethical account of sex robots by comparing it to current instrumentalist approaches, showing how the former better captures the reciprocal interaction between robots and their users. Third, we examine how a virtue ethical analysis of intimate human–robot relationships could inspire the design of robots that support the cultivation of virtues. We suggest that a sex robot equipped with a consent module could support the cultivation of compassion when used in supervised, therapeutic scenarios. Fourth, we discuss the ethical implications of our analysis for user autonomy and responsibility.