836 research outputs found

    Do We Adopt the Intentional Stance Toward Humanoid Robots?

    In daily social interactions, we need to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behavior with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents, from Apple’s Siri to GPS navigation systems, and in the near future we might start casually interacting with robots. This paper addresses the question of whether the intentional stance can also be adopted toward artificial agents. We propose a new tool to explore whether people adopt the intentional stance toward an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants’ stance by asking them to rate the likelihood of an explanation (mentalistic vs. mechanistic) of a behavior of the iCub robot depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased toward the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance toward artificial agents, at least in some contexts.

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed at promoting interaction and engagement, ranging from its “communicative” abilities to the movements it produces. Yet whether an artificial agent that can behave like a human boosts the spontaneity and naturalness of interaction remains an open question. Even during interaction with conspecifics, humans rely partially on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research has emerged in the context of investigating individuals’ reactions toward robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e., self-report questionnaires and a-posteriori interviews), while the more implicit social cognitive processes elicited during interaction with artificial agents have taken second place behind more qualitative and anecdotal results.
The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate social cognition processes evoked by artificial agents. Thus, this thesis aimed at exploring human sensitivity to anthropomorphic characteristics of a humanoid robot's behavior (the iCub robot), based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to the participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.

    Metaphors Matter: Top-Down Effects on Anthropomorphism

    Anthropomorphism, or the attribution of human mental states and characteristics to non-human entities, has been widely demonstrated to be cued automatically by certain bottom-up appearance and behavioral features in machines. In this thesis, I argue that the potential for top-down effects to influence anthropomorphism has so far been underexplored. I motivate and then report the results of a new empirical study suggesting that top-down linguistic cues, including anthropomorphic metaphors, personal pronouns, and other grammatical constructions, increase anthropomorphism of a robot. As robots and other machines become more integrated into human society and our daily lives, a more thorough understanding of the process of anthropomorphism becomes more critical: the cues that cause it, the human behaviors elicited, the underlying mechanisms in human cognition, and the implications of our influenced thought, talk, and treatment of robots for our social and ethical frameworks. In these regards, as I argue in this thesis and as the results of the new empirical study suggest, top-down effects matter.

    Robotics in Germany and Japan

    This book presents an intercultural and interdisciplinary framework encompassing current research fields such as Roboethics, Hermeneutics of Technologies, Technology Assessment, Robotics in Japanese Popular Culture, and Music Robots. Contributions on cultural interrelations, technical visions, and essays round out the content of this book.

    More Than Machines? The Attribution of (In)Animacy to Robot Technology

    We know that robots are just machines. Why then do we often talk about them as if they were alive? The author explores this fascinating phenomenon, providing rich insight into practices of animacy (and inanimacy) attribution to robot technology: from science fiction to robotics R&D, from science communication to media discourse, and from the theoretical perspectives of STS to the cognitive sciences. Taking an interdisciplinary perspective, and backed by a wealth of empirical material, the author shows how scientists, engineers, journalists - and everyone else - can face the challenge of robot technology appearing "a little bit alive" with a reflexive and yet pragmatic stance.

    More Than Machines?

    We know that robots are just machines. Why then do we often talk about them as if they were alive? Laura Voss explores this fascinating phenomenon, providing rich insight into practices of animacy (and inanimacy) attribution to robot technology: from science fiction to robotics R&D, from science communication to media discourse, and from the theoretical perspectives of STS to the cognitive sciences. Taking an interdisciplinary perspective, and backed by a wealth of empirical material, Voss shows how scientists, engineers, journalists - and everyone else - can face the challenge of robot technology appearing »a little bit alive« with a reflexive and yet pragmatic stance.
