110 research outputs found

    The distracted robot: what happens when artificial agents behave like us

    Get PDF
    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed at promoting interaction and engagement, ranging from its “communicative” abilities to the movements it produces. Still, whether an artificial agent that can behave like a human boosts the spontaneity and naturalness of interaction remains an open question. Even when interacting with conspecifics, humans rely partly on motion cues to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that faithfully reproduces human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research has emerged around the investigation of individuals’ reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers as well as roboticists and engineers. However, HRI research has often been based on explicit measures (e.g. self-report questionnaires, a-posteriori interviews), while the more implicit social-cognitive processes elicited during interaction with artificial agents have taken second place behind more qualitative and anecdotal results. The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social-cognition processes evoked by artificial agents. Thus, this thesis explored human sensitivity to anthropomorphic characteristics of a humanoid robot's (the iCub robot) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.
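
    The abstract does not specify how the human-likeness of the robot's motion was manipulated; as an illustrative sketch only, the snippet below contrasts a minimum-jerk trajectory (a common model of human-like, biological reaching) with a constant-velocity ramp, one plausible way such a motion-cue manipulation could be operationalized. All parameter values are hypothetical.

```python
# Hypothetical sketch: two ways to drive a robot joint between the same two
# positions -- a minimum-jerk profile (often used to approximate human-like
# motion) and a constant-velocity ramp (mechanical-looking). This is not the
# thesis's actual manipulation, only an illustration of one common approach.
import numpy as np

def minimum_jerk(start: float, end: float, steps: int = 100) -> np.ndarray:
    """Minimum-jerk position profile: smooth S-curve with zero velocity at both ends."""
    tau = np.linspace(0.0, 1.0, steps)               # normalized time
    shape = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return start + (end - start) * shape

def constant_velocity(start: float, end: float, steps: int = 100) -> np.ndarray:
    """Linear ramp between the same endpoints: abrupt, mechanical onset and offset."""
    return np.linspace(start, end, steps)

if __name__ == "__main__":
    human_like = minimum_jerk(0.0, 0.6)       # e.g. a joint angle in radians
    mechanical = constant_velocity(0.0, 0.6)
    # Same endpoints and duration, but the velocity profiles differ: the
    # minimum-jerk movement peaks mid-movement, the ramp stays flat throughout.
    print(np.max(np.diff(human_like)), np.max(np.diff(mechanical)))
```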

    Applying the “human-dog interaction” metaphor in human-robot interaction: a co-design practice engaging healthy retired adults in China

    Get PDF
    This research adopts a Deweyan pragmatist approach and “research through design” methods to explore the use of human-dog interaction as a model for developing human-robot interaction. This research asks two questions: (1) In what way could the human-dog interaction model inform the design of social robots to meet the needs of older adults? (2) What role could aesthetic, functional and behavioural aspects of the human-dog interaction play in older adults’ interaction with social robots? Driven by the pragmatist approach, the thesis uses the dog-human interaction model as a metaphor. The research carried out four studies in two parts. The first part of the practice includes two explorative studies to identify aspects of human-dog interaction that could inform the design of social robots for older adults. Study 1 explores aspects of human-dog interaction that could inform the design of human-robot interaction for retired adults. Study 2 explores a group of healthy retired adults’ attitudes and preferences toward social/assistive robots in China. The findings suggest that, first, the pairing and training process provides a framework for building personalised social robots in terms of form, function, interaction, and the stakeholders involved. Second, the cooperative interaction between a human and a guide dog provides insights for building social robots that take on leading roles in interactions. The robot-as-dog metaphor offers a new perspective from which to rethink the design process of social robots, based on the roles that the dog trainer, the owner, and the dog play in human-dog interaction. In the second part of the practice, two more studies are conducted to articulate the usefulness of the designer-as-trainer metaphor and the personalisation-as-training metaphor, using participatory co-designing methods that engage both retired adult participants and roboticists as co-designers to investigate further how the aesthetic aspects, functional features, and interactive behaviours characterising dog-human interaction could inform how older adults interact with social robots. Study 3 involved co-designing a robot probe with roboticists and later deploying it in a participant’s home using the Wizard of Oz method. The personalisation-as-training metaphor helps facilitate a critical discussion in the interdisciplinary co-design process and broadens the design space when addressing the technical limitation of the probe’s camera through reflection-in-action. Study 4 engages the retired adults as co-designers to envision what characteristics they would like robots to have, with attention to the robot’s form, the functions that the robot can perform, and how the robot interacts with users. The study applies techniques such as sketching and storyboarding to understand how retired adults make sense of these core elements that are key to developing social/assistive robots for positive ageing. This thesis makes two main contributions to knowledge in human-robot interaction and interaction design research. Firstly, it provides an applied example of using the robot-as-dog metaphor as a tool to probe human-robot interactions in a domestic context. Secondly, it shows that the dog-human interaction model is applicable at different levels of abstraction in a co-designing process that involves roboticists and end-users. The outcome shows a reflective practice that engages metaphors to facilitate communication across disciplines in the co-design process.

    A Softwaremodule for an Ethical Elder Care Robot. Design and Implementation

    Get PDF
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. The goal of this article is to describe how one can approach the design of an elder care robot that is capable of moral decision-making and moral learning. A conceptual design for the development of such a system is provided, and the steps that are necessary to implement it are described.
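
    The article describes a conceptual design rather than code; the toy sketch below illustrates, under loose assumptions, what a value-weighting decision component with a simple feedback-driven learning rule could look like. The care values, scores, and update rule are hypothetical and are not the authors' implementation.

```python
# Toy sketch of a moral decision-making and moral-learning component for a care
# robot. Value names, scores, and the update rule are invented for illustration.
from dataclasses import dataclass, field

CARE_VALUES = ["autonomy", "physical_health", "privacy"]

@dataclass
class EthicalModule:
    # How strongly each moral value counts in the overall evaluation.
    weights: dict = field(default_factory=lambda: {v: 1.0 for v in CARE_VALUES})
    learning_rate: float = 0.1

    def evaluate(self, action_profile: dict) -> float:
        """Weighted sum of how well an action satisfies each care value (0..1)."""
        return sum(self.weights[v] * action_profile.get(v, 0.0) for v in CARE_VALUES)

    def choose(self, options: dict) -> str:
        """Pick the candidate action with the highest weighted value score."""
        return max(options, key=lambda name: self.evaluate(options[name]))

    def learn(self, chosen_profile: dict, feedback: float) -> None:
        """Moral learning: shift weights toward values that user feedback rewards.

        feedback > 0 means the user approved of the chosen action, < 0 disapproved.
        """
        for v in CARE_VALUES:
            self.weights[v] += self.learning_rate * feedback * chosen_profile.get(v, 0.0)
            self.weights[v] = max(self.weights[v], 0.0)   # keep weights non-negative

if __name__ == "__main__":
    module = EthicalModule()
    options = {
        "remind_now":   {"autonomy": 0.3, "physical_health": 0.9, "privacy": 0.8},
        "wait_and_log": {"autonomy": 0.9, "physical_health": 0.5, "privacy": 0.6},
    }
    choice = module.choose(options)
    module.learn(options[choice], feedback=-1.0)   # user disliked the chosen action
    print(choice, module.weights)
```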

    Sustainable Technology and Elderly Life

    Get PDF
    The coming years will see an exponential increase in the proportion of elderly people in our society. This accelerated growth brings with it major challenges for the sustainability of the system. These changes will have a special incidence in several areas: health systems and their monitoring; the development of a framework in which the elderly can live their daily lives satisfactorily; and the design of intelligent cities adapted to the future sociodemographic profile. Discussing these challenges alongside current technological evolution can show possible ways of meeting them. This special issue discusses various ways in which sustainable technologies can be applied to improve the lives of the elderly. The six articles featured in this volume range from a systematic review of the literature to the development of gamification and health-improvement projects, and present suggestive proposals for improving the lives of the elderly. The volume is a resource of interest for the scientific community, since it shows different research gaps in the current state of the art. But it is also a document that can help social policy makers and people working in this domain to plan successful projects.

    The Impact of Social Expectation towards Robots on Human-Robot Interactions

    Get PDF
    This work is presented in defence of the thesis that it is possible to measure, in an explicit and succinct manner, the social expectations and perceptions that humans have of robots, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted within this body of work focused on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, and the other to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. The questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with their initial social expectations of the robot. If participants expected the robot to be more of a social equal, they preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies. In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the beginning of the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the previous study, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.
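
    The questionnaire items and analysis pipeline are not reproduced in the abstract; the sketch below only illustrates the general kind of analysis described, scoring two expectation subscales and correlating them with acceptance of a frontal approach, using invented item groupings and random data rather than the actual UH Social Roles Questionnaire.

```python
# Illustrative sketch: score two expectation dimensions from questionnaire items
# and correlate them with acceptance of a robot approach direction. All item
# groupings and data are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_participants = 40

# Hypothetical Likert responses (1-5), split into two subscales.
social_equality_items = rng.integers(1, 6, size=(n_participants, 4))
control_items = rng.integers(1, 6, size=(n_participants, 4))

# Subscale scores: mean over the items loading on each dimension.
social_equality = social_equality_items.mean(axis=1)
control = control_items.mean(axis=1)

# Hypothetical acceptance rating of a frontal approach (1-7).
frontal_approach_acceptance = rng.integers(1, 8, size=n_participants)

# Pearson correlation between each expectation dimension and acceptance.
for name, score in [("social equality", social_equality), ("control", control)]:
    r = np.corrcoef(score, frontal_approach_acceptance)[0, 1]
    print(f"{name}: r = {r:.2f}")
```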

    Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Get PDF
    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is through a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in such a way that they are perceived as intentional agents that activate areas of the human brain involved in social-cognitive processing. We first review literature related to the social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents in order to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
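
    As a minimal illustration of one of the measures the review mentions (eye-tracking embedded in an interactive human–robot paradigm), the sketch below computes the proportion of gaze samples falling inside a hypothetical area of interest around the robot's face; the AOI bounds and data are assumptions, not taken from the review.

```python
# Minimal sketch: proportion of gaze samples inside a rectangular area of
# interest (AOI) around a robot's face. AOI bounds and gaze data are invented.
import numpy as np

def gaze_proportion_in_aoi(gaze_xy: np.ndarray, aoi: tuple) -> float:
    """Fraction of gaze samples inside a rectangular AOI (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = aoi
    inside = (
        (gaze_xy[:, 0] >= x_min) & (gaze_xy[:, 0] <= x_max)
        & (gaze_xy[:, 1] >= y_min) & (gaze_xy[:, 1] <= y_max)
    )
    return float(inside.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    samples = rng.uniform(0, 1, size=(5000, 2))      # normalized screen coordinates
    robot_face_aoi = (0.4, 0.55, 0.6, 0.8)           # hypothetical face region
    print(f"Gaze on robot face: {gaze_proportion_in_aoi(samples, robot_face_aoi):.1%}")
```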

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    Get PDF
    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore what the antecedents and consequences of this phenomenon, known as anthropomorphism, are, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies that directly measure anthropomorphism and those that refer to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users’ self-perceptions, perceptions of the technology, how users interact with the technology, and users’ performance. Examples include changes in users’ trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users’ perceived agency and their self- and social identity, similarly to interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism that proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic), and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the users’ social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.

    Robot Games for Elderly: A Case-Based Approach

    Get PDF
    • …