
    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    In good company? Perception of movement synchrony of a non-anthropomorphic robot

    Copyright: © 2015 Lehmann et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
    Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted, it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot®3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. Moreover, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot. Peer reviewed.

    Post-Westgate SWAT: C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions, Version 1.0: A Peer-Reviewed Monograph

    Police SWAT teams and military Special Forces face mounting pressure and challenges from adversaries that can only be resolved by ever more sophisticated inputs into tactical operations. Lethal autonomy provides constrained military and security forces with a viable option, but only if its implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air and naval operations. This monograph offers some insights into the challenges of developing legal, reliable and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law-enforcement and military operations. National adversaries are today in many instances hybrid threats that manifest both criminal and military traits; these often require deployment of hybrid-capability autonomous weapons imbued with the capability to take on both military and security objectives. The Westgate terrorist attack of 21st September 2013 in the Westlands suburb of Nairobi, Kenya is a very clear manifestation of the hybrid combat scenario that required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group.
    Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader.

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed to promote interaction and engagement, ranging from its “communicative” abilities to the movements it produces. Whether an artificial agent that can behave like a human could boost the spontaneity and naturalness of interaction, however, remains an open question. Even during interaction with conspecifics, humans rely partially on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research emerged in the context of investigating individuals’ reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e. self-report questionnaires, a-posteriori interviews), while the more implicit social cognitive processes that are elicited during interaction with artificial agents took second place behind more qualitative and anecdotal results.
The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents. Thus, this thesis aimed to explore human sensitivity to anthropomorphic characteristics of a humanoid robot's (i.e. the iCub robot's) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.

    Losing Touch: An embodiment perspective on coordination in robotic surgery

    Because new technologies allow new performances, mediations, representations, and information flows, they are often associated with changes in how coordination is achieved. Current coordination research emphasizes its situated and emergent nature, but seldom accounts for the role of embodied action. Building on a 25-month field study of the da Vinci robot, an endoscopic system for minimally invasive surgery, we bring to the fore the role of the body in how coordination was reconfigured in response to a change in technological mediation. Using the robot, surgeons experienced both an augmentation and a reduction of what they can do with their bodies in terms of haptic, visual, and auditory perception and manipulative dexterity. These bodily augmentations and reductions affected joint task performance and led to coordinative adaptations (e.g., spatial relocating, redistributing tasks, accommodating novel perceptual dependencies, and mounting novel responses) that, over time, resulted in a reconfiguration of roles, including expanded occupational knowledge, the emergence of new specializations, and shifts in status and boundaries. By emphasizing the importance of the body in coordination, this paper suggests that an embodiment perspective is important for explaining how and why coordination evolves following the introduction of a new technology.

    Do Robots Care? Towards an Anthropocentric Framework in the Caring of Frail Individuals through Assistive Technologies

    As a consequence of modern medicine and modern styles of living, two demographic trends, namely longevity and a decline in fertility, have greatly increased the aging population. The number of older persons aged 60 years or over is expected to be 1.4 billion by 2030 (World Population Data 2017). This demographic change, combined with changes in family structure, challenges the future of elderly care and contributes to grounding a case for the use of advanced robotics and AI to either complement or radically replace human-provided services in this field. This paper introduces an anthropocentric framework, as defined by the European Commission in its 2018 Communication on AI, for the care of elderly individuals through assistive robotic technologies. Firstly, the concepts of care and cure are distinguished, followed by a critical analysis of the function of robots in the context of care. The paper continues by analysing the aforesaid technologies against the notion of care, to highlight that machines have the potential to interact and simulate a relationship, but not to establish a real, meaningful one with the user. The user's deception and deprivation of a meaningful care relationship are discussed as a potential risk emerging from an incorrect use of technology in the treatment of fragile individuals, and the fundamental legal principle of human dignity is considered with respect to its potential application and impact on policies in this domain, as an objective criterion that also poses limits on the individual's freedom of self-determination.

    iRobot: conceptualising SERVBOT for humanoid social robots

    Services are intangible in nature and, as a result, it is often difficult to measure the quality of a service. A service is usually delivered by a human to a human customer, and the service literature shows SERVQUAL can be used to measure its quality. However, the use of social robots during the pandemic is speeding up the process of employing social robots in frontline service settings. An extensive review of the literature shows there is a lack of an empirical model to assess the perceived service quality provided by a social robot. Furthermore, the social robot literature highlights key differences between human service and social robots. For example, scholars have highlighted the importance of entertainment and engagement in the adoption of social robots in the service industry. However, it is unclear whether the SERVQUAL dimensions are appropriate to measure social robots' service quality. This master's project conceptualises the SERVBOT model to assess a social robot's service quality. It identifies reliability, responsiveness, assurance, empathy, and entertainment as the five dimensions of SERVBOT. Further, the research investigates how these five factors influence emotional and social engagement and intention to use the social robot in a concierge service setting. To conduct the research, a 2 x 1 (CONTROL vs SERVBOT) x (Concierge) between-subjects experiment was undertaken, and a total of 232 responses were collected for both stages. The results indicate that entertainment has a positive influence on emotional engagement when service is delivered by a human concierge. Further, assurance had a positive influence on social engagement when a human concierge provided the service. When a social robot concierge delivered the service, empathy and entertainment both influenced emotional engagement, and assurance and entertainment impacted social engagement favourably.
For both CONTROL (human concierge) and SERVBOT (social robot concierge), emotional and social engagement had a significant influence on intention to use. This study is the first to propose the SERVBOT model to measure social robots' service quality. The model provides a theoretical underpinning for the key service quality dimensions of a social robot and gives scholars and managers a method to track the service quality of a social robot. The study also extends the literature by exploring the key factors that influence the use of social robots (i.e., emotional and social engagement).
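The five SERVBOT dimensions named in the abstract can be represented as a simple scoring structure. This is a minimal, hypothetical sketch: the dimension names come from the abstract, but the 1-7 rating scale and the unweighted-mean aggregation are illustrative assumptions, not the authors' measurement model.

```python
# Hypothetical sketch of SERVBOT ratings; scale and aggregation are
# illustrative assumptions, not the study's actual scoring method.
from dataclasses import dataclass
from statistics import mean

# The five dimensions named in the abstract.
SERVBOT_DIMENSIONS = ("reliability", "responsiveness", "assurance",
                      "empathy", "entertainment")

@dataclass
class ServbotRatings:
    """Per-dimension ratings on an assumed 1-7 Likert scale."""
    reliability: float
    responsiveness: float
    assurance: float
    empathy: float
    entertainment: float

    def overall(self) -> float:
        # Unweighted mean across the five dimensions (illustrative only;
        # a real model would estimate dimension weights empirically).
        return mean(getattr(self, d) for d in SERVBOT_DIMENSIONS)

ratings = ServbotRatings(reliability=6, responsiveness=5, assurance=6,
                         empathy=4, entertainment=7)
print(round(ratings.overall(), 2))  # prints 5.6
```

In practice, such per-respondent scores would feed a statistical model relating the dimensions to emotional and social engagement, as the study describes.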
