215 research outputs found

    Anthropomorphism Index of Mobility for Artificial Hands

    The growing development of anthropomorphic artificial hands calls for quick metrics to assess their anthropomorphism. In this study, a human grasp experiment on the most important grasp types was undertaken to obtain an Anthropomorphism Index of Mobility (AIM) for artificial hands. The AIM evaluates the topology of the whole hand, its joints and degrees of freedom (DoFs), and the possibility of controlling these DoFs independently. It uses a set of weighting factors, obtained from the analysis of human grasping, that reflect the relevance of the different groups of DoFs of the hand. The computation of the index is straightforward, making it a useful tool for analyzing new artificial hands in the early stages of the design process and for grading the human-likeness of existing artificial hands. Thirteen artificial hands, both prosthetic and robotic, were evaluated and compared using the AIM, highlighting the reasons behind their differences. The AIM was also compared with other, more computationally cumbersome indexes in the literature, and it ranked the different artificial hands equally. As the index was primarily proposed for prosthetic hands, which are normally used as nondominant hands by unilateral amputees, the grasp types selected for the human grasp experiment were those most relevant for the human nondominant hand in reinforcing bimanual grasping during activities of daily living. However, the effect of using grasping information from the dominant hand was shown to be small, indicating that the index is also valid for evaluating an artificial hand used as the dominant hand, and therefore for bilateral amputees or robotic hands.
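
    The abstract does not reproduce the actual weighting factors or DoF grouping, so the sketch below is only an illustration of how an AIM-style index could be computed in practice: a weighted sum, over groups of DoFs, of the fraction of human DoFs that the artificial hand can control independently. All group names, DoF counts, and weights below are placeholder assumptions, not values from the study.

    # Minimal sketch of an AIM-style score (illustrative only); the real
    # weights and groups come from the paper's human grasp experiment.
    def aim_score(dof_groups, weights):
        """Weighted fraction of independently controllable, human-like DoFs.

        dof_groups: dict mapping a group name to (independent_dofs, human_dofs)
        weights:    dict mapping the same group names to relevance weights (sum to 1)
        """
        score = 0.0
        for group, (independent_dofs, human_dofs) in dof_groups.items():
            coverage = min(independent_dofs / human_dofs, 1.0)  # cap at full human mobility
            score += weights[group] * coverage
        return score

    # Hypothetical hand: full thumb mobility, partially coupled finger flexion.
    groups = {"thumb": (5, 5), "index": (2, 4), "other_fingers": (3, 12), "palm": (0, 2)}
    w = {"thumb": 0.4, "index": 0.25, "other_fingers": 0.25, "palm": 0.1}
    print(f"AIM-like score: {aim_score(groups, w):.2f}")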

    Would You Obey an Aggressive Robot: A Human-Robot Interaction Field Study

    © 2018 IEEE. Social robots have the potential to be of tremendous utility in healthcare, search and rescue, surveillance, transport, and military applications. In many of these applications, social robots need to advise and direct humans to follow important instructions. In this paper, we present the results of a human-robot interaction field experiment conducted with a PR2 robot to explore key factors involved in human obedience to social robots. The paper focuses on how participants' degree of obedience to a robot's instructions relates to the perceived aggression and authority of the robot's behavior. We implemented several social cues to convey both authority and aggressiveness in the robot's behavior. We also analyzed the impact of other factors, such as the perceived anthropomorphism, safety, intelligence, and responsibility of the robot's behavior, on participants' compliance with the robot's instructions. The results suggest that the degree of aggression participants perceived in the robot's behavior did not have a significant impact on their decision to follow its instructions. We provide possible explanations for our findings and identify new research questions that will help clarify the role of robot authority in human-robot interaction and guide the design of robots that are required to provide advice and instructions.
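
    The abstract does not specify the statistical analysis behind the non-significance finding, so the following is only a hedged sketch of one common way to test whether perceived aggression predicts compliance: a logistic regression of a binary compliance outcome on Likert-scale aggression ratings. The data, sample size, and library choice (statsmodels) are assumptions made purely for illustration.

    # Illustrative sketch only: hypothetical data, not the study's dataset or analysis.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    perceived_aggression = rng.integers(1, 6, size=40).astype(float)  # 1-5 ratings (hypothetical)
    complied = rng.integers(0, 2, size=40)                            # 1 = followed the instruction

    X = sm.add_constant(perceived_aggression)       # intercept + aggression rating
    model = sm.Logit(complied, X).fit(disp=False)   # logistic regression of compliance
    print(model.params, model.pvalues)              # a non-significant slope would mirror the finding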

    Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots

    In an era of diminishing privacy, the Internet of Things (IoT) has become a consensual and inadvertent tool that undermines privacy protection. The IoT, really systems of networks connected to each other by the Internet or other radio-type devices, creates consensual mass self-surveillance in such domains as fitness and the Fitbit, health care and heart monitors, smart houses and cars, and even smart cities. The multiple networks have also created a degree of interconnectivity that has opened up a fire hose of information for companies and governments alike, as well as making it virtually impossible to live off the grid in the modern era. This treasure trove of information allows for government tracking in unprecedented ways. This Article explores the influence of the IoT and the mass self-surveillance it produces on privacy, and the new shapes of privacy that are emerging as a result. This Article offers several forms of protection against the further dissipation of privacy.

    Responses to human-like artificial agents : effects of user and agent characteristics

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed at promoting interaction and engagement, ranging from its "communicative" abilities to the movements it produces. Still, whether an artificial agent that can behave like a human could boost the spontaneity and naturalness of interaction remains an open question. Even when interacting with conspecifics, humans rely partly on motion cues to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research, widely referred to as Human-Robot Interaction (HRI), has emerged to investigate individuals' reactions towards robots. HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e., self-report questionnaires and a posteriori interviews), while the more implicit social cognitive processes elicited during interaction with artificial agents have taken second place behind qualitative and anecdotal results. The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents. Thus, this thesis explored human sensitivity to anthropomorphic characteristics of a humanoid robot's (i.e., the iCub robot's) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.

    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives of adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes involved in the unmitigated design of artificial agents, this chapter argues that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should take away the importance of a proactive design approach to intelligent agents.