18,216 research outputs found

    Playing Pairs with Pepper

    As robots become increasingly prevalent in almost all areas of society, the factors affecting humans' trust in those robots become increasingly important. This paper investigates the factor of robot attributes, looking specifically at the relationship between anthropomorphism and the human development of trust. To achieve this, an interaction game, Matching the Pairs, was designed and implemented on two robots of varying levels of anthropomorphism, Pepper and Husky. Participants completed both pre- and post-test questionnaires that were compared and analyzed predominantly with quantitative methods, such as paired sample t-tests. Post-test analyses suggested a positive relationship between trust and anthropomorphism, with 80% of participants confirming that the robots' adoption of facial features assisted in establishing trust. The results also indicated a positive relationship between interaction and trust, with 90% of participants confirming this for both robots post-test. Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606).
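
    The pre/post comparison in this abstract relies on paired-sample t-tests. Below is a minimal, hypothetical sketch of that kind of test in Python; the trust scores are illustrative placeholders, not data from the study.

```python
# Paired-sample t-test on pre- vs post-interaction trust ratings.
# Placeholder scores on a 1-5 scale; one pair per participant.
import numpy as np
from scipy import stats

pre_trust = np.array([3.1, 2.8, 3.5, 4.0, 2.9, 3.3, 3.8, 3.0, 3.6, 3.2])
post_trust = np.array([3.9, 3.4, 4.1, 4.3, 3.5, 3.7, 4.2, 3.6, 4.0, 3.8])

# Each participant contributes one pre and one post score, so the
# observations are paired rather than independent.
t_stat, p_value = stats.ttest_rel(post_trust, pre_trust)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```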

    Machinelike or Humanlike? A Literature Review of Anthropomorphism in AI-Enabled Technology

    Due to the recent proliferation of AI-enabled technology (AIET), the concept of anthropomorphism, that is, human likeness in technology, has increasingly attracted researchers’ attention. Researchers have examined how anthropomorphism influences users’ perception, adoption, and continued use of AIET. However, researchers have yet to agree on how to conceptualize and operationalize anthropomorphism in AIET, which has resulted in inconsistent findings. A comprehensive understanding of the current state of research on anthropomorphism in AIET contexts is thus needed. To conduct an in-depth analysis of the literature on anthropomorphism, we reviewed 35 empirical studies, focusing on how AIET anthropomorphism is conceptualized and operationalized and on its antecedents and consequences. Based on our analysis, we discuss potential research gaps and offer directions for future research.

    The Usage and Evaluation of Anthropomorphic Form in Robot Design

    There are numerous examples illustrating the application of human shape in everyday products. Usage of anthropomorphic form has long been a basic design strategy, particularly in the design of intelligent service robots. As such, it is desirable to use anthropomorphic form not only in aesthetic design but also in interaction design. Proceeding from how anthropomorphism has affected human perception in various domains, we assumed that anthropomorphic form used in the appearance and interaction design of robots enriches the explanation of their function and creates familiarity with robots. In many cases we have found, misused anthropomorphic form leads to user disappointment or negative impressions of the robot. In order to use anthropomorphic form effectively, it is necessary to measure the similarity of an artifact to the human form (humanness) and then evaluate whether the usage of anthropomorphic form fits the artifact. The goal of this study is to propose a general evaluation framework of anthropomorphic form for robot design. We suggest three major steps for framing the evaluation: 'measuring anthropomorphic form in appearance', 'measuring anthropomorphic form in Human-Robot Interaction', and 'evaluating the accordance of the two former measurements'. This evaluation process will endow a robot with an amount of humanness in its appearance equivalent to the humanness of its interaction ability, and ultimately facilitate user satisfaction. Keywords: Anthropomorphic Form; Anthropomorphism; Human-Robot Interaction; Humanness; Robot Design.
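
    The framework's third step, checking the accordance between appearance humanness and interaction humanness, can be pictured with a small, hypothetical sketch; the 0-1 scale and the tolerance value below are assumptions, not figures from the paper.

```python
# Compare a normalized appearance-humanness score with an
# interaction-humanness score and flag any mismatch.
def accordance(appearance_humanness: float,
               interaction_humanness: float,
               tolerance: float = 0.2) -> str:
    gap = appearance_humanness - interaction_humanness
    if abs(gap) <= tolerance:
        return "in accordance"
    return ("appearance over-promises interaction ability"
            if gap > 0
            else "interaction ability exceeds appearance")

# Example: a humanlike-looking robot with machinelike interaction.
print(accordance(0.8, 0.4))
```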

    Exploring the relationship between anthropomorphism and theory-of-mind in brain and behaviour

    The process of understanding the minds of other people, such as their emotions and intentions, is mimicked when individuals try to understand an artificial mind. The assumption is that anthropomorphism, attributing human-like characteristics to non-human agents and objects, is an analogue to theory-of-mind, the ability to infer mental states of other people. Here, we test to what extent these two constructs formally overlap. Specifically, using a multi-method approach, we test if and how anthropomorphism is related to theory-of-mind using brain (Experiment 1) and behavioural (Experiment 2) measures. In a first exploratory experiment, we examine the relationship between dispositional anthropomorphism and activity within the theory-of-mind brain network (n = 108). Results from a Bayesian regression analysis showed no consistent relationship between dispositional anthropomorphism and activity in regions of the theory-of-mind network. In a follow-up, pre-registered experiment, we explored the relationship between theory-of-mind and situational and dispositional anthropomorphism in more depth. Participants (n = 311) watched a short movie while simultaneously completing situational anthropomorphism and theory-of-mind ratings, as well as measures of dispositional anthropomorphism and general theory-of-mind. Only situational anthropomorphism predicted the ability to understand and predict the behaviour of the film's characters. No relationship between situational or dispositional anthropomorphism and general theory-of-mind was observed. Together, these results suggest that while the constructs of anthropomorphism and theory-of-mind might overlap in certain situations, they remain separate and possibly unrelated at the personality level. These findings point to a possible dissociation between brain and behavioural measures when considering the relationship between theory-of-mind and anthropomorphism.
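
    Experiment 1 asks whether a dispositional anthropomorphism score predicts activity in theory-of-mind regions via Bayesian regression. The sketch below is an assumed illustration of that kind of model on simulated placeholder data, not the authors' analysis code or their data.

```python
# Bayesian linear regression: does dispositional anthropomorphism
# predict mean theory-of-mind (ToM) network activity?
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(42)
n = 108                                 # sample size reported for Experiment 1
anthro = rng.normal(0.0, 1.0, n)        # standardized anthropomorphism scores (simulated)
tom_activity = rng.normal(0.0, 1.0, n)  # mean ToM-network activation, arbitrary units (simulated)

with pm.Model():
    alpha = pm.Normal("alpha", mu=0.0, sigma=1.0)
    beta = pm.Normal("beta", mu=0.0, sigma=1.0)   # effect of interest
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = alpha + beta * anthro
    pm.Normal("y", mu=mu, sigma=sigma, observed=tom_activity)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# A beta posterior concentrated around zero would indicate no consistent
# relationship, as the abstract reports.
print(az.summary(idata, var_names=["beta"]))
```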

    Challenges for an Ontology of Artificial Intelligence

    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as “normal,” and (3) the tendency of human beings to anthropomorphize. This list is not intended to be exhaustive, nor is it seen to entirely preclude a clear ontology; however, these challenges are a necessary set of topics for consideration. Each of these factors presents a 'moving target' for discussion, making it challenging for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis for scholars in philosophy and science.

    “I can haz emoshuns?”: understanding anthropomorphosis of cats among internet users

    The attribution of human-like traits to non-human animals, termed anthropomorphism, can lead to misunderstandings of animal behaviour, which can result in risks to both human and animal wellbeing and welfare. In this paper, reporting an inter-disciplinary collaboration between social computing and animal behaviour researchers, we investigated whether a simple image-tagging application could improve the understanding of how people ascribe intentions and emotions to the behaviour of their domestic cats. A web-based application, Tagpuss, was developed to present casual users with photographs drawn from a database of 1631 images of domestic cats and to ask them to ascribe an emotion to the cat portrayed in the image. Over five thousand people actively participated in the study in the space of four weeks, generating over 50,000 tags. Results indicate that Tagpuss can be used to identify cat behaviours that lay-people find difficult to distinguish. This highlights avenues for further expert scientific exploration focused on educating cat owners to identify possible problems with their cat’s welfare.
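
    One way such crowd tags can reveal behaviours that lay-people find difficult to distinguish is low agreement on an image's modal emotion tag. The sketch below is a hypothetical illustration of that aggregation step; the tag data and the 0.5 agreement threshold are invented for the example and are not from Tagpuss.

```python
# Aggregate (image, emotion) tags and flag low-agreement images.
from collections import Counter, defaultdict

# Placeholder (image_id, emotion_tag) pairs standing in for the ~50,000 tags.
tags = [
    ("cat_001", "content"), ("cat_001", "content"), ("cat_001", "anxious"),
    ("cat_002", "angry"), ("cat_002", "playful"), ("cat_002", "anxious"),
]

by_image = defaultdict(list)
for image_id, emotion in tags:
    by_image[image_id].append(emotion)

for image_id, emotions in by_image.items():
    counts = Counter(emotions)
    top_tag, top_count = counts.most_common(1)[0]
    agreement = top_count / len(emotions)
    flag = "ambiguous" if agreement < 0.5 else "clear"
    print(f"{image_id}: modal tag '{top_tag}', agreement {agreement:.2f} ({flag})")
```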

    Robot Betrayal: a guide to the ethics of robotic deception

    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.

    Anthropomorphism Index of Mobility for Artificial Hands

    The increasing development of anthropomorphic artificial hands creates a need for quick metrics that analyze their anthropomorphism. In this study, a human grasp experiment on the most important grasp types was undertaken in order to obtain an Anthropomorphism Index of Mobility (AIM) for artificial hands. The AIM evaluates the topology of the whole hand, joints and degrees of freedom (DoFs), and the possibility of controlling these DoFs independently. It uses a set of weighting factors, obtained from analysis of human grasping, that depend on the relevance of the different groups of DoFs of the hand. The computation of the index is straightforward, making it a useful tool for analyzing new artificial hands in early stages of the design process and for grading the human-likeness of existing artificial hands. Thirteen artificial hands, both prosthetic and robotic, were evaluated and compared using the AIM, highlighting the reasons behind their differences. The AIM was also compared with other, more computationally cumbersome indexes in the literature, and it ranked the different artificial hands equivalently. As the index was primarily proposed for prosthetic hands, normally used as nondominant hands by unilateral amputees, the grasp types selected for the human grasp experiment were those most relevant for the human nondominant hand in reinforcing bimanual grasping in activities of daily living. However, it was shown that the effect of using the grasping information from the dominant hand is small, indicating that the index is also valid for evaluating the artificial hand as the dominant hand, and therefore also valid for bilateral amputees or robotic hands.
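
    The abstract describes the AIM as a weighted evaluation of groups of DoFs, with weights derived from human grasping data. The sketch below shows one plausible weighted-sum structure; the DoF groups, human reference counts, and weights are illustrative assumptions, not the published AIM values.

```python
# Hypothetical weighted index of hand mobility: each DoF group contributes
# in proportion to how closely its independently controllable DoFs match
# a human-hand reference, scaled by an importance weight.
HUMAN_DOFS = {"thumb": 5, "index": 4, "middle": 4, "ring_little": 8}
WEIGHTS = {"thumb": 0.40, "index": 0.25, "middle": 0.15, "ring_little": 0.20}

def anthropomorphism_index(independent_dofs: dict) -> float:
    """Score in [0, 1]; 1.0 means human-like mobility in every DoF group."""
    score = 0.0
    for group, weight in WEIGHTS.items():
        ratio = min(independent_dofs.get(group, 0) / HUMAN_DOFS[group], 1.0)
        score += weight * ratio
    return score

# Example: a simple underactuated prosthetic hand.
print(anthropomorphism_index({"thumb": 2, "index": 1, "middle": 1, "ring_little": 2}))
```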