782 research outputs found

    Human- or object-like? Cognitive anthropomorphism of humanoid robots

    Across three experiments (N = 302), we explored whether people cognitively process humanoid robots as human- or object-like. In doing so, we relied on the inversion paradigm, an experimental procedure extensively used in cognitive research to investigate the processing of social (vs. non-social) stimuli. Overall, mixed-model analyses revealed that full bodies of humanoid robots were subject to the inversion effect (body-inversion effect) and thus followed a configural processing similar to that activated for human beings. This pattern of findings emerged regardless of the similarity of the considered humanoid robots to human beings: it occurred for bodies of humanoid robots with medium (Experiment 1) as well as high and low (Experiment 2) levels of human likeness. In contrast, Experiment 3 revealed that only faces of humanoid robots with high (vs. low) levels of human likeness were subject to the inversion effect and thus cognitively anthropomorphized. Theoretical and practical implications of these findings for robotic and psychological research are discussed.
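    The "mixed-model analyses" named above can be illustrated with a brief, hypothetical sketch (not the authors' code): a linear mixed model predicting recognition accuracy from stimulus orientation and stimulus type, with random intercepts per participant. The data file, column names, and model structure are assumptions made for the example.

        # Hypothetical sketch of a body-inversion-effect analysis (assumed data layout).
        import pandas as pd
        import statsmodels.formula.api as smf

        trials = pd.read_csv("inversion_trials.csv")  # hypothetical file: one row per trial
        # Accuracy modelled from orientation (upright vs. inverted) x stimulus type
        # (human vs. robot body), with a random intercept for each participant.
        model = smf.mixedlm("accuracy ~ orientation * stimulus_type",
                            data=trials, groups=trials["participant"])
        print(model.fit().summary())
        # A drop in accuracy for inverted robot bodies (an inversion effect)
        # would point to configural, human-like processing.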

    Playing Pairs with Pepper

    As robots become increasingly prevalent in almost all areas of society, the factors affecting humans' trust in those robots become increasingly important. This paper investigates the factor of robot attributes, looking specifically at the relationship between anthropomorphism and human development of trust. To achieve this, an interaction game, Matching the Pairs, was designed and implemented on two robots of varying levels of anthropomorphism, Pepper and Husky. Participants completed both pre- and post-test questionnaires that were compared and analyzed predominantly with quantitative methods, such as paired-sample t-tests. Post-test analyses suggested a positive relationship between trust and anthropomorphism, with 80% of participants confirming that the robots' adoption of facial features assisted in establishing trust. The results also indicated a positive relationship between interaction and trust, with 90% of participants confirming this for both robots post-test. Comment: Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606).
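    As a minimal illustration of the paired-sample t-tests described above (with a hypothetical data file and column names, not taken from the paper):

        # Sketch of a paired-sample t-test on pre- vs. post-interaction trust ratings.
        import pandas as pd
        from scipy import stats

        ratings = pd.read_csv("trust_questionnaires.csv")  # hypothetical file: one row per participant
        t_stat, p_value = stats.ttest_rel(ratings["trust_pre"], ratings["trust_post"])
        print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")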

    Responses to human-like artificial agents: effects of user and agent characteristics

    The Ethical Significance of Human Likeness in Robotics and AI

    A defining goal of research in AI and robotics is to build technical artefacts as substitutes, assistants or enhancements of human action and decision-making. But both in reflection on these technologies and in interaction with the respective technical artefacts, we sometimes encounter certain kinds of human likenesses. To clarify their significance, three aspects are highlighted. First, I will broadly investigate some relations between humans and artificial agents by recalling certain points from the debates on Strong AI, on Turing's Test, on the concept of autonomy and on anthropomorphism in human-machine interaction. Second, I will argue for the claim that there are no serious ethical issues involved in the theoretical aspects of technological human likeness. Third, I will suggest that although human likeness may not be ethically significant on the philosophical and conceptual levels, strategies to use anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are specifically designed to be treated in ways we usually treat humans.

    Human-Robot Interaction: Mapping Literature Review and Network Analysis

    Organizations increasingly adopt social robots as additions to real-life workforces, which requires knowledge of how humans react to and work with robots. The longstanding research on Human-Robot Interaction (HRI) offers relevant insights, but the existing literature reviews are limited in their ability to guide theory development and practitioners in sustainably employing social robots because the reviews lack a systematic synthesis of HRI concepts, relationships, and ensuing effects. This study offers a mapping review of the past ten years of HRI research. With the analysis of 68 peer-reviewed journal articles, we identify shifting foci, for example, towards more application-specific empirical investigations, and the most prominent concepts and relationships investigated in connection with social robots, for example, robot appearance. The results offer Information Systems scholars and practitioners an initial knowledge base and nuanced insights into key predictors and outcome variables that can hinder and foster social robot adoption in the workplace.

    Robotic Psychology. What Do We Know about Human-Robot Interaction and What Do We Still Need to Learn?

    "Robotization", the integration of robots into human life, will change human life drastically. In many situations, such as in the service sector, robots will become an integral part of our lives. Thus, it is vital to learn from extant research on human-robot interaction (HRI). This article introduces robotic psychology, which aims to bridge the gap between humans and robots by providing insights into the particularities of HRI. It presents a conceptualization of robotic psychology and provides an overview of research on service-focused human-robot interaction. Theoretical concepts relevant to understanding HRI are reviewed. Major achievements, shortcomings, and propositions for future research are discussed.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore what the antecedents and consequences of this phenomenon, known as anthropomorphism, are, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, perceptions of the technology, how users interact with the technology, and users' performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and their self- and social identity, similarly to interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational or bottom-up and automatic), and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the users' social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed to promote interaction and engagement, ranging from its "communicative" abilities to the movements it produces. Still, whether an artificial agent that can behave like a human could boost the spontaneity and naturalness of interaction remains an open question. Even during interaction with conspecifics, humans rely partially on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research has emerged in the context of investigating individuals' reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists and philosophers as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e. self-report questionnaires, a-posteriori interviews), while the more implicit social cognitive processes elicited during interaction with artificial agents took second place behind more qualitative and anecdotal results. The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents. Thus, this thesis explored human sensitivity to anthropomorphic characteristics of a humanoid robot's (i.e. the iCub robot's) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.

    ๋กœ๋ด‡์˜ ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ์‚ฌํšŒ์  ํŠน์„ฑ๊ณผ ์ธ๊ฐ„ ์œ ์‚ฌ์„ฑ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ

    Thesis (Master's) -- Seoul National University Graduate School: Department of Psychology, College of Social Sciences, 2021. 2. Sowon Hahn. The present study investigated the role of robots' body language in perceptions of social qualities and human-likeness in robots. In Experiment 1, videos of a robot's body language varying in expansiveness were used to evaluate the two aspects. In Experiment 2, videos of social interactions containing the body languages of Experiment 1 were used to further examine the effects of robots' body language on these aspects. Results suggest that a robot conveying open body language is evaluated higher on perceptions of social characteristics and human-likeness than a robot with closed body language. These effects were not found in the videos of social interactions (Experiment 2), which suggests that other features play significant roles in evaluations of a robot. Nonetheless, the current research provides evidence of the importance of robots' body language in judgments of social characteristics and human-likeness. While measures of social qualities and human-likeness favor robots that convey open body language, post-experiment interviews revealed that participants expect robots to alleviate feelings of loneliness and to empathize with them, which requires more diverse body language in addition to open body language. Thus, robot designers are encouraged to develop robots capable of expressing a wider range of motion. By enabling complex movements, more natural communication between humans and robots becomes possible, which allows humans to consider robots as social partners.
    Contents: Chapter 1. Introduction (1. Motivation; 2. Theoretical Background and Previous Research; 3. Purpose of Study); Chapter 2. Experiment 1 (1. Objective and Hypotheses; 2. Methods; 3. Results; 4. Discussion); Chapter 3. Experiment 2 (1. Objective and Hypotheses; 2. Methods; 3. Results; 4. Discussion); Chapter 4. Conclusion; Chapter 5. General Discussion; References; Appendix; Abstract in Korean.