778 research outputs found

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. It is therefore increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze empirical studies that directly measure anthropomorphism as well as those that refer to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, perceptions of the technology, how users interact with the technology, and users' performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and their self- and social identity, similarly to interactions between humans. I then critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature.
Subsequently, I introduce a two-factor model of anthropomorphism which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the user's social identity. Anthropomorphism is therefore a powerful tool that can be leveraged to support future interactions with smart technologies.

    Metaphors Matter: Top-Down Effects on Anthropomorphism

    Anthropomorphism, or the attribution of human mental states and characteristics to non-human entities, has been widely demonstrated to be cued automatically by certain bottom-up appearance and behavioral features in machines. In this thesis, I argue that the potential for top-down effects to influence anthropomorphism has so far been underexplored. I motivate and then report the results of a new empirical study suggesting that top-down linguistic cues, including anthropomorphic metaphors, personal pronouns, and other grammatical constructions, increase anthropomorphism of a robot. As robots and other machines become more integrated into human society and our daily lives, a more thorough understanding of the process of anthropomorphism becomes more critical: the cues that cause it, the human behaviors elicited, the underlying mechanisms in human cognition, and the implications of our influenced thought, talk, and treatment of robots for our social and ethical frameworks. In these regards, as I argue in this thesis and as the results of the new empirical study suggest, the top-down effects matter.

    WALL-E: A Robot That Reminds Us About Being Human

    Ecological themes have in recent years become part of cinematic production, especially in animated film, a genre that serves as a platform for creating alternative realities reflecting the reality we live in. Disney and Pixar's 2008 animated movie WALL-E is a perfect example of such an approach to representing the Anthropocene. The movie is set in a post-apocalyptic future that is a consequence of the human obsession with consumption and technology. Despite having survived the apocalypse and continuing its way of life in a "consumer paradise" aboard a spaceship, the human race has deteriorated mentally and physically to the point where humans exist almost as machines. In contrast, a non-human machine, the robot WALL-E, is given agency and human-like qualities in order to awaken humans from their state of dependence on consumption and lead them back to a life in harmony with nature. By juxtaposing a non-human agent and non-reactive humans, the movie opens a critical perspective on the human treatment of the environment under the capitalist system and points to the most pressing issues of the Anthropocene, while at the same time offering a solution in renouncing anthropocentric ignorance and resisting the hegemonic consumer systems in place.

    BIASeD: Bringing Irrationality into Automated System Design

    Human perception, memory and decision-making are impacted by tens of cognitive biases and heuristics that influence our actions and decisions. Despite the pervasiveness of such biases, they are generally not leveraged by today's Artificial Intelligence (AI) systems that model human behavior and interact with humans. In this theoretical paper, we claim that the future of human-machine collaboration will entail the development of AI systems that model, understand and possibly replicate human cognitive biases. We propose the need for a research agenda on the interplay between human cognitive biases and Artificial Intelligence. We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest and outline research directions for the design of AI systems that have a better understanding of our own biases. Comment: 14 pages, 1 figure; accepted for presentation at the AAAI Fall Symposium 2022 on Thinking Fast and Slow and Other Cognitive Theories in AI.

    What's In A Name?: Preschoolers Treat A Bug As A Moral Agent When It Has A Proper Name

    Children encounter anthropomorphized objects daily: in advertisements, media, and books. Past research suggests that features like eyes or intentional, goal-directed behavior increase how human-like non-human agents are perceived to be. When adults and children anthropomorphize, they become more socially connected to and empathetic towards those entities. In advertising, this anthropomorphic effect is used to get people to connect with the product. This thesis explores what effect anthropomorphizing might have on preschoolers' moral reasoning about those entities, and suggests that it increases the likelihood that children will explain non-human agents' harmful actions in a moral sense. Specifically, the present study examines the anthropomorphic effect of a proper name on moral reasoning in preschoolers. Four- and five-year-olds who heard a story about a caterpillar named "Pete" who was killing plants in their garden were more likely than children who heard about a "caterpillar" to think it was appropriate to squish it. We argue that because children believed Pete could experience the world (e.g., emotions) and had agency (e.g., intentional action) more so than an unnamed caterpillar, Pete could also be held morally accountable for its harmful actions. A proper name thus has an interesting effect on preschoolers' moral reasoning about non-human agents.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
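    The appraisal step of a FLAME-style fuzzy emotion model can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the membership shapes, the rule pairings, and the `desirability`/`expectation` inputs are all assumptions made here for clarity.

    ```python
    # Minimal sketch of a FLAME-style fuzzy emotion appraisal.
    # Membership shapes, rule pairings, and input ranges are
    # illustrative assumptions, not the paper's implementation.

    def up(x, a, b):
        """Rising shoulder membership: 0 at or below a, 1 at or above b."""
        return min(1.0, max(0.0, (x - a) / (b - a)))

    def appraise(desirability, expectation):
        """Map an in-game event's desirability [-1, 1] and the player's
        prior expectation of it [0, 1] to fuzzy emotion intensities."""
        desirable = up(desirability, 0.0, 1.0)
        undesirable = 1.0 - up(desirability, -1.0, 0.0)
        unexpected = 1.0 - expectation
        # Mamdani-style min rules (an assumed pairing): joy follows a
        # desirable, unexpected event; distress follows an undesirable
        # event the player saw coming.
        return {
            "joy": min(desirable, unexpected),
            "distress": min(undesirable, expectation),
        }
    ```

    For example, an unexpected positive event, `appraise(0.8, 0.2)`, yields joy intensity 0.8 and no distress. A game could feed such intensities back into difficulty tuning, which is the kind of tailored environment the abstract describes.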

    A society of mind approach to cognition and metacognition in a cognitive architecture

    This thesis investigates the concept of mind as a control system using the "Society of Agents" metaphor. "Society of Agents" describes the collective behaviours of simple and intelligent agents. "Society of Mind" is more than a collection of task-oriented and deliberative agents; it is a powerful concept for mind research that can benefit from the use of metacognition. The aim is to develop a self-configurable computational model using the concept of metacognition. A six-tiered SMCA (Society of Mind Cognitive Architecture) control model is designed that relies on a society of agents operating using metrics associated with the principles of artificial economics in animal cognition. This research investigates the concept of metacognition as a powerful catalyst for control, unification and self-reflection. Metacognition is applied to BDI models with respect to planning, reasoning, decision making, self-reflection, problem solving, learning and the general process of cognition to improve performance. One perspective on how to develop metacognition in an SMCA model is based on the differentiation between metacognitive strategies and metacomponents, or metacognitive aids. Metacognitive strategies denote activities such as metacomprehension (remedial action), metamanagement (self-management) and schema training (meaningful learning over cognitive structures). Metacomponents are aids for the representation of thoughts. Developing an efficient, intelligent and optimal agent through the use of metacognition requires the design of a multi-layered control model that spans simple to complex levels of agent action and behaviour. The SMCA model has been designed and implemented with six layers: reflexive, reactive, deliberative (BDI), learning (Q-learner), metacontrol and metacognition.
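    The layered-control idea can be sketched as a dispatch in which lower tiers pre-empt higher ones, with a tabular Q-learner standing in for the learning tier. The layer contents, percepts, and learning parameters below are assumptions for illustration, not the thesis's SMCA implementation; only two of the six tiers are shown.

    ```python
    # Illustrative sketch of layered control in the spirit of SMCA:
    # a reflexive layer pre-empts a learning (Q-learner) layer.
    # Layer contents and parameters are assumptions for illustration.
    import random
    from collections import defaultdict

    class QLearner:
        """Tabular Q-learning, standing in for SMCA's learning tier."""
        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)            # (state, action) -> value
            self.actions = list(actions)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def act(self, state):
            if random.random() < self.epsilon:     # explore occasionally
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, s, a, reward, s2):
            # Standard Q-learning temporal-difference update.
            best_next = max(self.q[(s2, a2)] for a2 in self.actions)
            target = reward + self.gamma * best_next
            self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def smca_step(percept, state, learner):
        """Lower layers pre-empt higher ones; a hard-wired reflexive
        response fires before the learned policy is consulted."""
        if percept == "obstacle":                  # reflexive layer
            return "retreat"
        return learner.act(state)                  # learning layer
    ```

    In this sketch the metacontrol and metacognition tiers would sit above `smca_step`, monitoring which layer answered and adjusting parameters such as `epsilon`, which mirrors the self-reflective role the abstract assigns them.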

    ๋กœ๋ด‡์˜ ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ์‚ฌํšŒ์  ํŠน์„ฑ๊ณผ ์ธ๊ฐ„ ์œ ์‚ฌ์„ฑ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ์‚ฌํšŒ๊ณผํ•™๋Œ€ํ•™ ์‹ฌ๋ฆฌํ•™๊ณผ, 2021. 2. Sowon Hahn.The present study investigated the role of robotsโ€™ body language on perceptions of social qualities and human-likeness in robots. In experiment 1, videos of a robotโ€™s body language varying in expansiveness were used to evaluate the two aspects. In experiment 2, videos of social interactions containing the body languages in experiment 1 were used to further examine the effects of robotsโ€™ body language on these aspects. Results suggest that a robot conveying open body language are evaluated higher on perceptions of social characteristics and human-likeness compared to a robot with closed body language. These effects were not found in videos of social interactions (experiment 2), which suggests that other features play significant roles in evaluations of a robot. Nonetheless, current research provides evidence of the importance of robotsโ€™ body language in judgments of social characteristics and human-likeness. While measures of social qualities and human-likeness favor robots that convey open body language, post-experiment interviews revealed that participants expect robots to alleviate feelings of loneliness and empathize with them, which require more diverse body language in addition to open body language. Thus, robotic designers are encouraged to develop robots capable of expressing a wider range of motion. By enabling complex movements, more natural communications between humans and robots are possible, which allows humans to consider robots as social partners.๋ณธ ์—ฐ๊ตฌ๋Š” ๋กœ๋ด‡์˜ ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ์‚ฌํšŒ์  ํŠน์„ฑ๊ณผ ์ธ๊ฐ„๊ณผ์˜ ์œ ์‚ฌ์„ฑ์— ๋Œ€ํ•œ ์ธ๊ฐ„์˜ ์ธ์‹์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์„ ํƒ์ƒ‰ํ•˜์˜€๋‹ค. ์‹คํ—˜ 1์—์„œ๋Š” ๋กœ๋ด‡์˜ ๊ฐœ๋ฐฉ์  ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ๋ฌ˜์‚ฌ๋œ ์˜์ƒ๊ณผ ํ์‡„์  ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ๋ฌ˜์‚ฌ๋œ ์˜์ƒ์„ ํ†ตํ•ด ์ด๋Ÿฌํ•œ ์„ธ ๊ฐ€์ง€ ์ธก๋ฉด์„ ์‚ดํŽด๋ณด์•˜๋‹ค. 
์‹คํ—˜ 2์—์„œ๋Š” ์‹คํ—˜ 1์˜ ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ํฌํ•จ๋œ ๋กœ๋ด‡๊ณผ ์‚ฌ๋žŒ ๊ฐ„์˜ ์ƒํ˜ธ์ž‘์šฉ ์˜์ƒ์„ ํ™œ์šฉํ•˜์—ฌ ๋กœ๋ด‡์˜ ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ์œ„ ๋‘ ๊ฐ€์ง€ ์ธก๋ฉด์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์„ ํƒ์ƒ‰ํ•˜์˜€๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ, ์‚ฌ๋žŒ๋“ค์€ ํ์‡„์  ์‹ ์ฒด ์–ธ์–ด๋ฅผ ํ‘œํ˜„ํ•˜๋Š” ๋กœ๋ด‡์— ๋น„ํ•ด ๊ฐœ๋ฐฉ์  ์‹ ์ฒด ์–ธ์–ด๋ฅผ ํ‘œํ˜„ํ•˜๋Š” ๋กœ๋ด‡์„ ์‚ฌํšŒ์  ํŠน์„ฑ๊ณผ ์ธ๊ฐ„๊ณผ์˜ ์œ ์‚ฌ์„ฑ์— ๋Œ€ํ•œ ์ธ์‹ ๋ฉด์—์„œ ๋” ๋†’๊ฒŒ ํ‰๊ฐ€ํ•œ๋‹ค๋Š” ๊ฒƒ์„ ํ™•์ธํ•˜์˜€๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ๋žŒ๊ณผ์˜ ์ƒํ˜ธ์ž‘์šฉ์„ ๋‹ด์€ ์˜์ƒ์„ ํ†ตํ•ด์„œ๋Š” ์ด๋Ÿฌํ•œ ํšจ๊ณผ๊ฐ€ ๋ฐœ๊ฒฌ๋˜์ง€ ์•Š์•˜์œผ๋ฉฐ, ์ด๋Š” ์‹คํ—˜ 2์— ํฌํ•จ๋œ ์Œ์„ฑ ๋“ฑ์˜ ๋‹ค๋ฅธ ํŠน์ง•์ด ๋กœ๋ด‡์— ๋Œ€ํ•œ ํ‰๊ฐ€์— ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•œ๋‹ค๋Š” ๊ฒƒ์„ ์‹œ์‚ฌํ•œ๋‹ค. ๊ทธ๋Ÿผ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ๋ณธ ์—ฐ๊ตฌ๋Š” ๋กœ๋ด‡์˜ ์‹ ์ฒด ์–ธ์–ด๊ฐ€ ์‚ฌํšŒ์  ํŠน์„ฑ ๋ฐ ์ธ๊ฐ„๊ณผ์˜ ์œ ์‚ฌ์„ฑ์— ๋Œ€ํ•œ ์ธ์‹์˜ ์ค‘์š”ํ•œ ์š”์ธ์ด ๋œ๋‹ค๋Š” ๊ทผ๊ฑฐ๋ฅผ ์ œ๊ณตํ•œ๋‹ค. ์‚ฌํšŒ์  ํŠน์„ฑ๊ณผ ์ธ๊ฐ„๊ณผ์˜ ์œ ์‚ฌ์„ฑ์˜ ์ฒ™๋„์—์„œ๋Š” ๊ฐœ๋ฐฉ์  ์‹ ์ฒด ์–ธ์–ด๋ฅผ ํ‘œํ˜„ํ•˜๋Š” ๋กœ๋ด‡์ด ๋” ๋†’๊ฒŒ ํ‰๊ฐ€๋˜์—ˆ์ง€๋งŒ, ์‹คํ—˜ ํ›„ ์ธํ„ฐ๋ทฐ์—์„œ๋Š” ๋กœ๋ด‡์ด ์™ธ๋กœ์šด ๊ฐ์ •์„ ์™„ํ™”ํ•˜๊ณ  ๊ณต๊ฐํ•˜๊ธฐ๋ฅผ ๊ธฐ๋Œ€ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ๋‚˜ํƒ€๋‚˜ ์ด ์ƒํ™ฉ๋“ค์— ์ ์ ˆํ•œ ํ์‡„์  ์‹ ์ฒด ์–ธ์–ด ๋˜ํ•œ ๋ฐฐ์ œํ•  ์ˆ˜ ์—†๋‹ค๊ณ  ํ•ด์„ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด์— ๋”ฐ๋ผ ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ๋กœ๋ด‡ ๋””์ž์ด๋„ˆ๋“ค์ด ๋”์šฑ ๋‹ค์–‘ํ•œ ๋ฒ”์œ„์˜ ์›€์ง์ž„์„ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋Š” ๋กœ๋ด‡์„ ๊ฐœ๋ฐœํ•˜๋„๋ก ์žฅ๋ คํ•œ๋‹ค. ๊ทธ๋ ‡๋‹ค๋ฉด ์„ฌ์„ธํ•œ ์›€์ง์ž„์— ๋”ฐ๋ฅธ ์ž์—ฐ์Šค๋Ÿฌ์šด ์˜์‚ฌ์†Œํ†ต์„ ํ†ตํ•ด ์ธ๊ฐ„์ด ๋กœ๋ด‡์„ ์‚ฌํšŒ์  ๋™๋ฐ˜์ž๋กœ ์ธ์‹ํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ด๋‹ค.Chapter 1. Introduction 1 1. Motivation 1 2. Theoretical Background and Previous Research 3 3. Purpose of Study 12 Chapter 2. Experiment 1 13 1. Objective and Hypotheses 13 2. Methods 13 3. Results 21 4. Discussion 31 Chapter 3. Experiment 2 34 1. Objective and Hypotheses 34 2. Methods 35 3. Results 38 4. Discussion 50 Chapter 4. Conclusion 52 Chapter 5. 
General Discussion 54 References 60 Appendix 70 ๊ตญ๋ฌธ์ดˆ๋ก 77Maste