Humanization of robots: is it really such a good idea?
The aim of this review was to examine the pros and cons of humanizing social robots from a psychological perspective. As such, we had six goals. First, we defined what social robots are. Second, we clarified the meaning of humanizing social robots. Third, we presented the theoretical backgrounds for promoting humanization. Fourth, we conducted a review of empirical results of the positive effects and the negative effects of humanization on human–robot interaction (HRI). Fifth, we presented some of the political and ethical problems raised by the humanization of social
robots. Lastly, we discussed the overall effects of the humanization of robots in HRI and suggested new avenues of research and development.
Effects of Victim Gendering and Humanness on People's Responses to the Physical Abuse of Humanlike Agents
With the deployment of robots in public realms, researchers are seeing more cases of abusive disinhibition towards robots. Because robots embody gendered identities, poor navigation of antisocial dynamics may reinforce or exacerbate gender-based marginalization. Consequently, it is essential for robots to recognize and effectively head off abuse.
Given extensions of gendered biases to robotic agents, as well as associations between an agent's human likeness and the experiential capacity attributed to it, we quasi-manipulated the victim's humanness (human vs. robot) and gendering (via the inclusion of stereotypically masculine vs. feminine cues in their presentation) across four video-recorded reproductions of the interaction.
Analysis from 422 participants, each of whom watched one of the four videos, indicates that the intensity of emotional distress felt by an observer is associated with their gender identification and support for social stratification, along with the victim's gendering, further underscoring the criticality of robots' social intelligence.
Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, then how people treat AI appears to carry over into how they treat other people due to activating schemas that are congruent to those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface
The field of human computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents to aid in everyday life. This is coupled with a move to people using ever more realistic avatars to represent themselves in their digital lives. As the ability to produce emotionally engaging digital human representations is only just now becoming technically possible, there is little research into how to approach such tasks. This is due to both technical complexity and operational implementation cost. This is now changing as we are at a nexus point, with new approaches, faster graphics processing, and enabling technologies in machine learning and computer vision becoming available. I articulate the issues that must be addressed for such digital humans to be considered successfully located on the other side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceptual and contextual aspects affects sense-making about digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. I directly explore what is required to build a visually realistic digital human as a primary research question, and I explore whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approaches and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explored a framework for examining the ethical implications and signposted future research areas.
At the fringes of normality – a neurocognitive model of the uncanny valley on the detection and negative evaluation of deviations
Information violating preconceived patterns tends to be disliked. The term "uncanny valley" is
used to describe such negative reactions towards near-humanlike artificial agents as a
nonlinear function of human likeness and likability. My work proposes and investigates a
new neurocognitive theory of the uncanny valley and uncanniness effects within various
categories. According to this refined theory of the uncanny valley, the degree of perceptual
specialization increases the sensitivity to anomalies or deviations in a stimulus, which leads
to a greater relative negative evaluation. As perceptual specialization is observed for many
human-related stimuli (e.g., faces, voices, bodies, biological motion), attempts to replicate
artificial human entities may lead to design errors which would be especially apparent due to
a higher level of specialization, leading to the uncanny valley. The refined theory is
established and investigated throughout 10 chapters. In Chapters 2 to 4, the correlative
(Chapters 2 and 3) and causal (Chapter 4) association between perceptual specialization,
sensitivity to deviations, and uncanniness are observed. In Chapters 5 to 6, the refined theory
is applied to inanimate object categories to validate its relevance in stimulus categories
beyond those associated with the uncanny valley, specifically written text (Chapter 5) and
physical places (Chapter 6). Chapters 7 to 10 critically investigate multiple explanations of
the uncanny valley, including the refined theory. Chapter 11 applies the refined theory to
ecologically valid stimuli of the uncanny valley, namely an android's dynamic emotional
expressions. Finally, Chapter 12 summarizes and discusses the findings and evaluates the
refined theory of the uncanny based on its advantages and disadvantages. With this work, I
hope to present substantial arguments for an alternative, refined theory of the uncanny that
can more accurately explain a wider range of observations than the uncanny valley can.
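The abstract above describes likability as a nonlinear function of human likeness: likability rises with likeness but dips sharply for near-humanlike agents. A minimal sketch of such a curve is below; the functional form (a linear trend minus a Gaussian dip) and all parameter values are illustrative assumptions of this sketch, not taken from the thesis.

```python
import numpy as np

def toy_uncanny_valley(h):
    """Toy likability curve over human likeness h in [0, 1].

    Likability trends upward with likeness but dips sharply near
    (not at) full human likeness. Purely illustrative; the form
    and parameters are assumptions, not results from the thesis.
    """
    # Linear upward trend minus a narrow Gaussian dip centered at h = 0.85
    return h - 1.6 * np.exp(-((h - 0.85) ** 2) / (2 * 0.05 ** 2))

h = np.linspace(0.0, 1.0, 101)           # 0 = machine-like, 1 = fully human
likability = toy_uncanny_valley(h)
# The "valley": a near-humanlike agent (h = 0.85) is rated below a
# clearly artificial one (h = 0.5), even though likeness is higher.
print(likability[85] < likability[50])   # True
```

The point of the sketch is only the shape: evaluation is not monotonic in human likeness, which is what makes the effect a "valley" rather than a simple preference gradient.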
Metaphors Matter: Top-Down Effects on Anthropomorphism
Anthropomorphism, or the attribution of human mental states and characteristics to non-human entities, has been widely demonstrated to be cued automatically by certain bottom-up appearance and behavioral features in machines. In this thesis, I argue that the potential for top-down effects to influence anthropomorphism has so far been underexplored. I motivate and then report the results of a new empirical study suggesting that top-down linguistic cues, including anthropomorphic metaphors, personal pronouns, and other grammatical constructions, increase anthropomorphism of a robot. As robots and other machines become more integrated into human society and our daily lives, a more thorough understanding of the process of anthropomorphism becomes more critical: the cues that cause it, the human behaviors elicited, the underlying mechanisms in human cognition, and the implications of our influenced thought, talk, and treatment of robots for our social and ethical frameworks. In these regards, as I argue in this thesis and as the results of the new empirical study suggest, the top-down effects matter.
Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)
With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge, and why users appear to be influenced by them. For this reason, I explore questions on what the antecedents and consequences of this phenomenon, known as anthropomorphism, are, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, perceptions of the technology, how users interact with the technology, and the users' performance. Examples include changes in users' trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and their self- and social identity, similarly to interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature.
Subsequently, I introduce a two-factor model of anthropomorphism that proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic), and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the users' social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.
Development of Human-Computer Interaction for Holographic AIs
Virtual humans and embodied conversational agents play diverse roles in real life, including game characters, chatbots, and teachers. In Augmented Reality (AR), such agents are capable of interacting with the real world. To distinguish them from other virtual agents, AR agents are conceptually redefined as "holographic Artificial Intelligences (AIs)": embodied virtual agents that interact with real objects in AR and can respond to events in both virtual and real environments. This thesis provides a comprehensive investigation into holographic AIs, spanning from their design to their user experience.
The purpose of this thesis is to investigate the creation and use of holographic AIs, by creating specific holographic AIs and then examining how users perceive such entities, in order to contribute to the improvement of the user experience. As a result, this thesis explores the design space and methods for creating holographic AIs, proposing the novel PICS model, which includes the dimensions of persona, intelligence, conviviality, and senses.
Following the PICS model, a set of holographic AIs is designed using a method of semi-automatic reconstruction. An AI that resembles a human being in appearance and behaviour is endowed with multimodal interactions capable of creating the illusion of physicality. The initially proposed model is then refined based on the experience of creating it.
Basic body-language gestures, such as nodding and opening the arms, are insufficient to engage users, particularly in intelligent tutoring systems. This thesis therefore focuses on an open problem: the generation of re-usable standard instructional gestures. In an experiment, key instructional movements that holographic AIs can employ were identified and extracted as animations. The hitherto-known range of representational gestures is further expanded by transformational and imitation gestures, which show how humans manipulate spatio-motor information and characterise posture using hand motion. The model can therefore be extended to describe a holographic AI's behaviour.
Moreover, in order to assess the empirical validity of holographic AIs, this research explores learners' trust in this novel technology as a key criterion for the efficacy of this AI approach. Trust and trustworthiness, in terms of holographic AIs, refer to a mindset that aids users in achieving objectives based on good intentions. Young learners' perception of trust is largely influenced by affective aspects of trust, determined by how emotionally responsive a holographic AI is.
These findings contribute to the design of personal holographic AIs that can perform a series of meaningful gestures that engage the learner's attention for learning, which in turn fosters a reliable and trustworthy relationship. Both experiments extend the model by adding gestures and holistic perception.
Robophobia
Robots (machines, algorithms, artificial intelligence) play an increasingly important role in society, often supplementing or even replacing human judgment. Scholars have rightly become concerned with the fairness, accuracy, and humanity of these systems. Indeed, anxiety about machine bias is at a fever pitch. While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots.
This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly (although that may be true), but because our bias against nonhuman deciders is bad for us. For example, it would be a mistake to reject self-driving cars merely because they cause a single fatal accident. Yet all too often this is what we do. We tolerate enormous risk from our fellow humans but almost none from machines. A substantial literature, almost entirely ignored by legal scholars concerned with algorithmic bias, suggests that we routinely prefer worse-performing humans over better-performing robots. We do this on our roads, in our courthouses, in our military, and in our hospitals. Our bias against robots is costly, and it will only get more so as robots become more capable.
This Article catalogs the many different forms of antirobot bias and suggests some reforms to curtail the harmful effects of that bias. The Article's descriptive contribution is to develop a taxonomy of robophobia. Its normative contribution is to offer some reasons to be less biased against robots. The stakes could hardly be higher. We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.
Exploring the User Needs and Experience of the University Guidance Robot
During orientation week, new students face various hectic challenges. International students in particular are confused by the new education system and encounter many difficulties in overcoming these challenges. There is too much information, which can be overwhelming. Finnish students, too, face information-management issues. Although tutors are assigned to help and take care of the students, the tutors are not available at all times. Thus, we decided to design an interactive university guidance robot that could help students whenever needed with relevant information.
Our aim was to understand users' expectations and design the guidance robot to provide relevant information. There were also latent user needs, and these can vary across cultures. Thus, we addressed needs specific to Finnish, Chinese, and Indian cultures and aimed to design the robot according to the needs of the target users. In the second phase, we conducted trials with new students to understand the participants' experience. Moreover, we tried to find out which tasks the students preferred.
We used the Pepper robot as the platform for the guidance robot. According to our research, the new students found the robot useful, and it successfully addressed the needs of the participants. Moreover, the university guidance robot evoked experiences such as nurture, fellowship, naturalness/humanlikeness, and playfulness.
In this thesis, we report how we collected users' expectations, analyzed the data to gather design implications, implemented functionalities in the university guidance robot, and performed the trials.