Understanding the Impact that Response Failure has on How Users Perceive Anthropomorphic Conversational Service Agents: Insights from an Online Experiment
Conversational agents (CAs) have attracted interest from organizations due to their potential to provide automated services and a feeling of humanlike interaction. Emerging studies on CAs have found that humanness has a positive impact on customer perception and have explored approaches for their anthropomorphic design, which comprises both appearance and behavior. While these studies provide valuable knowledge on how to design humanlike CAs, we still do not sufficiently understand this technology's limited conversational capabilities and their potentially detrimental impact on user perception. These limitations often lead to frustrated users and discontinued CAs in practice. We address this gap by investigating the impact of response failure, which we understand as a CA's inability to provide a meaningful reply, in a service context. To do so, we draw on the computers-are-social-actors paradigm and the theory of the uncanny valley. Via an experiment with 169 participants, we found that 1) response failure harmed the extent to which people perceived CAs as human and increased their feelings of uncanniness, 2) humanness (uncanniness) positively (negatively) influenced familiarity and service satisfaction, and 3) response failure had a significant negative impact on user perception yet did not lead to the sharp drop that uncanny valley theory posits. Thus, our study contributes to better explaining the impact that text-based CAs' failure to respond has on customer perception and satisfaction in a service context in relation to the agents' design.
Towards the techno-social Uncanny
This paper explores a technical, unfinished half-method [Halbzeug] of a metaphorology (Blumenberg) of the technological other in its variations and the philosophical mise-en-scène of the techno-social uncanny. The roboticist Mori revived the concept of a technological uncanny in human-machine interaction through the spatial metaphor of an uncanny valley, derived from a diagram of a human being's reaction to shaking an artificial hand, in order to show why we feel a certain eeriness in relation to technological artefacts. This topic gains importance today for reflecting on human relations with technological automata such as robots, AI, and avatars that mimic and socially resonate with humans and may even drive further technological transhumanism. Although in an artefact-design approach uncanniness is said to be avoidable in the encounter between human-like automaton and human, this paper dwells on the critique of avoiding techno-social otherness through the technological overcoming of obstacles, and thus argues for a cybernetic uncanny that cannot be avoided. In a broader sense than Mori's, this paper introduces a philosophical dramaturgy of Emmanuel Levinas' temporal notion of the relation to the other, including a preliminary metaphorological variation of the temporal techno-social uncanny.
Designing Anthropomorphic Enterprise Conversational Agents
The increasing capabilities of conversational agents (CAs) offer manifold opportunities to assist users in a variety of tasks. In an organizational context, their potential to simulate a human-like interaction via natural language currently attracts particular attention, both at the customer interface and for internal purposes, often in the form of chatbots. Emerging experimental studies on CAs look into the impact of anthropomorphic design elements, so-called social cues, on user perception. However, while these studies provide valuable prescriptive knowledge of selected social cues, they neglect the potentially detrimental influence of the limited responsiveness of present-day conversational agents. In practice, many CAs fail to continuously provide meaningful responses in a conversation due to the open nature of natural language interaction, which negatively influences user perception and has often led to CAs being discontinued in the past. Thus, designing a CA that provides a human-like interaction experience while minimizing the risks associated with limited conversational capabilities represents a substantial design problem. This study addresses the aforementioned problem by proposing and evaluating a design for a CA that offers a human-like interaction experience while mitigating negative effects due to limited responsiveness. Through the presentation of the artifact and the synthesis of prescriptive knowledge in the form of a nascent design theory for anthropomorphic enterprise CAs, this research adds to the growing knowledge base for designing human-like assistants and supports practitioners seeking to introduce them into their organizations.
The Uncanny Valley Effect
The Uncanny Valley Effect (UVE) first emerged as a warning against making industrial robots appear so highly human-like that they could unsettle the real humans around them. It posited a specific pattern of negative emotional responses to entities that were almost but not quite human, and it has been proposed as the reason why some entities, such as dolls, mannequins, and zombies, may appear unsettling.
The aim of this thesis was to move beyond an anecdotal explanation and to understand more about the perception of near-human faces and how it compares to the perception of human and non-human faces. Specifically, the aims were to explore the relationship between the human-likeness of faces and emotional responses to them, to understand reactions to and descriptions of near-human faces, to examine aspects of how near-human faces are processed, and to explore whether mismatched emotional expressions might contribute to the perception of some near-human faces as eerie.
Five studies were carried out using face images whose human-likeness was systematically controlled or measured. A non-linear relationship between human-likeness and eeriness was found, but the near-human faces were not always the eeriest images. Near-human faces were found to be subject to the effects of inversion, and inversion was found to heighten perceptions of eeriness. Faces were created which contained mismatched emotional expressions, and the blends combining happy faces with angry or fearful eyes were rated as the most eerie. Incongruities between aspects of appearance or behaviour had been cited as explanations for the UVE in the past, but this thesis presents the first evidence that differences in eeriness may result from incongruities between emotional expressions. Directions for future research are suggested to explore these findings in a wider context and to understand more about the UVE.
Addressing joint action challenges in HRI: Insights from psychology and philosophy
The vast expansion of research in human-robot interactions (HRI) these last decades has been accompanied by
the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances
have encountered significant challenges to ensure fluent interactions and sustain human motivation through the
different steps of joint action. After exploring current literature on joint action in HRI, leading to a more precise
definition of these challenges, the present article proposes some perspectives borrowed from psychology and
philosophy showing the key role of communication in human interactions. From mutual recognition between
individuals to the expression of commitment and social expectations, we argue that communicative cues can
facilitate coordination, prediction, and motivation in the context of joint action. The description of several notions
thus suggests that some communicative capacities can be implemented in the context of joint action for
HRI, leading to an integrated perspective of robotic communication.
At the fringes of normality – a neurocognitive model of the uncanny valley on the detection and negative evaluation of deviations
Information violating preconceived patterns tends to be disliked. The term “uncanny valley” is
used to describe such negative reactions towards near-humanlike artificial agents as a
nonlinear function of human likeness and likability. My work proposes and investigates a
new neurocognitive theory of the uncanny valley and uncanniness effects within various
categories. According to this refined theory of the uncanny valley, the degree of perceptual
specialization increases the sensitivity to anomalies or deviations in a stimulus, which leads
to a greater relative negative evaluation. As perceptual specialization is observed for many
human-related stimuli (e.g., faces, voices, bodies, biological motion) attempts to replicate
artificial human entities may lead to design errors which would be especially apparent due to
a higher level of specialization, leading to the uncanny valley. The refined theory is
established and investigated throughout 10 chapters. In Chapters 2 to 4, the correlative
(Chapters 2 and 3) and causal (Chapter 4) associations between perceptual specialization,
sensitivity to deviations, and uncanniness are observed. In Chapters 5 and 6, the refined theory
is applied to inanimate object categories to validate its relevance in stimulus categories
beyond those associated with the uncanny valley, specifically written text (Chapter 5) and
physical places (Chapter 6). Chapters 7 to 10 critically investigate multiple explanations on
the uncanny valley, including the refined theory. Chapter 11 applies the refined theory to
ecologically valid stimuli of the uncanny valley, namely an android’s dynamic emotional
expressions. Finally, Chapter 12 summarizes and discusses the findings and evaluates the
refined theory of the uncanny based on its advantages and disadvantages. With this work, I
hope to present substantial arguments for an alternative, refined theory of the uncanny that
can more accurately explain a wider range of observations than the uncanny valley.
Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)
With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, perceptions of the technology, how users interact with the technology, and users' performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and their self- and social identity, similarly to interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature.
Subsequently, I introduce a two-factor model of anthropomorphism that proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the users' social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.
Robot's Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and its Effects on HRI
The discussion around the problematic practice of gendering humanoid robots
has risen to the foreground in the last few years. To lay the basis for a
thorough understanding of how robot's "gender" has been understood within the
Human-Robot Interaction (HRI) community - i.e., how it has been manipulated, in
which contexts, and which effects it has yielded on people's perceptions and
interactions with robots - we performed a scoping review of the literature. We
identified 553 papers relevant for our review retrieved from 5 different
databases. The final sample of reviewed papers included 35 papers written
between 2005 and 2021, which involved a total of 3902 participants. In this
article, we thoroughly summarize these papers by reporting information about
their objectives and assumptions on gender (i.e., definitions and reasons to
manipulate gender), their manipulation of robot's "gender" (i.e., gender cues
and manipulation checks), their experimental designs (e.g., demographics of
participants, employed robots), and their results (i.e., main and interaction
effects). The review reveals that robot's "gender" does not affect crucial
constructs in HRI, such as likability and acceptance, but rather bears its
strongest effect on stereotyping. We leverage our different epistemological
backgrounds in Social Robotics and Gender Studies to provide a comprehensive
interdisciplinary perspective on the results of the review and suggest ways to
move forward in the field of HRI.
Comment: 29 pages, 1 figure, 3 long tables. The present paper has been
submitted for publication to the International Journal of Social Robotics and
is currently under review.
Sensitivity to differences in the motor origin of drawings:from human to robot
This study explores the idea that an observer is sensitive to differences in the static traces of drawings that are due to differences in motor origin. In particular, our aim was to test whether an observer can discriminate between drawings made by a robot and by a human, both when the drawings contain salient kinematic cues for discrimination and when they contain only more subtle kinematic cues. We hypothesized that participants would be able to correctly attribute a drawing to a human or a robot origin when salient kinematic cues are present. In addition, our study shows that observers are also able to detect the producer behind the drawings in the absence of these salient cues. The design was such that, in the absence of salient kinematic cues, the drawings were visually very similar, i.e. differing only in subtle kinematic details, so observers had to rely on these subtle differences in the line trajectories between drawings. However, not only motor origin (human versus robot) but also motor style (natural versus mechanical) plays a role in attributing a drawing to the correct producer, because participants scored lower when the human hand drew in a relatively mechanical way. Overall, this study suggests that observers are sensitive to subtle kinematic differences between visually similar marks in drawings with different motor origins. We offer some possible interpretations inspired by the idea of “motor resonance”.