197 research outputs found
Quantifying the Human Likeness of a Humanoid Robot
In research on human-robot interaction, the human likeness (HL) of robots is frequently used as a single, vaguely defined parameter to describe how a robot is perceived by a human. However, such a simplification of HL is often not sufficient given the complexity and multidimensionality of human-robot interaction. Therefore, HL must be seen as a variable influenced by a network of parameter fields. The first goal of this paper is to introduce such a network, which systematically characterizes all relevant aspects of HL. The network is subdivided into ten parameter fields, five describing static aspects of appearance and five describing dynamic aspects of behavior. The second goal of this paper is to propose a methodology to quantify the impact of single or multiple parameters from these fields on perceived HL. Prior to quantification, the minimal perceivable difference, i.e., the threshold of perception, is determined for the parameters of interest in a first experiment. Thereafter, these parameters are modified in whole-number multiples of the threshold of perception to investigate their influence on perceived HL in a second experiment. This methodology was illustrated on the parameters speed and sequencing (onset of joint movements) from the parameter field movement, as well as on the parameter sound. Results revealed that perceived HL is more sensitive to changes in sequencing than to changes in speed. The sound of the motors during the movement also reduced perceived HL. The presented methodology should guide further systematic explorations of the proposed network of HL parameters in order to determine and optimize the acceptance of humanoid robots.
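The two-step methodology can be sketched in code: a threshold of perception is estimated in a first experiment, then stimuli for the second experiment are constructed as whole-number multiples of that threshold. The baseline value, threshold, and multiples below are illustrative assumptions, not values from the paper:

```python
def stimulus_levels(baseline, threshold, multiples):
    """Return modified parameter values: baseline + k * threshold
    for each whole-number multiple k of the perception threshold."""
    return [baseline + k * threshold for k in multiples]

# Hypothetical example: joint speed in deg/s with an assumed
# threshold of perception of 4 deg/s, estimated beforehand.
speeds = stimulus_levels(baseline=30.0, threshold=4.0, multiples=[1, 2, 3, 4])
print(speeds)  # [34.0, 38.0, 42.0, 46.0]
```

Rating each resulting stimulus for perceived HL then quantifies the parameter's influence in units of just-noticeable differences.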
Circling Around the Uncanny Valley: Design Principles for Research Into the Relation Between Human Likeness and Eeriness
The uncanny valley effect (UVE) is a negative emotional response experienced when encountering entities that appear almost human. Research on the UVE typically investigates individual, or collections of, near-human entities, but may be prone to methodological circularity unless the properties that give rise to the emotional response are appropriately defined and quantified. In addition, many studies do not sufficiently control the variation in human likeness portrayed in stimulus images, meaning that the nature of the stimuli that elicit the UVE is also not well defined or quantified. This article describes design criteria for UVE research that overcome the above problems by measuring three variables (human likeness, eeriness, and emotional response) and by using stimuli spanning the artificial-to-human continuum. These criteria allow results to be plotted and compared with the hypothesized uncanny valley curve, so that any effect observed can be quantified. The above criteria were applied to the methods used in a subset of existing UVE studies. Although many studies made use of some of the necessary measurements and controls, few used them all. The UVE is discussed in relation to this result and to research methodology more broadly.
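A minimal illustration of why stimuli must span the full artificial-to-human continuum: the signature the hypothesized curve predicts is a spike in eeriness at an interior human-likeness level, which cannot be detected from endpoint stimuli alone. The data and the simple interior-peak test below are hypothetical assumptions, not the article's analysis:

```python
def has_uncanny_peak(human_likeness, eeriness):
    """Return True if eeriness peaks at an interior point of the
    human-likeness continuum and exceeds both endpoints -- a crude
    stand-in for comparison against the hypothesized valley curve."""
    pairs = sorted(zip(human_likeness, eeriness))  # order stimuli by HL
    values = [e for _, e in pairs]
    i = values.index(max(values))
    return 0 < i < len(values) - 1 and values[i] > values[0] and values[i] > values[-1]

# Hypothetical ratings: eeriness spikes near "almost human" stimuli.
hl = [0.0, 0.25, 0.5, 0.75, 1.0]
eeriness = [0.1, 0.2, 0.3, 0.9, 0.15]
print(has_uncanny_peak(hl, eeriness))  # True
```

A real analysis would fit a curve and quantify the effect size rather than apply a binary test, but the same design requirement (quantified HL plus ratings across the continuum) applies.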
Human-Likeness Indicator for Robot Posture Control and Balance
Similarly to humans, humanoid robots require posture control and balance to walk and interact with the environment. In this work, posture control under perturbed conditions is evaluated as a performance test for humanoid control. A specific performance indicator is proposed: the score is based on the comparison between the body sway of the tested humanoid standing on a moving surface and the sway produced by healthy subjects performing the same experiment. This approach is oriented here toward the evaluation of human-likeness. The measure is tested on a humanoid robot in order to demonstrate a typical usage of the proposed evaluation scheme and an example of how to improve robot control on the basis of such a performance indicator score.
Comment: 16 pages, 5 figures. arXiv admin note: substantial text overlap with arXiv:2110.1439
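A comparison-based indicator of this kind can be sketched as follows; the root-mean-square comparison of sway traces and the mapping to a (0, 1] score are illustrative assumptions, not the paper's actual formula:

```python
import math

def human_likeness_score(robot_sway, human_sway):
    """Compare a robot's body-sway trace with a reference human sway
    trace via root-mean-square error, mapped to a (0, 1] score where
    1 means the robot sways exactly like the human reference."""
    assert len(robot_sway) == len(human_sway)
    rmse = math.sqrt(
        sum((r - h) ** 2 for r, h in zip(robot_sway, human_sway)) / len(robot_sway)
    )
    return 1.0 / (1.0 + rmse)  # hypothetical monotone mapping of error to score

# Hypothetical sway traces (e.g., center-of-mass displacement in cm)
# recorded while the support surface moves.
human = [0.0, 0.5, 0.8, 0.5, 0.0]
robot = [0.0, 0.6, 1.0, 0.6, 0.0]
print(round(human_likeness_score(robot, human), 3))
```

In practice the reference would be an average over many healthy subjects, and the controller could be tuned to maximize this score.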
The distracted robot: what happens when artificial agents behave like us
In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed to promote interaction and engagement with it, ranging from its “communicative” abilities to the movements it produces. Still, whether an artificial agent that can behave like a human could boost the spontaneity and naturalness of interaction remains an open question. Even during interaction with conspecifics, humans rely partially on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits.
A relatively new area of research has emerged in the context of investigating individuals’ reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e., self-report questionnaires, a posteriori interviews), while the more implicit social-cognitive processes elicited during interaction with artificial agents have taken second place behind more qualitative and anecdotal results. The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents.
Thus, this thesis aimed to explore human sensitivity to anthropomorphic characteristics of a humanoid robot's (i.e., the iCub robot) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.
A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Gestures that accompany speech are an essential part of natural and efficient
embodied human communication. The automatic generation of such co-speech
gestures is a long-standing problem in computer animation and is considered an
enabling technology in film, games, virtual social spaces, and for interaction
with social robots. The problem is made challenging by the idiosyncratic and
non-periodic nature of human co-speech gesture motion, and by the great
diversity of communicative functions that gestures encompass. Gesture
generation has seen surging interest recently, owing to the emergence of more
and larger datasets of human gesture motion, combined with strides in
deep-learning-based generative models that benefit from the growing
availability of data. This review article summarizes co-speech gesture
generation research, with a particular focus on deep generative models. First,
we articulate the theory describing human gesticulation and how it complements
speech. Next, we briefly discuss rule-based and classical statistical gesture
synthesis, before delving into deep learning approaches. We employ the choice
of input modalities as an organizing principle, examining systems that generate
gestures from audio, text, and non-linguistic input. We also chronicle the
evolution of the related training data sets in terms of size, diversity, motion
quality, and collection method. Finally, we identify key research challenges in
gesture generation, including data availability and quality; producing
human-like motion; grounding the gesture in the co-occurring speech in
interaction with other speakers, and in the environment; performing gesture
evaluation; and integration of gesture synthesis into applications. We
highlight recent approaches to tackling the various key challenges, as well as
the limitations of these approaches, and point toward areas of future
development.
Comment: Accepted for EUROGRAPHICS 202
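The review's organizing principle, the choice of input modality, can be illustrated with a minimal interface sketch: a co-speech gesture generator maps per-frame input features (from audio, text, or non-linguistic signals) to a sequence of pose frames. The modality names, the single joint, and the trivial rule-based "model" are illustrative assumptions; real systems use learned generative models:

```python
def generate_gestures(frames, modality):
    """Map per-frame input features to per-frame joint angles (radians).
    A deep generative model would go here; this toy rule just scales an
    energy-like feature into a single wrist-swing angle."""
    if modality not in {"audio", "text", "nonlinguistic"}:
        raise ValueError(f"unsupported input modality: {modality}")
    return [{"wrist_swing": 0.5 * f} for f in frames]

# Hypothetical per-frame audio energy driving the gesture.
poses = generate_gestures([0.0, 0.2, 0.8], modality="audio")
print(poses)  # [{'wrist_swing': 0.0}, {'wrist_swing': 0.1}, {'wrist_swing': 0.4}]
```

The fixed input/output interface is what lets the review compare audio-, text-, and multimodally-driven systems on equal footing.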