115 research outputs found

    Robot's Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and its Effects on HRI

    The discussion around the problematic practice of gendering humanoid robots has risen to the foreground in the last few years. To lay the basis for a thorough understanding of how robots' "gender" has been understood within the Human-Robot Interaction (HRI) community - i.e., how it has been manipulated, in which contexts, and what effects it has yielded on people's perceptions of and interactions with robots - we performed a scoping review of the literature. We identified 553 papers relevant to our review, retrieved from 5 different databases. The final sample of reviewed papers included 35 papers written between 2005 and 2021, which involved a total of 3,902 participants. In this article, we thoroughly summarize these papers by reporting information about their objectives and assumptions on gender (i.e., definitions and reasons to manipulate gender), their manipulation of robots' "gender" (i.e., gender cues and manipulation checks), their experimental designs (e.g., demographics of participants, employed robots), and their results (i.e., main and interaction effects). The review reveals that robots' "gender" does not affect crucial constructs for HRI, such as likability and acceptance, but rather bears its strongest effect on stereotyping. We leverage our different epistemological backgrounds in Social Robotics and Gender Studies to provide a comprehensive interdisciplinary perspective on the results of the review and suggest ways to move forward in the field of HRI.
    Comment: 29 pages, 1 figure, 3 long tables. The present paper has been submitted for publication to the International Journal of Social Robotics and is currently under review.

    VR Investigation on Caregivers’ Tolerance towards Communication and Processing Failures

    This article was supported by the German Research Foundation (DFG) and the Open Access Publication Fund of Humboldt-Universität zu Berlin.
    Robots are increasingly used in healthcare to support caregivers in their daily work routines. To ensure an effortless and easy interaction between caregivers and robots, communication via natural language is expected from robots. However, robotic speech bears a large potential for technical failures, which include processing and communication failures. It is therefore necessary to investigate how caregivers perceive and respond to robots with erroneous communication. We recruited thirty caregivers, who interacted with a robot in a virtual reality setting. We investigated whether different kinds of failures are more likely to be forgiven with technical or human-like justifications. Furthermore, we determined how tolerant caregivers are with a robot constantly returning a process failure and whether this depends on the robot's response pattern (constant vs. variable). Participants showed the same forgiveness towards the two justifications. However, female participants liked the human-like justification more and male participants liked the technical justification more. Providing justifications with any reasonable content seems sufficient to achieve positive effects. Robots with a constant response pattern were liked more, although both patterns achieved the same tolerance threshold from caregivers, which was around seven failed requests. Due to the experimental setup, the tolerance for communication failures was probably inflated and should be adjusted for real-life situations.

    The Effects of Robot Voices and Appearances on Users' Emotion Recognition and Subjective Perception

    As the influence of social robots in people's daily lives grows, research on understanding people's perception of robots, including sociability, trust, acceptance, and preference, becomes more pervasive. Research has considered visual, vocal, or tactile cues to express robots' emotions, whereas little research has provided a holistic view examining the interactions among the different factors that influence emotion perception. We investigated multiple facets of user perception of robots during a conversational task by varying the robots' voice types, appearances, and emotions. In our experiment, 20 participants interacted with two robots having four different voice types. While participants read fairy tales to the robot, the robot gave vocal feedback with seven emotions, and the participants evaluated the robot's profiles through post-surveys. The results indicate that (1) the accuracy of emotion perception differed depending on the presented emotion, (2) a regular human voice showed higher user preference and naturalness, (3) a characterized voice, however, was more appropriate for expressing emotions, with significantly higher accuracy in emotion perception, and (4) participants showed significantly higher emotion-recognition accuracy with the animal robot than with the humanoid robot. A follow-up study (N=10) with voice-only conditions confirmed the importance of embodiment. The results from this study could provide the guidelines needed to design social robots that consider emotional aspects in conversations between robots and users.
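    The per-condition recognition-accuracy comparison described above (animal vs. humanoid robot) can be sketched as a simple tally over trials. This is an illustrative sketch only; the trial structure, condition names, and emotion labels below are assumptions, not data from the study.

    ```python
    from collections import defaultdict

    def recognition_accuracy(trials):
        """Per-condition emotion-recognition accuracy.

        `trials` is a list of (condition, presented_emotion, perceived_emotion)
        tuples; a trial counts as a hit when perceived matches presented.
        """
        hits = defaultdict(int)
        totals = defaultdict(int)
        for condition, presented, perceived in trials:
            totals[condition] += 1
            if presented == perceived:
                hits[condition] += 1
        return {c: hits[c] / totals[c] for c in totals}

    # Hypothetical trials, not the study's data:
    trials = [
        ("animal", "joy", "joy"),
        ("animal", "sadness", "sadness"),
        ("humanoid", "joy", "anger"),
        ("humanoid", "joy", "joy"),
    ]
    print(recognition_accuracy(trials))  # → {'animal': 1.0, 'humanoid': 0.5}
    ```

    A real analysis would additionally break accuracy down by presented emotion and voice type, since the study reports effects on both dimensions.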

    A systematic review of attitudes, anxiety, acceptance, and trust towards social robots

    As social robots become more common, there is a need to understand how people perceive and interact with such technology. This systematic review seeks to estimate people's attitudes toward, trust in, anxiety associated with, and acceptance of social robots, as well as factors that are associated with these beliefs. Ninety-seven studies were identified with a combined sample of over 13,000 participants, and a standardized score was computed for each in order to represent the valence (positive, negative, or neutral) and magnitude (on a scale from −1 to 1) of people's beliefs about robots. Potential moderating factors such as the robots' domain of application and design, the type of exposure to the robot, and the characteristics of potential users were also investigated. The findings suggest that people generally have positive attitudes towards social robots and are willing to interact with them. This finding may challenge some of the existing doubt surrounding the adoption of robotics in social domains of application, but more research is needed to fully understand the factors that influence attitudes.
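    A standardized score of the kind described, mapping mean ratings from heterogeneous instruments onto a common [−1, 1] valence/magnitude scale, can be sketched as a linear rescaling. The function below is a minimal illustration under that assumption; the review's actual scoring procedure is not specified here and may differ.

    ```python
    def standardized_valence(ratings, scale_min, scale_max):
        """Rescale a study's mean rating onto [-1, 1].

        -1 is maximally negative, 0 neutral, +1 maximally positive.
        `scale_min`/`scale_max` are the endpoints of the original
        instrument (e.g. a 1-5 Likert scale).
        """
        mean = sum(ratings) / len(ratings)
        return 2 * (mean - scale_min) / (scale_max - scale_min) - 1

    # Hypothetical sample on a 1-5 Likert scale; mean 4 maps to +0.5:
    print(standardized_valence([4, 4, 4], 1, 5))  # → 0.5
    ```

    A rescaling like this lets studies that used different response scales be pooled on one axis, which is what makes the cross-study valence comparison possible.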

    Indices for Virtual Service Agent Design: Cross-Cultural Evaluation

    While localization helps to create websites and mobile apps for specific target markets, less attention has been devoted to affective virtual service agents. The situation is changing due to advances in affective computing and artificial intelligence. Virtual service agents have the potential to change the way people interact with information technology by shifting the control method from physical gestures to natural-language conversation. By having human-like characteristics, the agents can transform an impersonal service experience into a personal one and make an emotional impression on the user or customer. Such a message can take different forms and interpretations depending on national culture and other context. Qualitative data from interviews with experts were used to identify differences in how such agents are viewed in Sweden and Japan. A survey was then used to quantify the differences using a sample of participants, who were asked to rate the likability and trustworthiness of agents with varying ethnicity, gender and age. The impact of visible visual attributes on trustworthiness and likability is analysed using a familiar example: virtual service agents at an airport. It was found that each group favours its familiar communication style, and recommendations on virtual service agent localization are given.

    Why context matters: The influence of application domain on preferred degree of anthropomorphism and gender attribution in human–robot interaction

    The application of anthropomorphic design features is widely believed to facilitate human–robot interaction. However, the preference for robots' anthropomorphism is highly context sensitive, as different application domains induce different expectations towards robots. In this study, the influence of application domain on the preferred degree of anthropomorphism is examined. Moreover, as anthropomorphic design can reinforce existing gender stereotypes of different work domains, gender associations were also investigated. Participants received different context descriptions and subsequently selected and named one robot out of a set of differently anthropomorphic robots in an online survey. The results indicate that lower degrees of anthropomorphism are preferred in the industrial domain and higher degrees in the social domain, whereas no clear preference was found in the service domain. Unexpectedly, mainly functional names were ascribed to the robots, and when human names were chosen, male names were given more frequently than female names, even in the social domain. The results support the assumption that the preferred degree of anthropomorphism depends on the context. Hence, the sociability of a domain might determine to what extent anthropomorphic design features are suitable. Furthermore, the results indicate that robots are overall associated more with functionality than with gender (and, if gendered, then with masculinity). Therefore, the design features of robots should emphasise functionality rather than specific gendered anthropomorphic attributes, to avoid stereotypes and to not further reinforce the association between masculinity and technology.

    Building and Designing Expressive Speech Synthesis

    We know there is something special about speech. Our voices are not just a means of communicating. They also give a deep impression of who we are and what we might know. They can betray our upbringing, our emotional state, our state of health. They can be used to persuade and convince, to calm and to excite. As speech systems enter the social domain they are required to interact with, support and mediate our social relationships with 1) each other, 2) digital information, and, increasingly, 3) AI-based algorithms and processes. Socially Interactive Agents (SIAs) are at the forefront of research and innovation in this area. There is an assumption that in the future "spoken language will provide a natural conversational interface between human beings and so-called intelligent systems." [Moore 2017, p. 283]. A considerable amount of previous research work has tested this assumption with mixed results. However, as has been pointed out, "voice interfaces have become notorious for fostering frustration and failure" [Nass and Brave 2005, p. 6]. It is within this context, between our exceptional and intelligent human use of speech to communicate and interact with other humans, and our desire to leverage this means of communication for artificial systems, that the technology often termed expressive speech synthesis uncomfortably falls. Uncomfortably, because it is often overshadowed by issues in interactivity and the underlying intelligence of the system, which is something that emerges from the interaction of many of the components in a SIA. This is especially true of what we might term conversational speech, where decoupling how things are spoken from when and to whom they are spoken can seem an impossible task. This is an even greater challenge in evaluation and in characterising full systems which have made use of expressive speech.
    Furthermore, when designing an interaction with a SIA, we must not only consider how SIAs should speak but also how much, and whether they should even speak at all. These considerations cannot be ignored. Any speech synthesis that is used in the context of an artificial agent will have a perceived accent, a vocal style, an underlying emotion and an intonational model. Dimensions like accent and personality (cross-speaker parameters) as well as vocal style, emotion and intonation during an interaction (within-speaker parameters) need to be built into the design of a synthetic voice. Even a default or neutral voice has to consider these same expressive speech synthesis components. Such design parameters have a strong influence on how effectively a system will interact, how it is perceived and its assumed ability to perform a task or function. To ignore these is to blindly accept a set of design decisions that ignores the complex effect speech has on the user's successful interaction with a system. Thus expressive speech synthesis is a key design component in SIAs. This chapter explores the world of expressive speech synthesis, aiming to act as a starting point for those interested in the design, building and evaluation of such artificial speech. The debates and literature within this topic are vast and are fundamentally multidisciplinary in focus, covering a wide range of disciplines such as linguistics, pragmatics, psychology, speech and language technology, robotics and human-computer interaction (HCI), to name a few. It is not our aim to synthesise these areas but to give a scaffold and a starting point for the reader by exploring the critical dimensions and decisions they may need to consider when choosing to use expressive speech. To do this, the chapter explores the building of expressive synthesis, highlighting key decisions and parameters as well as emphasising future challenges in expressive speech research and development.
    Yet, before these are expanded upon, we must first try to define what we actually mean by expressive speech.
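    In practice, within-speaker parameters such as rate and pitch are often exposed to system builders through markup handed to the synthesiser, for example the W3C SSML `prosody` and `voice` elements. The helper below is a minimal sketch of assembling such markup; the function name and defaults are illustrative, and emotion/style tags are deliberately omitted because they are vendor-specific rather than part of core SSML.

    ```python
    def expressive_ssml(text, rate="medium", pitch="medium", voice=None):
        """Wrap `text` in basic SSML expressive markup.

        Uses only core W3C SSML elements: <prosody> for rate/pitch
        (within-speaker parameters) and an optional <voice> wrapper
        for speaker identity (a cross-speaker parameter).
        """
        body = f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
        if voice:
            body = f'<voice name="{voice}">{body}</voice>'
        return f"<speak>{body}</speak>"

    print(expressive_ssml("I am so sorry to hear that.", rate="slow", pitch="low"))
    ```

    Even this tiny example makes the chapter's point concrete: a "neutral" request still commits the designer to a rate, a pitch and a voice, whether chosen deliberately or by default.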

    Freedom comes at a cost?: An exploratory study on affordances’ impact on users’ perception of a social robot

    Along with the development of speech and language technologies, the market for speech-enabled human-robot interaction (HRI) has grown in recent years. However, people often find their conversational interactions with such robots far from satisfactory. One of the reasons is the habitability gap, whereby the usability of a speech-enabled agent drops as its flexibility increases. For social robots, such flexibility is reflected in the diverse choice of robots' appearances, sounds and behaviours, which shape a robot's 'affordance'. Whilst designers and users have enjoyed the freedom of constructing a social robot by integrating off-the-shelf technologies, such freedom comes at a potential cost: users' perceptions and satisfaction. Designing appropriate affordances is essential for the quality of HRI. It is hypothesised that a social robot with aligned affordances could create an appropriate perception of the robot and increase users' satisfaction when speaking with it. Given that previous studies of affordance alignment mainly focus on a single interface's characteristics and face-voice match, we aim to deepen our understanding of affordance alignment with a robot's behaviours and use cases. In particular, we investigate how a robot's affordances affect users' perceptions in different types of use cases. For this purpose, we conducted an exploratory experiment with three affordance settings (adult-like, child-like, and robot-like) and three use cases (informative, emotional, and hybrid). Participants were invited to talk to social robots in person. A mixed-methods approach was employed for quantitative and qualitative analysis of 156 interaction samples. The results show that static affordance (face and voice) has a statistically significant effect on the perceived warmth of the first impression, while use cases affect perceptions of competence and warmth both before and after interactions.
    In addition, the results show the importance of aligning static affordance with behavioural affordance, and general design principles for behavioural affordances are proposed. We anticipate that our empirical evidence will provide a clearer guideline for the affordance design of speech-enabled social robots and a starting point for more sophisticated design guidelines, for example, personalised affordance design for individual or group users in different contexts.

    Human-Machine Communication: Complete Volume. Volume 1

    This is the complete volume of Human-Machine Communication, Volume 1.
