252 research outputs found

    The interaction between voice and appearance in the embodiment of a robot tutor

    Robot embodiment is, by its very nature, holistic, and understanding how its various aspects contribute to the user's perception of the robot is non-trivial. A study is presented here that investigates whether there is an interaction effect between voice and other aspects of embodiment, such as movement and appearance, in a pedagogical setting. An online study using a modified Godspeed questionnaire was distributed to children aged 11–17. We show an interaction effect between the robot embodiment and voice in terms of perceived lifelikeness of the robot. Politeness is a key strategy used in learning and teaching, and here an effect is also observed for perceived politeness. Interestingly, participants' overall preference was for embodiment combinations that are deemed polite and more like a teacher, but are not necessarily the most lifelike. From these findings, we are able to inform the design of robotic tutors going forward.

    Social Impact of Recharging Activity in Long-Term HRI and Verbal Strategies to Manage User Expectations During Recharge

    Social robots perform tasks to help humans in their daily activities. However, if they fail to fulfill expectations, this may affect their acceptance. This work investigates the service degradation caused by recharging, during which the robot is socially inactive. We describe two studies conducted in an ecologically valid office environment. In the first, long-term study (3 weeks), we investigated the service degradation caused by the recharging behavior of a social robot. In the second study, we explored the social strategies used to manage users' expectations during recharge. Our findings suggest that the use of verbal strategies (transparency, apology, and politeness) can make robots more acceptable to users during recharge.

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from the start to the end. 
It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes, which are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
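The adaptation loop described in the abstract above can be sketched as a simple bandit problem: the robot picks one of several verbal styles and updates its value estimates from user feedback. The sketch below is purely illustrative and not the thesis's actual method; the class, style names, and epsilon-greedy strategy are all assumptions chosen for brevity.

```python
import random


class BehaviorAdapter:
    """Minimal epsilon-greedy bandit over candidate verbal styles (hypothetical sketch)."""

    def __init__(self, styles, epsilon=0.1):
        self.styles = list(styles)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in self.styles}
        self.values = {s: 0.0 for s in self.styles}  # running mean reward per style

    def choose(self):
        # Explore a random style with probability epsilon, otherwise exploit
        # the style with the highest estimated reward.
        if random.random() < self.epsilon:
            return random.choice(self.styles)
        return max(self.styles, key=lambda s: self.values[s])

    def update(self, style, reward):
        # Incremental mean update from explicit or implicit user feedback.
        self.counts[style] += 1
        n = self.counts[style]
        self.values[style] += (reward - self.values[style]) / n


adapter = BehaviorAdapter(["neutral", "polite", "humorous"])
style = adapter.choose()
adapter.update(style, reward=1.0)  # e.g. the user smiled or said "thanks"
```

A real system would replace the scalar reward with a model of social signals (prosody, facial expression, explicit ratings) and would likely condition on context rather than learning a single global preference.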

    Should robots be polite? Expectations about politeness in human–robot interaction

    Interaction with artificial social agents is often designed based on models of human interaction and dialogue. While this is certainly useful for basic interaction mechanisms, it has been argued that social communication strategies and social language use, a "particularly human" ability, may not be appropriate and transferable to interaction with artificial conversational agents. In this paper, we present qualitative research exploring whether users expect artificial agents to use politeness, a fundamental mechanism of social communication, in language-based human–robot interaction. Based on semi-structured interviews, we found that humans mostly ascribe a functional, rule-based use of polite language to humanoid robots and do not expect them to apply the socially motivated politeness strategies that they expect in human interaction. This study 1) provides insights from a user perspective for the design of social robots' politeness use, and 2) contributes to politeness research through the analysis of our participants' perspectives on politeness.

    Principles and Guidelines for Evaluating Social Robot Navigation Algorithms

    A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation. While the field of social navigation has advanced tremendously in recent years, the fair evaluation of algorithms that tackle social navigation remains hard because it involves not just robotic agents moving in static environments but also dynamic human agents and their perceptions of the appropriateness of robot behavior. In contrast, clear, repeatable, and accessible benchmarks have accelerated progress in fields like computer vision, natural language processing, and traditional robot navigation by enabling researchers to fairly compare algorithms, revealing limitations of existing solutions and illuminating promising new directions. We believe the same approach can benefit social navigation. In this paper, we pave the road towards common, widely accessible, and repeatable benchmarking criteria to evaluate social robot navigation. Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context, (b) guidelines for the use of metrics and the development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation, and (c) a design of a social navigation metrics framework to make it easier to compare results from different simulators, robots, and datasets.
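One concrete instance of the comfort-oriented metrics such a framework might standardize is a personal-space violation rate over paired robot and human trajectories. The function below is a hypothetical illustration, not a metric defined by the paper; the 0.5 m radius and the trajectory format are assumptions, and thresholds in practice vary by study and culture.

```python
import math


def personal_space_violation_rate(robot_traj, human_traj, radius=0.5):
    """Fraction of timesteps at which the robot is inside a human's
    personal space, i.e. the Euclidean distance between the two agents
    falls below `radius` (meters). Hypothetical comfort-style metric.
    """
    assert len(robot_traj) == len(human_traj), "trajectories must be time-aligned"
    violations = sum(
        1
        for (rx, ry), (hx, hy) in zip(robot_traj, human_traj)
        if math.hypot(rx - hx, ry - hy) < radius
    )
    return violations / len(robot_traj)


# Robot drives along the x-axis past a stationary human at (1.0, 0.0).
robot = [(0.0, 0.0), (0.4, 0.0), (0.8, 0.0), (1.2, 0.0)]
human = [(1.0, 0.0)] * 4
rate = personal_space_violation_rate(robot, human)  # 0.5: two of four steps within 0.5 m
```

Normalizing metrics like this across simulators, robots, and datasets is exactly where a shared framework helps: without agreed trajectory formats, sampling rates, and thresholds, reported numbers are not comparable.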

    Exploring Human Compliance Toward a Package Delivery Robot

    Human-Robot Interaction (HRI) research on combat robots and autonomous cars demonstrates that faulty robots significantly decrease trust. However, HRI studies consistently show people overtrust domestic robots in households, emergency evacuation scenarios, and building security. This thesis presents how two theories, cognitive dissonance and selective attention, confound domestic HRI scenarios, and uses these theories to design a novel HRI scenario with a package delivery robot in a public setting. Over 40 undergraduates were recruited within a university library to follow a package delivery robot to three stops, under the guise of "testing its navigation around people." The second stop was an open office which appeared private. Without labeling the packages, in 15 trials only 2 individuals entered the room at the second stop, whereas pairs of participants were much more likely to enter the room. Labeling the packages significantly increased the likelihood that individuals would enter the office. The third stop was at the end of a long, isolated hallway blocked by a door marked "Emergency Exit Only. Alarm will Sound." No one seriously considered opening the door. Nonverbal robot prods such as waiting one minute or nudging the door were perceived as malfunctioning behavior. To demonstrate selective attention, a second route led to an emergency exit door in a public computer lab, with the intended destination an office several feet away. When the robot communicated with beeps, only 45% of individuals noticed the emergency exit door. No one noticed the emergency exit door when the robot used speech commands, although its qualitative rating significantly improved. In conclusion, this thesis shows robots must make explicit requests to generate overtrust. Explicit interactions increase participant engagement with the robot, which increases selective attention away from their environment.

    Considering the Context to Build Theory in HCI, HRI, and HMC: Explicating Differences in Processes of Communication and Socialization with Social Technologies

    The proliferation and integration of social technologies have occurred quickly, and the specific technologies with which we engage are ever-changing. The dynamic nature of the development and use of social technologies is often acknowledged by researchers as a limitation. In this manuscript, however, we present a discussion of the implications of our modern technological context by focusing on processes of socialization and communication that are fundamentally different from their interpersonal corollaries. These are presented and discussed with the goal of providing theoretical building blocks toward a more robust understanding of phenomena of human-computer interaction, human-robot interaction, human-machine communication, and interpersonal communication.