153 research outputs found

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand the corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition; they must be equipped with corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from start to finish.
It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes. They are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
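The adaptation loop the abstract describes, selecting a verbal style and updating from explicit or implicit user feedback, can be sketched as an epsilon-greedy bandit over candidate utterance styles. This is a hedged illustration only: the style names, the simulated feedback model, and all parameters below are invented and are not taken from the thesis.

```python
import random

# Illustrative sketch: an epsilon-greedy bandit that adapts a robot's
# utterance style from scalar user feedback. Style names and the
# feedback model are hypothetical, not the thesis's actual design.
STYLES = ["formal", "casual", "humorous"]

def adapt_style(feedback, episodes=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {s: 0.0 for s in STYLES}   # running value estimate per style
    counts = {s: 0 for s in STYLES}
    for _ in range(episodes):
        if rng.random() < epsilon:      # explore a random style
            style = rng.choice(STYLES)
        else:                           # exploit the current best estimate
            style = max(STYLES, key=values.get)
        reward = feedback(style, rng)   # stands in for user feedback
        counts[style] += 1
        # incremental mean update of the value estimate
        values[style] += (reward - values[style]) / counts[style]
    return max(STYLES, key=values.get)

# A simulated user who prefers humorous utterances (noisy feedback).
def simulated_user(style, rng):
    base = {"formal": 0.3, "casual": 0.5, "humorous": 0.8}[style]
    return base + rng.gauss(0, 0.1)

best = adapt_style(simulated_user)  # converges to the preferred style
```

In a real system the reward would come from explicit ratings or implicit social signals rather than a simulated user, and the action space would cover prosody and animation choices as well.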

    Reinforcement Learning Approaches in Social Robotics

    This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent interacts with its environment through trial and error to discover an optimal behavior. Since interaction is a key component of both reinforcement learning and social robotics, it can be a well-suited approach for real-world interactions with physically embodied social robots. The scope of the paper is focused particularly on studies that include physical social robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to a survey, we categorize existing reinforcement learning approaches based on the method used and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Considering the importance of designing the reward function, we also provide a categorization of the papers based on the nature of the reward. This categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task performance-driven methods. The paper also discusses the benefits and challenges of reinforcement learning in social robotics, the evaluation methods of the papers (whether they use subjective and/or algorithmic measures), real-world reinforcement learning challenges and proposed solutions, and the points that remain to be explored, including approaches that have thus far received less attention. Thus, this paper aims to serve as a starting point for researchers interested in using and applying reinforcement learning methods in this particular research field.
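The trial-and-error loop this survey builds on can be made concrete with tabular Q-learning on a toy task. The corridor environment and all parameters below are invented for illustration and do not come from any of the surveyed papers.

```python
import random

# Minimal tabular Q-learning on a 5-state corridor: start at state 0,
# reward 1.0 for reaching state 4. Actions: 0 = left, 1 = right.
# Toy illustration only; not a setup from the surveyed papers.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def train(episodes=500, seed=1):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value per (state, action)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection (trial and error)
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = int(q[s][1] >= q[s][0])
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Q-learning update toward the bootstrapped target
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, the greedy policy moves right in every non-goal state.
policy = [int(qs[1] >= qs[0]) for qs in q[:GOAL]]
```

In a social-robotics setting the reward signal would typically come from human feedback or social signals rather than from the environment, which is exactly the design space the survey categorizes.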

    Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions

    Engagement is crucial to designing intelligent systems that can adapt to the characteristics of their users. This paper focuses on automatic analysis and classification of engagement based on humans’ and robot’s personality profiles in a triadic human-human-robot interaction setting. More explicitly, we present a study that involves two participants interacting with a humanoid robot, and investigate how participants’ personalities can be used together with the robot’s personality to predict the engagement state of each participant. The fully automatic system is firstly trained to predict the Big Five personality traits of each participant by extracting individual and interpersonal features from their nonverbal behavioural cues. Secondly, the output of the personality prediction system is used as an input to the engagement classification system. Thirdly, we focus on the concept of “group engagement”, which we define as the collective engagement of the participants with the robot, and analyse the impact of similar and dissimilar personalities on the engagement classification. 
Our experimental results show that (i) using the automatically predicted personality labels for engagement classification yields an F-measure on par with using the manually annotated personality labels, demonstrating the effectiveness of the proposed automatic personality prediction module; (ii) using the individual and interpersonal features without utilising personality information is not sufficient for engagement classification; instead, incorporating the participants’ and robot’s personalities with individual/interpersonal features increases engagement classification performance; and (iii) the best classification performance is achieved when the participants and the robot are extroverted, while the worst results are obtained when all are introverted. This work was performed within the Labex SMART project (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d’Avenir programme under reference ANR-11-IDEX-0004-02. The work of Oya Celiktutan and Hatice Gunes is also funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref.: EP/L00416X/1). This is the author accepted manuscript. The final version is available from the Institute of Electrical and Electronics Engineers via http://dx.doi.org/10.1109/ACCESS.2016.261452
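The two-stage design described above, nonverbal cues feeding a personality predictor whose output then feeds the engagement classifier, can be sketched as follows. All feature names, weights, and thresholds are invented stand-ins, not the paper's trained models.

```python
# Hedged sketch of the two-stage pipeline: nonverbal cues -> predicted
# personality -> engagement classification. Feature names, weights, and
# the decision threshold are illustrative assumptions only.

def predict_personality(features):
    """Stage 1: map nonverbal cues to a toy trait estimate in [0, 1]."""
    return {
        "extraversion": min(1.0, features["speech_time"] + features["gesture_rate"]),
        "agreeableness": features["smile_ratio"],
    }

def classify_engagement(personality, robot_personality, interpersonal):
    """Stage 2: combine predicted traits, the robot's personality, and
    interpersonal cues into a single engagement decision."""
    score = (
        0.4 * personality["extraversion"]
        + 0.2 * robot_personality["extraversion"]
        + 0.4 * interpersonal["mutual_gaze"]
    )
    return "engaged" if score >= 0.5 else "disengaged"

# Made-up cues for one participant in a triadic interaction.
cues = {"speech_time": 0.6, "gesture_rate": 0.3, "smile_ratio": 0.7}
traits = predict_personality(cues)
label = classify_engagement(traits, {"extraversion": 0.9}, {"mutual_gaze": 0.8})
```

The point of the sketch is the data flow: stage 2 consumes stage 1's predicted labels, which is why result (i) above, parity between predicted and manually annotated labels, matters for the end-to-end system.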

    Design and Experimental Evaluation of a Context-aware Social Gaze Control System for a Humanlike Robot

    Nowadays, social robots are increasingly being developed for a variety of human-centered scenarios in which they interact with people. For this reason, they should possess the ability to perceive and interpret human non-verbal and verbal communicative cues in a humanlike way. In addition, they should be able to autonomously identify the most important interactional target at the proper time by exploring the perceptual information, and to exhibit a believable behavior accordingly. Employing a social robot with such capabilities has several positive outcomes for human society. This thesis presents a multilayer context-aware gaze control system that has been implemented as part of a humanlike social robot. Using this system, the robot is able to mimic human perception, attention, and gaze behavior in a dynamic multiparty social interaction. The system enables the robot to appropriately direct its gaze, at the right time, at environmental targets and at humans who are interacting with each other and with the robot. For this reason, the attention mechanism of the gaze control system is based on features that have been proven to guide human attention: verbal and non-verbal cues, proxemics, the effective field of view, the habituation effect, and low-level visual features. The gaze control system uses skeleton tracking, speech recognition, facial expression recognition, and salience detection to implement these features. As part of a pilot evaluation, the gaze behavior of 11 participants was collected with a professional eye-tracking device while they watched a video of two-person interactions. By analyzing the participants' average gaze behavior, the importance of human-relevant features in triggering human attention was determined. Based on this finding, the parameters of the gaze control system were tuned to imitate human behavior in selecting features of the environment.
The comparison between the human gaze behavior and the gaze behavior of the developed system running on the same videos shows that the proposed approach is promising, as it replicated human gaze behavior 89% of the time.
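The attention mechanism described above, weighted perceptual features combined with a habituation effect, can be sketched roughly as follows. The feature names, weights, and decay factor are invented for illustration and are not the tuned values from the evaluation.

```python
# Illustrative sketch of feature-weighted gaze target selection with a
# habituation effect. Weights, features, and the 0.9 decay factor are
# assumptions, not the system's tuned parameters.
WEIGHTS = {"speaking": 0.5, "proximity": 0.3, "salience": 0.2}

def select_gaze_target(targets, habituation):
    """Pick the target with the highest weighted attention score,
    discounted by how long the robot has already looked at it."""
    def score(name, feats):
        raw = sum(WEIGHTS[k] * feats[k] for k in WEIGHTS)
        return raw * (0.9 ** habituation.get(name, 0))  # habituation decay
    return max(targets, key=lambda name: score(name, targets[name]))

targets = {
    "person_a": {"speaking": 1.0, "proximity": 0.5, "salience": 0.2},
    "person_b": {"speaking": 0.0, "proximity": 0.9, "salience": 0.6},
}
# person_a is speaking, so it wins while habituation is low ...
first = select_gaze_target(targets, habituation={})
# ... but after gazing at person_a for a while, attention shifts away.
later = select_gaze_target(targets, habituation={"person_a": 8})
```

The habituation term is what keeps the robot from staring: a target's score decays the longer it holds the robot's gaze, letting other targets win.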

    The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive 3D Virtual Environment

    Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life®, enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text. These are enriched by avatar-mediated body language and physical manipulation of the environment. In this para-physical world, instructors and students alike employ avatars to establish their social presence in a wide variety of curricular and extra-curricular contexts. As a proxy for the human body in synthetic 3D environments, an avatar represents a 'real' human computer user and incorporates default behavior patterns (e.g., autonomous gestures such as changes in body orientation or movement of hands) as well as expressive movements directly controlled by the user through keyboard 'shortcuts.' Use of headset microphones and various stereophonic effects allows users to project their speech directly from the apparent location of their avatar. In addition, personalized information displays allow users to share graphical information, including text messages and hypertext links. These 'channels' of information constitute an integrated and dynamic framework for projecting avatar 'immediacy' behaviors (including gestures, intonation, and patterns of interaction with students) that may positively or negatively affect the degree to which other observers of the virtual world perceive the user represented by the avatar as 'socially present' in the virtual world. This study contributes to the nascent research on educational implementations of Second Life in higher education.
Although education researchers have investigated the impact of instructor immediacy behaviors on student perception of instructor social presence, students' satisfaction, motivation, and learning, few researchers have examined the effects of immediacy behaviors in a 3D virtual environment or the effects of immediacy behaviors manifested by avatars representing instructors. The study employed a two-factor experimental design to investigate the relationship between instructor avatars' immediacy behaviors (high vs. low) and students' perception of instructor immediacy, instructor social presence, student avatars' co-presence, and learning outcomes in Second Life. The study replicates and extends aspects of an earlier study conducted by Maria Schutt, Brock S. Allen, and Mark Laumakis, including components of the experimental treatments that manipulated the frequency of various types of immediacy behaviors identified by other researchers as potentially related to perception of social presence in face-to-face and mediated instruction. Participants were 281 students enrolled in an introductory psychology course at San Diego State University who were randomly assigned to one of four groups. Each group viewed a different version of the 28-minute teaching session in Second Life on current perspectives in psychology. Data were gathered from student survey responses and tests on the lesson content. Analysis of variance revealed significant differences between the treatment groups (F(3, 113) = 6.5, p = .000). Students who viewed the high-immediacy machinimas (Group 1 HiHi and Group 2 HiLo) rated the immediacy behaviors of the instructor-avatar more highly than those who viewed the low-immediacy machinimas (Group 3 LoHi and Group 4 LoLo). Findings also demonstrate strong correlations between students' perception of instructor-avatar immediacy and instructor social presence (r = .769).
These outcomes in the context of a 3D virtual world are consistent with findings in the instructor immediacy and social presence literature on traditional and online classes. Results relative to learning showed that all groups tested higher after viewing the treatment, with no significant differences between groups. Recommendations for current and future practice of using instructor-avatars include paralanguage behaviors such as voice quality, emotion, and prosodic features, and nonverbal behaviors such as proxemics and gestures, facial expression, lip synchronization, and eye contact.

    Socially assistive robots: the specific case of the NAO

    Numerous studies have investigated the development of robotics, especially socially assistive robots (SAR), including the NAO robot. This small humanoid robot has great potential in social assistance. The NAO robot's features and capabilities, such as motricity, functionality, and affective capacities, have been studied in various contexts. The principal aim of this study is to gather all research that has been conducted using this robot, to see how the NAO can be used and what its potential as a SAR could be. Articles using the NAO in any situation were found by searching the PsycINFO, Computer and Applied Sciences Complete, and ACM Digital Library databases. The main inclusion criterion was that studies had to use the NAO robot. Studies comparing it with other robots or intervention programs were also included. Articles about technical improvements were excluded, since they did not involve concrete utilisation of the NAO. Duplicates and articles with an important lack of information on the sample were also excluded. A total of 51 publications (1895 participants) were included in the review. Six categories were defined: social interactions, affectivity, intervention, assisted teaching, mild cognitive impairment/dementia, and autism/intellectual disability. A great majority of the findings concerning the NAO robot are positive. Its multimodality makes it a SAR with potential.

    Data-Driven Approach to Human-Engaged Computing

    This paper presents an overview of the research landscape of data-driven human-engaged computing in the Human-Computer Interaction Initiative at the Hong Kong University of Science and Technology.

    Virtually Queer: Subjectivity Across Gender Boundaries in Second Life

    This is an autoethnographic study of one person's experience performing across gender lines in Second Life. Although there is a rich literature on gender crossing in virtual worlds, dating back to the text-only days, there are few ethnographic reports, and fewer still from the vantage point of the performer. The paper is presented as a narrative recounting, alternating voice between the performed female persona in the virtual world and the author's performed male identity in real life. Taking the perspective of Queer Theory, the paper problematizes gender performance in any world, and takes tentative steps toward expanding the notion of identity queering to nonsexual aspects of the individual, as well as the prospect of queering reality itself.

    The virtual maze: A behavioural tool for measuring trust

    Trusting another person may depend on our level of generalised trust in others, as well as perceptions of that specific person's trustworthiness. However, many studies measuring trust outcomes have not discussed generalised versus specific trust. To measure specific trust in others, we developed a novel behavioural task. Participants navigate a virtual maze and make a series of decisions about how to proceed. Before each decision, they may ask for advice from two virtual characters they have briefly interviewed earlier. We manipulated the virtual characters' trustworthiness during the interview phase and measured how often participants approached and followed advice from each character. We also measured trust through ratings and an investment game. Across three studies we found participants followed advice from a trustworthy character significantly more than an untrustworthy character, demonstrating the validity of the maze task. Behaviour in the virtual maze reflected specific trust rather than generalised trust, whereas the investment game picked up on generalised trust as well as specific trust. Our data suggest the virtual maze task may provide an alternative behavioural approach to measuring specific trust in future research, and we demonstrate how the task may be used in traditional laboratories.
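The behavioural measure described above, how often a participant follows each character's advice, reduces to a per-adviser following rate. A minimal sketch, using an entirely made-up decision log rather than the studies' data:

```python
# Hedged sketch of the maze task's behavioural trust measure: the rate
# at which a participant follows each character's advice. The decision
# log below is invented for illustration.
def follow_rate(decisions, adviser):
    """Fraction of decisions where advice from `adviser` was followed."""
    relevant = [d for d in decisions if d["adviser"] == adviser]
    if not relevant:
        return 0.0
    return sum(d["followed"] for d in relevant) / len(relevant)

# One hypothetical participant's decision log in the virtual maze.
log = [
    {"adviser": "trustworthy", "followed": True},
    {"adviser": "trustworthy", "followed": True},
    {"adviser": "trustworthy", "followed": False},
    {"adviser": "untrustworthy", "followed": True},
    {"adviser": "untrustworthy", "followed": False},
    {"adviser": "untrustworthy", "followed": False},
]
# A positive gap indicates specific trust in the trustworthy character.
trust_gap = follow_rate(log, "trustworthy") - follow_rate(log, "untrustworthy")
```

Comparing such per-character rates is what lets the task separate specific trust in one adviser from a participant's generalised tendency to follow any advice.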

    Computer mediated interpersonal relationships
