18 research outputs found

    A bizarre virtual trainer outperforms a human trainer in foreign language word learning

    No full text
    In this study, the effects that a human trainer and a pedagogical virtual agent have on memory for words in a foreign language (L2) were investigated. In a recent study on L2 word learning, Bergmann and Macedonia (2013) cued participants to memorize novel words both audiovisually and by performing additional gestures. The gestures were performed by both a human and a virtual trainer. In some of the tests, the virtual agent had a greater positive influence on memory performance than the human trainer. To determine why the agent was the better trainer, 18 naive subjects were invited to rate the gestures performed by both trainers. Participants were also asked to evaluate their perception of the human and the agent. It was hypothesized that the gestures performed by the agent would be more peculiar than those performed by the human and might therefore attract greater attention. It was also hypothesized that the agent’s personality might be more appealing than that of the human. The results showed that the agent’s gestures were perceived as less natural than those of the human. This might have triggered greater attention and/or more emotional involvement in the participants. The perception of the two trainers as “personalities” did not differ, with the exception of a few traits on which the human trainer was rated more highly. Altogether, because of its peculiar gestures and its looks, the agent may have been perceived as bizarre and might therefore have induced the bizarreness effect in memory for words.

    Individualized Gesturing Outperforms Average Gesturing – Evaluating Gesture Production in Virtual Humans

    Get PDF
    Bergmann K, Kopp S, Eyssel FA. Individualized Gesturing Outperforms Average Gesturing – Evaluating Gesture Production in Virtual Humans. In: Allbeck J, Badler N, Bickmore T, Pelachaud C, Safonova A, eds. Intelligent Virtual Agents. Proceedings of the 10th International Conference on Intelligent Virtual Agents. Lecture Notes in Computer Science. Vol 6356. Berlin/Heidelberg, Germany: Springer; 2010: 104-117.

    How does a virtual agent’s gesturing behavior influence the user’s perception of communication quality and of the agent’s personality? This question was investigated in an evaluation study of co-verbal iconic gestures produced with GNetIc, a Bayesian network-based production model. A network learned from a corpus of several speakers was compared with networks learned from individual speaker data, as well as with two control conditions. Results showed that gestures automatically generated with GNetIc increased the perceived quality of an object description given by a virtual human. Moreover, the virtual agent whose gesturing behavior was generated with individual speaker networks was rated more positively in terms of likeability, competence, and human-likeness.
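    As a rough illustration of the production idea described above (not GNetIc's actual network), gesture form features can be generated by ancestral sampling through conditional probability tables; swapping in tables estimated from a single speaker's data yields the individualized variant. All variable names and probabilities in the Python sketch below are invented for illustration.

```python
import random

# Toy conditional probability tables standing in for a learned network:
# P(technique | referent_shape) and P(handedness | technique).
# Values and variables are illustrative, not GNetIc's actual structure.
P_TECHNIQUE = {
    "round":   {"shaping": 0.7, "drawing": 0.2, "posturing": 0.1},
    "angular": {"shaping": 0.3, "drawing": 0.5, "posturing": 0.2},
}
P_HANDEDNESS = {
    "shaping":   {"two_hands": 0.8, "right": 0.2},
    "drawing":   {"two_hands": 0.3, "right": 0.7},
    "posturing": {"two_hands": 0.5, "right": 0.5},
}

def sample(dist):
    """Draw one value from a {value: probability} table."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point rounding

def generate_gesture(referent_shape):
    """Ancestral sampling through the toy network: pick a gesturing
    technique given the referent's shape, then a handedness given the
    technique. An individualized variant would simply swap in tables
    estimated from one speaker's data."""
    technique = sample(P_TECHNIQUE[referent_shape])
    handedness = sample(P_HANDEDNESS[technique])
    return {"technique": technique, "handedness": handedness}

print(generate_gesture("round"))
```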

    Gestures in human-robot interaction

    Get PDF
    Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer meaning. For people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary specifies which robot gestures are deemed fitting for a particular meaning. Effective use of these vocabularies depends on gesture recognition, that is, the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained with a small number of training samples and used in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
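    The abstract does not spell out the recognition algorithm, but its core ingredients (dynamic time warping plus one-shot learning) can be illustrated compactly: store a single template per gesture class and classify a query by its nearest template under the DTW distance. The Python sketch below is a minimal illustration under those assumptions, not the thesis's actual implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature
    vectors a (n, d) and b (m, d), via the standard O(n*m)
    dynamic-programming recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """One-shot nearest-neighbour classification: `templates` maps each
    gesture label to a single recorded example sequence."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Toy example: one template per gesture class (one-shot learning).
rng = np.random.default_rng(0)
templates = {"wave": rng.normal(size=(40, 3)), "point": rng.normal(size=(55, 3))}
print(classify(templates["wave"][::2], templates))  # expected: "wave"
```

    DTW's alignment step is what makes a single template per class viable: queries that are faster or slower than the template are warped onto it rather than penalized frame by frame.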

    A large, crowdsourced evaluation of gesture generation systems on common data: the GENEA Challenge 2020

    Get PDF
    Co-speech gestures, gestures that accompany speech, play an important role in human communication. Automatic co-speech gesture generation is thus a key enabling technology for embodied conversational agents (ECAs), since humans expect ECAs to be capable of multi-modal communication. Research into gesture generation is rapidly gravitating towards data-driven methods. Unfortunately, individual research efforts in the field are difficult to compare: there are no established benchmarks, and each study tends to use its own dataset, motion visualisation, and evaluation methodology. To address this situation, we launched the GENEA Challenge, a gesture-generation challenge wherein participating teams built automatic gesture-generation systems on a common dataset, and the resulting systems were evaluated in parallel in a large, crowdsourced user study using the same motion-rendering pipeline. Since differences in evaluation outcomes between systems are now solely attributable to differences between the motion-generation methods, this enables benchmarking recent approaches against one another to get a better impression of the state of the art in the field. This paper reports on the purpose, design, results, and implications of our challenge.
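    The abstract does not detail the statistical analysis, but a parallel crowdsourced study of this kind is typically summarized by aggregating per-system ratings with uncertainty estimates. Below is a minimal, hypothetical Python sketch of that step; the rating scale, system names, and bootstrap procedure are illustrative assumptions, not the challenge's actual analysis.

```python
import numpy as np

def summarize_ratings(ratings, n_boot=10000, seed=0):
    """Mean rating per system with a 95% bootstrap confidence interval --
    one common way to report parallel crowdsourced studies."""
    rng = np.random.default_rng(seed)
    out = {}
    for system, scores in ratings.items():
        scores = np.asarray(scores, dtype=float)
        # Resample raters with replacement to estimate the CI of the mean.
        boots = rng.choice(scores, size=(n_boot, len(scores))).mean(axis=1)
        lo, hi = np.percentile(boots, [2.5, 97.5])
        out[system] = (scores.mean(), lo, hi)
    return out

# Toy ratings for two hypothetical challenge entries.
ratings = {"system_A": [64, 71, 58, 69, 75], "system_B": [52, 49, 61, 55, 47]}
for name, (mean, lo, hi) in summarize_ratings(ratings).items():
    print(f"{name}: {mean:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```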

    The student-produced electronic portfolio in craft education

    Get PDF
    The authors studied primary school students’ experiences of using an electronic portfolio in their craft education over four years. Stimulated recall interviews were used to collect user experiences, and qualitative content analysis was used to analyse the collected data. The results indicate that the electronic portfolio was experienced as a multipurpose tool to support learning: it makes the learning process visible and in that way helps students focus on and improve the quality of their learning. © ISLS. Peer reviewed.

    Expressive whole-body movement: intra-individual variability in affective and interactive contexts

    Get PDF
    Movement is a major component of our daily life. Every day we use it both to perform simple, essential tasks and to communicate. Intentional or not, our movements signal interindividual and intraindividual differences linked to our status, intentions, and affects. Over the course of a single day, the kinematics of our movements evolve and adapt according to our social environment. What distinguishes the movement of robots and virtual characters from human movement is that the latter varies: an identical movement performed twice by a human is never perfectly the same. However, this distinction is becoming less clear-cut. From Darwin’s first work on the impact of affect on movement to recent studies of movement expressivity in varied interactive contexts (e.g., man-woman or student-teacher interaction) and applications (e.g., autism, exergames), researchers and companies have sought to implement this human specificity in human-computer interaction (HCI). Drawing on social science, movement science, and computer science, this multidisciplinary work contributes to the understanding of the action and perception of expressive movements through three studies in a coach-student sport interaction context. The first study aims at understanding how perceived affect shapes the expressivity of human movement. In the second study, we examine dyadic interactions involving participants of different statuses and social set-ups. Finally, we designed an expressive full-body virtual agent and used it in an interactive task. The main contribution of this PhD thesis is to show that expressivity features computed from different time series (energy, directness, stiffness, and spatial extent) are relevant for discriminating participants’ affects, statuses, and thoughts. One goal and possible application of this work is the design of a virtual trainer that allows credible and dynamic full-body interactions thanks to its expressive movements.
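    The abstract names the expressivity features but not their formulas. The hypothetical Python sketch below computes common approximations of the four features (energy, directness, stiffness, spatial extent) from a position time series; the exact definitions used in the thesis may differ.

```python
import numpy as np

def expressivity_features(pos, dt=1.0 / 60.0):
    """Illustrative expressivity features from a joint-position time
    series `pos` of shape (frames, 3), sampled every `dt` seconds.
    These are common approximations, not the thesis's definitions."""
    vel = np.diff(pos, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    path_length = speed.sum() * dt
    energy = float(np.mean(speed ** 2))                       # kinetic-energy proxy
    directness = float(np.linalg.norm(pos[-1] - pos[0]) /     # straight-line vs
                       max(path_length, 1e-9))                # travelled distance
    stiffness = float(np.mean(np.linalg.norm(acc, axis=1)))   # abruptness proxy
    extent = float(np.prod(pos.max(axis=0) - pos.min(axis=0)))  # bounding-box volume
    return {"energy": energy, "directness": directness,
            "stiffness": stiffness, "spatial_extent": extent}

# Example: features of a smooth circular hand trajectory.
t = np.linspace(0, 2 * np.pi, 120)
hand = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(expressivity_features(hand))
```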

    Integrating Socially Assistive Robots into Language Tutoring Systems. A Computational Model for Scaffolding Young Children's Foreign Language Learning

    Get PDF
    Schodde T. Integrating Socially Assistive Robots into Language Tutoring Systems. A Computational Model for Scaffolding Young Children's Foreign Language Learning. Bielefeld: Universität Bielefeld; 2019.

    Language education is a global and important issue nowadays, especially for young children, since their later educational success builds on it. But learning a language is a complex task that is known to work best in social interaction; thus, personalized sessions tailored to the individual knowledge and needs of each child are required for teachers to support them optimally. However, this is often costly in terms of time and personnel, which is one reason why research over the past decades has investigated the benefits of Intelligent Tutoring Systems (ITSs). Although ITSs can help provide individualized one-on-one tutoring interactions, they often lack social support. This dissertation provides new insights into how a Socially Assistive Robot (SAR) can be employed as part of an ITS, forming a so-called "Socially Assistive Robot Tutoring System" (SARTS), to provide social support as well as to personalize and scaffold foreign language learning for young children aged 4-6 years. As the basis for the SARTS, a novel approach called A-BKT is presented, which allows the tutoring interaction to be autonomously adapted to the children's individual knowledge and needs. The corresponding evaluation studies show that the A-BKT model can significantly increase students' learning gains and maintain higher engagement during the tutoring interaction. This is partly due to the model's ability to simulate the influence of potential actions on all dimensions of the learning interaction, i.e., the children's learning progress (cognitive learning), affective state, engagement (affective learning), and believed knowledge acquisition (perceived learning). This is particularly important since all dimensions are strongly interconnected and influence each other; for example, low engagement can cause poor learning results even though the learner is already quite proficient. This also makes it necessary not only to focus on the learner's cognitive learning but to support all dimensions equally with appropriate scaffolding actions. Therefore, an extensive literature review, observational video recordings, and expert interviews were conducted to find appropriate actions with which a SARTS can support each learning dimension. The subsequent evaluation study confirms that the developed scaffolding techniques are able to support young children’s learning process, either by re-engaging them or by providing transparency that supports their perception of the learning process and reduces uncertainty. Finally, based on educated guesses derived from the previous studies, all identified strategies are integrated into the A-BKT model. The resulting model, called ProTM, is evaluated by simulating different learner types, which highlights its ability to autonomously adapt the tutoring interactions based on the learner's answers and disengagement cues. In summary, this dissertation yields new insights into the field of SARTS for providing personalized foreign language learning interactions for young children, while also raising important new questions to be studied in the future.
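    The internals of A-BKT are not given in the abstract, but the name indicates it builds on classical Bayesian Knowledge Tracing, whose per-answer update is standard and sketched below in Python; the parameter values are illustrative.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One step of classical Bayesian Knowledge Tracing (Corbett &
    Anderson, 1995): Bayes-update the probability the skill is known
    given one observed answer, then apply the learning transition."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # The learner may acquire the skill between practice opportunities.
    return posterior + (1 - posterior) * p_transit

p = 0.3  # prior probability the child already knows the word
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(f"P(known) = {p:.2f}")
```

    An adaptive tutor can then pick its next action (e.g., which word to practise) based on these per-skill estimates; the affective and perceived-learning dimensions that A-BKT adds would extend this state beyond the single knowledge probability.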

    Social Perception of Pedestrians and Virtual Agents Using Movement Features

    Get PDF
    In many tasks such as navigation in a shared space, humans explicitly or implicitly estimate social information related to the emotions, dominance, and friendliness of other humans around them. This social perception is critical in predicting others’ motions or actions and deciding how to interact with them. Therefore, modeling social perception is an important problem for robotics, autonomous vehicle navigation, and VR and AR applications. In this thesis, we present novel, data-driven models for the social perception of pedestrians and virtual agents based on their movement cues, including gaits, gestures, gazing, and trajectories. We use deep learning techniques (e.g., LSTMs) along with biomechanics to compute the gait features and combine them with local motion models to compute the trajectory features. Furthermore, we compute the gesture and gaze representations using psychological characteristics. We describe novel mappings between these computed gaits, gestures, gazing, and trajectory features and the various components (emotions, dominance, friendliness, approachability, and deception) of social perception. Our resulting data-driven models can identify the dominance, deception, and emotion of pedestrians from videos with an accuracy of more than 80%. We also release new datasets to evaluate these methods. We apply our data-driven models to socially-aware robot navigation and the navigation of autonomous vehicles among pedestrians. Our method generates robot movement based on pedestrians’ dominance levels, resulting in higher rapport and comfort. We also apply our data-driven models to simulate virtual agents with desired emotions, dominance, and friendliness. We perform user studies and show that our data-driven models significantly increase the user’s sense of social presence in VR and AR environments compared to the baseline methods.
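    As an illustration of the kind of data-driven model described (not the thesis's actual architecture), a pose-sequence classifier can be built from a single LSTM layer followed by a linear head; all layer sizes and the label set in this Python/PyTorch sketch are assumptions.

```python
import torch
import torch.nn as nn

class GaitEmotionLSTM(nn.Module):
    """Illustrative LSTM classifier from 3D pose sequences to perceived
    emotion; sizes and the four-class label set are assumptions."""
    def __init__(self, n_joints=16, hidden=128, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_joints * 3, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, poses):           # poses: (batch, frames, joints*3)
        _, (h_n, _) = self.lstm(poses)  # h_n: (1, batch, hidden)
        return self.head(h_n[-1])       # logits: (batch, n_classes)

model = GaitEmotionLSTM()
walk = torch.randn(8, 240, 16 * 3)      # 8 clips, 240 frames, 16 joints
logits = model(walk)
print(logits.argmax(dim=1))              # predicted emotion class per clip
```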

    Affective Brain-Computer Interfaces

    Get PDF