2,711 research outputs found

    Multimodal Dialogue Management for Multiparty Interaction with Infants

    Full text link
    We present dialogue management routines for a system to engage in multiparty agent-infant interaction. The ultimate purpose of this research is to help infants learn a visual sign language by engaging them, through an artificial agent, in naturalistic and socially contingent conversations during an early-life critical period for language development (ages 6 to 12 months). As a first step, we focus on creating and maintaining agent-infant engagement that elicits appropriate and socially contingent responses from the baby. Our system includes two agents, a physical robot and an animated virtual human. The system's multimodal perception includes an eye-tracker (measuring attention) and a thermal infrared imaging camera (measuring patterns of emotional arousal). A dialogue policy is presented that selects individual actions and planned multiparty sequences based on perceptual inputs about the baby's changing internal states of emotional engagement. The present version of the system was evaluated in interaction with 8 babies. All babies demonstrated spontaneous and sustained engagement with the agents for several minutes, with patterns of conversationally relevant and socially contingent behaviors. We further performed a detailed case-study analysis with annotation of all agent and baby behaviors. Results show that the babies' behaviors were generally relevant to agent conversations and contained direct evidence of socially contingent responses by the baby to specific linguistic samples produced by the avatar. This work demonstrates the potential for language learning from agents in very young babies and has especially broad implications for the use of artificial agents with babies who have minimal language exposure in early life.
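    The abstract does not give the dialogue policy's implementation; as a hedged sketch, a rule-based policy of the kind described could map the two perceptual inputs (gaze attention and thermal arousal) to agent actions. All thresholds, state names, and action names below are hypothetical, not taken from the paper.

```python
# Hedged sketch: rule-based action selection from two perceptual inputs.
# Thresholds and action names are invented for illustration.

def select_action(gaze_on_agent: float, arousal: float) -> str:
    """Pick an agent action from the infant's engagement state.

    gaze_on_agent: fraction of the last window spent looking at an agent (0-1).
    arousal: normalized emotional-arousal estimate from thermal imaging (0-1).
    """
    if gaze_on_agent < 0.2:
        return "attention_getter"      # e.g., robot waves, calls for attention
    if arousal > 0.8:
        return "soothe"                # slow down, soften prosody
    if gaze_on_agent > 0.6 and arousal > 0.3:
        return "sign_language_sample"  # avatar produces a linguistic sample
    return "maintain_engagement"       # filler motion, smile, nursery rhythm
```

    A real policy would also sequence planned multiparty exchanges between the robot and the avatar; this sketch covers only single-action selection.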

    RoboTalk - Prototyping a Humanoid Robot as Speech-to-Sign Language Translator

    Get PDF
    Information science has mostly focused on sign language recognition. The current study instead examines whether humanoid robots might be fruitful avatars for sign language translation. After a review of research into sign language technologies, a survey of 50 deaf participants regarding their preferences among potential translation technologies reveals that humanoid robots represent a promising option. The authors also 3D-printed two arms of a humanoid robot, InMoov, with special joints for the index finger and thumb that provide additional degrees of freedom for expressing sign language. They programmed the robotic arms with German Sign Language and integrated them with a voice recognition system. This study thus provides insights into human–robot interaction in the context of sign language translation; it also contributes ideas for enhanced inclusion of deaf people in society.
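    The abstract does not describe how the arms were programmed; a minimal sketch of one stage of such a speech-to-sign pipeline is a lookup from recognized text to hand poses. The pose table, joint layout, and angle values below are entirely hypothetical and are not the InMoov implementation.

```python
# Hedged sketch: mapping recognized text to servo poses for a
# fingerspelling hand. All values are invented for illustration.

# Joint angles in degrees for (thumb, index, middle, ring, pinky);
# only three letters shown, so the table is deliberately incomplete.
FINGER_POSES = {
    "a": (90, 0, 0, 0, 0),        # closed fist, thumb alongside
    "b": (0, 180, 180, 180, 180),  # flat hand, thumb tucked
    "l": (180, 180, 0, 0, 0),      # thumb and index extended
}

def spell(word: str) -> list:
    """Translate a recognized word into a sequence of hand poses,
    skipping letters without an entry in the pose table."""
    return [FINGER_POSES[ch] for ch in word.lower() if ch in FINGER_POSES]
```

    In a full system each pose tuple would be sent to the arm's servo controller with timed transitions; full signs also require arm trajectories and non-manual markers, not just finger configurations.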

    Integrating Socially Assistive Robots into Language Tutoring Systems. A Computational Model for Scaffolding Young Children's Foreign Language Learning

    Get PDF
    Schodde T. Integrating Socially Assistive Robots into Language Tutoring Systems. A Computational Model for Scaffolding Young Children's Foreign Language Learning. Bielefeld: Universität Bielefeld; 2019. Language education is a global and important issue nowadays, especially for young children, since their later educational success builds on it. But learning a language is a complex task that is known to work best in social interaction; thus, personalized sessions tailored to the individual knowledge and needs of each child are needed to allow teachers to optimally support them. However, this is often costly in terms of time and personnel, which is one reason why research over the past decades has investigated the benefits of Intelligent Tutoring Systems (ITSs). But although ITSs can provide individualized one-on-one tutoring interactions, they often lack social support. This dissertation provides new insights into how a Socially Assistive Robot (SAR) can be employed as part of an ITS, forming a so-called "Socially Assistive Robot Tutoring System" (SARTS), to provide social support as well as to personalize and scaffold foreign language learning for young children aged 4-6 years. As the basis for the SARTS, a novel approach called A-BKT is presented, which autonomously adapts the tutoring interaction to each child's individual knowledge and needs. The corresponding evaluation studies show that the A-BKT model can significantly increase students' learning gains and maintain higher engagement during the tutoring interaction. This is partly due to the model's ability to simulate the influence of potential actions on all dimensions of the learning interaction, i.e., the children's learning progress (cognitive learning), affective state and engagement (affective learning), and believed knowledge acquisition (perceived learning).
This is particularly important since all dimensions are strongly interconnected and influence each other; for example, low engagement can cause poor learning results even when the learner is already quite proficient. This also makes it necessary not only to focus on the learner's cognitive learning but to support all dimensions equally with appropriate scaffolding actions. Therefore, an extensive literature review, observational video recordings, and expert interviews were conducted to find appropriate actions a SARTS can use to support each learning dimension. The subsequent evaluation study confirms that the developed scaffolding techniques can support young children's learning process, either by re-engaging them or by providing transparency that supports their perception of the learning process and reduces uncertainty. Finally, based on educated guesses derived from the previous studies, all identified strategies are integrated into the A-BKT model. The resulting model, called ProTM, is evaluated by simulating different learner types, which highlights its ability to autonomously adapt the tutoring interaction based on the learner's answers and disengagement cues. In summary, this dissertation yields new insights into the field of SARTS for personalized foreign language learning interactions for young children, while also raising important new questions to be studied in the future.
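    The abstract does not give A-BKT's equations, but the name indicates it extends Bayesian Knowledge Tracing (BKT). The standard BKT posterior update that such a model builds on can be sketched as follows; the parameter values are illustrative, not from the dissertation.

```python
# Standard Bayesian Knowledge Tracing step (the base model A-BKT adapts).
# slip:   P(wrong answer | skill known)
# guess:  P(correct answer | skill unknown)
# transit: P(learning the skill after a practice opportunity)

def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               transit: float = 0.15) -> float:
    """Condition the mastery estimate on the observed answer,
    then apply the learning-transition probability."""
    if correct:
        posterior = p_know * (1 - slip) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (
            p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * transit
```

    Per the abstract, A-BKT goes beyond this cognitive estimate by also simulating how candidate tutoring actions would affect the affective and perceived-learning dimensions before selecting one.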

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Get PDF
    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people's social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted to account for these individuals' needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot which serves the instructional role of a peer for the student. In this tutoring paradigm, the robot adopts a peer metaphor, functioning as a fellow learner rather than a tutor.
With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-target words by observing the instruction to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants' affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear – EmotiGO – for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users' usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students' engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students' engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants' engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
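    The abstract names supervised pattern recognition without specifying the algorithm or features used. As a minimal stand-in, the mapping from physiological indices to coder-assigned engagement labels can be sketched with a nearest-centroid classifier; the feature vectors, units, and labels below are invented for illustration.

```python
# Hedged sketch: engagement classification from physiological indices
# with a nearest-centroid rule. All data values are invented; the
# dissertation does not specify which supervised algorithm was used.
import math

def centroid(rows):
    """Per-dimension mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(samples):
    """samples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(rows) for label, rows in samples.items()}

def predict(model, x):
    """Assign x the label of the nearest class centroid."""
    return min(model, key=lambda lbl: math.dist(model[lbl], x))

# Invented training data: [skin_conductance_uS, heart_rate_bpm, skin_temp_C]
model = train({
    "high": [[8.0, 95.0, 33.0], [7.5, 100.0, 33.5]],
    "low":  [[2.0, 70.0, 31.0], [2.5, 75.0, 31.5]],
})
```

    In the dissertation's actual setup the labels come from two trained coders rating video of the sessions, and the reported 86.50% mean accuracy refers to its trained models, not to this sketch.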

    Staying engaged in child-robot interaction: A quantitative approach to studying preschoolers’ engagement with robots and tasks during second-language tutoring

    Get PDF
    Introduction: Covid-19 has shown that our traditional way of teaching increasingly depends on digital tools. In recent years (2020-2021), teachers have had to teach children online, and parents have had to guide their children through their school activities. Digital tools that can support education, such as social robots, would have been extremely useful for teachers. Unlike tablets, robots can use their bodies to behave much like teachers do, for example by gesturing while talking, which helps children concentrate better and benefits their learning performance. Moreover, robots, more than tablets, enable children to engage in social interaction, which is especially important when learning a second language (L2). This was the topic of my PhD project, which was part of the Horizon 2020 L2TOR project, in which six universities and two companies collaborated to investigate whether a robot could teach second-language words to preschoolers. One of the key questions in this project was how to develop robot behavior that keeps children engaged. Children's engagement matters because it makes them willing to work with the robot for longer periods of time. To answer this question, I conducted several studies examining the effect of the robot on children's engagement with it, as well as the children's perception of the robot. The L2TOR project made a major contribution within the human-robot interaction field to the movement toward open science: all L2TOR publications, project deliverables, source code, and data were made publicly available via www.l2tor.eu and www.github.nl/l2tor, and most studies were preregistered.

    User Experience Design and Evaluation of Persuasive Social Robot As Language Tutor At University : Design And Learning Experiences From Design Research

    Get PDF
    Human-Robot Interaction (HRI) is a developing field in which research and innovation are progressing. One domain on which HRI research has focused is education. Various studies in the education field have designed social robots with design guidelines derived from user preferences, context, and technology to help students and teachers foster their learning and teaching experience. Language learning has become popular in education as students gain opportunities to study subjects of interest, in any language, at their preferred universities around the world; this motivates research on using social robots for language learning and teaching. In this context, this thesis explored the design of a language tutoring robot for students learning the Finnish language at university. In language learning, motivation, the learning experience, context, and user preferences are important considerations. This thesis focuses on international students learning Finnish through a language tutoring social robot at Tampere University, using a design research methodology to design the persuasive tutoring robot. The design guidelines and the future language tutoring robot design, with their benefits, were formed using this methodology. Elias Robot, a language tutoring application designed by Curious Technologies, a Finnish EdTech company, was used in the explorative user study. The user study involved Pepper, a social robot, together with the Elias application running on a mobile device. The study was conducted at the university with three male and four female participants. Its aim was to gather design requirements based on learning experiences with a social robot tutor.
Based on these study findings and the design research findings, the future language tutoring social robot was co-created through a co-design workshop. Drawing on the field study, user study, technology acceptance model findings, design research findings, and student interviews, the persuasive social robot language tutor was designed. The findings revealed that all modalities are required for efficient tutoring by persuasive social robots, and that social robots foster students' motivation to learn the language. The design implications are discussed, and designs for the social robot tutor are presented through design scenarios.

    Peer tutoring of computer programming increases exploratory behavior in children

    Get PDF
    There is growing interest in teaching computer science and programming skills in schools. Here we investigated the efficacy of peer tutoring, which is known to be a useful educational resource in other domains but never before has been examined in such a core aspect of applied logical thinking in children. We compared (a) how children (N = 42, age range = 7 years 1 month to 8 years 4 months) learn computer programming from an adult versus learning from a peer and (b) the effect of teaching a peer versus simply revising what has been learned. Our results indicate that children taught by a peer showed comparable overall performance—a combination of accuracy and response times—to their classmates taught by an adult. However, there was a speed–accuracy trade-off, and peer-taught children showed more exploratory behavior, with shorter response times at the expense of lower accuracy. In contrast, no tutor effects (i.e., resulting from teaching a peer) were found. Thus, our results provide empirical evidence in support of peer tutoring as a way to help teach computer programming to children. This could contribute to the promotion of a widespread understanding of how computers operate and how to shape them, which is essential to our values of democracy, plurality, and freedom.
    Authors: de la Hera, Diego Pablo; Zanoni Saad, María Belén; Sigman, Mariano; Calero, Cecilia Ines (Universidad Torcuato Di Tella; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina)