
    Bringing tabletop technologies to kindergarten children

    Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to reap the benefits of the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analysing children's cognitive and psychomotor skills, we designed and tuned a prototype game suitable for children aged 3 to 4 years. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design was based on observing children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game whilst also confirming the children's enjoyment of the prototype.

    Between the Lines: documenting the multiple dimensions of computer supported collaborations

    When we consider the possibilities for the design and evaluation of Computer Supported Collaborative Learning (CSCL), we probably constrain the CS in CSCL to situations in which learners, or groups of learners, collaborate with each other around a single computer, across a local intranet, or via the global internet. We probably also consider situations in which the computer itself acts as a collaborative partner, giving hints and tips either with or without the addition of an animated pedagogical agent. However, there are now many possibilities for CSCL applications to be offered to learners through computing technology that is something other than a desktop computer, such as the TV or a digital toy. In order to understand how such complex and novel interactions work, we need tools to map out the multiple dimensions of collaboration across a whole variety of technologies. This paper discusses the evolution of a documentation technique for collaborative interactions from its roots in a situation where a single learner collaborates with a software learning partner, through its second generation, group use of multimedia, to its current test-bed, young children using digital toys and associated software. We will explore some of the challenges these different learning situations pose for those involved in the evaluation of collaborative learning.

    Towards inclusive and personalised education through the use of multimodal dialogue systems

    Continuous advances in the development of information technologies now make it possible to access learning content from anywhere, at any time, and almost instantaneously. However, accessibility is not always a main criterion in the design of educational applications, particularly with regard to facilitating their use by people with disabilities. Different technologies have recently emerged to foster the accessibility of computers and new mobile devices, favouring more natural communication between the student and educational systems. This paper describes innovative uses of multimodal dialogue systems in education, with special emphasis on the advantages they offer for creating educational applications that are inclusive and adapted to students' progress. Work partially funded by projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485) and TRA2010-20225-C03-01.

    04121 Abstracts Collection -- Evaluating Embodied Conversational Agents

    From 14.03.04 to 19.03.04, the Dagstuhl Seminar 04121 "Evaluating Embodied Conversational Agents" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Comparing Photorealistic and Animated Embodied Conversational Agents in Serious Games: An Empirical Study on User Experience

    Embodied conversational agents (ECAs) are paradigms of conversational user interfaces in the form of embodied characters. While ECAs offer various manipulable features, this paper focuses on a study conducted to explore two distinct levels of presentation realism. The two agent versions are photorealistic and animated. The study aims to provide insights and design suggestions for speech-enabled ECAs within serious game environments. A within-subjects, two-by-two factorial design was employed for this research with a cohort of 36 participants balanced for gender. The results showed that both the photorealistic and the animated versions were perceived as highly usable, with overall mean scores of 5.76 and 5.71, respectively. However, 69.4 per cent of the participants stated they preferred the photorealistic version, 25 per cent stated they preferred the animated version, and 5.6 per cent had no stated preference. The photorealistic agents were perceived as more realistic and human-like, while the animated characters made the task feel more like a game. Even though the agents' realism had no significant effect on usability, it positively influenced participants' perceptions of the agent. This research aims to lay the groundwork for future studies on the impact of ECA realism in serious games across diverse contexts.
    Comment: 21 pages, 14 figures; preprint to be published in the HCI International 2023 (25th International Conference on Human-Computer Interaction) proceedings.

    Example-based learning: Integrating cognitive and social-cognitive research perspectives

    Example-based learning has been studied from different perspectives. Cognitive research has mainly focused on worked examples, which typically provide students with a written, worked-out didactical solution to a problem to study. Social-cognitive research has mostly focused on modeling examples, which provide students with the opportunity to observe an adult or a peer model performing the task. The model can behave didactically or naturally, and the observation can take place face to face, on video, as a screen recording of the model's computer screen, or as an animation. This article reviews the contributions of the research on both types of example-based learning to questions such as why example-based learning is effective, for what kinds of tasks and learners it is effective, and how examples should be designed and delivered to students to optimize learning. This will show both the commonalities and the differences in research on example-based learning conducted from both perspectives and might inspire the identification of new research questions.

    Teachers’ Views on the Use of Empathic Robotic Tutors in the Classroom

    In this paper, we describe the results of an interview study conducted across several European countries on teachers' views on the use of empathic robotic tutors in the classroom. The main goals of the study were to elicit teachers' thoughts on the integration of robotic tutors into daily school practice, to understand the main roles that these robots could play, and to gather teachers' main concerns about this type of technology. Teachers' concerns were largely related to fairness of access to the technology, robustness of the robot in students' hands, and disruption of other classroom activities. They saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and in gathering information about students' learning progress without taking over the teachers' responsibility for the actual assessment. The implications of these results are discussed in relation to teacher acceptance of ubiquitous technologies in general and robots in particular.

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people's social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted to account for these individuals' needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot that serves the instructional role of a peer for the student.
With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is extended to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-target words by observing the instruction given to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants' affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear, EmotiGO, for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users' usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students' engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students' engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants' engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
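The constant time delay (CTD) procedure used in the sight-word instruction above can be illustrated as a simple trial loop: the learner is given a fixed interval to respond independently before the tutor delivers the controlling prompt (modeling the target word). The sketch below is a minimal illustrative simulation; the function names, the 4-second window, and the simulated learner are assumptions, not the dissertation's actual implementation.

```python
# Hedged sketch of one constant time delay (CTD) trial. The "delay" here is
# a response window passed to the learner's response source; this simulation
# does not actually block on a timer.

def ctd_trial(word, get_response, give_prompt, delay_s):
    """Run one CTD trial: give the learner `delay_s` seconds to respond
    independently; if they fail, deliver the prompt and try again."""
    response = get_response(timeout=delay_s)
    if response == word:
        return "independent-correct"  # responded before the prompt
    give_prompt(word)                 # tutor models the target word
    response = get_response(timeout=delay_s)
    return "prompted-correct" if response == word else "error"

# Simulated learner for demonstration: cannot read the word until it has
# been modeled, then echoes whatever was modeled.
class SimLearner:
    def __init__(self):
        self.heard = None
    def get_response(self, timeout):
        return self.heard
    def give_prompt(self, word):
        self.heard = word

learner = SimLearner()
result = ctd_trial("cat", learner.get_response, learner.give_prompt, delay_s=4)
print(result)   # prompted-correct: the prompt was needed on the first trial

result2 = ctd_trial("cat", learner.get_response, learner.give_prompt, delay_s=4)
print(result2)  # independent-correct: the learner retained the modeled word
```

In real CTD, early sessions use a 0-second delay (immediate prompting) so the learner never guesses, and the delay is then held constant at the longer interval; the loop above shows only the delayed-trial logic.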
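The final phase's pipeline (extract physiological indices per window, have coders label engagement, then fit a supervised classifier) can be sketched with scikit-learn on synthetic data. Everything below is an illustrative assumption: the three feature columns, the label generation, and the random-forest model stand in for the dissertation's actual indices, coder ratings, and algorithms, which are not reproduced here.

```python
# Hedged sketch: mapping physiological indices to coded engagement levels
# with a supervised classifier, evaluated by cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "physiological indices" per instructional window, e.g. mean skin
# conductance, a heart-rate estimate from PPG, and skin temperature.
n_windows = 200
features = rng.normal(size=(n_windows, 3))

# Synthetic engagement labels (0 = low, 1 = high), correlated with the first
# index so the classifier has real signal to learn.
labels = (features[:, 0] + 0.5 * rng.normal(size=n_windows) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In practice the features would be index vectors computed from the EmotiGO signals and the labels would come from the two trained coders; cross-validation (or a held-out participant split) then estimates how well the mapping generalizes.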