
    Robot-Aided Learning and r-Learning Services


    SANTO: Social Aerial NavigaTion in Outdoors

    In recent years, advances in remote connectivity, miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone, or unmanned aerial vehicle, quadrotors in particular. Although they have only recently been used for commercial applications, their inherent potential for tasks where small, intelligent devices are needed is huge. However, while the integrated hardware has advanced rapidly, the software built for these applications has not yet been exploited to the same degree. Recently, this shift has become visible in the improvement of common robotics tasks, such as object tracking and autonomous navigation. These challenges grow when the dynamic nature of the real world is taken into account, where knowledge about the current environment is constantly changing. Such settings arise in human-robot interaction, where the potential use of these devices is clear and algorithms are being developed to exploit it. Using the latest advances in artificial intelligence, so-called neural networks simulate aspects of human brain behavior so that the computing system performs as similarly as possible to a human. To this end, the system learns by trial and error, which, much like human learning, requires a considerable set of prior experiences for the algorithm to retain the desired behavior. Applying these technologies to human-robot interaction narrows this gap. Even so, from a bird's-eye view, a noticeable share of the time needed to apply these technologies goes into curating a high-quality dataset, in order to ensure that the learning process is optimal and no wrong actions are retained.
Therefore, it is essential to have a development platform in place to ensure these principles are enforced throughout the whole process of creating and optimizing the algorithm. In this work, several existing handicaps found in pipelines of this computational scale are exposed, and each is approached in an independent and simple manner, so that the proposed solutions can be leveraged by as many workflows as possible. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, it addresses the shortage of integrated development systems for this kind of pipeline, with special attention to those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines.

Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
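    The data-quality gate this abstract argues for can be sketched as a simple filtering step before training. Everything below (the sample format, the field names, and the `validate_samples` helper) is illustrative and not taken from the thesis:

    ```python
    # Illustrative sketch of a data-validation gate in an ML pipeline.
    # All names here are hypothetical; the thesis does not specify an API.

    def validate_samples(samples, expected_keys=("image", "label")):
        """Split raw samples into clean and flawed sets before training.

        A sample is kept only if it carries every expected field and
        none of those fields is None or empty.
        """
        clean, flawed = [], []
        for sample in samples:
            ok = all(
                key in sample and sample[key] not in (None, "", [])
                for key in expected_keys
            )
            (clean if ok else flawed).append(sample)
        return clean, flawed

    raw = [
        {"image": "frame_001.png", "label": "obstacle"},
        {"image": "frame_002.png"},         # missing label
        {"image": None, "label": "clear"},  # corrupt image reference
    ]
    clean, flawed = validate_samples(raw)
    ```

    Routing flawed samples aside, rather than silently dropping them, keeps the failures inspectable, which is in the spirit of the abstract's point about bugs introduced by flawed data.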

    Learning-by-Teaching in CS Education: A Systematic Review

    To investigate the strategies and approaches used in teaching Computer Science (CS), we reviewed the CS education literature of the past ten years. The review shows that learning-by-teaching supported by technology is helpful for improving student learning. To further investigate the strategies applied to learning-by-teaching, three categories are identified: peer tutoring, game-based flipped classrooms, and teachable agents. In each category, we further searched and investigated prior studies. The results reveal the effectiveness and challenges of each strategy and provide insights for future studies.

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. The subtlety and unpredictability of people's social behavior are known to be intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted to account for these individuals' needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system that uses an interactive computer character as a pedagogical agent (PA) simulating a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot that serves the instructional role of a peer for the student. In this tutoring paradigm, the robot adopts a peer metaphor.
With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-targeted words by observing the instruction to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants' affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear, EmotiGO, for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users' usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students' engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students' engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants' engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
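The final phase's mapping from physiological indices to engagement levels can be illustrated with a toy supervised model. The dissertation applied supervised pattern recognition to indices extracted from skin conductivity, photoplethysmography, and skin temperature; the nearest-centroid classifier, feature names, and numbers below are invented stand-ins, not the actual method or data:

```python
# Toy sketch: classify engagement from physiological indices with a
# nearest-centroid model. Features and values are invented; the
# dissertation's real pipeline used other supervised algorithms.
import math

def train_centroids(features, labels):
    """Average the feature vectors of each engagement level."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign the engagement level whose centroid is nearest."""
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))

# Each row: [mean skin conductance, heart rate, skin temperature]
X = [[4.1, 92, 33.0], [4.3, 95, 32.8], [1.2, 70, 34.5], [1.0, 68, 34.8]]
y = ["engaged", "engaged", "disengaged", "disengaged"]
model = train_centroids(X, y)
```

In a real setting the labels would come from the trained coders' video ratings, and the features would be windowed statistics of each signal rather than single values.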

    Virtual laboratories for education in science, technology, and engineering: A review

    Within education, concepts such as distance learning and open universities are now becoming more widely used for teaching and learning. However, due to the nature of the subject domain, the teaching of Science, Technology, and Engineering is still relatively behind in using new technological approaches (particularly for online distance learning). The reason for this discrepancy lies in the fact that these fields often require laboratory exercises to provide effective skill acquisition and hands-on experience. It is often difficult to make these laboratories accessible online: either the real lab needs to be enabled for remote access, or it needs to be replicated as a fully software-based virtual lab. We argue for the latter concept, since it offers advantages over remotely controlled real labs that are elaborated further in this paper. New technologies are now emerging that can overcome some of the potential difficulties in this area, including computer graphics, augmented reality, computational dynamics, and virtual worlds. This paper summarizes the state of the art in virtual laboratories and virtual worlds in the fields of science, technology, and engineering. The main research activity in these fields is discussed, but special emphasis is put on robotics due to the maturity of this area within the virtual-education community. This is no coincidence: given its broadly multidisciplinary character, robotics is a perfect example to which all the other fields of engineering and physics can contribute. Thus, virtual labs for other scientific and non-robotic engineering uses can be seen to share many of the same learning processes.
This can include supporting the introduction of new concepts as part of learning about science and technology, and introducing more general engineering knowledge, through to supporting more constructive (and collaborative) education and training activities in more complex engineering topics such as robotics. The objective of this paper is to outline this problem space in more detail and to create a valuable source of information that can help define the starting position for future research.

    Autonomous decision-making for socially interactive robots

    International Mention in the doctoral degree. The aim of this thesis is to present a novel decision-making system, based on bio-inspired concepts, for deciding which actions to take during the interaction between humans and robots. We use concepts from nature so that the robot can behave analogously to a living being, for better acceptance by people. The system is applied to autonomous Socially Interactive Robots that work in environments with users. These objectives are motivated by the need for robots that collaborate, entertain, or help with educational tasks in real situations with children or elderly people, where the robot has to behave socially. Moreover, the decision-making system can be integrated into this kind of robot so that it learns how to act depending on the profile of the user the robot is interacting with. The decision-making system proposed in this thesis is a solution to all these issues, as well as a complement to interactive learning in HRI. We also show real applications of the proposed system, applying it in an educational scenario, a situation where the robot can learn and interact with different kinds of people. The last goal of this thesis is to develop a robotic architecture that is able to learn how to behave in different contexts where humans and robots coexist. For that purpose, we design a modular and portable robotic architecture, included in several robots, that combines well-known software engineering techniques with innovative agile software development procedures to produce an easily extensible architecture. Official Doctoral Programme in Electrical, Electronic and Automation Engineering. Thesis committee: President: Fabio Bonsignorio; Secretary: María Dolores Blanco Rojas; Member: Martin Stoele.

    Mirroring and recognizing emotions through facial expressions for a Robokind platform

    Integrated master's dissertation in Industrial Electronics and Computers Engineering. Facial expressions play an important role in human social interaction, providing communicative cues, indicating the level of interest, or signalling the desire to take a speaking turn. They also give continuous feedback indicating that the information conveyed has been understood. However, certain individuals have difficulties in social interaction, in particular with verbal and non-verbal communication (e.g. emotions and gestures). Autism Spectrum Disorders (ASD) are a special case of social impairment. Individuals affected by ASD are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the promotion of social interaction and skills in children with ASD. Following this trend, in this work a robotic platform is used as a mediator in social interaction activities with children with special needs. The main purpose of this dissertation is to develop a system capable of automatically detecting emotions through facial expressions and to interface it with a robotic platform in order to allow social interaction with children with special needs. The proposed experimental setup uses the Intel RealSense 3D camera and the Zeno R50 Robokind robotic platform. This layout comprises two subsystems: a Mirroring Emotion System (MES) and an Emotion Recognition System (ERS). The first subsystem (MES) is capable of synthesizing human emotions through facial expressions, on-line. The other subsystem (ERS) is able to recognize human emotions through facial features in real time. MES extracts the user's facial Action Units (AUs) and sends the data to the robot, allowing on-line imitation. ERS uses the Support Vector Machine (SVM) technique to automatically classify the emotion expressed by the user in real time.
Finally, the proposed subsystems, MES and ERS, were evaluated in a laboratory, controlled environment in order to check the integration and operation of the systems. Then, both subsystems were tested in a school environment in different configurations. The results of these preliminary tests made it possible to detect some constraints of the system, as well as to validate its adequacy in an intervention setting.
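The ERS idea, classifying an emotion from facial Action Unit intensities with an SVM, can be sketched with scikit-learn. The choice of AUs and the toy training values below are invented for illustration; the dissertation's actual features came from the Intel RealSense camera, and its training data is not reproduced here:

```python
# Minimal SVM sketch of the ERS concept: map Action Unit (AU)
# intensities to an emotion label. Toy data only.
from sklearn.svm import SVC

# Each row: illustrative intensities of [AU6 (cheek raiser),
# AU12 (lip corner puller), AU4 (brow lowerer)] on a 0-1 scale.
X_train = [
    [0.9, 0.8, 0.0],  # smiling face
    [0.8, 0.9, 0.1],
    [0.0, 0.1, 0.9],  # frowning face
    [0.1, 0.0, 0.8],
]
y_train = ["happy", "happy", "sad", "sad"]

clf = SVC(kernel="linear")  # a linear kernel keeps the toy model simple
clf.fit(X_train, y_train)
```

For real-time use, each camera frame's AU vector would be passed to `clf.predict` in the same way as the training rows.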

    Behavioural attentiveness patterns analysis – detecting distraction behaviours

    The capacity to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is naturally used in any social context. Nevertheless, some individuals have difficulties remaining concentrated on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a special example of such individuals. ASD is a group of complex developmental disorders of the brain. Individuals affected by this disorder are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the development of the social interaction skills lacking in children with ASD. However, most of these systems are controlled remotely and cannot adapt automatically to the situation, and even the more autonomous ones still cannot perceive whether or not the user is paying attention to the instructions and actions of the robot. Following this trend, this dissertation is part of a research project that has been under development for some years. In this project, the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them to recognize emotions and to acquire new knowledge, in order to promote social interaction and communication with others. The main purpose of this dissertation is to know whether the user is distracted during an activity. In the future, the objective is to interface this system with ZECA so that it can adapt its behaviour, taking into account the individual's affective state during an emotion-imitation activity. In order to recognize human distraction behaviours and capture the user's attention, several patterns of distraction, as well as systems to automatically detect them, have been developed.
One of the most common distraction-pattern detection methods is based on measuring head pose and eye gaze. The present dissertation proposes a system based on a Red Green Blue (RGB) camera, capable of detecting the distraction patterns head pose, eye gaze, blink frequency, and the user's position relative to the camera during an activity, and then classifying the user's state using a machine learning algorithm. Finally, the proposed system is evaluated in a laboratory, controlled environment in order to verify whether it is capable of detecting the patterns of distraction. The results of these preliminary tests made it possible to detect some system constraints, as well as to validate the system's adequacy for later use in an intervention setting.
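The distraction cues the dissertation lists (head pose, eye gaze, blink frequency) can be illustrated with a simple rule-based check. The thresholds and parameter names below are invented, and the actual system classified these cues with a machine learning algorithm rather than fixed rules:

```python
# Illustrative rule-based stand-in for a learned distraction
# classifier; all thresholds are invented for this sketch.

def is_distracted(head_yaw_deg, gaze_offset_deg, blinks_per_min):
    """Flag a frame window as distracted if any cue exceeds an
    illustrative threshold."""
    return (
        abs(head_yaw_deg) > 30        # head turned away from the task
        or abs(gaze_offset_deg) > 20  # gaze far from the task area
        or blinks_per_min > 40        # unusually frequent blinking
    )
```

In the learned version, these same cues would form the feature vector handed to the classifier instead of being thresholded individually.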