
    Social robots in educational contexts: developing an application in enactive didactics

    Due to advancements in sensor and actuator technology, robots are becoming more and more common in everyday life. Many of the areas in which they are introduced demand close physical and social contact. In the last ten years the use of robots has also increasingly spread to the field of didactics, starting with their use as tools in STEM education. With the advancement of social robotics, the use of robots in didactics has been extended to tutoring situations in which these "socially aware" robots interact mainly with children, for example in language learning classes. In this paper we give a brief overview of how robots have been used in such settings until now. This overview makes transparent that the majority of applications are not grounded in didactic theory. Recognizing this shortcoming, we propose a theory-driven approach to the use of educational robots, centred on the idea that the combination of enactive didactics and social robotics holds great promise for a variety of tutoring activities in educational contexts. After defining our "Enactive Robot Assisted Didactics" approach, we give an outlook on how the use of humanoid robots can advance it. On this basis, at the end of the paper, we describe a concrete, currently ongoing implementation of this approach, which we are realizing with SoftBank Robotics' Pepper robot during university lectures.

    RoboTalk - Prototyping a Humanoid Robot as Speech-to-Sign Language Translator

    Information science has mostly focused on sign language recognition. The current study instead examines whether humanoid robots might be fruitful avatars for sign language translation. After a review of research into sign language technologies, a survey of 50 deaf participants regarding their preferences for potential avatars reveals that humanoid robots represent a promising option. The authors also 3D-printed two arms of a humanoid robot, InMoov, with special joints for the index finger and thumb that provide additional degrees of freedom to express sign language. They programmed the robotic arms with German Sign Language and integrated them with a voice recognition system. Thus this study provides insights into human–robot interactions in the context of sign language translation; it also contributes ideas for enhanced inclusion of deaf people into society.
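    A minimal sketch may help picture the translation pipeline this abstract outlines: recognized speech is looked up in a sign lexicon and replayed as servo keyframes on the printed arms. The lexicon entry, servo names, and fingerspelling fallback below are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a speech-to-sign pipeline: recognized text -> lexicon
# lookup -> servo keyframes. All names and values here are hypothetical.
from typing import Dict, List, Tuple

# Each sign is a sequence of keyframes; each keyframe is a list of
# (servo_name, angle_in_degrees) commands.
SIGN_LEXICON: Dict[str, List[List[Tuple[str, int]]]] = {
    "hallo": [
        [("right_shoulder", 90), ("right_index", 0)],
        [("right_wrist", 45), ("right_thumb", 30)],
    ],
}

def speech_to_sign(recognized_text: str) -> None:
    """Replay the sign for each recognized word, if the lexicon has it."""
    for word in recognized_text.lower().split():
        keyframes = SIGN_LEXICON.get(word)
        if keyframes is None:
            print(f"[fingerspell fallback] {word}")  # unknown word
            continue
        for frame in keyframes:
            for servo, angle in frame:
                print(f"set {servo} -> {angle} deg")  # stand-in for a servo command

speech_to_sign("Hallo")
```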

    Gesture Imitation Learning In Human-robot Interaction

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2012. This is an on-going study, part of a project that aims to assist in teaching Sign Language (SL) to hearing-impaired children by means of non-verbal communication and imitation-based interaction games between a humanoid robot and a child. The problem addressed is a robot learning to imitate five basic upper-torso gestures (SL signs) using different machine learning techniques. An RGBD sensor (Microsoft Kinect) is employed to track the skeletal model of humans and to create a training set. A novel method called Decision Based Rule is proposed. Additionally, linear regression models are compared to determine which learning technique yields the highest accuracy in gesture prediction. The most accurate technique is then used in an imitation system in which the Nao robot imitates the learned gestures as observed from users; the Decision Based Rule method achieved 96% prediction accuracy. Furthermore, this study proposes an interactive, sign-language-based game between a NAO H25 humanoid robot and preschool children. The demo is currently in Turkish Sign Language (TSL) but will be extended to American Sign Language (ASL). Since the children cannot yet read or write and are not familiar with sign language, we prepared a short story containing specially selected words that the robot both performs in sign language and pronounces verbally. After performing each special word, the robot waits for a response: the child is asked to show the colour flashcard with the illustration of that word. If the flashcard matches the word, the robot pronounces the word and continues the story; if not, the robot signals the mistake with its lights or movements and encourages the child to try again. At the end of the story, the robot performs the words one by one in sign language in random order and asks the child to place the sticker of the relevant flashcard on a play card that shows the story with the flashcard illustrations. We also ported the game to internet and tablet PC environments, with the aim of evaluating children's sign language learning from robots in different embodiments and of making the system available to children regardless of robot cost, transportation, and know-how issues.
    This work started with the development of a robust interface for rotation-invariant gesture recognition. It involves the view-based detection and recognition of static hand gestures using a single camera. Several image processing techniques are used to detect the hand region; once it is detected, geometric descriptors and Fourier descriptors are extracted, and the gesture is classified using a neural network. The main contribution here is the feature set used for classification: a hybrid set consisting of Fourier descriptors and a set of geometric descriptors introduced in this study. A colour image segmentation algorithm is implemented to detect and segment the hand and to create different feature sets. The proposed gesture recognition model has been used to control an autonomous mobile robot, and the method has been tested on different hand shapes, with the results discussed. With the availability of an RGBD camera such as the Kinect, determining good features for gesture classification became easier. The study also presents human motion imitation using the calibrated skeletal view provided by the open-source OpenNI (Open Natural Interface) library for the Kinect camera: joint angles are computed from the kinematics of the skeletal view relative to the camera and passed to a simulated Nao environment, with Choregraphe used for the simulation. Based on the Nao's degrees of freedom and kinematic constraints, the estimated joint angles (recognized gestures) are rendered so as to convey a sense of imitation.
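    The joint-angle step the thesis describes, computing angles from the Kinect skeletal view and mapping them onto the Nao's constrained joints, can be sketched as follows. The 3D joint positions are hypothetical, and the mapping to a Nao command is an assumption; only the vector geometry and the rough LElbowRoll limits are standard.

```python
# Hedged sketch: deriving a Nao-style elbow angle from three Kinect
# skeleton joints (shoulder, elbow, wrist). A real system would read
# these positions from OpenNI's calibrated skeleton stream.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (radians) formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical 3D positions (metres) in the Kinect camera frame.
shoulder = np.array([0.00, 0.30, 2.00])
elbow    = np.array([0.25, 0.28, 2.00])
wrist    = np.array([0.30, 0.50, 2.00])

elbow_angle = joint_angle(shoulder, elbow, wrist)

# The Nao's LElbowRoll is limited to roughly [-1.54, -0.03] rad, so the
# estimate must be clipped to the robot's kinematic constraints before
# being sent to the simulator (e.g. via Choregraphe).
nao_cmd = np.clip(elbow_angle - np.pi, -1.54, -0.03)
print(f"estimated elbow angle: {elbow_angle:.2f} rad, Nao command: {nao_cmd:.2f} rad")
```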

    Multimodal Dialogue Management for Multiparty Interaction with Infants

    We present dialogue management routines for a system to engage in multiparty agent-infant interaction. The ultimate purpose of this research is to help infants learn a visual sign language by engaging them in naturalistic and socially contingent conversations, initiated by an artificial agent, during an early-life critical period for language development (ages 6 to 12 months). As a first step, we focus on creating and maintaining agent-infant engagement that elicits appropriate and socially contingent responses from the baby. Our system includes two agents, a physical robot and an animated virtual human. The system's multimodal perception includes an eye-tracker (measuring attention) and a thermal infrared imaging camera (measuring patterns of emotional arousal). A dialogue policy is presented that selects individual actions and planned multiparty sequences based on perceptual inputs about the baby's changing internal states of emotional engagement. The present version of the system was evaluated in interaction with 8 babies. All babies demonstrated spontaneous and sustained engagement with the agents for several minutes, with patterns of conversationally relevant and socially contingent behaviors. We further performed a detailed case-study analysis with annotation of all agent and baby behaviors. Results show that the baby's behaviors were generally relevant to agent conversations and contained direct evidence for socially contingent responses by the baby to specific linguistic samples produced by the avatar. This work demonstrates the potential for language learning from agents in very young babies and has especially broad implications regarding the use of artificial agents with babies who have minimal language exposure in early life.
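    As a hedged illustration of such a perception-driven dialogue policy, the sketch below maps the two perceptual inputs (gaze target and arousal) to agent actions. The thresholds, state names, and actions are invented for illustration and are not the authors' actual policy.

```python
# Hedged sketch of action selection from an infant's attention target
# (eye-tracker) and arousal level (thermal imaging). Values illustrative.
from dataclasses import dataclass

@dataclass
class Perception:
    gaze_target: str   # e.g. "robot", "avatar", "away"
    arousal: float     # normalized 0..1, from thermal imaging

def select_action(p: Perception) -> str:
    if p.arousal > 0.8:
        return "avatar: soothing speech, robot idles"          # de-escalate over-arousal
    if p.gaze_target == "away":
        return "robot: attention-getting motion + vocalization"
    if p.gaze_target == "robot":
        return "robot: contingent rhyme, avatar watches"
    return "avatar: signed linguistic sample, robot nods"      # gaze on avatar

print(select_action(Perception(gaze_target="away", arousal=0.4)))
```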

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Get PDF
    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people’s social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted so as to account for these individuals’ needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system that uses an interactive computer character as a pedagogical agent (PA) simulating a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot that serves the instructional role of a peer for the student. With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-target words by observing the instruction to the RP, but they also acquired their target words more efficiently and with fewer errors thanks to the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants’ affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to coexist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings.
    Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear, EmotiGO, for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions. The device is sufficiently lightweight to be worn comfortably without interfering with users’ usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students’ engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students’ engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants’ engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
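    The final modelling step, a supervised mapping from physiological indices to coder-rated engagement, can be illustrated with a small sketch on synthetic data. The feature set and the choice of an RBF-kernel SVM are assumptions; the abstract names neither, reporting only the 86.50% mean accuracy.

```python
# Hedged sketch of engagement classification from physiological indices
# (e.g. skin conductance level, SCR rate, heart rate, skin temperature).
# The data are synthetic stand-ins, not the dissertation's dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))            # rows = session windows, cols = indices
y = rng.integers(0, 3, size=120)         # coder-rated engagement: low/medium/high

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2%}")   # the study reports 86.50%
```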

    Socially assistive robots : the specific case of the NAO

    Numerous studies have examined the development of robotics, especially socially assistive robots (SAR), including the NAO robot. This small humanoid robot has great potential in social assistance. The NAO robot’s features and capabilities, such as its motor skills, functionality, and affective capacities, have been studied in various contexts. The principal aim of this study is to gather all research conducted using this robot, to see how the NAO can be used and what its potential as a SAR could be. Articles using the NAO in any situation were found by searching the PsycINFO, Computer and Applied Sciences Complete, and ACM Digital Library databases. The main inclusion criterion was that studies had to use the NAO robot; studies comparing it with other robots or intervention programs were also included. Articles about technical improvements were excluded, since they did not involve concrete use of the NAO, as were duplicates and articles with substantial missing information about their samples. A total of 51 publications (1,895 participants) were included in the review. Six categories were defined: social interactions, affectivity, intervention, assisted teaching, mild cognitive impairment/dementia, and autism/intellectual disability. The great majority of findings concerning the NAO robot are positive; its multimodality makes it a SAR with real potential.

    User Experience Design and Evaluation of Persuasive Social Robot As Language Tutor At University: Design And Learning Experiences From Design Research

    Human-Robot Interaction (HRI) is a developing field in which research and innovation are progressing rapidly. One domain on which HRI research has focused is education. Various studies have been conducted in the education field to design social robots with appropriate design guidelines, derived from user preferences, context, and technology, to help students and teachers improve their learning and teaching experience. Language learning has become popular in education because students now have opportunities to study subjects of interest in any language at their preferred universities around the world, which motivates research on using social robots for language learning and teaching. In this context, this thesis explores the design of a language tutoring robot for students learning Finnish at university. In language learning, motivation, the learning experience, context, and user preferences are important considerations. This thesis focuses on students learning Finnish through a language tutoring social robot at Tampere University. A design research methodology is used to design the persuasive social robot tutoring Finnish to international students at Tampere University; the design guidelines and the future tutoring robot design, with their benefits, are derived through this methodology. Elias Robot, a language tutoring application designed by Curious Technologies, a Finnish EdTech company, was used in the explorative user study. The user study involved Pepper, a social robot, together with the Elias Robot application running on a mobile device. The study was conducted at the university with seven student participants, three male and four female, and aimed to gather design requirements based on learning experiences with a social robot tutor. Based on the findings of this study and of the design research, the future language tutoring social robot was co-created through a co-design workshop. Drawing on the field study, the user study, the technology acceptance model findings, the design research findings, and student interviews, the persuasive social robot language tutor was designed. The findings revealed that all the multimodalities are required for efficient tutoring by persuasive social robots, and that social robots foster students’ motivation to learn the language. The design implications are discussed, and the design of the social robot tutor is presented through design scenarios.

    Staying engaged in child-robot interaction: A quantitative approach to studying preschoolers’ engagement with robots and tasks during second-language tutoring

    Introduction: COVID-19 has shown that our traditional way of teaching is becoming increasingly dependent on digital tools. In recent years (2020-2021), teachers have had to teach children online, and parents have had to guide their children through their school activities. Digital tools that can support education, such as social robots, would have been extremely useful for teachers. Unlike tablets, robots can use their bodies to behave much as teachers do, for example by gesturing while talking, which helps children concentrate and benefits their learning outcomes. Moreover, robots, more than tablets, enable children to engage in social interaction, which is especially important when learning a second language (L2). This was the subject of my PhD project, which was part of the Horizon 2020 L2TOR project, in which six universities and two companies collaborated to investigate whether a robot could teach second-language words to preschoolers. One of the key questions in this project was how to develop robot behaviour that keeps children engaged; engagement matters so that children remain willing to work with the robot over longer periods of time. To answer this question, I conducted several studies on the effect of the robot on children’s engagement with it, as well as on children’s perception of the robot. The L2TOR project also contributed substantially to the movement toward open science in the human-robot interaction field: all L2TOR publications, project deliverables, source code, and data have been made publicly available via www.l2tor.eu and www.github.nl/l2tor, and most studies were preregistered.