Studying Eye Gaze of Children with Autism Spectrum Disorders in Interaction with a Social Robot
Children with Autism Spectrum Disorders (ASDs) experience deficits in verbal and nonverbal communication skills, including motor control, emotional facial expressions, and eye gaze attention. In this thesis, we focus on studying the feasibility and effectiveness of using a social robot, called NAO, in modeling and improving the social responses and behaviors of children with autism. In our investigation, we designed and developed two protocols to fulfill this mission. Since eye contact and gaze responses are important nonverbal cues in human social communication, and since the majority of individuals with ASD have difficulty regulating their gaze responses, this thesis focuses mostly on this area.
In Protocol 1 (in which eye gaze duration and shifting frequency are analyzed), we designed two social games (i.e., NAO Spy and Find the Suspect) and recruited 21 subjects (14 children with ASD and seven Typically Developing (TD) children), aged 7-17 years, to interact with NAO. All sessions were recorded with cameras, and the videos were used for analysis. In particular, we manually annotated the eye gaze direction of the children (gaze averted '0' or gaze at robot '1') in every frame of the videos within two social contexts (child speaking and child listening). Gaze fixation and gaze shifting frequency were analyzed; both patterns improved or changed significantly (more than half of the participants increased their eye contact duration and decreased their eye gaze shifting during both games). The results confirm that the TD group shows more gaze fixation while listening (71%) than while speaking (37%), whereas there is no significant difference between the average gaze fixations of the ASD group across the two contexts.
Besides using these statistical measures (i.e., gaze fixation and shifting), we statistically modeled the gaze responses of both groups (TD and ASD) using Markov models (e.g., the Hidden Markov Model (HMM) and the Variable-order Markov Model (VMM)). Markov-based modeling allows us to analyze the sequence of gaze directions of the ASD and TD groups for the two social conversational contexts (Child Speaking and Child Listening). The results of our experiments show that for the 'Child Speaking' segments, HMM can distinguish and recognize the differences between the gaze patterns of the TD and ASD groups accurately (79%). In addition, to evaluate the effect of gaze history on gaze responses, the VMM technique was employed to model sequential data of different lengths. The VMM results demonstrate that, in general, a first-order system (VMM with order D=1) can reliably represent the differences between the gaze patterns of the TD and ASD groups. The experimental results also confirm that VMM is more reliable and accurate for modeling the gaze responses of the Child Listening sessions than the Child Speaking ones.
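For illustration only, the sketch below shows the first-order case (VMM with D=1) under the assumption that each session is available as a list of frame-level labels (0 = gaze averted, 1 = gaze at robot): it estimates one 2x2 transition matrix per group and labels a new sequence by whichever model gives the higher log-likelihood. The toy sequences and function names are hypothetical, not taken from the thesis.

```python
# Minimal sketch: first-order Markov model (VMM, D=1) of binary gaze labels.
# Assumes each session is a list of frame-level labels: 0 = gaze averted, 1 = gaze at robot.
import numpy as np

def transition_matrix(sequences, smoothing=1.0):
    """Estimate a 2x2 matrix P[i, j] = P(next=j | current=i) with Laplace smoothing."""
    counts = np.full((2, 2), smoothing)
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    """Log-likelihood of one gaze sequence under a transition matrix P."""
    return float(sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:])))

# Hypothetical toy data: frame-level annotations for two groups.
td_sessions  = [[1, 1, 1, 0, 1, 1, 1, 1, 0, 1], [1, 1, 1, 1, 1, 0, 1, 1, 1, 1]]
asd_sessions = [[0, 1, 0, 0, 1, 0, 1, 0, 0, 0], [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]]

P_td, P_asd = transition_matrix(td_sessions), transition_matrix(asd_sessions)

# Classify a new session by which group's model explains it better.
new_seq = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
label = "TD" if log_likelihood(new_seq, P_td) > log_likelihood(new_seq, P_asd) else "ASD"
print(P_td, P_asd, label, sep="\n")
```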
Protocol 2 contains five sub-sessions targeting the intervention of different social skills: verbal communication, joint attention, eye gaze attention, and facial expression recognition/imitation. The objective of this protocol is to provide intervention sessions based on the needs of the children diagnosed with ASD. Therefore, when the study began, each participant attended three baseline sessions to evaluate his/her existing social skills and behavioral responses. In this protocol, the behavioral responses of every child are recorded in each intervention session, and feedback focuses on improving the social skills the child lacks. For example, if a child has difficulty recognizing facial expressions, we give feedback on what every facial expression looks like and ask the child to recognize them correctly, while giving no feedback on the other social skills. Our experimental results show that customizing the human-robot interaction improves the social skills of children with ASD.
Xylo-Bot: A Therapeutic Robot-Based Music Platform for Children with Autism
Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills, including motor control, emotional facial expressions, and eye gaze / joint attention. This Ph.D. dissertation focuses on studying the feasibility and effectiveness of using a social robot, called NAO, and a toy musical instrument, the xylophone, in modeling and improving the social responses and behaviors of children with ASD. In our investigation, we designed an autonomous social interactive music teaching system to fulfill this mission.
A novel modular robot-music teaching system consisting of three modules is presented. Module 1 provides an autonomous self-awareness positioning system that lets the robot localize the instrument and make micro adjustments to its arm joints to strike the note bars properly. Module 2 allows the robot to play any customized song per the user's request. This design makes it possible to translate songs into C major or A minor using a set of hexadecimal numbers, without requiring music experience; once the score is converted, the robot can play it immediately. Module 3 is designed to provide a real-life music teaching experience for the users. The two key features of this module are a) music detection and b) smart scoring and feedback. The Short-time Fourier Transform and the Levenshtein distance are adapted to fulfill the design requirements, allowing the robot to understand music and provide a proper dosage of practice and oral feedback to users. A new instrument was designed to convey emotion in music better, given the limitations of the original xylophone. This new programmable xylophone provides a more extensive frequency range of notes, switches easily between major and minor keys, is easy to control, and is fun to play as an advanced music instrument.
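As a rough illustration of the Module 3 "music detection" idea, the sketch below maps a recorded strike to the nearest xylophone note by taking the dominant frequency of the Short-time Fourier Transform; the note table, sampling rate, and window length are assumptions for illustration rather than the system's actual parameters.

```python
# Minimal sketch of STFT-based note detection: map a recorded strike to the
# nearest xylophone note via the dominant STFT frequency. The note table and
# parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import stft

NOTE_FREQS = {"C5": 523.25, "D5": 587.33, "E5": 659.25, "F5": 698.46,
              "G5": 783.99, "A5": 880.00, "B5": 987.77, "C6": 1046.50}

def detect_note(audio, fs=44100):
    """Return the note whose nominal frequency is closest to the dominant STFT peak."""
    f, t, Z = stft(audio, fs=fs, nperseg=2048)
    mag = np.abs(Z)
    peak_freq = f[np.argmax(mag.max(axis=1))]          # strongest frequency bin over time
    return min(NOTE_FREQS, key=lambda n: abs(NOTE_FREQS[n] - peak_freq))

# Toy example: a synthetic 0.3 s tone near A5 (880 Hz) should map back to "A5".
fs = 44100
tone = np.sin(2 * np.pi * 880.0 * np.arange(int(0.3 * fs)) / fs)
print(detect_note(tone, fs))
```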
Because our initial intention has been to study emotion in children with autism, we also developed an automated method for emotion classification in children using electrodermal activity (EDA) signals. Time-frequency analysis of the acquired raw EDA provides a feature space in which different emotions can be recognized. To this end, the complex Morlet (C-Morlet) wavelet function is applied to the recorded EDA signals. The dataset used in this research includes a set of multimodal recordings of social and communicative behavior, as well as EDA recordings, of 100 children younger than 30 months old. The dataset was annotated by two experts to extract the time sequences corresponding to three primary emotions: "Joy", "Boredom", and "Acceptance". Various experiments were conducted on the annotated EDA signals to classify emotions using a support vector machine (SVM) classifier. The quantitative results show that emotion classification performance improves remarkably compared to other methods when the proposed wavelet-based features are used. Using this emotion classifier, emotional engagement during sessions and responses to different pieces of music can be detected after data analysis.
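The snippet below is a minimal stand-in for this pipeline, assuming short annotated EDA windows are available: each window is convolved with complex Morlet wavelets at several scales, the mean magnitude per scale is used as a feature, and an SVM is trained on the result. The scales, window length, wavelet parameters, and labels are illustrative assumptions, and the random data is only a placeholder, so the reported score is chance-level.

```python
# Minimal sketch of a C-Morlet wavelet + SVM pipeline for EDA emotion classification.
# All parameters and the toy data are assumptions made for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cmorlet(scale, w=5.0):
    """Complex Morlet wavelet sampled over roughly 10*scale points."""
    t = np.arange(-5 * scale, 5 * scale + 1)
    return np.exp(1j * w * t / scale) * np.exp(-0.5 * (t / scale) ** 2) / np.sqrt(scale)

def morlet_features(eda_window, scales=range(1, 12)):
    """Mean |CWT| magnitude per scale: one coarse time-frequency feature per scale."""
    return np.array([np.abs(np.convolve(eda_window, cmorlet(s), mode="same")).mean()
                     for s in scales])

# Hypothetical data: 60 annotated EDA windows (4 Hz x 30 s = 120 samples), 3 emotion labels.
rng = np.random.default_rng(0)
X = np.vstack([morlet_features(rng.standard_normal(120)) for _ in range(60)])
y = rng.integers(0, 3, size=60)                      # 0=Joy, 1=Boredom, 2=Acceptance

clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())       # chance-level on this random stand-in data
```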
The NAO music education platform can be considered a useful tool for improving fine motor control, turn-taking skills, and engagement in social activities. Most of the children with ASD began to develop the striking movement within the first two intervention sessions; some even mastered the motor skill during these early sessions. More than half of the subjects could manage proper turn-taking after a few sessions. Music teaching is a good example of accomplishing social skill tasks by taking advantage of customized songs selected by individuals. According to the researcher and the video annotator, the majority of the subjects showed a high level of engagement in all music game activities, especially the free-play mode. Based on their conversation and music performance with NAO, subjects showed a strong interest in challenging the robot in a friendly way.
A Music-Therapy Robotic Platform for Children with Autism: A Pilot Study
Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills, including motor control, turn-taking, and emotion recognition. Innovative technology, such as socially assistive robots, has been shown to be a viable method for autism therapy. This paper presents a novel robot-based music-therapy platform for modeling and improving the social responses and behaviors of children with ASD. Our autonomous social interactive system consists of three modules. Module one provides an autonomous initiative positioning system for the robot, NAO, to properly localize and play the instrument (xylophone) using the robot's arms. Module two allows NAO to play customized songs composed by individuals. Module three provides a real-life music therapy experience to the users. We adopted the Short-time Fourier Transform and the Levenshtein distance to fulfill the design requirements: 1) "music detection" and 2) "smart scoring and feedback", which allow NAO to understand music and provide additional practice and oral feedback to the users as applicable. We designed and implemented six Human-Robot Interaction (HRI) sessions, including four intervention sessions. Nine children with ASD and seven typically developing children participated in a total of fifty HRI experimental sessions. Using our platform, we collected and analyzed data on social behavioral changes and emotion recognition using Electrodermal Activity (EDA) signals. The results of our experiments demonstrate that most of the participants were able to complete motor control tasks with 70% accuracy. Six out of the nine participants with ASD showed stable turn-taking behavior when playing music. The results of automated emotion classification using Support Vector Machines illustrate that emotional arousal in the ASD group can be detected and well recognized via EDA bio-signals. In summary, the results of our data analyses, including emotion classification using EDA signals, indicate that the proposed robot-based music therapy platform is an attractive and promising assistive tool for facilitating the improvement of fine motor control and turn-taking skills in children with ASD.
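A minimal sketch of the "smart scoring and feedback" idea is given below, assuming the played and target melodies are available as note-name sequences: the Levenshtein (edit) distance between the two sequences is mapped to a 0-100 score, and a hypothetical threshold decides the verbal feedback. The note names, song, and threshold are illustrative assumptions.

```python
# Minimal sketch: Levenshtein-based scoring of a played melody against a target melody.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two note sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def score(played, target):
    """Map edit distance to a 0-100 score relative to the target length."""
    return max(0, 100 * (1 - levenshtein(played, target) / max(len(target), 1)))

target = ["C5", "C5", "G5", "G5", "A5", "A5", "G5"]   # first phrase of a simple tune
played = ["C5", "G5", "G5", "A5", "A5", "G5"]          # one missed note
s = score(played, target)
print(s, "good job!" if s >= 80 else "let's try that line again")
```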
A Pilot Study on Facial Expression Recognition Ability of Autistic Children Using Ryan, a Rear-Projected Humanoid Robot
Rear-projected robots use computer graphics technology to create facial animations and project them on a mask to show the robot's facial cues and expressions. These types of robots are becoming commercially available, though more research is required to understand how they can be used effectively as socially assistive robotic agents. This paper presents the results of a pilot study comparing the facial expression recognition abilities of children with Autism Spectrum Disorder (ASD) and typically developing (TD) children using a rear-projected humanoid robot called Ryan. Six children with ASD and six TD children participated in this research, in which Ryan showed them six basic expressions (i.e., anger, disgust, fear, happiness, sadness, and surprise) at different intensity levels. Participants were asked to identify the expressions portrayed by Ryan. The results of our study show no general impairment in the expression recognition ability of the ASD group compared to the TD control group; however, both groups showed deficiencies in identifying disgust and fear. Increasing the intensity of Ryan's facial expressions significantly improved expression recognition accuracy. Both groups successfully recognized the expressions demonstrated by Ryan with high average accuracy.
Analyzing the Potential of Using Social Robots in Autism Classroom Settings
In recent years, social robots have rapidly advanced alongside the progress of artificial intelligence. Countries around the world have been enacting strategic initiatives that combine robotics and artificial intelligence, leading to increasing exploration of AI technology in the field of education. In the context of autism intervention, social robots have shown promising results in intervention programs and behavior therapy for children with autism. However, there is a lack of research specifically focusing on the use of social robots in autism classroom settings. Therefore, we have synthesized existing studies and propose integrating social robots into autism classrooms. Through collaboration between robots and teachers, as well as interaction between robots and students, we aim to enhance the attention of children with autism in the classroom and explore new impacts on their classroom performance, knowledge acquisition, and generalization of after-class skills.
Children-Robot Interaction: Eye Gaze Analysis of Children with Autism During Social Interactions
Background:
Typically developing individuals use the direction of eye gaze and eye fixation/shifting as crucial elements to transmit socially relevant information (e.g., like, dislike) to others. In individuals with Autism Spectrum Disorder (ASD), a deviant pattern of mutual eye gaze is a noticeable feature that may be one of the earliest (detectable) demonstrations of impaired social skills, which can lead to other deficits in individuals with ASD (e.g., delayed development of social cognition and affective construal processes). This can significantly affect the quality of a person's social interactions. Recent studies reveal that children with ASD show superior engagement in robot-based interaction, which can effectively trigger positive behaviors (e.g., eye gaze attention). This suggests that interacting with robots may be a promising intervention approach for children with ASD.
Objectives: The main objective of this multidisciplinary research is to utilize humanoid robot technology, along with psychological and engineering sciences, to improve the social skills of children with High Functioning Autism (HFA). The designed intervention protocol focuses on different skill sets, such as eye gaze attention, joint attention, and facial expression recognition and imitation. The current study is designed to evaluate the eye gaze patterns of children with ASD during verbal communication with a humanoid robot.
Methods: Participants in this study are 13 male children ages 7-17 (M=11 years) diagnosed with ASD. The study employs NAO, an autonomous, programmable humanoid robot from Aldebaran Robotics, to interact with the children in a series of conversations and interactive games across three sessions. During different game segments, NAO and the children exchange stories and converse on different topics. During every session, four cameras installed in the video capturing room, in addition to NAO's front-facing camera, record the entire interaction. The videos were later scored to analyze the gaze patterns of the children in two different contexts, studying eye gaze fixation and eye gaze shifting while: 1) NAO is talking, and 2) the child is talking.
Results: In order to analyze the eye gaze of participants, every frame of video was manually coded as Gaze Averted ('0') or Gaze At ('1') with respect to NAO. To accurately analyze the gaze patterns of the children during conversation, the video segments of "NAO Talking" and "Kid Talking" were selected. The averages of four measures were employed to report the static and dynamic properties of the eye gaze patterns (an illustrative sketch of these measures follows the list below):
1) "NAO talking": Gaze At NAO (GAN) = 55.3%, Gaze Shifting (GS) = 3.4%, GAN/GS = 34.10, Entropy GS = 0.20
2) "Kid talking": GAN = 43.8%, GS = 4.2%, GAN/GS = 11.6, Entropy GS = 0.27
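For illustration, the sketch below computes these four measures from a frame-level annotation vector. Treating "Entropy GS" as the binary entropy of the per-frame shift probability is an assumption made here for the sake of the example.

```python
# Minimal sketch of the four gaze measures, assuming frame-level labels
# 1 = Gaze At NAO, 0 = Gaze Averted.
import numpy as np

def gaze_measures(labels):
    labels = np.asarray(labels)
    shifts = labels[1:] != labels[:-1]                   # frame-to-frame gaze changes
    gan = 100 * labels.mean()                            # % of frames looking at NAO
    gs = 100 * shifts.mean()                             # % of frame transitions that are shifts
    p = shifts.mean()
    entropy = 0.0 if p in (0.0, 1.0) else float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))
    return {"GAN_%": gan, "GS_%": gs,
            "GAN/GS": gan / gs if gs else float("inf"), "Entropy_GS": entropy}

# Toy annotated segment (e.g. one "NAO talking" clip).
print(gaze_measures([1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1]))
```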
Conclusions:
The results indicate that the children with ASD maintain more eye contact and shift their gaze less while NAO is talking (higher GAN/GS and lower Entropy GS); however, they tend to shift their gaze more often and fixate less on the robot while they themselves are speaking. These results will serve as an important basis to significantly advance the emerging field of robot-assisted therapy for children with ASD.
How Children with Autism Spectrum Disorder Recognize Facial Expressions Displayed by a Rear-Projection Humanoid Robot
Background: Children with Autism Spectrum Disorder (ASD) experience a reduced ability to perceive crucial nonverbal communication cues such as eye gaze, gestures, and facial expressions. Recent studies suggest that social robots can be used as effective tools to improve communication and social skills in children with ASD. One explanation put forward by several studies is that children with ASD feel more contented and motivated in systemized and predictable environments, such as when interacting with robots.
Objectives: There have been a few research studies evaluating how children with ASD perceive facial expressions in humanoid robots, but no research evaluating facial expression perception on a rear-projected (aka animation-based) facially expressive humanoid robot, which provides more life-like expressions. This study evaluates how children with high functioning autism (HFA) differ from their typically developing (TD) peers in the recognition of facial expressions demonstrated by a life-like rear-projected humanoid robot, which is more adjustable and flexible in terms of displaying facial expressions for further studies.
Methods: Seven HFA and seven TD children and adolescents aged 7-16 participated in this study. The study uses Ryan, a rear-projection, life-like humanoid robot. Six basic emotional facial expressions (happy, sad, angry, disgust, surprised, and fear) at four different intensities (25%, 50%, 75%, and 100%, in ascending order) were shown on Ryan's face. Participants were asked to choose the expression they perceived among seven options (the six basic emotions and none). Responses were recorded by a research assistant. Results were analyzed to obtain the accuracy of facial expression recognition by the ASD and TD children on the humanoid robot face.
Results: We evaluated the expression intensity at which participants reached peak accuracy. Performance was best for the happy and angry expressions, for which a peak accuracy of 100% was reached with at least 50% expression intensity. The same peak accuracy was reached for the surprised and sad expressions at intensities of 75% and 100%, respectively. However, fear and disgust recognition accuracy never rose above 75%, even at maximum intensity. The experiment is still in progress for TD children. Results will be compared to a TD sample, and implications for intervention and clinical work will be discussed.
Conclusions: Overall, these results show that children with ASD recognize negative expressions such as fear and disgust with slightly lower accuracy than other expressions. At the same time, during the test, the children showed engagement and excitement toward the robot. Moreover, most of the expressions were sufficiently recognizable for the children at higher intensities, which means that Ryan, a rear-projected life-like robot, can successfully communicate with children through facial expressions, though more investigation and improvement are needed. These results serve as a basis to advance the promising field of socially assistive robotics for autism therapy.
Using Robots as Therapeutic Agents to Teach Children with Autism Recognize Facial Expression
Background: Recognizing and mimicking facial expressions are important cues for building rapport and relationships in human-human communication. Individuals with Autism Spectrum Disorder (ASD) often have deficits in recognizing and mimicking social cues, such as facial expressions. In the last decade, several studies have shown that individuals with ASD have superior engagement toward objects, and particularly robots (i.e., humanoid and non-humanoid). However, the majority of these studies have focused on investigating robots' appearances and engineering design concepts, and very little research has been done on the effectiveness of robots in therapeutic and treatment applications. In fact, the critical question of how robots can help individuals with autism practice and learn social communication skills and apply them in their daily interactions has not yet been addressed.
Objective: In a multidisciplinary research study, we have explored how effective robot-based therapeutic sessions can be, and to what extent they can improve the social experiences of children with ASD. We developed and executed a robot-based multi-session therapeutic protocol consisting of three phases (i.e., baseline, intervention, and human-validation sessions) that can serve as a treatment mechanism for individuals with ASD.
Methods: We recruited seven children (2F/5M), 6-13 years old (Mean=10.14 years), diagnosed with High Functioning Autism (HFA). We employed NAO, an autonomous programmable humanoid robot, to interact with the children in a series of social games over several sessions. We captured all the visual and audio communication between NAO and the child using multiple cameras. All the capturing devices were connected to a monitoring system outside the study room, where a coder observed and annotated the child's responses online. In every session, NAO asked the child to identify the type of prototypic facial expression (i.e., happy, sad, angry, and neutral) shown in five different photos. In the 'baseline' sessions, we assessed every child's prior knowledge of emotion and facial expression concepts. In the 'intervention' sessions, NAO provided verbal feedback (if needed) to help the child identify the facial expression. After finishing the intervention sessions, we included two 'human-validation' sessions (with no feedback) to evaluate how well the child could apply the learned concepts when NAO was replaced by a human.
Results: The following table shows the mean and Standard Deviation (STD) of facial expression recognition rates for all subjects in the three phases of our study. In our experiment, six out of seven subjects had a baseline recognition rate lower than 80%, and we observed high variation (STD) between subjects.
Facial Expression Recognition Rate (%), Mean (STD)
Baseline           69.52 (36.28)
Intervention       85.83 (20.54)
Human-Validation   94.28 (15.11)
Conclusions: The results demonstrate the effectiveness of NAO in teaching and improving facial expression recognition (FER) skills in children with ASD. More specifically, in the baseline, the low FER rate (69.52%) with high variability (STD=36.28) demonstrates that, overall, participants had difficulty recognizing expressions. The statistical results of the intervention phase confirm that NAO can reliably teach children to recognize facial expressions (higher accuracy with lower STD). Interestingly, in the human-validation phase, the children recognized the basic facial expressions with even higher accuracy (94%) and very limited variability (STD=15.11). These results indicate that robot-based feedback and intervention with a customized protocol can improve the learning capabilities and social skills of children with ASD.