742 research outputs found

    Multimodal annotation tool for challenging behaviors in people with Autism spectrum disorders

    Individuals diagnosed with Autism Spectrum Disorders (ASD) often exhibit challenging behaviors (CBs), such as self-injury or emotional outbursts, which can negatively impact their quality of life and that of those around them. Recent advances in mobile and ubiquitous technologies provide an opportunity to efficiently and accurately capture important information preceding and associated with these CBs. The ability to obtain this type of data will help with both intervention and behavioral phenotyping efforts. Through collaboration with behavioral scientists and therapists, we identified relevant design requirements and created an easy-to-use mobile application for collecting, labeling, and sharing in-situ behavior data from individuals diagnosed with ASD. Furthermore, we have released the application to the community as an open-source project so it can be validated and extended by other researchers. (Funding: National Science Foundation (U.S.), Grant NSF CCF-1029585; MIT Media Lab Consortium; Autism Speaks, Innovative Technology for Autism Initiative Grant)

    Affective Computing in the Area of Autism

    The prevalence rate of Autism Spectrum Disorders (ASD) is increasing at an alarming rate (1 in 68 children). With this increase comes the need for early diagnosis of ASD, timely intervention, and an understanding of the conditions that can be comorbid with ASD. Understanding comorbid anxiety and its interaction with emotion comprehension and production in ASD is a growing and multifaceted area of research. Recognizing and producing contingent emotional expressions is a complex task, which is even more difficult for individuals with ASD. First, I investigate the arousal experienced by adolescents with ASD in a group therapy setting. In this study I identify the instances in which physiological arousal is experienced by adolescents with ASD ("have-it"), examine whether the facial expressions of these adolescents indicate their arousal ("show-it"), and determine whether the adolescents are self-aware of this arousal ("know-it"). To establish a relationship across these three components of emotion expression and recognition, a multimodal approach to data collection is used. Machine learning techniques are used to determine whether still video images of facial expressions can predict electrodermal activity (EDA) data. Implications for the understanding of emotion and social communication difficulties in ASD, as well as future targets for intervention, are discussed. Second, it is hypothesized that a well-designed intervention technique helps the overall development of children with ASD by improving their level of functioning. I designed, validated, and evaluated a mobile-based intervention for teaching social skills to children with ASD. Last, I present the research goals behind an mHealth-based screening tool for early diagnosis of ASD in toddlers. This tool is designed to help people from low-income groups who have limited access to resources, and to do so without burdening physicians, their staff, or insurance companies.

    Continuous Analysis of Affect from Voice and Face

    Human affective behavior is multimodal, continuous and complex. Despite major advances within the affective computing research field, modeling, analyzing, interpreting and responding to human affective behavior still remains a challenge for automated systems as affect and emotions are complex constructs, with fuzzy boundaries and with substantial individual differences in expression and experience [7]. Therefore, affective and behavioral computing researchers have recently invested increased effort in exploring how to best model, analyze and interpret the subtlety, complexity and continuity (represented along a continuum e.g., from −1 to +1) of affective behavior in terms of latent dimensions (e.g., arousal, power and valence) and appraisals, rather than in terms of a small number of discrete emotion categories (e.g., happiness and sadness). This chapter aims to (i) give a brief overview of the existing efforts and the major accomplishments in modeling and analysis of emotional expressions in dimensional and continuous space while focusing on open issues and new challenges in the field, and (ii) introduce a representative approach for multimodal continuous analysis of affect from voice and face, and provide experimental results using the audiovisual Sensitive Artificial Listener (SAL) Database of natural interactions. The chapter concludes by posing a number of questions that highlight the significant issues in the field, and by extracting potential answers to these questions from the relevant literature. The chapter is organized as follows. Section 10.2 describes theories of emotion, Sect. 10.3 provides details on the affect dimensions employed in the literature as well as how emotions are perceived from visual, audio and physiological modalities. 
Section 10.4 summarizes how current technology has been developed, in terms of data acquisition and annotation, and automatic analysis of affect in continuous space by bringing forth a number of issues that need to be taken into account when applying a dimensional approach to emotion recognition, namely, determining the duration of emotions for automatic analysis, modeling the intensity of emotions, determining the baseline, dealing with high inter-subject expression variation, defining optimal strategies for fusion of multiple cues and modalities, and identifying appropriate machine learning techniques and evaluation measures. Section 10.5 presents our representative system that fuses vocal and facial expression cues for dimensional and continuous prediction of emotions in valence and arousal space by employing bidirectional Long Short-Term Memory neural networks (BLSTM-NN), and introduces an output-associative fusion framework that incorporates correlations between the emotion dimensions to further improve continuous affect prediction. Section 10.6 concludes the chapter.
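The core idea of output-associative fusion is to re-predict each emotion dimension from the first-stage predictions of *all* dimensions, so that valence-arousal correlation refines both estimates. A minimal sketch of that two-stage scheme follows, with simple ridge regressors standing in for the chapter's BLSTM-NN, and synthetic features and targets in place of real audiovisual data:

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic fused voice+face features and correlated valence/arousal targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
w_v, w_a = rng.normal(size=10), rng.normal(size=10)
valence = X @ w_v + 0.1 * rng.normal(size=500)
arousal = X @ w_a + 0.5 * valence + 0.1 * rng.normal(size=500)  # dimensions correlate

# Stage 1: an independent predictor per emotion dimension.
v1 = X @ ridge_fit(X, valence)
a1 = X @ ridge_fit(X, arousal)

# Stage 2 (output-associative): each dimension is re-predicted from BOTH
# stage-1 outputs, letting the valence-arousal correlation refine the estimate.
Z = np.column_stack([v1, a1, np.ones(len(X))])
v2 = Z @ ridge_fit(Z, valence)
a2 = Z @ ridge_fit(Z, arousal)
```

In the chapter's framework the second stage additionally spans a temporal window of first-stage outputs; the single-frame version here only illustrates the cross-dimensional coupling.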

    Xylo-Bot: A Therapeutic Robot-Based Music Platform for Children with Autism

    Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills, including motor control, emotional facial expressions, and eye gaze / joint attention. This Ph.D. dissertation studies the feasibility and effectiveness of using a social robot, NAO, and a toy music instrument, a xylophone, to model and improve the social responses and behaviors of children with ASD. To this end, we designed an autonomous, socially interactive music-teaching system. The novel modular robot-music teaching system consists of three modules. Module 1 provides an autonomous self-aware positioning system that lets the robot localize the instrument and make micro-adjustments to its arm joints so it strikes the note bars properly. Module 2 allows the robot to play any customized song on request; songs can be transcribed into C major or A minor as a set of hexadecimal numbers without requiring any musical experience, and once a score is converted the robot can play it immediately. Module 3 provides a real-life music-teaching experience for users, with two key features: (a) music detection and (b) smart scoring and feedback. The short-time Fourier transform and the Levenshtein distance are adopted to meet these requirements, allowing the robot to understand the music and give users an appropriate amount of practice and verbal feedback. Because the original xylophone limited how much musical emotion could be expressed, a new programmable instrument was designed: this programmable xylophone provides a wider frequency range of notes, switches easily between major and minor keys, and is easy to control and enjoyable to play as an advanced music instrument.
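The scoring-and-feedback idea — compare the sequence of notes the child actually played against the target score using edit distance — can be sketched as follows. The `performance_score` mapping from edit distance to a 0-100 score is an illustrative assumption, not the dissertation's exact formula:

```python
def levenshtein(played, target):
    """Edit distance between the detected note sequence and the score:
    each missed, extra, or wrong note costs 1."""
    m, n = len(played), len(target)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if played[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion (missed note)
                          d[i][j - 1] + 1,          # insertion (extra note)
                          d[i - 1][j - 1] + cost)   # substitution (wrong note)
    return d[m][n]

def performance_score(played, target):
    """Map edit distance to a 0-100 score usable for verbal feedback."""
    if not target:
        return 100.0
    return max(0.0, 100.0 * (1 - levenshtein(played, target) / len(target)))

target = ["C4", "E4", "G4", "C5"]
print(performance_score(["C4", "E4", "G4", "C5"], target))  # 100.0
print(performance_score(["C4", "E4", "A4", "C5"], target))  # one wrong note -> 75.0
```

In the full system the `played` sequence would come from the short-time-Fourier-transform-based note detector rather than being given directly.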
    Because our initial intention was to study emotion in children with autism, we also developed an automated method for emotion classification in children using electrodermal activity (EDA) signals. Time-frequency analysis of the raw EDA recordings provides a feature space in which different emotions can be recognized. To this end, the complex Morlet (C-Morlet) wavelet function is applied to the recorded EDA signals. The dataset used in this research comprises multimodal recordings of social and communicative behavior, together with EDA recordings, of 100 children younger than 30 months old. Two experts annotated the dataset to extract the time sequences corresponding to three primary emotions: "Joy", "Boredom", and "Acceptance". Various experiments were conducted on the annotated EDA signals to classify emotions using a support vector machine (SVM) classifier. The quantitative results show that classification performance improves markedly over other methods when the proposed wavelet-based features are used. With this emotion classifier, emotional engagement during sessions and responses to different pieces of music can be detected after data analysis. The NAO music-education platform can therefore be considered a useful tool for improving fine motor control, turn-taking skills, and engagement in social activities. Most of the children with ASD began to develop the striking movement within the first two intervention sessions; some even mastered the motor skill during those early sessions. More than half of the subjects demonstrated proper turn-taking after a few sessions. Music teaching is a good example of accomplishing social-skill tasks by taking advantage of customized songs selected by each individual. According to the researcher and the video annotator, the majority of subjects showed a high level of engagement in all music game activities, especially in the free-play mode. Based on their conversations and music performances with NAO, subjects showed strong interest in challenging the robot in a friendly way.
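The C-Morlet feature extraction described above amounts to convolving the EDA signal with complex Morlet wavelets at a few center frequencies and summarizing the resulting power per band; those features then feed the SVM. A minimal numpy sketch follows — the sampling rate, bandwidth parameter `fb`, frequency bands, and the synthetic EDA signal are all illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def cmorlet(t, f, fb=1.0):
    """Complex Morlet wavelet at center frequency f (Hz) with bandwidth fb."""
    return (np.pi * fb) ** -0.5 * np.exp(2j * np.pi * f * t) * np.exp(-t**2 / fb)

def eda_wavelet_features(eda, fs, freqs):
    """Convolve the EDA signal with complex Morlet wavelets and return
    the mean power in each frequency band as a feature vector."""
    t = np.arange(-2, 2, 1 / fs)  # 4 s wavelet support (assumed)
    feats = []
    for f in freqs:
        coef = np.convolve(eda, cmorlet(t, f), mode="same")
        feats.append(np.mean(np.abs(coef) ** 2))
    return np.array(feats)

# Synthetic EDA: slow tonic drift plus a 0.5 Hz phasic oscillation.
fs = 8.0  # typical low EDA sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
eda = 0.05 * t + 0.3 * np.sin(2 * np.pi * 0.5 * t)

feats = eda_wavelet_features(eda, fs, freqs=[0.1, 0.5, 1.0])
```

On this synthetic signal the 0.5 Hz band carries far more power than the 1.0 Hz band, which is the kind of band-wise contrast the SVM classifier would exploit on real annotated EDA segments.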

    Patient centric intervention for children with high functioning autism spectrum disorder. Can ICT solutions improve the state of the art?

    In my PhD research we developed an integrated technological platform for the acquisition of neurophysiologic signals in a semi-naturalistic setting where children are free to move around, play with different objects, and interact with the examiner. Interaction with the examiner rather than with a screen is another very important feature of this research, as it allows us to recreate a more realistic situation with social interactions and cues. In this paradigm, we can assume that the signals acquired from the brain and the autonomic nervous system are much more similar to what is generated while the child interacts in everyday situations. This setting, with a relatively simple technical implementation, can be considered one step toward a more behaviorally driven analysis of neurophysiologic activity. Within the context of a pilot open trial, we showed the feasibility of applying the technological platform to classical intervention solutions for autism. We found that (1) the platform was useful during both child-therapist interaction at the hospital and child-parent interaction at home, and (2) tailored intervention was compatible with at-home use by non-professional therapists/parents. Going back to the title of my thesis, 'Can ICT solutions improve the state of the art?', the answer could be: 'Yes, it can be a useful support for a skilled professional in the field of autism.'

    From Robot-Assisted Intervention to New Generation of Autism Screening: an Engineering Implementation Beyond the Technical Approach

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder that affects people from birth, with symptoms appearing in the early developmental period. ASD diagnosis is usually performed through several sessions of behavioral observation, exhaustive screening, and manual coding of behavior. The early detection of ASD signs in naturalistic behavioral observation may be improved through Socially Assistive Robotics (SAR) and technology-based tools for automated behavior assessment. Robot-assisted tools using Child-Robot Interaction (CRI) theories have attracted interest for intervention with children with Autism Spectrum Disorder (CwASD), showing faster and more significant gains in diagnosis and therapeutic intervention than classical methods. Additionally, using computer vision to analyze the child's behaviors and automated video coding to summarize the responses would help clinicians reduce the delay in ASD diagnosis. Despite the growth of research related to SAR, achieving a plausible Robot-Assisted Diagnosis (RAD) for CwASD remains a considerable challenge for the clinical and robotics communities. The work of specialists in ASD diagnosis is hard and labor-intensive, as the condition's manifestations are inherently heterogeneous, which makes the process more difficult; this complexity may be the main reason for the slow progress in developing SAR for diagnostic purposes. There is also still a lack of guidelines on how to select appropriate robotic features, such as appearance, morphology, and autonomy level, and on how to design and implement the robot's role in the CRI. Thus, this Ph.D. thesis provides a comprehensive robot-assisted intervention for CwASD to assess autism risk factors for diagnostic purposes. More specifically, two studies were conducted to analyze and validate the system's performance.
Through statistical data analysis, different behavior patterns of the CwASD group were identified, suggesting that these patterns can be used to detect autism risk factors through robot-based interventions. To broaden the scope of this research, a theoretical conceptualization of a pervasive version of the multimodal environment was described, and a participatory design methodology was designed and implemented in the Colombian autism community, providing a set of guidelines for the design of a social robotic device suitable for robot-assisted intervention for CwASD.

    Measuring Engagement in Robot-Assisted Autism Therapy: A Cross-Cultural Study

    During occupational therapy for children with autism, it is often necessary to elicit and maintain engagement for the children to benefit from the session. Recently, social robots have been used for this; however, existing robots lack the ability to autonomously recognize the children’s level of engagement, which is necessary when choosing an optimal interaction strategy. Progress in automated engagement reading has been impeded in part due to a lack of studies on child-robot engagement in autism therapy. While it is well known that there are large individual differences in autism, little is known about how these vary across cultures. To this end, we analyzed the engagement of children (age 3–13) from two different cultural backgrounds: Asia (Japan, n = 17) and Eastern Europe (Serbia, n = 19). The children participated in a 25 min therapy session during which we studied the relationship between the children’s behavioral engagement (task-driven) and different facets of affective engagement (valence and arousal). Although our results indicate that there are statistically significant differences in engagement displays in the two groups, it is difficult to make any causal claims about these differences due to the large variation in age and behavioral severity of the children in the study. However, our exploratory analysis reveals important associations between target engagement and perceived levels of valence and arousal, indicating that these can be used as a proxy for the children’s engagement during the therapy. We provide suggestions on how this can be leveraged to optimize social robots for autism therapy, while taking into account cultural differences. (Funding: MEXT Grant-in-Aid for Young Scientists B, grant no. 16763279; Chubu University Grant I, grant no. 27IS04I (Japan); European Union HORIZON 2020, grant agreement no. 701236 (ENGAGEME); European Commission Framework Programme for Research and Innovation, Marie Sklodowska-Curie Actions, Individual Fellowship; European Commission Framework Programme for Research and Innovation, Marie Sklodowska-Curie Actions, grant agreement no. 688835 (DE-ENIGMA))