
    Affective Computing in the Area of Autism

    The prevalence of Autism Spectrum Disorder (ASD) is increasing at an alarming rate (1 in 68 children). With this increase comes the need for early diagnosis of ASD, timely intervention, and an understanding of the conditions that can be comorbid with ASD. Understanding comorbid anxiety and its interaction with emotion comprehension and production in ASD is a growing and multifaceted area of research. Recognizing and producing contingent emotional expressions is a complex task, which is even more difficult for individuals with ASD. First, I investigate the arousal experienced by adolescents with ASD in a group therapy setting. In this study I identify the instances in which physiological arousal is experienced by adolescents with ASD (have-it), examine whether the facial expressions of these adolescents indicate their arousal (show-it), and determine whether the adolescents are self-aware of this arousal (know-it). To establish a relationship across these three components of emotion expression and recognition, a multimodal approach to data collection is used. Machine learning techniques are used to determine whether still video images of facial expressions can predict Electrodermal Activity (EDA) data. Implications for the understanding of emotion and social communication difficulties in ASD, as well as future targets for intervention, are discussed. Second, it is hypothesized that a well-designed intervention technique helps the overall development of children with ASD by improving their level of functioning. I designed and validated a mobile-based intervention for teaching social skills to children with ASD, and evaluated that social skill intervention. Last, I present the research goals behind an mHealth-based screening tool for early diagnosis of ASD in toddlers. The tool is designed to help people from low-income groups who have limited access to resources, without burdening physicians, their staff, or insurance companies.
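The have-it/show-it question above — whether features derived from facial expressions can predict a synchronized EDA signal — can be sketched in a few lines. Everything here is illustrative: the feature vectors, the synthetic EDA signal, and the ridge regressor are assumptions for the sketch, not the dissertation's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each video frame is summarized by a small
# facial-feature vector (e.g. action-unit intensities); the target is a
# synchronized EDA value. Both are simulated here.
n_frames, n_features = 200, 6
X = rng.normal(size=(n_frames, n_features))
true_w = rng.normal(size=n_features)
eda = X @ true_w + 0.1 * rng.normal(size=n_frames)  # synthetic EDA signal

# Ridge regression via the normal equations: w = (X'X + lam*I)^-1 X'y
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ eda)
pred = X @ w

# Coefficient of determination as a crude check of the face->EDA mapping
ss_res = np.sum((eda - pred) ** 2)
ss_tot = np.sum((eda - eda.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

On real data the features would come from a facial-expression model and the R² would be far from this synthetic ceiling; the sketch only shows the shape of the prediction task.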

    Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review

    Background: Human Emotion Recognition (HER) has been a popular field of study in recent years. Despite the great progress made so far, relatively little attention has been paid to the use of HER in autism. People with autism are known to face problems with daily social communication and with the prototypical interpretation of emotional responses, which are most frequently expressed via facial expressions. This poses significant practical challenges to the application of regular HER systems, which are normally developed for and by neurotypical people. Objective: This study reviews the literature on the use of HER systems in autism, particularly with respect to sensing technologies and machine learning methods, so as to identify existing barriers and possible future directions. Methods: We conducted a systematic review of articles published between January 2011 and June 2023 according to the 2020 PRISMA guidelines. Manuscripts were identified by searching the Web of Science and Scopus databases. Manuscripts were included when they related to emotion recognition, used sensors and machine learning techniques, and involved children, young people, or adults with autism. Results: The search yielded 346 articles, of which 65 met the eligibility criteria and were included in the review. Conclusions: Studies predominantly used facial expression analysis as the emotion recognition method. Consequently, video cameras were the most widely used devices across studies, although a growing trend toward physiological sensors has been observed recently. Happiness, sadness, anger, fear, disgust, and surprise were the emotions most frequently addressed. Classical supervised machine learning techniques were used far more often than unsupervised approaches or more recent deep learning models. Studies focused on autism in a broad sense, and limited effort has been directed toward more specific disorders of the spectrum.
    Privacy or security issues were seldom addressed, and if so, at a rather insufficient level of detail.
    This research has been partially funded by the Spanish project “Advanced Computing Architectures and Machine Learning-Based Solutions for Complex Problems in Bioinformatics, Biotechnology, and Biomedicine (RTI2018-101674-B-I00)” and the Andalusian project “Integration of heterogeneous biomedical information sources by means of high performance computing. Application to personalized and precision medicine (P20_00163)”. Funding for this research is provided by the EU Horizon 2020 Pharaon project ‘Pilots for Healthy and Active Ageing’ (no. 857188). Moreover, this research has received funding under the REMIND project, Marie Sklodowska-Curie EU Framework for Research and Innovation Horizon 2020 (no. 734355). This research has been partially funded by the BALLADEER project (PROMETEO/2021/088) from the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana. Furthermore, it has been partially funded by the AETHER-UA (PID2020-112540RB-C43) project from the Spanish Ministry of Science and Innovation. This work has also been partially funded by “La Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital”, under the project “Development of an architecture based on machine learning and data mining techniques for the prediction of indicators in the diagnosis and intervention of autism spectrum disorder. AICO/2020/117”. This study was also funded by the Colombian Government through Minciencias grant number 860 “international studies for doctorate”. This research has been partially funded by the Spanish Government through the project PID2021-127275OB-I00, FEDER “Una manera de hacer Europa”. Moreover, this contribution has been supported by the Spanish Institute of Health ISCIII through the DTS21-00047 project. Furthermore, this work was funded by COST Actions “HARMONISATION” (CA20122) and “A Comprehensive Network Against Brain Cancer” (Net4Brain - CA22103). Sandra Amador is granted by the Generalitat Valenciana and the European Social Fund (CIACIF/2022/233).

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and by engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. The subtlety and unpredictability of people’s social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, in which the interaction complexity can be adjusted to account for these individuals’ needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system that uses an interactive computer character as a pedagogical agent (PA), simulating a human tutor teaching sight-word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot that adopts a peer metaphor, serving the instructional role of a peer for the student.
    With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is extended to a novel triadic interaction, in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of PA-delivered sight-word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words instructed exclusively to the RP. The data show that not only did the participants learn non-targeted words by observing the instruction delivered to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants’ affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear device – EmotiGO – for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
    EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users’ usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase models the students’ engagement levels using their physiological signals, collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students’ engagement levels during the interaction with the tutoring system are rated by two trained coders using video recordings of the instructional sessions. Supervised pattern recognition algorithms are then used to map the physiological indices to the engagement scores. The results indicate that the trained models classify participants’ engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
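The index-extraction-plus-supervised-classification pipeline described in the fourth phase can be sketched as follows. The specific indices (mean level, variability, linear trend), the synthetic skin-conductance windows, and the nearest-centroid classifier are all illustrative assumptions standing in for the dissertation's actual features and models.

```python
import numpy as np

rng = np.random.default_rng(1)

def indices(window):
    """Simple physiological indices from one signal window:
    mean level, variability, and linear trend (slope)."""
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]
    return np.array([window.mean(), window.std(), slope])

def make_window(engaged):
    """Synthetic skin-conductance window; 'engaged' windows drift upward
    with a higher tonic level (illustrative assumption only)."""
    base = 2.0 + (0.8 if engaged else 0.0)
    drift = (0.02 if engaged else 0.0) * np.arange(100)
    return base + drift + 0.1 * rng.normal(size=100)

# 30 engaged (label 1) and 30 disengaged (label 0) windows
X = np.array([indices(make_window(e)) for e in [True] * 30 + [False] * 30])
y = np.array([1] * 30 + [0] * 30)

# Nearest-centroid classifier standing in for the supervised models in the text
train = np.r_[np.arange(0, 20), np.arange(30, 50)]
test = np.setdiff1d(np.arange(60), train)
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[test, None, :] - centroids) ** 2).sum(axis=2), axis=1)
acc = (pred == y[test]).mean()
print(acc)
```

The synthetic classes are deliberately well separated, so this toy accuracy is near perfect; on real EmotiGO recordings the reported figure was a mean accuracy of 86.50%.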

    Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)


    The role of autistic traits in the perception of emotion from faces and voices: a behavioural and fMRI investigation

    This thesis combined behavioural and fMRI approaches to study the role of autistic traits in the perception of emotion from faces and voices, addressing research questions concerning: behavioural recognition of the full range of six basic emotions across multiple domains (face, voice, and face-voice); neural correlates during the processing of a wide range of emotional expressions from the face, the voice, and the combination of both; and the neural circuitry responding to an incongruence effect (incongruence vs. congruence). The behavioural study investigated the effects of autistic traits, as quantified by the Autism-Spectrum Quotient (AQ), on emotional processing in unimodal (faces, voices) and crossmodal (emotionally congruent face-voice expressions) presentations. In addition, by taking the degree of anxiety into account, the role of comorbid anxiety in emotion recognition across autistic traits was also explored. Compared to an age- and gender-matched group of individuals with low levels of autistic traits (LAQ), individuals with high levels of autistic traits (HAQ) showed no general deficit in recognizing emotions presented in faces and voices, regardless of their comorbid anxiety. However, comorbid anxiety did moderate the relationship between autistic traits and the recognition of certain emotions (e.g., fear, surprise, and anger), and this effect tended to differ between the two groups. Specifically, with greater anxiety, individuals with HAQ showed a lower probability of correctly recognizing fear, whereas individuals with LAQ showed a greater probability of correctly recognizing fear expressions.
    For response time, anxiety symptoms tended to be associated with greater response latency in the HAQ group but shorter response latency in the LAQ group in the recognition of emotional expressions, negative emotions in particular (e.g., anger, fear, and sadness); this effect of anxiety was not restricted to specific modalities. Although no general emotion recognition deficit was found in individuals with considerable autistic traits compared to those with low levels of autistic traits, this does not necessarily mean that the two groups shared the same neural network when processing emotions. It was therefore useful to explore the neural correlates engaged in the processing of emotional expressions in individuals with high levels of autistic traits. The results of this investigation suggested hypoactivation of brain areas dedicated to multimodal integration, particularly for displays showing happiness and disgust. However, both the HAQ and LAQ groups showed similar patterns of brain response (mainly in temporal regions) to face-voice combinations. In response to emotional stimuli in a single modality, the HAQ group activated a number of frontal and temporal regions (e.g., STG, MFG, IFG); these differences may suggest more effortful and less automatic processing in individuals with HAQ. In everyday life, emotional information is often conveyed by both the face and the voice. Consequently, information presented concurrently by one source can alter the way information from the other source is perceived, leading to emotional incongruence if the information from the two sources conflicts. Using fMRI, the present work also examined the neural circuitry involved in responding to an incongruence effect (incongruence vs. congruence) from face-voice pairs in a group of individuals with considerable autistic traits.
    In addition, differences in brain responses to emotional incongruity between explicit instructions to attend to facial expression and explicit instructions to attend to tone of voice were also explored across autistic traits. No significant incongruence effect was found between groups, suggesting that individuals with a high level of autistic traits are able to recruit neural networks for processing incongruence that are as normative as those of individuals with a low level of autistic traits, regardless of instructions. Despite the absence of between-group differences, individuals with HAQ showed negative activation in regions involved in the default-mode network. However, taking changes of instructions into account, a stronger incongruence effect was more likely to occur in the voice-attend condition for individuals with HAQ, but in the face-attend condition for individuals with LAQ.