
    Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD

    Neurodevelopmental conditions like Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) affect a significant number of children and adults worldwide. Currently, diagnosis of such conditions is carried out by experts, who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods are not only subjective, difficult to repeat, and costly, but also extremely time-consuming. However, the recent surge of research into automatic facial behaviour analysis and its varied applications could offer a way of tackling these diagnostic difficulties. Automatic facial expression recognition is one of the core components of this field, but it has always been challenging to perform accurately in an unconstrained environment. This thesis presents a dynamic deep learning framework for robust automatic facial expression recognition. It also proposes an approach for applying this method to facial behaviour analysis, which can help in the diagnosis of conditions like ADHD and ASD. The proposed facial expression algorithm uses a deep Convolutional Neural Network (CNN) to learn models of facial Action Units (AUs). It models three main distinguishing features of AUs jointly in a CNN: shape, appearance, and short-term dynamics. Appearance is modelled through local image regions relevant to each AU, shape is encoded using binary masks computed from automatically detected facial landmarks, and short-term dynamics are encoded by using a short sequence of images as input to the CNN. In addition, the method employs Bidirectional Long Short-Term Memory (BLSTM) recurrent neural networks for modelling long-term dynamics. The proposed approach is evaluated on a number of databases, showing state-of-the-art performance for both AU detection and intensity estimation tasks. 
The AU intensities estimated using this approach, along with other 3D face-tracking data, are used for encoding facial behaviour. The encoded facial behaviour is used to learn models that can help in the detection of ADHD and ASD. This approach was evaluated on the KOMAA database, which was specially collected for this purpose. Experimental results show that facial behaviour encoded in this way provides high discriminative power for the classification of people with these conditions. The proposed system is shown to be a potentially useful, objective, and time-saving contribution to the clinical diagnosis of ADHD and ASD.
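The abstract describes encoding AU shape as binary masks computed from automatically detected facial landmarks. A minimal sketch of that idea (landmark coordinates, mask resolution, and the bounding-box rasterization are all illustrative assumptions, not the thesis's actual method):

```python
# Sketch of the shape-encoding idea: rasterize a binary mask over the
# image region spanned by the facial landmarks relevant to one AU.
# The landmark coordinates and mask size below are toy values.

def landmark_mask(landmarks, height, width, margin=2):
    """Return a height x width binary mask covering the bounding box
    of the given (x, y) landmark points, padded by `margin` pixels."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    x0 = max(0, min(xs) - margin)
    x1 = min(width - 1, max(xs) + margin)
    y0 = max(0, min(ys) - margin)
    y1 = min(height - 1, max(ys) + margin)
    return [[1 if (x0 <= c <= x1 and y0 <= r <= y1) else 0
             for c in range(width)]
            for r in range(height)]

# Example: two hypothetical mouth-corner landmarks for a lip-corner AU
mask = landmark_mask([(3, 4), (8, 5)], height=10, width=12)
```

In the full pipeline such a mask would be stacked with the corresponding local image region (appearance) and a short frame sequence (dynamics) as joint CNN input.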

    Modern Views of Machine Learning for Precision Psychiatry

    In light of the NIMH's Research Domain Criteria (RDoC), the advent of functional neuroimaging, and novel technologies and methods, new opportunities exist to develop precise and personalized prognosis and diagnosis of mental disorders. Machine learning (ML) and artificial intelligence (AI) technologies are playing an increasingly critical role in the new era of precision psychiatry. Combining ML/AI with neuromodulation technologies can potentially provide explainable solutions in clinical practice and effective therapeutic treatment. Advanced wearable and mobile technologies also call for a new role for ML/AI in digital phenotyping for mobile mental health. Here, we provide a comprehensive review of ML methodologies and applications that combine neuroimaging, neuromodulation, and advanced mobile technologies in psychiatry practice. Additionally, we review the role of ML in molecular phenotyping and cross-species biomarker identification in precision psychiatry. We further discuss explainable AI (XAI) and causality testing in a closed human-in-the-loop manner, and highlight the potential of ML in multimedia information extraction and multimodal data fusion. Finally, we discuss conceptual and practical challenges in precision psychiatry and highlight ML opportunities in future research.

    Automatic autism spectrum disorder detection using artificial intelligence methods with MRI neuroimaging: A review

    Autism spectrum disorder (ASD) is a brain condition characterized by diverse signs and symptoms that appear in early childhood. ASD is also associated with communication deficits and repetitive behavior in affected individuals. Various ASD detection methods have been developed, including neuroimaging modalities and psychological tests. Among these methods, magnetic resonance imaging (MRI) modalities are of paramount importance to physicians, who rely on them to diagnose ASD accurately. MRI modalities are non-invasive and include functional (fMRI) and structural (sMRI) neuroimaging methods. However, diagnosing ASD with fMRI and sMRI is often laborious and time-consuming for specialists; therefore, several computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) have been developed to assist specialist physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular AI schemes used for diagnosing ASD. This study aims to review the automated detection of ASD using AI. We review several CADS that have been developed using ML techniques for the automated diagnosis of ASD using MRI modalities. There has been very limited work on the use of DL techniques to develop automated diagnostic models for ASD; a summary of the studies developed using DL is provided in the Supplementary Appendix. The challenges encountered during the automated diagnosis of ASD using MRI and AI techniques are then described in detail. Additionally, a graphical comparison of studies using ML and DL to diagnose ASD automatically is discussed. We suggest future approaches to detecting ASD using AI techniques and MRI neuroimaging.
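CADS of the kind this review covers commonly reduce fMRI to region-wise time series and feed pairwise functional connectivity to an ML classifier. A minimal pure-Python sketch of that feature-extraction step (the ROI time series are toy values, with no real MRI input/output or preprocessing):

```python
# Sketch of a common fMRI feature-extraction step: pairwise Pearson
# correlation between ROI time series, flattened into a feature vector
# for a downstream classifier. The data below are synthetic.
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def connectivity_features(roi_series):
    """Upper-triangle correlations between all pairs of ROI time series."""
    feats = []
    for i in range(len(roi_series)):
        for j in range(i + 1, len(roi_series)):
            feats.append(pearson(roi_series[i], roi_series[j]))
    return feats

# Three toy ROI time series -> 3 pairwise connectivity features
rois = [[1.0, 2.0, 3.0, 4.0],
        [2.0, 4.0, 6.0, 8.0],
        [4.0, 3.0, 2.0, 1.0]]
features = connectivity_features(rois)
```

With N regions this yields N(N-1)/2 features per scan, which is the typical input to the conventional ML classifiers compared in such reviews.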

    A social robot connected with ChatGPT to improve cognitive functioning in ASD subjects

    Neurodevelopmental Disorders (NDDs) represent a significant healthcare and economic burden for families and society. Technology, including AI and digital technologies, offers potential solutions for the assessment, monitoring, and treatment of NDDs. However, further research is needed to determine the effectiveness, feasibility, and acceptability of these technologies for NDDs, and to address the challenges associated with their implementation. In this work, we present an application of social robotics that uses a Pepper robot connected to the OpenAI system (ChatGPT) for real-time dialogue with the robot. After describing the general architecture of the system, we present two simulated interaction scenarios involving a subject with Autism Spectrum Disorder in two different situations. Limitations and future implementations are also discussed to give an overview of the potential developments of interconnected systems, which could greatly contribute to technological advancements for NDDs.
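The abstract only states that Pepper is connected to ChatGPT for real-time dialogue, so the following is a hedged sketch of the kind of loop such a system might use, not the paper's implementation: keep a running message history, ask a language model for the next reply, and hand it to the robot's text-to-speech. Both `generate` (standing in for the actual chat API call) and `say` (standing in for Pepper's TTS) are injected stubs and are assumptions.

```python
# Hedged sketch of a robot dialogue turn: the language-model call and the
# robot's text-to-speech are passed in as callables so the real API and
# NAOqi TTS can be swapped in. Nothing here is the paper's actual code.

def dialogue_turn(history, user_text, generate, say):
    """Append the user's utterance, get a model reply, and speak it."""
    history.append({"role": "user", "content": user_text})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    say(reply)
    return reply

# Stub model and TTS for illustration only
spoken = []
reply = dialogue_turn(
    history=[{"role": "system",
              "content": "You are a friendly robot helping a child."}],
    user_text="Hello!",
    generate=lambda msgs: "Hi! Nice to meet you.",
    say=spoken.append,
)
```

Keeping the history in chat-style role/content messages means the stub `generate` could be replaced by a real chat-completion call without changing the loop.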

    Engagement Recognition within Robot-Assisted Autism Therapy

    Autism is a neurodevelopmental condition typically diagnosed in early childhood, characterized by challenges in using language, understanding abstract concepts, communicating effectively, and building social relationships. The utilization of social robots in autism therapy represents a significant area of research, and an increasing number of studies explore the use of social robots as mediators between therapists and children diagnosed with autism. Assessing a child's engagement can enhance the effectiveness of robot-assisted interventions while also providing an objective metric for later analysis. The thesis begins with a comprehensive multiple-session study involving 11 children diagnosed with autism and Attention Deficit Hyperactivity Disorder (ADHD). This study employs multi-purpose robot activities designed to target various aspects of autism, and yields both quantitative and qualitative findings based on four behavioural measures obtained from video recordings of the sessions. Statistical analysis reveals that adaptive therapy provides a longer engagement duration than non-adaptive therapy sessions. Engagement is a key element in evaluating autism therapy sessions, as it is needed for acquiring knowledge and practising the new skills necessary for social and cognitive development. With the aim of creating an engagement recognition model, this research work also involves the manual labelling of the collected videos to generate the QAMQOR dataset. This dataset comprises 194 therapy sessions, spanning over 48 hours of video recordings, and includes demographic information for 34 children diagnosed with ASD; videos of 23 children with autism were collected from previous records. The QAMQOR dataset was evaluated using standard machine learning and deep learning approaches. 
However, developing an accurate engagement recognition model remains challenging due to the unique personal characteristics of each individual with autism. To address this challenge and improve recognition accuracy, this PhD work also explores a data-driven model using transfer learning techniques. Our study contributes to addressing the challenges faced by machine learning in recognizing engagement among children with autism, such as diverse engagement activities, multimodal raw data, and the resources and time required for data collection. This research contributes to the growing field of social robots in autism therapy by illuminating the importance of adaptive therapy and providing valuable insights into engagement recognition. The findings serve as a foundation for further advancements in personalized and effective robot-assisted interventions for individuals with autism.
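The transfer-learning idea mentioned above can be reduced to its simplest form: keep a pretrained feature extractor frozen and train only a small classification head on the engagement labels. The sketch below uses toy "frozen features" and a hand-rolled logistic-regression head; in the thesis that role is played by a deep video model, so everything here is illustrative.

```python
# Sketch of transfer learning in miniature: the feature extractor is
# frozen (here, the features are just given), and only a logistic-
# regression head is trained with SGD on binary engagement labels.
import math

def train_head(features, labels, lr=0.5, epochs=200):
    """Train a logistic-regression head on frozen feature vectors."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 = engaged, 0 = not engaged."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy frozen features: "engaged" sessions score high on both dimensions
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_head(X, y)
```

Because only the head's few parameters are updated, the approach needs far less labelled data than training the full model, which is the practical motivation for transfer learning in this setting.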

    Investigating the effects of neuromodulatory training on autistic traits: a multi-methods psychophysiological study.

    Autism spectrum disorder (ASD) is characterized by noticeable difficulties with social interaction and communication. Building on past research in this area, and with the aim of improving methodological perspectives, a multi-method approach to the study of ASD, mirror neurons (MNs), and neurofeedback was taken. This thesis comprises three main experiments: 1) a descriptive study of resting-state electroencephalography (rEEG) across the spectrum of autistic traits in neurotypical individuals; 2) a comparison of three EEG protocols on MN activation (mu suppression) and its variation according to self-reported traits of autism in neurotypical individuals; and 3) neurofeedback training (NFT) in individuals with high autistic traits. In chapters 3 and 4 we employed simultaneous monitoring of physiological data: for chapter 3, EEG and eye-tracking were used; for chapter 4, EEG and eye-tracking as well as functional near-infrared spectroscopy (fNIRS). Overall, the findings revealed differences in mu rhythm reactivity associated with Autism-Spectrum Quotient (AQ) traits. In chapter 2, the rEEG showed that individuals with high AQ scores showed less activation of frontal and fronto-central regions, combined with higher levels of complexity in fronto-temporal, temporal, parietal, and parieto-occipital areas. In chapter 3, EEG protocols that elicited mu reactivity in individuals with different AQ traits suggested that as AQ traits become more pronounced in the neurotypical population, the event-related desynchronization (ERD) in low alpha declines. Chapter 3 was also the basis for the choice of pre/post assessment for chapter 4. In chapter 4, the multi-method physiological approach provided parallel physiological evidence for the effects of NFT on sensorimotor reactivity, namely an increase in ERD in high alpha, higher levels of oxygenated haemoglobin, and changes to the amplitude and frequency in the microstructure of mu for participants who underwent active training, as opposed to a sham group.
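Mu suppression in such studies is quantified as event-related desynchronization: the drop in mu-band (roughly 8-13 Hz) power during a task relative to a resting baseline. A minimal sketch of that computation, using a naive DFT and synthetic signals (the band limits, sampling rate, and signals are illustrative, not the thesis's processing pipeline):

```python
# Sketch of the ERD measure behind mu suppression: band power during a
# task relative to baseline, positive ERD = desynchronization.
import math

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Naive DFT power summed over frequency bins inside [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n)
                     for t, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * t / n)
                     for t, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def erd_percent(baseline, task, fs):
    """ERD% = (P_baseline - P_task) / P_baseline * 100."""
    pb = band_power(baseline, fs)
    return (pb - band_power(task, fs)) / pb * 100.0

# Synthetic 10 Hz mu rhythm whose amplitude halves during the "task":
# power scales with amplitude squared, so ERD should be 75%.
fs = 128
t = [i / fs for i in range(fs)]
baseline = [math.sin(2 * math.pi * 10 * x) for x in t]
task = [0.5 * math.sin(2 * math.pi * 10 * x) for x in t]
erd = erd_percent(baseline, task, fs)
```

Real analyses would use Welch-style spectral estimates over many epochs rather than a single-window DFT, but the baseline-normalized ratio is the same.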

    Applications of Robotics for Autism Spectrum Disorder: a Scoping Review

    Robotic therapies are receiving growing interest in the autism field, especially for improving children's social skills by enhancing traditional human interventions. In this work, we conduct a scoping review of the literature on robotics for autism, providing the largest review of this field from the last five years. Our work underlines the need to better characterize participants and to increase sample sizes. It is also important to develop homogeneous training protocols so that results can be analysed and compared. Nevertheless, 7 out of the 10 randomized controlled trials reported a significant impact of robotic therapy. Overall, robot autonomy, adaptability, and personalization, as well as more standardized outcome measures, were identified as the most critical issues to address in future research.

    Computational Methods for Measurement of Visual Attention from Videos towards Large-Scale Behavioral Analysis

    Visual attention is one of the most important aspects of human social behavior, visual navigation, and interaction with the world, revealing information about a person's social, cognitive, and affective states. Although monitor-based and wearable eye trackers are widely available, they are not sufficient to support the large-scale collection of naturalistic gaze data in face-to-face social interactions or during interactions with 3D environments. Wearable eye trackers are burdensome to participants and bring issues of calibration, compliance, cost, and battery life. The ability to automatically measure attention from ordinary videos would deliver scalable, dense, and objective measurements for use in practice. This thesis investigates several computational methods for measuring visual attention from videos using computer vision, and their use for quantifying visual social cues such as eye contact and joint attention. Specifically, three methods are investigated. First, I present methods for the detection of looks to the camera in first-person view and their use for eye contact detection. Experimental results show that the presented method achieves the first human-expert-level detection of eye contact. Second, I develop a method for tracking heads in 3D space for measuring attentional shifts. Lastly, I propose spatiotemporal deep neural networks for detecting time-varying attention targets in video and present their application to the detection of shared attention and joint attention. The method achieves state-of-the-art results on different benchmark datasets for attention measurement, as well as the first empirical result on clinically relevant gaze shift classification. 
The presented approaches have the benefit of linking gaze estimation to the broader tasks of action recognition and dynamic visual scene understanding, and bear potential as a useful tool for understanding attention in various contexts such as human social interactions, skill assessment, and human-robot interaction.
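The looks-to-camera detection described above can be illustrated in its simplest geometric form: given an estimated 3D gaze direction in camera coordinates, a partner is looking at the (worn) camera when that direction points back along the optical axis within some angular tolerance. The gaze vectors and threshold below are toy assumptions; in the thesis the gaze direction comes from a learned model.

```python
# Sketch of geometric looks-to-camera detection: threshold the angle
# between the estimated gaze direction and the ray back into the lens.
# The camera sits at the origin looking along +z, so a gaze direction of
# (0, 0, -1) points straight back at the camera.
import math

def is_look_to_camera(gaze_dir, threshold_deg=10.0):
    """gaze_dir: 3D gaze vector in camera coordinates (need not be unit)."""
    toward_camera = (0.0, 0.0, -1.0)
    dot = sum(g * c for g, c in zip(gaze_dir, toward_camera))
    norm = math.sqrt(sum(g * g for g in gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= threshold_deg

direct = is_look_to_camera((0.02, -0.01, -1.0))   # nearly straight at lens
averted = is_look_to_camera((0.5, 0.0, -1.0))     # looking well off-axis
```

A learned detector replaces this fixed threshold with a classifier trained on appearance, which is what makes human-expert-level performance possible, but the underlying decision is still about this angle.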