6 research outputs found

    Analysis of eye gaze pattern of infants at risk of autism spectrum disorder using Markov models

    No full text
    This paper presents the possibility of using pattern recognition algorithms on the gaze patterns of six-month-old infants at high risk for an autism spectrum disorder (ASD). ASDs, which are typically not diagnosed until 3 years of age, are characterized by communication and interaction impairments that frequently involve disturbances of visual attention and gaze patterning. We used video cameras to record the face-to-face interactions of 32 infants with their parents. The video was manually coded, frame by frame, to mark where each infant was looking (at the parent's face or away from it). To predict which infants would receive an ASD diagnosis at three years, we analyzed their eye gaze patterns at six months. Variable-order Markov models (VMMs) were trained separately for the typically developing comparison children and for the children later diagnosed with an ASD. The models correctly classified infants who did and did not develop an ASD diagnosis with an accuracy of 93.75 percent. An assessment tool usable at such a young age offers the hope of early intervention, potentially mitigating the effects of the disorder throughout the rest of the child's life.
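    To illustrate the classification step, the sketch below trains one variable-order Markov model per outcome group on binary gaze sequences ('P' for looking at the parent's face, 'A' for looking away) and labels a new sequence by whichever model assigns it the higher likelihood. This is a minimal back-off implementation written for this summary, not the authors' code; the smoothing scheme and maximum order are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of per-class variable-order
# Markov models over binary gaze sequences: strings over {'P', 'A'}.
from collections import defaultdict
from math import log

class VMM:
    def __init__(self, max_order=3, alpha=1.0):
        self.max_order = max_order  # longest context considered (assumption)
        self.alpha = alpha          # Laplace smoothing (assumption)
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> symbol -> count

    def fit(self, sequences):
        for seq in sequences:
            for i, sym in enumerate(seq):
                for d in range(self.max_order + 1):
                    if i - d < 0:
                        break
                    self.counts[seq[i - d:i]][sym] += 1
        return self

    def _prob(self, context, sym):
        # Back off to the longest context that was observed during training;
        # the empty context is always present after fit().
        for d in range(len(context), -1, -1):
            ctx = context[len(context) - d:]
            if ctx in self.counts:
                c = self.counts[ctx]
                total = sum(c.values())
                return (c[sym] + self.alpha) / (total + 2 * self.alpha)  # binary alphabet
        return 0.5

    def log_likelihood(self, seq):
        return sum(
            log(self._prob(seq[max(0, i - self.max_order):i], sym))
            for i, sym in enumerate(seq)
        )

def classify(seq, model_asd, model_td):
    """Assign the label whose group model gives the higher log-likelihood."""
    return "ASD" if model_asd.log_likelihood(seq) > model_td.log_likelihood(seq) else "TD"

# Usage (illustrative):
#   m_td = VMM().fit(td_sequences); m_asd = VMM().fit(asd_sequences)
#   label = classify("PPAPAAP", m_asd, m_td)
```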

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Get PDF
    Parents fulfill a pivotal role in the early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition; among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulties in PRT training are how to disseminate training to the parents who need it, and how to support and motivate practitioners after training.

    Evaluation of parents' fidelity of implementation is often undertaken using video probes that depict the dyadic interaction between parent and child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be used to reduce the manual cost of extracting data from the videos, affording greater opportunities for clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks: the domain of PRT video probes can be expected to exhibit high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms, which is explored here through a new dataset of PRT videos.

    The relationship between the parent and the clinician is also important: the clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship alongside automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to help personalize the classification models. By using a human-in-the-loop framework, clinicians can address uncertainty in the classification models by providing additional labeled samples, allowing the system to improve its classification and providing a person-centered approach to extracting multimodal data from PRT video probes.
    Doctoral Dissertation, Computer Science, 201
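    The human-in-the-loop framework described above can be pictured as a standard uncertainty-sampling loop: the classifier asks the clinician to label the video-derived samples it is least certain about, then retrains. The sketch below is a generic active-learning example, not the dissertation's system; the logistic-regression model, feature matrix, and query budget are all illustrative assumptions.

```python
# Generic uncertainty-sampling sketch: a clinician (the `oracle` callable,
# a stand-in assumption) labels the pool samples the model is least sure of.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_labeled, y_labeled, X_pool, oracle, budget=10):
    """Query `oracle` for labels of the pool samples whose predicted
    positive-class probability is closest to 0.5, retraining each round."""
    X_l, y_l = X_labeled.copy(), list(y_labeled)
    pool = list(range(len(X_pool)))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(budget):
        clf.fit(X_l, y_l)
        proba = clf.predict_proba(X_pool[pool])[:, 1]
        idx = int(np.argmin(np.abs(proba - 0.5)))  # most uncertain sample
        chosen = pool.pop(idx)
        X_l = np.vstack([X_l, X_pool[chosen]])
        y_l.append(oracle(chosen))                 # clinician supplies the label
        if not pool:
            break
    return clf.fit(X_l, y_l)
```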

    Multi-View Digital Representation Of Social Behaviours In Children And Action Recognition Methods

    Get PDF
    Autism spectrum disorders (ASD) affect at least 1% of children globally. ASD is partially defined by delays in social behaviours such as eye contact and joint attention during social interaction, and there is evidence of reduced heart rate variability (HRV) in affected children under both static and socially stressful conditions. Currently, no validated artificial intelligence or signal processing algorithms are available to objectively quantify behavioural and physiological markers in unrestricted, interactive play environments to assist in the diagnosis of ASD. This thesis proposes that social behavioural and physiological markers of children with ASD can be objectively quantified through a synergistic digital approach drawing on multi-modal and multi-view data sources.

    First, a novel deep learning (DL) framework for social behaviour recognition using a fusion of multi-view and multi-modal predictions is proposed. It uses true-colour images and moving-trajectory (optical flow) images extracted from fixed-camera video recordings to detect eye contact between children and caregivers in free play, while elucidating distinctive digital features of eye contact behaviour across multiple individual social interaction settings. Moreover, for the first time, a support vector machine model with feature selection is implemented, along with statistical analysis, to identify effective facial features and facial orientations for identifying ASD during joint attention episodes in free play. Furthermore, a customised NeuroKit2 toolbox was validated using the open-source QT database and a clinical baseline social interaction task; this toolbox enables the automated extraction of HRV metrics and allows between-group comparisons of physiological markers.

    The work highlights the importance of developing explainable algorithms that objectively quantify multi-modal digital markers, and it points to the potential of digitalised phenotypes to aid the assessment of ASD and intervention in naturalistic social interaction.
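    The multi-view, multi-modal fusion can be pictured as late fusion of per-stream class probabilities. The sketch below is a simplification rather than the thesis framework: it averages the outputs of one predictor per camera view and modality (RGB or optical flow), with each predictor standing in for a trained deep network.

```python
# Late-fusion sketch: each stream is a callable returning class probabilities
# for one (camera view, modality) pair; the real streams would be deep nets.
import numpy as np

def fuse_predictions(streams, clip):
    """Average class-probability outputs over all (view, modality) streams.

    streams: list of callables, each mapping a clip to a probability vector
             over classes (e.g. eye contact vs. no eye contact).
    """
    probs = np.stack([stream(clip) for stream in streams])
    fused = probs.mean(axis=0)  # simple averaging; weighted or learned
    return int(np.argmax(fused)), fused  # fusion are common alternatives
```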
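    The HRV extraction step can be reproduced with the stock NeuroKit2 API in roughly the following way. The thesis validates a customised toolbox; this sketch uses the standard calls on a simulated ECG, since the clinical recordings are not available here, and the metric columns shown are NeuroKit2's standard names.

```python
# HRV extraction sketch with the stock NeuroKit2 API on a simulated ECG.
import neurokit2 as nk

sampling_rate = 250  # Hz (assumption)
ecg = nk.ecg_simulate(duration=120, sampling_rate=sampling_rate, heart_rate=75)

cleaned = nk.ecg_clean(ecg, sampling_rate=sampling_rate)
_, info = nk.ecg_peaks(cleaned, sampling_rate=sampling_rate)  # R-peak detection

# Time-domain HRV metrics as a pandas DataFrame; metrics like these support
# the between-group comparisons described in the thesis.
hrv_metrics = nk.hrv_time(info, sampling_rate=sampling_rate)
print(hrv_metrics[["HRV_RMSSD", "HRV_SDNN"]])
```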