73 research outputs found

    Doctor of Philosophy

    dissertation
    Using eye-tracking technology to capture the visual scanpaths of a sample of laypersons (N = 92), the current study employed a 2 (training condition: ABCDE vs. Ugly Duckling Sign) × 2 (visual condition: photorealistic images vs. illustrations) factorial design to assess whether SSE training succeeds or fails in facilitating increases in sensitivity and specificity. Self-efficacy and perceived importance were tested as moderators, and eye-tracking fixation metrics as mediators, within the framework of Visual Skill Acquisition Theory (VSAT). For sensitivity, results indicated a significant main effect for visual condition, F(1,88) = 7.102, p = .009, wherein illustrations (M = .524, SD = .197) resulted in greater sensitivity than photos (M = .425, SD = .159, d = .55). For specificity, the main effect for training was not significant, F(1,88) = 2.120, p = .149; however, results indicated a significant main effect for visual condition, F(1,88) = 4.079, p = .046, wherein photos (M = .821, SD = .108) resulted in greater specificity than illustrations (M = .770, SD = .137, d = .41). The training × visual condition interaction, F(1,88) = 3.554, p = .063, was significant at the 90% confidence level, such that those within the UDS Photo condition displayed greater specificity than all other combinations of training and visual condition. No significant moderated mediation manifested for sensitivity, but for specificity the model was significant, r = .59, R2 = .34, F(9,82) = 4.7783, p = .001, with Percent of Time in Lookzone serving as a significant mediator, and both self-efficacy and visual condition significantly moderating the mediation. For those in the photo condition with very high self-efficacy, UDS increased specificity directly.
For those in the photo condition with self-efficacy levels at the mean or lower, there was a conditional indirect effect through Percent of Time in Lookzone, which is to say that these individuals spent a larger share of their viewing time on target (observing the atypical nevi), and time on target is positively related to specificity. Findings suggest that existing SSE training techniques may be enhanced by maximizing visual processing efficiency.
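The outcome measures in this abstract are standard signal-detection quantities. As a minimal sketch (with made-up counts, not the study's data), sensitivity and specificity can be computed per participant like this:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of atypical nevi that were correctly flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of typical nevi that were correctly passed over."""
    return true_neg / (true_neg + false_pos)

# Hypothetical participant: 8 atypical nevi (5 flagged),
# 20 typical nevi (17 correctly not flagged).
sens = sensitivity(true_pos=5, false_neg=3)   # 0.625
spec = specificity(true_neg=17, false_pos=3)  # 0.85
```

In this framing, the study's illustration advantage for sensitivity and photo advantage for specificity amount to opposite shifts in these two ratios.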

    Detecting emotional expressions: Do words help?


    Computer-aided screening of autism spectrum disorder: Eye-tracking study using data visualization and deep learning

    Background: The early diagnosis of autism spectrum disorder (ASD) is highly desirable but remains a challenging task, requiring a set of cognitive tests and hours of clinical examination. In addition, the symptoms vary across individuals, which can make identifying ASD even more difficult. Although diagnostic tests are largely developed by experts, they are still subject to human bias. In this respect, computer-assisted technologies can play a key role in supporting the screening process. Objective: This paper follows the path of using eye tracking as an integrated part of screening assessment in ASD based on the characteristic elements of eye gaze, adding to the mounting efforts to use eye-tracking technology to support the process of ASD screening. Methods: The proposed approach integrates eye tracking with visualization and machine learning. A group of 59 school-aged participants took part in the study. The participants were invited to watch a set of age-appropriate photographs and videos related to social cognition. Initially, eye-tracking scanpaths were transformed into a visual representation as a set of images. Subsequently, a convolutional neural network was trained to perform the image classification task. Results: The experimental results demonstrated that the visual representation could simplify the diagnostic task while attaining high classification accuracy. This largely suggests that visualizations can successfully encode the information of gaze motion and its underlying dynamics. Further, we explored possible correlations between autism severity and the dynamics of eye movement based on the maximal information coefficient.
The findings primarily show that the combination of eye tracking, visualization, and machine learning has strong potential for developing an objective tool to assist in the screening of ASD. Conclusions: Broadly speaking, the approach we propose could be transferable to screening for other disorders, particularly neurodevelopmental disorders.
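As a rough illustration of the scanpath-to-image step described in the Methods, the sketch below renders fixations as a duration-weighted grayscale image suitable for a CNN. The encoding, grid size, and fixation format are assumptions, not the paper's exact visualization:

```python
import numpy as np

def scanpath_to_image(fixations, size=64):
    """Render a scanpath as a grayscale image: each fixation deposits
    intensity proportional to its duration at its (normalised) location.
    Illustrative encoding only; the paper's visualisation may differ."""
    img = np.zeros((size, size), dtype=np.float32)
    for x, y, dur in fixations:            # x, y in [0, 1], dur in ms
        r = min(int(y * size), size - 1)   # row from vertical position
        c = min(int(x * size), size - 1)   # column from horizontal position
        img[r, c] += dur                   # accumulate dwell time
    if img.max() > 0:
        img /= img.max()                   # scale to [0, 1] for the CNN
    return img

# Example scanpath: three fixations, one location revisited.
path = [(0.2, 0.3, 250.0), (0.5, 0.5, 400.0), (0.2, 0.3, 150.0)]
img = scanpath_to_image(path)
```

The resulting arrays can be batched and fed to any image classifier; the key idea from the abstract is that gaze dynamics are compressed into a spatial picture the network can learn from.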

    Analysis of Brain Imaging Data for the Detection of Early Age Autism Spectrum Disorder Using Transfer Learning Approaches for Internet of Things

    In recent years, advanced magnetic resonance imaging (MRI) methods, including functional MRI (fMRI) and structural MRI (sMRI), have indicated an increase in the prevalence of neuropsychiatric disorders such as autism spectrum disorder (ASD), which affects one in six children worldwide. Data-driven techniques, along with medical image analysis techniques such as computer-assisted diagnosis (CAD), have benefited from deep learning. With the use of artificial intelligence (AI) and IoT-based intelligent approaches, it would be convenient to support autistic children in adapting to new environments. In this paper, we classify and represent learning tasks of powerful deep learning networks, namely a convolutional neural network (CNN) and a transfer learning algorithm, on a combination of data from the autism brain imaging data exchange (ABIDE I and ABIDE II) datasets. Due to their four-dimensional nature (three spatial dimensions and one temporal dimension), resting-state fMRI (rs-fMRI) data can be used to develop diagnostic biomarkers for brain dysfunction. ABIDE is a collaboration of global scientists; ABIDE-I and ABIDE-II consist of 1112 rs-fMRI datasets from 573 typical control (TC) and 539 autism individuals, and 1114 rs-fMRI datasets from 521 autism and 593 typical control individuals, respectively, collected from 17 different sites. Our proposed optimized version of the CNN achieved 81.56% accuracy, outperforming prior conventional approaches evaluated only on the ABIDE I dataset.
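The transfer-learning idea above, reusing a pretrained feature extractor with frozen weights and training only a new classification head, can be sketched as follows. The data, dimensions, and "pretrained" extractor here are toy stand-ins, not the ABIDE pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor standing in for the convolutional
# base of a network trained on another dataset; its weights are never updated.
W_frozen = rng.normal(size=(100, 16))
W_before = W_frozen.copy()

def features(x):
    return np.maximum(x @ W_frozen, 0.0)    # ReLU features from the frozen base

# Toy two-class data standing in for preprocessed rs-fMRI inputs (illustrative).
X = rng.normal(size=(200, 100))
y = (X[:, 0] > 0).astype(float)

# Transfer-learning step: train only the new logistic classification head.
w = np.zeros(16)
F = features(X)                              # extract features once
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))       # predicted class probabilities
    w -= 0.01 * F.T @ (p - y) / len(y)       # gradient step on the head only
```

The design point is that only the small head (16 weights) is fit to the target task, which is what makes transfer learning attractive when labelled neuroimaging data are scarce.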

    Investigating the mechanisms underlying fixation durations during the first year of life: a computational account

    Infants’ eye movements provide a window onto the development of cognitive functions over the first years of life. Despite considerable advances in the past decade, studying the mechanisms underlying infant fixation duration and saccadic control remains a challenge due to practical and technical constraints in infant testing. This thesis addresses these issues and investigates infant oculomotor control by presenting novel software and methods for dealing with low-quality infant data (GraFIX), a series of behavioural studies involving novel gaze-contingent and scene-viewing paradigms, and computational modelling of fixation timing throughout development. In a cross-sectional study and two longitudinal studies, participants were eye-tracked while viewing dynamic and static complex scenes, and performed gap-overlap and double-step paradigms. Fixation data from these studies were modelled in a number of simulation studies with the CRISP model of fixation durations in adult scene viewing. Empirical results showed how fixation durations decreased with age for all viewing conditions but at different rates. Individual differences between long- and short-lookers were found across visits and viewing conditions, with static images being the most stable viewing condition. Modelling results confirmed the CRISP theoretical framework’s applicability to infant data and highlighted the influence of both cognitive processing and the developmental state of the visuo-motor system on fixation durations during the first few months of life. More specifically, while the present work suggests that infant fixation durations reflect on-line perceptual and cognitive activity similarly to adults, the individual developmental state of the visuo-motor system still affects this relationship until 10 months of age.
Furthermore, results suggested that infants are already able to program saccades in two stages at 3.5 months: (1) an initial labile stage subject to cancellation and (2) a subsequent non-labile stage that cannot be cancelled. The length of the non-labile stage decreased relative to the labile stage especially from 3.5 to 5 months, indicating a greater ability to cancel saccade programs as infants grew older. In summary, the present work provides unprecedented insights into the development of fixation durations and saccadic control during the first year of life and demonstrates the benefits of combining behavioural and computational approaches to investigate methodologically challenging research topics such as oculomotor control in infancy.
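The two-stage saccade programming account can be illustrated with a small simulation in the spirit of CRISP: a program's labile stage can be cancelled by a restart signal, but once it enters the non-labile stage the saccade is committed. All rates below are illustrative, not fitted to the infant data:

```python
import random

def simulate_fixation(labile_mean, nonlabile_mean, cancel_rate, rng):
    """Duration (ms) of one fixation under two-stage saccade programming.
    Exponential stage timers are a simplifying assumption of this sketch."""
    t = 0.0
    while True:
        labile = rng.expovariate(1.0 / labile_mean)        # cancellable stage
        cancel = rng.expovariate(cancel_rate)              # restart signal
        if cancel < labile:
            t += cancel            # program cancelled; timing restarts
            continue
        # Non-labile stage reached: the saccade can no longer be cancelled.
        t += labile + rng.expovariate(1.0 / nonlabile_mean)
        return t

rng = random.Random(42)
durations = [simulate_fixation(150.0, 50.0, 1 / 300.0, rng) for _ in range(2000)]
mean_dur = sum(durations) / len(durations)
```

Shortening the non-labile stage relative to the labile stage, as the thesis reports between 3.5 and 5 months, widens the window in which a pending saccade can still be cancelled.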

    Visual Behavior and Preference Decision-Making in Response to Faces in High-Functioning Autism

    How do we come to the decision that we like a face? This thesis investigates this important aspect of social processing and communication by examining preference decisions for faces and the role that visual behavior plays in the process. I present a series of studies designed to investigate face preference formation and gaze patterns using eye-tracking and self-reported preference ratings. I tested healthy control subjects and two clinical populations known to have deficits in social processing: people with autism and patients with amygdala lesions. In studies one and two, I explore whether known social cognition deficits in people with autism and amygdala lesions also impair subjective decision-making regarding the attractiveness of faces. In study three, I investigate the flexibility of rule-based visual strategies used by these populations during face perception. Additionally, I present a custom algorithm developed to process raw eyetracking data, which was used to analyze all eyetracking data in this thesis. People with autism and patients with amygdala lesions are known to have general deficits in social processing, including difficulty orienting toward and evaluating faces. Nevertheless, I find that their behavior is markedly similar in many areas where we would expect them to have abnormalities or deficiencies. Their preference decisions when judging facial attractiveness were highly correlated with those made by controls, and both groups showed the same biases for familiar faces over novel faces. In addition, people with autism exhibit the same visual sampling behavior linking preference and attentional orienting, but reach their decisions faster than controls and also appear insensitive to the difficulty of the choice. Finally, gaze to the eye region appears normal in the absence of an explicit decision-making task, but only when analyzed in a similar manner as previous studies. 
However, when face sub-regions were analyzed in greater detail, people with autism demonstrate abnormalities in face gaze patterns, failing to emphasize the most information-rich regions of the face. Furthermore, people with autism demonstrate impairments in their ability to update those gaze patterns to accommodate different viewing restrictions. Taken together, these findings support the idea that the normal formation of face preferences can be preserved in the presence of general social processing impairments. Patterns in the eyetracking and behavioral data indicate that this is made possible, in part, by compensatory atypical processing and visual strategies.
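Raw gaze data are commonly segmented into fixations before any region-of-interest analysis. The sketch below uses a standard dispersion-threshold (I-DT style) pass as a stand-in, since the thesis's custom algorithm is not detailed in the abstract; thresholds and the sample format are assumptions:

```python
def detect_fixations(samples, max_dispersion=30.0, min_samples=5):
    """Group raw gaze samples (x, y) in pixels into fixations.
    A window counts as a fixation when its spatial dispersion
    (x-range + y-range) stays under max_dispersion."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i + min_samples
        if j > len(samples):
            break
        xs = [p[0] for p in samples[i:j]]
        ys = [p[1] for p in samples[i:j]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while j < len(samples):
                xs.append(samples[j][0]); ys.append(samples[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop(); ys.pop()
                    break
                j += 1
            # Record centroid and sample count, then continue after the window.
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), j - i))
            i = j
        else:
            i += 1          # no fixation here; slide the window by one sample
    return fixations

# Two stable gaze clusters yield two fixations.
gaze = [(100.0, 100.0)] * 8 + [(300.0, 200.0)] * 6
fixes = detect_fixations(gaze)
```

Once fixations and their centroids are available, assigning them to face sub-regions (eyes, mouth, and so on) reduces to point-in-region tests.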

    The computational neurology of active vision

    In this thesis, we appeal to recent developments in theoretical neurobiology – namely, active inference – to understand the active visual system and its disorders. Chapter 1 reviews the neurobiology of active vision. This introduces some of the key conceptual themes around attention and inference that recur through subsequent chapters. Chapter 2 provides a technical overview of active inference and its interpretation in terms of message passing between populations of neurons. Chapter 3 applies the material in Chapter 2 to provide a computational characterisation of the oculomotor system. This deals with two key challenges in active vision: deciding where to look, and working out how to look there. The homology between this message passing and the brain networks solving these inference problems provides a basis for in silico lesion experiments and an account of the aberrant neural computations that give rise to clinical oculomotor signs (including internuclear ophthalmoplegia). Chapter 4 picks up on the role of uncertainty resolution in deciding where to look and examines the role of beliefs about the quality (or precision) of data in perceptual inference. We illustrate how abnormal prior beliefs influence inferences about uncertainty and give rise to neuromodulatory changes and visual hallucinatory phenomena (of the sort associated with synucleinopathies). We then demonstrate how synthetic pharmacological perturbations that alter these neuromodulatory systems give rise to the oculomotor changes associated with drugs acting upon these systems. Chapter 5 develops a model of visual neglect, using an oculomotor version of a line cancellation task. We then test a prediction of this model using magnetoencephalography and dynamic causal modelling. Chapter 6 concludes by situating the work in this thesis in the context of computational neurology.
This illustrates how the variational principles used here to characterise the active visual system may be generalised to other sensorimotor systems and their disorders.
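The role of precision in perceptual inference (Chapter 4's theme) can be illustrated with the textbook precision-weighted combination of a Gaussian prior and sensory evidence; the numbers below are illustrative:

```python
def precision_weighted_update(mu_prior, pi_prior, data, pi_data):
    """Combine a Gaussian prior belief with sensory evidence, each
    weighted by its precision (inverse variance). When sensory precision
    is aberrantly low, inference is dominated by the prior - the kind of
    imbalance the thesis links to hallucinatory phenomena."""
    pi_post = pi_prior + pi_data                            # precisions add
    mu_post = (pi_prior * mu_prior + pi_data * data) / pi_post
    return mu_post, pi_post

# Balanced precisions: the posterior mean sits midway between prior and data.
mu, pi = precision_weighted_update(0.0, 1.0, 2.0, 1.0)      # mu = 1.0
# Down-weighted sensory precision: the prior dominates the posterior.
mu_lo, _ = precision_weighted_update(0.0, 1.0, 2.0, 0.1)
```

The same weighting logic, applied hierarchically and with precisions themselves inferred, is the core computational move behind the thesis's accounts of hallucination and neuromodulation.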

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed to promote interaction and engagement, ranging from its "communicative" abilities to the movements it produces. Yet whether an artificial agent that can behave like a human would boost the spontaneity and naturalness of interaction remains an open question. Even during interaction with conspecifics, humans rely partially on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research has emerged in the context of investigating individuals' reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e., self-report questionnaires, a-posteriori interviews), while the more implicit social cognitive processes elicited during interaction with artificial agents have taken second place behind more qualitative and anecdotal results.
The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents. To this end, this thesis explores human sensitivity to anthropomorphic characteristics of a humanoid robot's (i.e., the iCub robot's) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.