    Integrating a Non-Uniformly Sampled Software Retina with a Deep CNN Model

    We present a biologically inspired method for pre-processing images fed to CNNs that reduces their memory requirements while increasing their invariance to scale and rotation changes. Our method is based on the mammalian retino-cortical transform: a mapping between a pseudo-randomly tessellated retina model (used to sample an input image) and a CNN. The aim of this first pilot study is to demonstrate a functional retina-integrated CNN implementation, which produced the following results: a network using the full retino-cortical transform yielded an F1 score of 0.80 on a test set in a 4-way classification task, while an identical network not using the proposed method yielded an F1 score of 0.86 on the same task. The method reduced the visual data by a factor of 7, the input data to the CNN by 40%, and the number of CNN training epochs by 64%. These results demonstrate the viability of our method and hint at the potential of exploiting functional traits of natural vision systems in CNNs.
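    To make the idea concrete, below is a minimal sketch of non-uniform retinal sampling in the spirit of this abstract. The tessellation scheme, node count, pooling rule, and all function names are illustrative assumptions, not the authors' implementation, which maps the samples onto a cortical image before the CNN.

```python
# A minimal sketch of foveated, pseudo-random retinal sampling (an
# assumption-laden illustration, not the paper's actual transform).
import numpy as np

def make_retina(n_nodes=1024, fovea_frac=0.1, rng=None):
    """Pseudo-randomly place receptive-field centres so density falls off
    with eccentricity: many nodes near the fovea, few at the rim.
    Returns (x, y) offsets in [-1, 1] and a per-node pooling radius."""
    rng = np.random.default_rng(rng)
    # Drawing r ~ U(0,1) and using it directly as the radius concentrates
    # points near the centre (annulus area grows as r, so uniform-r sampling
    # is denser at small eccentricities), giving a foveated layout.
    r = rng.uniform(0.0, 1.0, n_nodes)
    theta = rng.uniform(0.0, 2 * np.pi, n_nodes)
    x, y = r * np.cos(theta), r * np.sin(theta)
    # Receptive-field size grows linearly with eccentricity.
    radius = fovea_frac + (1.0 - fovea_frac) * r
    return x, y, radius

def sample_image(img, x, y, radius):
    """Pool a grey-scale image under each node: average over a square
    window whose half-width scales with the node's pooling radius."""
    h, w = img.shape
    cx, cy = (x + 1) * 0.5 * (w - 1), (y + 1) * 0.5 * (h - 1)
    out = np.empty(len(x))
    for i, (u, v, rad) in enumerate(zip(cx, cy, radius)):
        k = max(1, int(rad * 0.05 * min(h, w)))  # window half-width, pixels
        u0, u1 = int(max(u - k, 0)), int(min(u + k + 1, w))
        v0, v1 = int(max(v - k, 0)), int(min(v + k + 1, h))
        out[i] = img[v0:v1, u0:u1].mean()
    return out  # a 1-D "retinal" vector, far smaller than the raw image

img = np.random.rand(256, 256)            # stand-in for an input frame
x, y, radius = make_retina(n_nodes=1024, rng=0)
vec = sample_image(img, x, y, radius)     # 1,024 values vs 65,536 pixels
print(vec.shape)
```

    The data reduction in this toy version comes from the node count: the retinal vector is a fixed size regardless of image resolution, which is one way a retina front-end can shrink the input a CNN has to process.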

    A Deep Learning Approach to Understanding Real-World Scene Perception in Autism

    Autism is a multifaceted neurodevelopmental condition. Around 90% of individuals with autism experience sensory sensitivities, particularly affecting visual perception. Despite this high percentage, previous studies of visual perception in autism have severely limited our understanding: in many experiments, the stimuli and methods are unnaturalistic and yield irreproducible and conflicting results. In this study, we investigate the nature of real-world visual experience in autism with a cutting-edge experimental approach. First, we use virtual reality headsets with eye-trackers to measure gaze behavior while individuals freely explore real-world, everyday scenes. Then, we compare their gaze behavior to the representations within convolutional neural networks (CNNs), a class of computational models that resemble the primate visual system. This allows us to model the stages of the visual processing hierarchy that could account for differences in visual processing between individuals with and without autism. To our knowledge, this is the first fully unbiased, data-driven approach to studying naturalistic visual behavior in autism. In brief, we found that convolutional neural networks, regardless of the task on which they were trained, predict gaze behavior better in typically developing controls than in individuals with autism. This suggests that differences in gaze behavior between the two groups are not principally driven by the semantically meaningful features within a scene but instead emerge from differences earlier in visual processing.
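    As an illustration of the kind of gaze-prediction analysis this abstract describes, the sketch below scores how well a CNN-derived saliency map predicts fixations and compares scores across two groups. The metric choice (Normalized Scanpath Saliency) is our assumption, and the saliency map and fixation data are synthetic stand-ins, not the study's data or pipeline.

```python
# A minimal sketch: score a saliency map's gaze prediction with NSS,
# then compare two groups (synthetic data; illustrative only).
import numpy as np

def nss(saliency, fixations):
    """NSS: mean of the z-scored saliency map at fixated locations.
    `fixations` is an (n, 2) integer array of (row, col) coordinates."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    rows, cols = fixations[:, 0], fixations[:, 1]
    return z[rows, cols].mean()

rng = np.random.default_rng(0)
h, w = 64, 64
# Stand-in for a saliency map read out of a CNN layer (e.g., the mean
# feature-map activation, upsampled to image size).
saliency = rng.random((h, w))

# Synthetic fixations: group A's gaze follows the saliency map (well
# predicted); group B's gaze is uniform (poorly predicted).
p = saliency.ravel() / saliency.sum()
idx_a = rng.choice(h * w, size=200, p=p)   # saliency-driven fixations
idx_b = rng.choice(h * w, size=200)        # uniform fixations
fix_a = np.column_stack(np.unravel_index(idx_a, (h, w)))
fix_b = np.column_stack(np.unravel_index(idx_b, (h, w)))

print("group A NSS:", round(nss(saliency, fix_a), 3))  # clearly positive
print("group B NSS:", round(nss(saliency, fix_b), 3))  # near zero
```

    In this toy setup, a higher NSS for one group means the CNN-derived map accounts for that group's fixations better, which is the shape of the group difference the abstract reports.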