
    Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision.

    Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition.
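    The confidence-threshold mechanism described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the authors' published model or code): a small recurrent convolutional network, written in PyTorch, that reuses the same lateral weights on every time step and stops iterating once softmax confidence exceeds a threshold, so harder images consume more steps at no extra parameter cost.

```python
# Minimal, hypothetical sketch (not the authors' model): a recurrent convolutional
# network that recycles the same weights over time and terminates once softmax
# confidence exceeds a threshold, trading speed for accuracy without extra parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvNet(nn.Module):
    def __init__(self, n_classes=10, channels=32, max_steps=8):
        super().__init__()
        self.input_conv = nn.Conv2d(3, channels, 3, padding=1)
        self.lateral_conv = nn.Conv2d(channels, channels, 3, padding=1)  # reused every step
        self.readout = nn.Linear(channels, n_classes)
        self.max_steps = max_steps

    def forward(self, x, confidence_threshold=0.9):
        drive = F.relu(self.input_conv(x))
        h = drive
        for step in range(1, self.max_steps + 1):
            h = F.relu(self.lateral_conv(h) + drive)       # recurrent update, same weights
            logits = self.readout(h.mean(dim=(2, 3)))      # global average pool, then readout
            confidence = F.softmax(logits, dim=1).max().item()
            if confidence >= confidence_threshold:         # early termination at high confidence
                break
        return logits, step                                # step count ~ reaction time / compute

# At a fixed threshold, harder images tend to need more steps before termination.
model = RecurrentConvNet()
image = torch.randn(1, 3, 32, 32)                          # one illustrative input
logits, n_steps = model(image)
print(n_steps, logits.argmax(dim=1).item())
```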

    Recurrence is required to capture the representational dynamics of the human visual system.

    The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within and across ventral-stream regions. Categorical divisions emerge in sequence, cascading forward and in reverse across regions, and Granger causality analysis suggests bidirectional information flow between regions. Finally, recurrent deep neural network models clearly outperform parameter-matched feedforward models in terms of their ability to capture the multiregion cortical dynamics. Targeted virtual cooling experiments on the recurrent deep network models further substantiate the importance of their lateral and top-down connections. These results establish that recurrent models are required to understand information processing in the human ventral stream.
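    As a rough illustration of the modelling approach, the sketch below (random data and assumed shapes, not the study's pipeline) correlates a model representational dissimilarity matrix (RDM) with neural RDMs computed at each time point, which is one standard way to trace representational dynamics and test models against them.

```python
# Illustrative sketch (random data, assumed shapes; not the study's pipeline):
# correlate a model RDM with neural RDMs computed at every time point.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_conditions, n_sensors, n_timepoints = 20, 64, 120
neural_data = np.random.randn(n_conditions, n_sensors, n_timepoints)  # e.g. MEG patterns
model_features = np.random.randn(n_conditions, 256)                   # activations of one model layer

model_rdm = pdist(model_features, metric='correlation')   # condition-by-condition dissimilarities

# Spearman correlation between model and neural RDMs, separately for each time point.
time_course = np.array([
    spearmanr(model_rdm, pdist(neural_data[:, :, t], metric='correlation')).correlation
    for t in range(n_timepoints)
])
print(time_course.shape)  # (120,): model-brain similarity over time
```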

    Individual differences among deep neural network models.

    Deep neural networks (DNNs) excel at visual recognition tasks and are increasingly used as a modeling framework for neural computations in the primate brain. Just like individual brains, each DNN has a unique connectivity and representational profile. Here, we investigate individual differences among DNN instances that arise from varying only the random initialization of the network weights. Using tools typically employed in systems neuroscience, we show that this minimal change in initial conditions prior to training leads to substantial differences in intermediate and higher-level network representations despite similar network-level classification performance. We locate the origins of the effects in an under-constrained alignment of category exemplars, rather than misaligned category centroids. These results call into question the common practice of using single networks to derive insights into neural information processing and rather suggest that computational neuroscientists working with DNNs may need to base their inferences on groups of multiple network instances.
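    The kind of instance-to-instance comparison described here can be sketched as follows. The example uses random activations and assumed shapes rather than the paper's networks, and simply correlates the representational dissimilarity matrices (RDMs) of two training seeds over a common stimulus set.

```python
# Illustrative sketch (random activations, assumed shapes; not the paper's analysis code):
# quantify how similar two network instances are by correlating their representational
# dissimilarity matrices (RDMs) over the same stimulus set.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condition-by-condition dissimilarities for one layer of one network instance."""
    return pdist(activations, metric='correlation')

n_stimuli, n_units = 100, 512
instance_a = np.random.randn(n_stimuli, n_units)  # layer activations from seed A
instance_b = np.random.randn(n_stimuli, n_units)  # same layer, same stimuli, seed B

similarity = spearmanr(rdm(instance_a), rdm(instance_b)).correlation
print(f"Between-instance representational similarity: {similarity:.3f}")
```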

    Eye movements as a window to cognitive processes

    Eye movement research is a highly active and productive research field. Here we focus on how the embodied nature of eye movements can act as a window to the brain and the mind. In particular, we discuss how conscious perception depends on the trajectory of fixated locations and consequently address how fixation locations are selected. Specifically, we argue that the selection of fixation points during visual exploration can be understood to a large degree based on retinotopically structured models. Yet, these models largely ignore spatiotemporal structure in eye-movement sequences. Explaining spatiotemporal structure in eye-movement trajectories requires an understanding of spatiotemporal properties of the visual sampling process. With this in mind, we discuss the availability of external information to internal inference about causes in the world. We demonstrate that visual foraging is a dynamic process that can be systematically modulated either towards exploration or exploitation. For an analysis at high temporal resolution, we suggest a new method: the renewal density, which allows the investigation of the precise temporal relation between eye movements and other actions, such as a button press. We conclude with an outlook and propose that eye movement research has reached an appropriate stage and can easily be combined with other research methods to utilize this window to the brain and mind to its fullest.
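    The renewal-density idea can be approximated by a peri-event histogram of fixation onsets around another action. The sketch below is only an approximation under that assumption, with made-up event times, and is not the authors' exact estimator.

```python
# Approximation sketch (made-up event times; not the authors' exact estimator):
# a peri-event histogram of fixation onsets relative to button presses.
import numpy as np

fixation_onsets = np.sort(np.random.uniform(0, 60, 400))  # seconds, illustrative
button_presses = np.sort(np.random.uniform(0, 60, 30))    # seconds, illustrative

window, bin_width = 1.0, 0.05
# For every button press, collect fixation onsets within +/- 1 s and express them as lags.
lags = np.concatenate([
    fixation_onsets[np.abs(fixation_onsets - press) <= window] - press
    for press in button_presses
])
bins = np.arange(-window, window + bin_width, bin_width)
counts, _ = np.histogram(lags, bins=bins)
density = counts / (len(button_presses) * bin_width)      # fixation-onset rate around a press
print(density.shape)                                       # one rate value per 50 ms bin
```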

    The genetic organization of longitudinal subcortical volumetric change is stable throughout the lifespan.

    Development and aging of the cerebral cortex show similar topographic organization and are governed by the same genes. It is unclear whether the same is true for subcortical regions, which follow fundamentally different ontogenetic and phylogenetic principles. We tested the hypothesis that genetically governed neurodevelopmental processes can be traced throughout life by assessing the degree to which brain regions that develop together continue to change together through life. Analyzing over 6000 longitudinal MRIs of the brain, we used graph theory to identify five clusters of coordinated development, indexed as patterns of correlated volumetric change in brain structures. The clusters tended to follow placement along the cranial axis in embryonic brain development, suggesting continuity from prenatal stages, and correlated with cognition. Across independent longitudinal datasets, we demonstrated that developmental clusters were conserved through life. Twin-based genetic correlations revealed distinct sets of genes governing change in each cluster. Single-nucleotide polymorphism-based analyses of 38,127 cross-sectional MRIs showed a similar pattern of genetic volume-volume correlations. In conclusion, coordination of subcortical change adheres to fundamental principles of lifespan continuity and genetic organization.
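    A minimal sketch of the analysis logic, with simulated data rather than the study's MRI measures: correlate regions' longitudinal volume changes across participants and cluster the resulting region-by-region correlation matrix (here with simple hierarchical clustering rather than the paper's graph-theoretic method).

```python
# Illustrative sketch (simulated data; simple hierarchical clustering instead of the
# paper's graph-theoretic method): cluster regions by correlated longitudinal change.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

n_participants, n_regions = 500, 16
volume_change = np.random.randn(n_participants, n_regions)   # annualized change per region

corr = np.corrcoef(volume_change, rowvar=False)               # region-by-region change correlations
distance = 1 - corr                                           # correlated change -> small distance
condensed = distance[np.triu_indices(n_regions, k=1)]         # condensed form expected by linkage
clusters = fcluster(linkage(condensed, method='average'), t=5, criterion='maxclust')
print(clusters)  # one cluster label per region (five clusters, mirroring the paper's count)
```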

    Self-reported sleep relates to hippocampal atrophy across the adult lifespan: results from the Lifebrain consortium.

    OBJECTIVES: Poor sleep is associated with multiple age-related neurodegenerative and neuropsychiatric conditions. The hippocampus plays a special role in sleep and sleep-dependent cognition, and accelerated hippocampal atrophy is typically seen with higher age. Hence, it is critical to establish how the relationship between sleep and hippocampal volume loss unfolds across the adult lifespan. METHODS: Self-reported sleep measures and MRI-derived hippocampal volumes were obtained from 3105 cognitively normal participants (18-90 years) from major European brain studies in the Lifebrain consortium. Hippocampal volume change was estimated from 5116 MRIs from 1299 participants for whom longitudinal MRIs were available, followed for up to 11 years with a mean interval of 3.3 years. Cross-sectional analyses were repeated in a sample of 21,390 participants from the UK Biobank. RESULTS: No cross-sectional sleep-hippocampal volume relationships were found. However, worse sleep quality, efficiency, problems, and daytime tiredness were related to greater hippocampal volume loss over time, with high scorers showing 0.22% greater annual loss than low scorers. The relationship between sleep and hippocampal atrophy did not vary across age. Simulations showed that the observed longitudinal effects were too small to be detected as age interactions in the cross-sectional analyses. CONCLUSIONS: Worse self-reported sleep is associated with higher rates of hippocampal volume decline across the adult lifespan. This suggests that sleep is relevant for understanding individual differences in hippocampal atrophy, but limited effect sizes call for cautious interpretation.
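    The longitudinal question posed here is typically addressed with a mixed-effects model in which a sleep-by-time interaction captures differences in atrophy rate. The sketch below assumes hypothetical column names and simulated data; it is not the consortium's analysis code.

```python
# Hypothetical sketch (assumed column names, simulated data; not the consortium's code):
# a linear mixed model in which the sleep-by-time interaction captures sleep-related
# differences in the rate of hippocampal volume loss.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_visits = 200, 3
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "years": np.tile(np.arange(n_visits) * 3.3, n_subjects),             # follow-up time
    "sleep_problems": np.repeat(rng.normal(size=n_subjects), n_visits),  # self-report score
    "baseline_age": np.repeat(rng.uniform(18, 90, n_subjects), n_visits),
})
df["hippocampus"] = (4000 - 10 * df["years"]
                     - 2 * df["sleep_problems"] * df["years"]
                     + rng.normal(scale=30, size=len(df)))                # simulated volumes

model = smf.mixedlm("hippocampus ~ years * sleep_problems + baseline_age",
                    data=df, groups=df["subject"])
print(model.fit().summary())   # the years:sleep_problems term is the effect of interest
```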

    Measures and Limits of Models of Fixation Selection

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound on these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds, it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions that surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection, and should therefore be reported when a new model is introduced.
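    The abstract refers to open-source Python code for these measures; the snippet below is a separate, minimal sketch (not the authors' released implementation) showing how the two recommended measures, ROC area and a smoothed KL-divergence, can be computed for a predicted fixation-density map.

```python
# Minimal sketch (not the authors' released code): the two recommended measures,
# ROC area and a smoothed KL-divergence, for a predicted fixation-density map.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
height, width = 48, 64
saliency = rng.random((height, width))            # model's predicted fixation density
fix_y = rng.integers(0, height, 50)               # observed fixation rows (illustrative)
fix_x = rng.integers(0, width, 50)                # observed fixation columns

# ROC area: does the predicted density separate fixated pixels from all other pixels?
labels = np.zeros(height * width, dtype=int)
labels[fix_y * width + fix_x] = 1
auc = roc_auc_score(labels, saliency.ravel())

# KL-divergence between the empirical fixation histogram and the model density,
# with additive smoothing as a crude small-sample correction.
hist, _, _ = np.histogram2d(fix_y, fix_x, bins=[height, width],
                            range=[[0, height], [0, width]])
p = hist.ravel() + 1.0
p /= p.sum()                                      # smoothed empirical distribution
q = saliency.ravel() + 1e-12
q /= q.sum()                                      # model distribution
kl = np.sum(p * np.log(p / q))
print(f"AUC = {auc:.3f}, KL = {kl:.3f} nats")
```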

    Overt Visual Attention as a Causal Factor of Perceptual Awareness

    Our everyday conscious experience of the visual world is fundamentally shaped by the interaction of overt visual attention and object awareness. Although the principal impact of both components is undisputed, it is still unclear how they interact. Here we recorded eye movements preceding and following conscious object recognition, collected during the free inspection of ambiguous and corresponding unambiguous stimuli. Using this paradigm, we demonstrate that fixations recorded prior to object awareness predict the later recognized object identity, and that subjects accumulate more evidence consistent with their later percept than with the alternative. The timing at which awareness was reached was verified with a reaction-time-based correction method and, independently, by changes in pupil dilation. Control experiments, in which we manipulated the initial locus of visual attention, confirm a causal influence of overt attention on the subsequent result of object perception. The current study thus demonstrates that distinct patterns of overt attentional selection precede object awareness, and thereby directly builds on recent electrophysiological findings suggesting two distinct neuronal mechanisms underlying the two phenomena. Our results emphasize the crucial importance of overt visual attention in the formation of our conscious experience of the visual world.

    Unpacking artificial intelligence – How the building blocks of artificial intelligence (AI) contribute to creating market knowledge from big data

    Purpose: This study explains artificial intelligence (AI) and its contributions to creating market knowledge from big data. Specifically, this study describes the foundational building blocks of any AI technology, their interrelationships and the implications of different building blocks with respect to creating market knowledge, along with illustrative examples.
    Design/methodology/approach: The study is conceptual and proposes a framework to explicate the phenomenon of AI and its building blocks. It further provides a model of how AI contributes to creating market knowledge from big data.
    Findings: The study explains AI from an input–processes–output lens and explicates the six foundational building blocks of AI. It discusses how the use of different building blocks transforms data into information and knowledge. It proposes a conceptual model to explicate the role of AI in creating market knowledge and suggests avenues for future research.
    Practical implications: This study explains the phenomenon of artificial intelligence, how it works and its relevance for creating market knowledge for B2B firms.
    Originality/value: The study contributes to the literature on market knowledge and addresses calls for more scholarly research to understand AI and its implications for creating market knowledge.
