47 research outputs found

    Visuospatial coding as ubiquitous scaffolding for human cognition

    For more than 100 years we have known that the visual field is mapped onto the surface of visual cortex, imposing an inherently spatial reference frame on visual information processing. Recent studies highlight visuospatial coding not only throughout visual cortex, but also in brain areas not typically considered visual. Such widespread access to visuospatial coding raises important questions about its role in wider cognitive functioning. Here, we synthesise these recent developments and propose that visuospatial coding scaffolds human cognition by providing a reference frame through which neural computations interface with environmental statistics and task demands via perception–action loops.

    Modeling the Early Visual System

    There are two encoding schemes present in simple cells in the early visual system of vertebrates: retinal simple cells activate strongly when the receptive field contains a center-surround stimulus, while simple cells in the primary visual cortex (V1) activate strongly when the receptive field contains visual edges. Past work has enforced constraints on visual machine learning models so that the retinal or V1 encoding is learned, but this is often done to emulate retinal and V1 encoding in a vacuum. Recent work using convolutional neural networks combines anatomical constraints with a supervised training objective to explain the emergent representations of retina and V1 in vertebrates. That model disregards observations made by other models of retinal processing in which robustness to noise and coding efficiency are considered. Moreover, the use of a convolutional architecture explicitly enforces spatial equivariance in the features, which can limit the emergence of other relevant features. Here, we explore a more flexible model. We propose EVSNet, a fully-connected neural network that learns retinal and V1 features. To analyze the representations learned by this network, we propose a measure called orientedness to quantitatively discern expected retinal features from expected V1 features.
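
    The abstract does not spell out EVSNet's architecture, training objective, or the definition of orientedness, so the following is only a rough sketch of the general idea, not the authors' method. It assumes PyTorch, a plain fully-connected reconstruction network trained on image patches, and a Fourier-based orientedness score; all three are our own assumptions. The intuition behind the assumed score is that roughly isotropic, center-surround-like filters spread their spectral energy across orientations and score low, while edge-like filters concentrate energy along one orientation and score high.

    # Hypothetical sketch in the spirit of EVSNet plus an assumed "orientedness" score.
    # The real architecture, objective and measure are not given in the abstract.
    import numpy as np
    import torch
    import torch.nn as nn

    PATCH = 16  # side length of square image patches (assumed)

    class FCEncoder(nn.Module):
        """Fully-connected encoder-decoder; first-layer rows act as learned filters."""
        def __init__(self, n_pixels=PATCH * PATCH, n_units=128):
            super().__init__()
            self.encode = nn.Linear(n_pixels, n_units)
            self.decode = nn.Linear(n_units, n_pixels)

        def forward(self, x):
            return self.decode(torch.relu(self.encode(x)))

    def orientedness(filter_1d, size=PATCH):
        """Assumed measure: fraction of a filter's Fourier energy in its dominant
        orientation bin. Center-surround (isotropic) filters score low, edge-like
        (oriented) filters score high."""
        f = filter_1d.reshape(size, size)
        spec = np.abs(np.fft.fftshift(np.fft.fft2(f))) ** 2
        cy, cx = size // 2, size // 2
        ys, xs = np.indices(spec.shape)
        angles = np.arctan2(ys - cy, xs - cx) % np.pi            # orientation of each frequency
        mask = (ys != cy) | (xs != cx)                           # ignore the DC component
        bins = np.digitize(angles[mask], np.linspace(0, np.pi, 9)) - 1  # 8 orientation bins
        energy = np.bincount(bins, weights=spec[mask], minlength=8)
        return energy.max() / energy.sum()

    # Toy training loop on random patches (a stand-in for whitened natural image patches).
    model = FCEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        x = torch.randn(64, PATCH * PATCH)          # replace with real image patches
        loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective (assumed)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Score each learned first-layer filter: low ~ retina-like, high ~ V1-edge-like.
    W = model.encode.weight.detach().numpy()
    scores = [orientedness(w) for w in W]
    print(f"mean orientedness of learned filters: {np.mean(scores):.2f}")

    With natural image patches and a suitable training objective, the distribution of such scores could then be used to separate retina-like from V1-like features, which is the role the abstract assigns to orientedness.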

    If deep learning is the answer, then what is the question?

    Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence (AI) research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This perspective has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterise computations or neural codes, or who wish to understand perception, attention, memory, and executive functions? In this Perspective, our goal is to offer a roadmap for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics, and neural representation in artificial and biological systems. We highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.

    Understanding the retinal basis of vision across species

    The vertebrate retina first evolved some 500 million years ago in ancestral marine chordates. Since then, the eyes of different species have been tuned to best support their unique visuoecological lifestyles. Visual specializations in eye designs, large-scale inhomogeneities across the retinal surface and local circuit motifs mean that all species' retinas are unique. Computational theories, such as the efficient coding hypothesis, have come a long way towards explaining the basic features of retinal organization and function; however, they cannot explain the full extent of retinal diversity within and across species. To build a truly general understanding of vertebrate vision and the retina's computational purpose, it is therefore important to more quantitatively relate different species' retinal functions to their specific natural environments and behavioural requirements. Ultimately, the goal of such efforts should be to build up to a more general theory of vision.