
    Neural system identification for large populations separating "what" and "where"

    Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional methods for neural system identification do not capitalize on this separation of 'what' and 'where'. Learning deep convolutional feature spaces that are shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: while new experimental techniques enable recordings from thousands of neurons, experimental time is limited, so one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of individual receptive field locations, a problem that has barely been scratched thus far. We propose a CNN architecture with a sparse readout layer that factorizes the spatial ('where') and feature ('what') dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms current state-of-the-art system identification models of mouse primary visual cortex.
    Comment: NIPS 2017
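
    A minimal sketch of the factorized 'what'/'where' readout described in this abstract, assuming a PyTorch implementation; the core architecture, layer sizes, and neuron count below are placeholders and are not taken from the paper. A shared convolutional core provides the feature space, and each neuron gets its own spatial mask ('where') and feature weights ('what'); sparsity of these readout parameters would be encouraged by a regularizer such as an L1 penalty during training.

        # Hypothetical sketch of a factorized 'what'/'where' readout (not the authors' code).
        import torch
        import torch.nn as nn

        class FactorizedReadout(nn.Module):
            def __init__(self, channels, height, width, n_neurons):
                super().__init__()
                # 'where': one spatial mask per neuron over the core's feature map
                self.spatial = nn.Parameter(torch.randn(n_neurons, height, width) * 0.01)
                # 'what': one weight per feature channel per neuron
                self.features = nn.Parameter(torch.randn(n_neurons, channels) * 0.01)

            def forward(self, x):  # x: (batch, channels, height, width)
                pooled = torch.einsum("bchw,nhw->bnc", x, self.spatial)   # read out each neuron's location
                return torch.einsum("bnc,nc->bn", pooled, self.features)  # weight the shared features

        # Shared convolutional core: the feature space common to all neurons.
        core = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=9), nn.ELU(),
            nn.Conv2d(16, 16, kernel_size=5), nn.ELU(),
        )
        readout = FactorizedReadout(channels=16, height=28, width=28, n_neurons=1000)

        stimuli = torch.randn(8, 1, 40, 40)   # toy grayscale stimuli
        responses = readout(core(stimuli))    # (8, 1000) predicted responses
        # An L1 penalty on readout.spatial and readout.features during training
        # would yield the sparse readout described in the abstract.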

    Image Representations in Deep Neural Networks and their Applications to Neural Data Modelling

    Over the last decade, deep neural networks (DNNs) have become a standard tool in computer vision, allowing us to tackle a variety of problems, from classifying objects in natural images to generating new images to predicting brain activity. Such wide applicability is something DNNs have in common with human vision, and exploring some of these similarities is the goal of this thesis. DNNs, much like human vision, are hierarchical models that process an input scene with a series of sequential computations. It has been shown that typically only a few final computations in this hierarchy are problem-specific, while the rest are quite general and applicable to a number of problems. The results of intermediate computations in a DNN are often referred to as image representations, and their generality is another similarity to human vision, which also has general visual areas (e.g. primary visual cortex) that project onto specialised areas solving specific visual tasks. We focus on studying DNN image representations with the goal of understanding what makes them so useful for a variety of visual problems. To do so, we discuss DNNs solving a number of specific computer vision problems and analyse the similarities and differences of their image representations. Moreover, we discuss how to build DNNs that provide image representations with specific properties, which enables us to build a "digital twin" of the mouse primary visual system to be used as a tool for studying computations in the brain. Taking these results together, we conclude that we still lack a good understanding of DNN representations in general. Despite progress on some specific problems, it remains largely an open question how image information is organised in these representations and how to use it to solve arbitrary visual problems. However, we also argue that thinking of DNNs as "digital twins" might be a promising framework for addressing these issues in future DNN research, as they allow us to study image representations by means of computational experiments rather than relying on a priori ideas of how these representations are structured, which has proven to be quite challenging.
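
    As a concrete illustration of the point about general versus problem-specific computations, the sketch below (not taken from the thesis; the backbone, cut-off layer, and output size are arbitrary placeholder choices) freezes the intermediate layers of an ImageNet-trained network and trains only a small task-specific readout on top, for example to predict recorded neural responses.

        # Hypothetical sketch: a fixed, general-purpose image representation plus a
        # small problem-specific readout (network choice and sizes are placeholders).
        import torch
        import torch.nn as nn
        from torchvision import models

        # General part: intermediate layers of an ImageNet-trained VGG-16.
        backbone = models.vgg16(weights="IMAGENET1K_V1").features[:16]
        for p in backbone.parameters():
            p.requires_grad = False            # keep the shared representation fixed

        # Problem-specific part: map the representation to, e.g., 500 neural responses.
        readout = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # collapse the spatial dimensions
            nn.Flatten(),
            nn.Linear(256, 500),               # 256 feature channels at this depth of VGG-16
        )

        images = torch.randn(4, 3, 224, 224)   # toy image batch
        with torch.no_grad():
            representation = backbone(images)  # general image representation
        predicted_responses = readout(representation)  # (4, 500) problem-specific output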