
    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that this variety of approaches can be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
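    The learning rule at the heart of this abstract can be illustrated concretely. Below is a minimal sketch of a nonlinear Hebbian update, dw ∝ f(w·x)·x followed by weight normalization; the cubic nonlinearity, the toy two-dimensional data, and all names are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_hebbian(X, f=lambda y: y**3, eta=0.01, epochs=20):
    """Train one neuron's weight vector w with the rule
    dw = eta * f(w.x) * x, followed by L2 normalization
    (an Oja-style constraint that keeps |w| bounded)."""
    n_samples, n_dims = X.shape
    w = rng.normal(size=n_dims)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X[rng.permutation(n_samples)]:
            y = w @ x
            w += eta * f(y) * x
            w /= np.linalg.norm(w)  # normalization step
    return w

# Toy input: 2-D data with more variance (and heavier tails) along one axis.
X = np.column_stack([rng.laplace(size=2000), 0.1 * rng.normal(size=2000)])
w = nonlinear_hebbian(X)
print(np.abs(w))  # weight concentrates on the heavy-tailed first component
```

    Swapping the nonlinearity f (e.g. tanh instead of cubic) changes the converged receptive field only modestly, which is the paper's central point.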

    Construction of direction selectivity in V1: from simple to complex cells

    Despite detailed knowledge about the anatomy and physiology of the primary visual cortex (V1), the immense number of feed-forward and recurrent connections onto a given V1 neuron makes it difficult to understand how the physiological details relate to a given neuron’s functional properties. Here, we focus on a well-known functional property of many V1 complex cells: phase-invariant direction selectivity (DS). While the energy model explains its construction at the conceptual level, it remains unclear how the mathematical operations described in this model are implemented by cortical circuits. To understand how DS of complex cells is constructed in cortex, we apply a nonlinear modeling framework to extracellular data from macaque V1. We use a modification of spike-triggered covariance (STC) analysis to identify multiple biologically plausible "spatiotemporal features" that either excite or suppress a cell. We demonstrate that these features represent the true inputs to the neuron more accurately, and the resulting nonlinear model compactly describes how these inputs are combined to produce the functional properties of the cell. In a population of 59 neurons, we find that both simple and complex V1 cells are selective to combinations of excitatory and suppressive motion features. Because the strength of DS and simple/complex classification is well predicted by our models, we can use simulations with inputs matching thalamic and simple cells to assess how individual model components contribute to these measures. Our results unify experimental observations regarding the construction of DS from thalamic feed-forward inputs to V1: based on the differences between excitatory and inhibitory inputs, they suggest a connectivity diagram for simple and complex cells that sheds light on the mechanism underlying the DS of cortical cells. More generally, they illustrate how stage-wise nonlinear combination of multiple features gives rise to the processing of more abstract visual information.
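    Spike-triggered covariance, the starting point of the analysis described above, can be sketched in a few lines. The simulated energy-model cell and feature vectors below are hypothetical, chosen only to show how excitatory features surface as outlying eigenvalues; the paper's modified STC works on real spatiotemporal stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)

# White-noise stimuli and a model "complex cell": it spikes when the
# energy along two hypothetical quadrature features f1, f2 is high.
D = 8
stim = rng.normal(size=(20000, D))
f1 = np.zeros(D); f1[0] = 1.0
f2 = np.zeros(D); f2[1] = 1.0
energy = (stim @ f1) ** 2 + (stim @ f2) ** 2
spikes = energy > np.quantile(energy, 0.9)

# STC: eigendecompose the spike-triggered minus prior covariance;
# large positive eigenvalues mark excitatory features, large negative
# ones suppressive features.
delta_c = np.cov(stim[spikes], rowvar=False) - np.cov(stim, rowvar=False)
evals, evecs = np.linalg.eigh(delta_c)  # eigenvalues in ascending order
print(np.round(evals, 2))  # two clearly positive eigenvalues stand out
```

    The eigenvectors belonging to the outlying eigenvalues recover the subspace spanned by f1 and f2, which is how STC identifies a cell's excitatory and suppressive features.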

    Non-linear Convolution Filters for CNN-based Learning

    In recent years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in image classification. Their architectures have largely drawn inspiration from models of the primate visual system. However, while recent neuroscience findings demonstrate the existence of non-linear operations in the responses of complex visual cells, little effort has been devoted to extending the convolution technique to non-linear forms. Typical convolutional layers are linear systems, hence their expressiveness is limited. To overcome this, various non-linearities have been used as activation functions inside CNNs, and many pooling strategies have also been applied. We address the issue of developing a convolution method in the context of a computational model of the visual cortex, exploring quadratic forms through Volterra kernels. Such forms, constituting a richer function space, are used as approximations of the response profiles of visual cells. Our proposed second-order convolution is tested on CIFAR-10 and CIFAR-100. We show that a network which combines linear and non-linear filters in its convolutional layers can outperform networks that use standard linear filters with the same architecture, yielding results competitive with the state of the art on these datasets.
    Comment: 9 pages, 5 figures, code link, ICCV 201
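    A second-order (quadratic) filter of the Volterra type can be sketched as follows. The 1-D signal and random kernels are illustrative stand-ins for the paper's 2-D convolutional layers: each window contributes a linear term plus a quadratic form, rather than the linear term alone.

```python
import numpy as np

rng = np.random.default_rng(2)

def volterra2_conv1d(signal, w1, w2):
    """Second-order Volterra 'convolution' over a 1-D signal:
    each length-k window x contributes  w1 . x  +  x^T w2 x,
    i.e. a linear term plus a quadratic (second-order kernel) term."""
    k = len(w1)
    out = []
    for i in range(len(signal) - k + 1):
        x = signal[i:i + k]
        out.append(w1 @ x + x @ w2 @ x)
    return np.array(out)

k = 3
w1 = rng.normal(size=k)          # first-order (linear) kernel
w2 = rng.normal(size=(k, k))     # second-order (quadratic) kernel
sig = rng.normal(size=32)
resp = volterra2_conv1d(sig, w1, w2)
print(resp.shape)  # (30,)
```

    With w2 set to zero the operation reduces exactly to an ordinary sliding linear filter, which makes the quadratic term a strict generalization of standard convolution.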

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that it significantly outperforms current methods of lifelong learning in three different incremental learning scenarios.
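    The "growing network" idea underlying both memories can be illustrated with a minimal grow-when-needed sketch. The insertion threshold, learning rate, and toy clusters below are assumptions for illustration, not the paper's actual dual-memory recurrent architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

def grow_network(data, insert_thresh=0.5, eta=0.1):
    """Minimal growing-network sketch (in the spirit of grow-when-required
    self-organizing nets, not the paper's exact model): if the best-matching
    node is too far from the input, insert a new node at the input;
    otherwise move the winner toward it."""
    nodes = [data[0].copy()]
    for x in data[1:]:
        dists = [np.linalg.norm(x - n) for n in nodes]
        winner = int(np.argmin(dists))
        if dists[winner] > insert_thresh:
            nodes.append(x.copy())                      # structural plasticity
        else:
            nodes[winner] += eta * (x - nodes[winner])  # Hebbian-like update
    return np.array(nodes)

# Two well-separated clusters: the network grows a node per cluster
# instead of overwriting old knowledge (mitigating forgetting).
data = np.vstack([rng.normal(0, 0.05, (200, 2)),
                  rng.normal(3, 0.05, (200, 2))])
nodes = grow_network(data[rng.permutation(len(data))])
print(len(nodes))
```

    In the paper, task-relevant signals additionally regulate when such insertions happen in the semantic memory, yielding more compact category representations.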

    Neutral theory and scale-free neural dynamics

    Avalanches of electrochemical activity in brain networks have been empirically reported to obey scale-invariant behavior --characterized by power-law distributions up to some upper cut-off-- both in vitro and in vivo. Whether such scaling laws stem from underlying neural dynamics operating at the edge of a phase transition is a fascinating open question, as systems poised at criticality have been argued to exhibit a number of important functional advantages. Here we employ a well-known model of neural dynamics with synaptic plasticity to elucidate an alternative scenario in which neuronal avalanches can coexist, overlapping in time, while still remaining scale-free. Remarkably, their scale invariance stems neither from underlying criticality nor from self-organization to the edge of a continuous phase transition. Instead, it emerges from the fact that perturbations to the system exhibit a neutral drift --guided by demographic fluctuations-- with respect to endogenous spontaneous activity. Such neutral dynamics --similar to those in neutral theories of population genetics-- imply marginal propagation of activity, characterized by power-law-distributed causal avalanches. Importantly, our results underline the importance of considering causal information --which neuron triggers the firing of which-- to properly estimate the statistics of avalanches of neural activity. We discuss the implications of these findings for modeling and for elucidating experimental observations, as well as their possible consequences for information processing in real neural networks.
    Comment: Main text: 8 pages, 3 figures. Supplementary information: 5 pages, 4 figure
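    Marginal propagation can be demonstrated with a critical branching process, whose causal avalanches are heavy-tailed without any fine-tuned continuous phase transition. The offspring distribution below is an illustrative choice, not the paper's full plasticity model.

```python
import numpy as np

rng = np.random.default_rng(3)

def avalanche_size(p_child=0.5, max_offspring=2, cap=10**5):
    """One causal avalanche of a critical branching process: each firing
    triggers Binomial(2, 0.5) descendants, so the mean branching ratio
    is exactly 1 (marginal propagation, neither growth nor decay)."""
    total, active = 0, 1
    while active and total < cap:
        total += active
        active = rng.binomial(max_offspring, p_child, size=active).sum()
    return total

sizes = np.array([avalanche_size() for _ in range(3000)])
# Marginal propagation yields heavy-tailed avalanche sizes
# (P(S) ~ S^(-3/2) for a critical branching process), in contrast
# to the exponentially bounded sizes of a subcritical ratio < 1.
print(sizes.max(), np.median(sizes))
```

    Tracking which firing triggered which, as done here implicitly by following each avalanche's lineage, is the "causal information" the abstract argues is needed to estimate avalanche statistics correctly.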

    Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RFs) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity.
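    The simple-to-complex construction discussed above follows the classic energy-model reading: a complex cell pools the squared outputs of simple cells in spatial quadrature, yielding phase invariance. The Gabor parameters below are illustrative, not the model's learned receptive fields.

```python
import numpy as np

def gabor(size, theta, phase, freq=0.15, sigma=4.0):
    """Gabor patch: Gaussian envelope times an oriented sinusoid."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

size = 32
g0 = gabor(size, 0.0, 0.0)          # simple-cell RF, phase 0
g90 = gabor(size, 0.0, np.pi / 2)   # quadrature partner, phase 90 deg

def complex_response(image):
    # Energy model: squared quadrature-pair outputs, summed.
    return np.sum(g0 * image) ** 2 + np.sum(g90 * image) ** 2

# Drifting the grating's phase barely changes the pooled response,
# while each simple cell's output is strongly phase-modulated.
phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
resp = np.array([complex_response(gabor(size, 0.0, p)) for p in phases])
print(resp.std() / resp.mean())  # near zero: phase-invariant response
```

    The modeling challenge the abstract addresses is that such pooling requires simple cells of opposite phases nearby, which standard map-development models fail to produce.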

    Brain Learning and Recognition: The Large and the Small of It in Inferotemporal Cortex

    Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.
    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)

    Characterization of response properties and connectivity in mouse visual thalamus and cortex

    How neuronal activity is shaped by circuit connectivity between neuronal populations is a central question in visual neuroscience. Combined with experimental data, computational models allow causal investigation and prediction of both how connectivity influences activity and how activity constrains connectivity. In order to develop and refine these computational models of the visual system, thorough characterization of neuronal response patterns is required. In this thesis, I first present an approach to infer connectivity from in vivo stimulus responses in mouse visual cortex, revealing underlying principles of connectivity between excitatory and inhibitory neurons. Second, I investigate suppressed-by-contrast (SbC) neurons, which, while known since the 1960s, still remain to be included in standard models of visual function. I present a characterization of intrinsic firing properties and stimulus responses that expands the knowledge about this understudied neuron type. Inferring the neuronal connectome from neural activity is a major objective of computational connectomics. Complementary to direct experimental investigation of connectivity, inference approaches combine simultaneous activity data of individual neurons with methods ranging from statistical considerations of similarity to large-scale simulations of neuronal networks. However, because inferring connectivity from in vivo activity is a mathematically ill-posed problem, most approaches have to constrain the inference procedure using experimental findings that are not part of the neural activity data set at hand. Combining the stabilized-supralinear network model with response data from the visual thalamus and cortex of mice, my collaborators and I have found a way to infer connectivity from in vivo data alone.
    Leveraging a property of neural responses known as contrast-invariance of orientation tuning, our inference approach reveals a consistent order of connection strengths between cortical neuron populations, as well as tuning differences between thalamic inputs and cortex. Throughout the history of visual neuroscience, neurons that respond to a visual stimulus with an increase in firing have been at the center of attention. A different response type that decreases its activity in response to visual stimuli, however, has been only sparsely investigated. Consequently, these suppressed-by-contrast neurons, while recently receiving renewed attention from researchers, have not been characterized in depth. Together with my collaborators, I have conducted a survey of SbC properties covering firing reliability, cortical location, and tuning to stimulus orientation. We find SbC neurons to fire less regularly than expected, to be located in the lower parts of cortex, and to show significant tuning to oriented gratings.