1,258 research outputs found

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of approaches, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that this variety of approaches can be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
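    As an illustration of the principle, a minimal rate-based sketch of a nonlinear Hebbian update is given below; the cubic nonlinearity, learning rate, and random input patches are assumptions chosen for demonstration only, not the specific models analyzed in the paper.

```python
import numpy as np

def nonlinear_hebbian_step(w, x, eta=1e-3, f=lambda y: y ** 3):
    """One nonlinear Hebbian update: dw is proportional to x * f(w . x).

    w : synaptic weight vector
    x : preprocessed input (e.g. a whitened natural-image patch)
    f : pointwise nonlinearity applied to the neuron's output
    """
    y = w @ x                        # linear response of the model neuron
    w = w + eta * f(y) * x           # Hebbian term scaled by the nonlinearity
    return w / np.linalg.norm(w)     # normalization keeps the weights bounded

# Toy usage with random inputs; with whitened natural-image patches the weight
# vector would converge to a localized, oriented receptive field.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
w /= np.linalg.norm(w)
for _ in range(10_000):
    x = rng.normal(size=64)          # stand-in for a preprocessed image patch
    w = nonlinear_hebbian_step(w, x)
```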

    General-purpose and special-purpose visual systems

    The information that eyes supply supports a wide variety of functions, from the guidance systems that enable an animal to navigate successfully around the environment, to the detection and identification of predators, prey, and conspecifics. The eyes with which we are most familiar (the single-chambered eyes of vertebrates and cephalopod molluscs, and the compound eyes of insects and higher crustaceans) allow these animals to perform the full range of visual tasks. These eyes have evidently evolved in conjunction with brains that are capable of subjecting the raw visual information to many different kinds of analysis, depending on the nature of the task the animal is engaged in. However, not all eyes evolved to provide such comprehensive information. For example, in bivalve molluscs we find eyes of very varied design (pinholes, concave mirrors, and apposition compound eyes) whose only function is to detect approaching predators and thereby allow the animal to protect itself by closing its shell. Thus, there are special-purpose eyes as well as eyes with multiple functions.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, which yields progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
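    For readers who prefer a concrete toy version of these elements, the sketch below implements a single prediction/error loop by gradient descent on the squared prediction error; the generative mapping, learning rate, and scalar signals are purely illustrative assumptions, not a model taken from the commentary or from Clark.

```python
def g(mu):
    """Assumed top-down generative mapping from the internal estimate to the input."""
    return 2.0 * mu

def predictive_step(mu, s, lr=0.05):
    """Feedback sends the prediction g(mu); feedforward returns the prediction error."""
    error = s - g(mu)                 # prediction error computed at the lower level
    mu = mu + lr * error * 2.0        # gradient step on 0.5 * error**2 w.r.t. mu
    return mu, error

mu = 0.0
for _ in range(200):
    mu, err = predictive_step(mu, s=1.0)   # constant sensory input for illustration
print(round(mu, 3), round(err, 6))         # estimate converges so that g(mu) ~ s
```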

    Sensory Mapping in Zebrin-positive Modules in the Cerebellum

    Whiskers, barrels and cortical efferent pathways in gap crossing by rats

    Rats can readily be trained to jump a gap of around 30 cm in the light and 16 cm in the dark for a food reward. In the light they use vision to estimate the distance to be jumped; in the dark they use their vibrissae at the farthest distances. Bilateral whisker shaving or barrel field lesions reduce the gap crossed in the dark by about 2 cm (Hutson and Masterton, 1986). Information from the barrel fields reaches motor areas via the cortico-cortical, basal ganglia, or cerebellar pathways. The cells of origin of the pontocerebellar pathway are segregated in layer Vb of the barrel field (Mercier et al., 1990). Efferent axons of Vb cells occupy a central position within the basis pedunculi and terminate on cells in the pontine nuclei (Glickstein et al., 1992). Pontine cells, in turn, project to the cerebellar cortex as mossy fibres. We trained normal rats to cross a gap in the light and in a dark alley that was illuminated with an infra-red source. When performance was stable, we made unilateral lesions in the central region of the basis pedunculi, which interrupted connections from the barrel field to the pons whilst leaving cortico-cortical and basal ganglia pathways intact. Whisking was not affected on either side by the lesion, and rats with unilateral peduncle lesions crossed gaps of the same distance as they did pre-operatively. Shaving the whiskers on the side of the face that retains its input to the pontine nuclei reduced the maximal gap jumped in the dark by the same amount as bilateral whisker shaving. Performance in the light was not affected. Re-growth of the shaved whiskers was associated with recovery of the maximum distance crossed in the dark. In control cases, shaving the whiskers on the other side of the face did not reduce the distance jumped in the dark or in the light. These results suggest that the cerebellum must receive whisker information from the barrel fields for whisker-guided jumps.

    Quantification of neural substrates of vergence system via fMRI

    Vergence eye movement is one of the oculomotor systems that allow depth perception via disconjugate movement of the eyes. Neuroimaging methods such as functional magnetic resonance imaging (fMRI) measure changes in neural activity in the brain while subjects perform experimental tasks. A rich body of primate investigations on vergence is already established in the neurophysiology literature; on the other hand, there are only a limited number of fMRI studies on the neural mechanisms behind the vergence system. The results of a simple tracking experiment demonstrated that the vergence system shares neural substrates with the saccadic system (rapid conjugate eye movements), but also shows differentiation from it within the boundaries of the frontal eye fields (FEF) and the midbrain of the brainstem. Functional activity within the FEF was located anterior to the saccadic functional activity (z > 2.3; p < 0.03). Functional activity within the midbrain was observed for the vergence task, but not for the saccade data set. A novel memory-guided vergence experiment also showed a relationship between the posterior parahippocampal area and memory; two further experiments were implemented to compare memory load in this region. A significant percent change in functional activity was observed for the posterior parahippocampal area. Furthermore, an increase in interconnectivity was observed for vergence tasks using Granger causality analysis. When prediction was involved, the increase in the number of causal interactions was statistically significant (p < 0.05). The comparison of the number of influences between the prediction-evoked vergence task and the simple tracking vergence task was also statistically significant (p < 0.0001). Another result of this dissertation was the application of hierarchical independent component analysis to the fronto-parietal and cerebellar components within saccade and vergence tasks. Interestingly, the cerebellar component showed a delayed latency in the group-level signal in comparison to the fronto-parietal group-level signals; this was evaluated to determine why segregation existed between the components obtained from independent component analysis. Lastly, region of interest (ROI)-based analysis, in comparison to global (whole-brain) analysis, indicated more sensitive results in frontal, parietal, brainstem, and occipital areas at both the individual and group levels. Overall, the purpose of this dissertation was to investigate the neural control of vergence movements by (1) spatially mapping vergence-induced functional activity and (2) applying different signal processing methods to quantify neural correlates of the vergence system at the levels of causal functional connectivity, underlying sources, and regions of interest (ROIs). It was concluded that quantification of vergence movements via fMRI can build a synergy with behavioral investigations and may also shed light on neural differentiation between healthy individuals and patients with neural dysfunctions and injuries by serving as a biomarker.
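    To make the connectivity analysis concrete, the sketch below shows the generic form of a pairwise Granger causality test between two time series (for example, two ROI BOLD signals), comparing a restricted autoregressive model against one augmented with the other region's past. The lag order, simulated data, and plain least-squares/F-test implementation are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, lag=2):
    """Does the past of y improve prediction of x beyond x's own past?

    x, y : 1-D arrays of equal length (e.g. two ROI BOLD time series).
    Returns (F statistic, p-value) for the restricted vs. full model comparison.
    """
    target = x[lag:]
    x_lags = np.column_stack([x[lag - k:len(x) - k] for k in range(1, lag + 1)])
    y_lags = np.column_stack([y[lag - k:len(y) - k] for k in range(1, lag + 1)])
    ones = np.ones((len(target), 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    rss_r = rss(np.hstack([ones, x_lags]))          # restricted: x's own past only
    rss_f = rss(np.hstack([ones, x_lags, y_lags]))  # full: add the past of y
    df1, df2 = lag, len(target) - (2 * lag + 1)
    f_stat = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return f_stat, stats.f.sf(f_stat, df1, df2)

# Toy usage: y drives x with a one-sample delay, so "y -> x" should be significant.
rng = np.random.default_rng(1)
y = rng.normal(size=500)
x = 0.8 * np.roll(y, 1) + rng.normal(scale=0.5, size=500)
f_stat, p_value = granger_f_test(x, y)
print(f_stat, p_value)
```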

    Theory of representation learning in cortical neural networks

    Our brain continuously self-organizes to construct and maintain an internal representation of the world based on the information arriving through sensory stimuli. Remarkably, cortical areas related to different sensory modalities appear to share the same functional unit, the neuron, and develop through the same learning mechanism, synaptic plasticity. This motivates the conjecture of a unifying theory to explain cortical representation learning across sensory modalities. In this thesis we present theories and computational models of learning and optimization in neural networks, postulating functional properties of synaptic plasticity that support the apparent universal learning capacity of cortical networks. In the past decades, a variety of theories and models have been proposed to describe receptive field formation in sensory areas. They include normative models such as sparse coding, and bottom-up models such as spike-timing-dependent plasticity. We bring together candidate explanations by demonstrating that in fact a single principle is sufficient to explain receptive field development. First, we show that many representative models of sensory development are in fact implementing variations of a common principle: nonlinear Hebbian learning. Second, we reveal that nonlinear Hebbian learning is sufficient for receptive field formation through sensory inputs. A surprising result is that our findings are independent of specific details and allow for robust predictions of the learned receptive fields. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities. The Hebbian learning theory substantiates that synaptic plasticity can be interpreted as an optimization procedure, implementing stochastic gradient descent. In stochastic gradient descent, inputs arrive sequentially, as in sensory streams. However, individual data samples carry very little information about the correct learning signal, and it becomes a fundamental problem to know how many samples are required for reliable synaptic changes. Through estimation theory, we develop a novel adaptive learning rate model that adapts the magnitude of synaptic changes based on the statistics of the learning signal, enabling an optimal use of data samples. Our model has a simple implementation and demonstrates improved learning speed, making it a promising candidate for large artificial neural network applications. The model also makes predictions on how cortical plasticity may modulate synaptic plasticity for optimal learning. The optimal sampling size for reliable learning allows us to estimate optimal learning times for a given model. We apply this theory to derive analytical bounds on the time required to optimize synaptic connections. First, we show this optimization problem to have exponentially many saddle points, which lead to small gradients and slow learning. Second, we show that the number of input synapses to a neuron modulates the magnitude of the initial gradient, determining the duration of learning. Our final result reveals that the learning duration increases supra-linearly with the number of synapses, suggesting an effective limit on synaptic connections and receptive field sizes in developing neural networks.
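    A generic sketch of the adaptive learning rate idea is given below: the step size on each synapse is scaled by a running estimate of the signal-to-noise ratio of its gradient samples, so that large changes are made only when the learning signal is reliable. The specific running-average scheme, constants, and quadratic toy objective are assumptions for illustration, not the estimation-theoretic rule derived in the thesis.

```python
import numpy as np

class SNRAdaptiveRate:
    """Scale updates by an estimate of the signal-to-noise ratio of the gradient."""

    def __init__(self, dim, base_lr=0.5, decay=0.99, eps=1e-8):
        self.m = np.zeros(dim)          # running mean of gradient samples
        self.v = np.zeros(dim)          # running mean of squared gradient samples
        self.base_lr, self.decay, self.eps = base_lr, decay, eps

    def step(self, w, grad):
        self.m = self.decay * self.m + (1 - self.decay) * grad
        self.v = self.decay * self.v + (1 - self.decay) * grad ** 2
        var = np.maximum(self.v - self.m ** 2, 0.0)          # gradient variance
        snr = self.m ** 2 / (self.m ** 2 + var + self.eps)   # reliability in [0, 1]
        return w - self.base_lr * snr * self.m               # large steps only when reliable

# Toy usage: noisy per-sample gradients of f(w) = 0.5 * ||w||^2 (true gradient is w).
rng = np.random.default_rng(0)
w = rng.normal(size=5)
opt = SNRAdaptiveRate(dim=5)
for _ in range(5000):
    noisy_grad = w + rng.normal(scale=1.0, size=5)
    w = opt.step(w, noisy_grad)
print(np.round(w, 3))                   # w moves toward the optimum at zero
```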