
    Logarithmic distributions prove that intrinsic learning is Hebbian

    In this paper, we present data on the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as the auditory and visual cortices, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The differences between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA in striatum vs. glutamate in cortex) and level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We demonstrate that not only the weights, but also the intrinsic gains, need to undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
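    The mechanism by which Hebbian-style plasticity can generate lognormal distributions can be illustrated with a minimal sketch (not the paper's actual model): if each update multiplies a weight by a random, activity-dependent factor, then log(w) accumulates additive increments and the central limit theorem drives the weights toward a lognormal shape.

```python
import numpy as np

# Minimal sketch, not the paper's model: multiplicative, Hebbian-like
# updates scale each weight by a random activity-dependent factor.
# log(w) then accumulates additive increments, so by the central limit
# theorem the weight distribution converges toward a lognormal.
rng = np.random.default_rng(0)
n_synapses = 10_000
w = np.ones(n_synapses)

for _ in range(1000):
    # Random log-increment standing in for correlated pre/post activity.
    w *= np.exp(rng.normal(loc=0.0, scale=0.02, size=n_synapses))

log_w = np.log(w)
skew = np.mean((log_w - log_w.mean()) ** 3) / log_w.std() ** 3
print(f"skewness of log-weights: {skew:.3f}")  # near 0 for a lognormal
```

    The same argument applies unchanged to intrinsic gains: any update that is proportional to the current value (rather than additive) pushes the distribution toward lognormality.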

    Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation

    The role of dendritic spiking mechanisms in neural processing is still poorly understood. To investigate the role of calcium spikes in the functional properties of single neurons and recurrent networks, we studied a three-compartment model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single-neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both the somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at the distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes on the propagation of spiking activity in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both the somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool than in a network of passive neurons, but quickly becomes unstable as the strength of the recurrent synapses increases. Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short-term depression in the excitatory synapses, but the signal-to-noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics at the single-neuron and network level.
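    The core effect of a dendritic calcium spike on the soma can be caricatured with two coupled leaky compartments; this is a hedged sketch with illustrative parameters, not the paper's three-compartment model. A strong distal pulse crosses a threshold that triggers a stereotyped plateau current, which further depolarizes the soma well beyond what the pulse alone achieves.

```python
import numpy as np

def peak_somatic_voltage(ca_enabled: bool) -> float:
    """Two coupled leaky compartments (a crude stand-in for a multi-
    compartment pyramidal neuron). A strong distal input pulse can trigger
    a stereotyped 50 ms "calcium plateau" current that further depolarizes
    the soma. All parameters are illustrative, not fitted."""
    dt, tau, g_c = 0.1, 20.0, 0.1              # ms, ms, coupling rate (1/ms)
    v_s = v_d = -70.0                          # somatic / distal voltage (mV)
    ca_left, triggered, peak = 0, False, v_s
    for i in range(int(200.0 / dt)):
        t = i * dt
        i_d = 150.0 if 50.0 <= t < 60.0 else 0.0   # distal input pulse
        i_ca = 80.0 if ca_left > 0 else 0.0        # calcium plateau, if active
        ca_left = max(ca_left - 1, 0)
        v_s += dt * ((-70.0 - v_s) / tau + g_c * (v_d - v_s))
        v_d += dt * ((-70.0 - v_d + i_d + i_ca) / tau + g_c * (v_s - v_d))
        if ca_enabled and not triggered and v_d > -40.0:
            triggered, ca_left = True, int(50.0 / dt)  # trigger 50 ms plateau
        peak = max(peak, v_s)
    return peak

with_ca = peak_somatic_voltage(True)
without_ca = peak_somatic_voltage(False)
print(f"peak somatic voltage: {with_ca:.1f} mV (Ca) vs {without_ca:.1f} mV (no Ca)")
```

    The regenerative plateau outlasting the input is what gives distal coincident input its outsized somatic effect in this toy picture.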

    A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields

    Sparse coding algorithms trained on natural images can accurately predict the features that excite visual cortical neurons, but it is not known whether such codes can be learned using biologically realistic plasticity rules. We have developed a biophysically motivated spiking network, relying solely on synaptically local information, that can predict the full diversity of V1 simple cell receptive field shapes when trained on natural images. This represents the first demonstration that sparse coding principles, operating within the constraints imposed by cortical architecture, can successfully reproduce these receptive fields. We further prove, mathematically, that sparseness and decorrelation are the key ingredients that allow synaptically local plasticity rules to optimize a cooperative, linear generative image model formed by the neural representation. Finally, we discuss several interesting emergent properties of our network, with the intent of bridging the gap between theoretical and experimental studies of visual cortex.
    Comment: 33 pages, 6 figures. To appear in PLoS Computational Biology. Some of these data were presented by author JZ at the 2011 CoSyNe meeting in Salt Lake City.
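    What "synaptically local" means can be shown with a drastically simplified, non-spiking caricature (inspired by, but not identical to, the paper's network): a Hebbian feedforward rule, an anti-Hebbian lateral rule for decorrelation, and a homeostatic threshold rule, each using only the pre- and postsynaptic activities and the weight being updated. All parameters are illustrative.

```python
import numpy as np

# Caricature of synaptically local learning: each of the three rules uses
# only presynaptic activity, postsynaptic activity, and the weight itself.
rng = np.random.default_rng(1)
n_in, n_out, p = 16, 8, 0.05              # inputs, units, target activation prob.
Q = rng.normal(0.0, 0.3, (n_out, n_in))   # feedforward weights
W = np.zeros((n_out, n_out))              # lateral inhibitory weights
theta = np.ones(n_out)                    # adaptive thresholds
alpha, beta, gamma = 0.01, 0.01, 0.02

def respond(x):
    a = (Q @ x > theta).astype(float)                # feedforward pass
    return ((Q @ x - W @ a) > theta).astype(float)   # one lateral-inhibition step

for _ in range(5000):
    x = rng.normal(0.0, 1.0, n_in)        # toy whitened "image patch"
    a = respond(x)
    Q += alpha * a[:, None] * (x[None, :] - a[:, None] * Q)  # Hebbian (Oja-like)
    W += beta * (np.outer(a, a) - p ** 2)                    # anti-Hebbian
    np.fill_diagonal(W, 0.0)
    W = np.maximum(W, 0.0)                                   # purely inhibitory
    theta += gamma * (a - p)                                 # homeostatic threshold

rates = np.mean([respond(rng.normal(0.0, 1.0, n_in)) for _ in range(2000)])
print(f"mean activation: {rates:.3f} (target {p})")
```

    The homeostatic rule pins each unit's mean activation to the sparseness target p, while the anti-Hebbian lateral weights punish coactivation, which is the decorrelation ingredient the abstract identifies as essential.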

    Lognormal firing rate distribution reveals prominent fluctuation-driven regime in spinal motor networks

    When spinal circuits generate rhythmic movements it is important that the neuronal activity remains within stable bounds to avoid saturation and to preserve responsiveness. Here, we simultaneously record from hundreds of neurons in lumbar spinal circuits of turtles and establish the neuronal fraction that operates within either a 'mean-driven' or a 'fluctuation-driven' regime. Fluctuation-driven neurons have a supralinear input-output curve, which enhances sensitivity, whereas the mean-driven regime reduces sensitivity. We find a rich diversity of firing rates across the neuronal population, as reflected in a lognormal distribution, and demonstrate that half of the neurons spend at least 50% of the time in the fluctuation-driven regime regardless of behavior. Because of the disparity in input-output properties between these two regimes, this fraction may reflect a fine trade-off between stability and sensitivity in order to maintain flexibility across behaviors. DOI: http://dx.doi.org/10.7554/eLife.18805.00
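    The two regimes are easy to reproduce in a leaky integrate-and-fire neuron (a generic sketch with illustrative parameters, not the paper's analysis): when the mean input is above threshold the neuron fires like a clock, and when the mean is subthreshold, spikes are triggered by fluctuations and the inter-spike intervals become highly irregular.

```python
import numpy as np

def isi_cv(mu, sigma, T=50.0, dt=1e-4, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire neuron driven by Gaussian white noise.
    Returns the coefficient of variation (CV) of its inter-spike
    intervals. Units and parameters are illustrative."""
    rng = np.random.default_rng(0)
    v, spikes = 0.0, []
    for i, xi in enumerate(rng.normal(size=int(T / dt))):
        v += dt * (mu - v) / tau + sigma * np.sqrt(dt / tau) * xi
        if v >= v_th:           # threshold crossing -> spike and reset
            spikes.append(i * dt)
            v = 0.0
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

cv_mean = isi_cv(mu=1.5, sigma=0.1)    # mean-driven: mean input above threshold
cv_fluct = isi_cv(mu=0.5, sigma=0.8)   # fluctuation-driven: subthreshold mean
print(f"ISI CV: mean-driven {cv_mean:.2f}, fluctuation-driven {cv_fluct:.2f}")
```

    A CV near zero signals the regular, mean-driven regime; a CV approaching one signals Poisson-like, fluctuation-driven firing.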

    Sparse representation of sounds in the unanesthetized auditory cortex

    How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
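    Why the lognormal-versus-exponential distinction matters for sparseness can be sketched numerically. The parameters below (median 1 Hz, log-standard-deviation 2) are illustrative, not fitted to the paper's data: the heavy lognormal tail concentrates most of the population's spikes in a small subset of highly active neurons.

```python
import numpy as np

# Compare a lognormal rate distribution with an exponential one of the
# same mean; the lognormal's heavy tail yields a much sparser code.
rng = np.random.default_rng(3)
n = 100_000
rates_ln = rng.lognormal(mean=0.0, sigma=2.0, size=n)    # heavy-tailed
rates_exp = rng.exponential(scale=np.exp(2.0), size=n)   # same mean, exp(sigma**2/2)

def top_decile_share(rates):
    """Fraction of all spikes fired by the 10% most active neurons."""
    r = np.sort(rates)
    return r[int(0.9 * len(r)):].sum() / r.sum()

share_ln = top_decile_share(rates_ln)
share_exp = top_decile_share(rates_exp)
frac_high = np.mean(rates_ln > 20.0)   # neurons above 20 spikes/second
print(f"top-10% share: lognormal {share_ln:.2f}, exponential {share_exp:.2f}")
print(f"lognormal fraction above 20 spikes/s: {frac_high:.3f}")
```

    Under these assumed parameters only a few percent of neurons exceed 20 spikes/second at any moment, yet the top decile fires the majority of all spikes, matching the picture of small dynamic subsets of highly active neurons.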

    What drives information dissemination in continuous double auction markets?

    In this paper, we further investigate the way information disseminates from informed to uninformed traders in a market populated by heterogeneous boundedly rational agents. To this end, we constructed a computer-simulated market in which only a small fraction of the population observes the risky asset's fundamental value with noise, while the remaining agents try to forecast the asset's price from past transaction data. The paper departs from previous studies in that the risky asset does not pay a dividend every period, so agents cannot learn from past transaction prices and subsequent dividend payments. The main finding is that information can disseminate in the market as long as: (1) informed investors' trades tilt transaction prices toward the fundamental path; and (2) the median investor's expectation is very responsive to transaction prices. Otherwise, markets may display crashes or bubbles. We find that the first condition requires a minimal number of informed investors, and is severely limited by short-selling and borrowing constraints.
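    The role of the median investor and of a minimal informed fraction can be illustrated with a toy price-formation model. This is a hedged stand-in for the paper's double auction, with all parameters invented for illustration: the price adjusts toward the median reservation price, informed agents quote around the fundamental, and uninformed agents anchor on the last transaction price.

```python
import numpy as np

def pricing_error(frac_informed, n_agents=200, n_periods=300, v=100.0, seed=2):
    """Toy stand-in for a double-auction market: each period the
    transaction price adjusts partway toward the median reservation
    price. Informed agents quote around the fundamental value; uninformed
    agents anchor on the last transaction price."""
    rng = np.random.default_rng(seed)
    n_inf = int(frac_informed * n_agents)
    price = 50.0                                  # market starts mispriced
    for _ in range(n_periods):
        informed = v + rng.normal(0.0, 2.0, n_inf)
        uninformed = price + rng.normal(0.0, 2.0, n_agents - n_inf)
        clearing = np.median(np.concatenate([informed, uninformed]))
        price += 0.5 * (clearing - price)         # partial adjustment
    return abs(price - v)

err_many = pricing_error(0.30)   # enough informed traders to tilt the median
err_few = pricing_error(0.01)    # too few: the price barely moves
print(f"final pricing error: {err_many:.1f} (30% informed) "
      f"vs {err_few:.1f} (1% informed)")
```

    With enough informed traders the median quote is pulled toward the fundamental each period and the price converges; with too few, informed quotes sit in the tail of the quote distribution, barely move the median, and the mispricing persists.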

    Scalability of asynchronous networks is limited by one-to-one mapping between effective connectivity and correlations

    Network models are routinely downscaled because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that this is generally impossible even for second-order statistics. We argue that studies in computational biology need to make the scaling applied explicit, and that results should be verified where possible by full-scale simulations. We consider neuronal networks, where the importance of correlations in network dynamics is obvious because they directly interact with synaptic plasticity, the neuronal basis of learning, but the conclusions are generic. We derive conditions for the preservation of both mean activities and correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. Analytical and simulation results are obtained for networks of binary and networks of leaky integrate-and-fire model neurons, randomly connected with or without delays. The structure of average pairwise correlations in such networks is determined by the effective population-level connectivity. We show that in the absence of symmetries or zeros in the population-level connectivity or correlations, the converse is also true. This is in line with earlier work on inferring connectivity from correlations, but implies that such network reconstruction should be possible for a larger class of networks than hitherto considered. When changing in-degrees, effective connectivity and hence correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a limited range of in-degrees determined by the extrinsic variance. Our results show that the reducibility of asynchronous networks is fundamentally limited.
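    The core tension can be shown with back-of-the-envelope arithmetic (a generic sketch with illustrative numbers, not the paper's derivation): a neuron summing K Poisson inputs of weight J at rate r has input mean proportional to K*J*r and input variance proportional to K*J**2*r, so a weight scaling that preserves the variance necessarily changes the mean and the effective connectivity K*J.

```python
import numpy as np

# Downscaling arithmetic: preserve the input variance of a neuron that
# receives K Poisson inputs of weight J at rate r, and watch what happens
# to the input mean and the effective connectivity. Numbers are illustrative.
r, K, J = 5.0, 10_000, 0.1
mean_full = K * J * r          # input mean scales as K*J*r
var_full = K * J**2 * r        # input variance scales as K*J**2*r

K_small = 1_000                        # downscaled in-degree
J_small = J * np.sqrt(K / K_small)     # preserves the input variance...
var_small = K_small * J_small**2 * r
mean_small = K_small * J_small * r     # ...but shrinks the mean by sqrt(K_small/K)
compensation = mean_full - mean_small  # must be supplied as an external drive

# The effective population-level connectivity ~ K*J changes as well,
# which is what reshapes the correlation structure.
w_eff_full, w_eff_small = K * J, K_small * J_small
print(mean_full, mean_small, compensation, w_eff_full, w_eff_small)
```

    Compensating the mean with an external drive rescues the first-order statistics, but the altered effective connectivity is exactly the quantity that, per the abstract, fixes the correlation structure, so the second-order statistics cannot be preserved in general.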