
    Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity

    It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. The first assumes that it results from a stable solution of the recurrent neuronal dynamics; this model can account for a balance of steady-state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed-forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced, and it therefore does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required to balance feed-forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed-forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike-timing-dependent plasticity of feed-forward excitatory and inhibitory synaptic inputs to a single postsynaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates ‘negative feedback’ that balances excitation and inhibition, in contrast with the ‘positive feedback’ of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
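
    The ‘negative feedback’ argument can be illustrated with a minimal rate-based sketch in Python. The rule, the rates, and all parameter values below are illustrative assumptions, not the spike-based STDP model analyzed in the paper: a Hebbian weight change proportional to pre- and postsynaptic activity drives an inhibitory synapse to grow whenever the cell is driven, pulling the net feed-forward drive toward zero, whereas the same rule on an excitatory synapse would amplify the drive.

```python
import numpy as np

# Minimal sketch (assumed parameters) of Hebbian inhibitory plasticity acting as
# negative feedback on the net feed-forward drive to a single postsynaptic cell.

rng = np.random.default_rng(0)
steps, eta = 2000, 0.01
r_exc = 1.0 + 0.1 * rng.standard_normal(steps)   # presynaptic excitatory rate
r_inh = 1.0 + 0.1 * rng.standard_normal(steps)   # presynaptic inhibitory rate
w_exc, w_inh = 0.5, 0.1                          # initial synaptic weights

for t in range(steps):
    drive = w_exc * r_exc[t] - w_inh * r_inh[t]  # net feed-forward input (~ postsynaptic activity)
    # Hebbian inhibitory plasticity: negative feedback, stable where drive ~ 0
    w_inh += eta * r_inh[t] * drive
    # Hebbian excitatory plasticity would instead be positive feedback (runaway growth):
    # w_exc += eta * r_exc[t] * drive

print(f"final weights: w_exc={w_exc:.3f}, w_inh={w_inh:.3f}, residual drive ~ {w_exc - w_inh:.3f}")
```

    Running the loop drives the inhibitory weight until it cancels the excitatory drive on average, which is the sense in which the inhibitory rule balances excitation and inhibition.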

    Order-Based Representation in Random Networks of Cortical Neurons

    The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro, the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.
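
    A minimal sketch, assuming synthetic first-spike latencies and a simple multiplicative time-warp model (neither of which comes from the study), of why recruitment order survives trial-to-trial time warping while raw spike times do not:

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative assumption: each neuron fires once per stimulus, with a fixed
# latency template that gets stretched or compressed from trial to trial.

rng = np.random.default_rng(1)
n_neurons = 20
template = np.sort(rng.uniform(5.0, 50.0, n_neurons))     # first-spike latencies (ms) for one stimulus

for trial in range(5):
    warp = rng.uniform(0.7, 1.5)                          # per-trial stretching/compression of time
    jitter = rng.normal(0.0, 0.5, n_neurons)              # small additive jitter (ms)
    latencies = warp * template + jitter
    rho, _ = spearmanr(template, latencies)               # agreement of recruitment order
    err = np.mean(np.abs(latencies - template))           # disagreement of raw spike times (ms)
    print(f"trial {trial}: order correlation {rho:.2f}, mean |latency shift| {err:5.1f} ms")
```

    The rank correlation stays near 1 across warped trials even when absolute latencies shift by many milliseconds, which is the property that makes recruitment order usable as a representation primitive.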

    Context Matters: The Illusive Simplicity of Macaque V1 Receptive Fields

    Even in V1, where neurons have well characterized classical receptive fields (CRFs), it has been difficult to deduce which features of natural scene stimuli they actually respond to. Forward models based upon CRF stimuli have had limited success in predicting the responses of V1 neurons to natural scenes. As natural scenes exhibit complex spatial and temporal correlations, this could be due to surround effects that modulate the sensitivity of the CRF. Here, instead of attempting a forward model, we quantify the importance of the natural-scene surround for awake macaque monkeys by modeling it non-parametrically. We also quantify the influence of two forms of trial-to-trial variability. The first is related to the neuron’s own spike history. The second is related to ongoing mean-field population activity reflected by the local field potential (LFP). We find that the surround produces strong temporal modulations in the firing rate that can be both suppressive and facilitative. Further, the LFP is found to induce a precise timing in spikes, which tend to be temporally localized on sharp LFP transients in the gamma frequency range. Using the pseudo-R² as a measure of model fit, we find that during natural scene viewing the CRF dominates, accounting for 60% of the fit, but that taken collectively the surround, spike history and LFP are almost as important, accounting for 40%. However, overall only a small proportion of V1 spiking statistics could be explained (R² ~ 5%), even when the full stimulus, spike history and LFP were taken into account. This suggests that under natural scene conditions, the dominant influence on V1 neurons is not the stimulus, nor the mean-field dynamics of the LFP, but the complex, incoherent dynamics of the network in which neurons are embedded.
    National Institutes of Health (U.S.) (K25 NS052422-02); National Institutes of Health (U.S.) (DP1 ODOO3646)
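
    For context, a deviance-based pseudo-R² for a Poisson spiking model is sketched below; the exact variant of the measure used in the study is not specified here, so this formulation and the toy data are assumptions:

```python
import numpy as np

# Deviance-based pseudo-R^2 for Poisson spike counts: 1 - D(model)/D(null),
# where the null model predicts the mean count in every bin. This is one common
# formulation; the study's precise definition may differ.

def poisson_loglik(y, rate):
    """Poisson log-likelihood of spike counts y under predicted per-bin rates."""
    rate = np.clip(rate, 1e-12, None)
    return np.sum(y * np.log(rate) - rate)   # constant log(y!) term omitted (it cancels)

def pseudo_r2(y, rate_model):
    ll_model = poisson_loglik(y, rate_model)
    ll_null = poisson_loglik(y, np.full_like(y, y.mean(), dtype=float))
    ll_sat = poisson_loglik(y, y.astype(float))            # saturated model: rate = observed count
    return 1.0 - (ll_sat - ll_model) / (ll_sat - ll_null)

# toy example: spike counts generated from a known rate, then scored against it
rng = np.random.default_rng(2)
true_rate = 2.0 + np.sin(np.linspace(0, 6 * np.pi, 500))
y = rng.poisson(true_rate)
print("pseudo R^2 of the true-rate model:", round(pseudo_r2(y, true_rate), 3))
```

    Even the generating rate yields a modest pseudo-R² on Poisson data, which is why single-digit values, as reported in the abstract, are not unusual for spiking models.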

    Learning in spatially extended dendrites

    Dendrites are not static structures: new synaptic connections are established and old ones disappear. Moreover, it is now known that plasticity can vary with distance from the soma [1]. Consequently, it is of great interest to combine learning algorithms with spatially extended neuron models. In particular, this may shed further light on the computational advantages of plastic dendrites, say for direction selectivity or coincidence detection. Direction-selective neurons fire for one spatio-temporal input sequence on their dendritic tree but stay silent if the temporal order is reversed [2], whilst "coincidence detectors" such as those in the auditory brainstem are known to make use of dendrites to detect temporal differences in sound arrival times between the ears to an astounding accuracy [3]. Here we develop one such combination of learning and dendritic dynamics by extending the "Spike-Diffuse-Spike" framework [4] of an active dendritic tree to incorporate both artificial (tempotron-style [5]) and biological learning rules (STDP-style [2]).
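
    A minimal point-neuron sketch of a tempotron-style supervised update is given below; collapsing the spatially extended Spike-Diffuse-Spike dendrite to a point neuron, together with all parameter values and the single-pattern task, is a simplifying assumption for illustration only:

```python
import numpy as np

# Tempotron-style rule: on a classification error, nudge each weight in proportion
# to that synapse's postsynaptic potential at the time of maximal voltage.

rng = np.random.default_rng(3)
n_syn, T, dt = 50, 200.0, 1.0
tau_m, tau_s = 15.0, 4.0                       # membrane / synaptic time constants (ms), assumed
t = np.arange(0.0, T, dt)

def psp(t_rel):
    """Normalized double-exponential postsynaptic potential kernel."""
    k = (np.exp(-t_rel / tau_m) - np.exp(-t_rel / tau_s)) * (t_rel >= 0)
    return k / k.max()

def voltage_traces(spike_times, w):
    """Per-synapse PSP traces (one input spike per synapse) and the summed voltage."""
    traces = np.array([psp(t - ts) for ts in spike_times])
    return traces, w @ traces

w = rng.normal(0.0, 0.05, n_syn)
theta, lr = 1.0, 0.02
pattern = rng.uniform(0.0, T - 50.0, n_syn)    # one spike time per synapse (ms)
label = 1                                      # this pattern should make the cell fire

for _ in range(200):
    traces, v = voltage_traces(pattern, w)
    fired = v.max() >= theta
    if fired != bool(label):                   # update only on errors
        t_max = np.argmax(v)
        w += lr * (1 if label else -1) * traces[:, t_max]

print("pattern classified correctly:", bool(voltage_traces(pattern, w)[1].max() >= theta))
```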

    Temporal compression mediated by short-term synaptic plasticity

    Time scales of cortical neuronal dynamics range from a few milliseconds to hundreds of milliseconds. In contrast, behavior occurs on the time scale of seconds or longer. How, then, can behavioral time be neuronally represented in cortical networks? Here, using electrophysiology and modeling, we offer a hypothesis on how to bridge the gap between behavioral and cellular time scales. The core idea is to use a long time constant of decay of synaptic facilitation to translate slow, behaviorally induced temporal correlations into a distribution of synaptic response amplitudes. These amplitudes can then be transferred to a sequence of action potentials in a population of neurons. These sequences provide temporal correlations on a millisecond time scale that are able to induce persistent synaptic changes. As a proof of concept, we provide simulations of a neuron that learns to discriminate temporal patterns on a time scale of seconds using synaptic learning rules with a millisecond memory buffer. We find that the conversion from synaptic amplitudes to millisecond correlations can be strongly facilitated by subthreshold oscillations, both in terms of information transmission and success of learning.
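
    The core idea can be sketched with Tsodyks-Markram-style facilitation dynamics, in which a slowly decaying facilitation variable maps seconds-scale inter-spike intervals onto graded synaptic amplitudes; the parameter values and the reduced, facilitation-only dynamics below are assumptions for illustration:

```python
import numpy as np

# Facilitation variable u decays toward baseline U with a long time constant and
# jumps at each presynaptic spike; its value just after a spike stands in for the
# synaptic response amplitude, so slow inter-spike intervals leave a readable trace.

U, tau_f = 0.1, 5.0          # baseline release probability, facilitation decay (s), assumed

def response_amplitudes(spike_times):
    u, t_prev, amps = U, None, []
    for t in spike_times:
        if t_prev is not None:
            u = U + (u - U) * np.exp(-(t - t_prev) / tau_f)   # decay since the last spike
        u = u + U * (1.0 - u)                                 # facilitation jump at the spike
        amps.append(u)
        t_prev = t
    return np.array(amps)

# spikes separated by behaviorally slow, variable intervals (seconds)
slow_pattern = np.cumsum([0.0, 0.5, 2.0, 0.5, 4.0, 0.5])
print(np.round(response_amplitudes(slow_pattern), 3))
# shorter intervals leave more residual facilitation, so the amplitude of each
# response encodes the recent seconds-scale timing of the input
```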