340 research outputs found

    Lognormal firing rate distribution reveals prominent fluctuation-driven regime in spinal motor networks

    When spinal circuits generate rhythmic movements, it is important that the neuronal activity remains within stable bounds to avoid saturation and to preserve responsiveness. Here, we simultaneously record from hundreds of neurons in lumbar spinal circuits of turtles and establish the fraction of neurons that operates in either a ‘mean-driven’ or a ‘fluctuation-driven’ regime. Fluctuation-driven neurons have a supralinear input-output curve, which enhances sensitivity, whereas the mean-driven regime reduces sensitivity. We find a rich diversity of firing rates across the neuronal population, reflected in a lognormal distribution, and demonstrate that half of the neurons spend at least 50% of the time in the ‘fluctuation-driven’ regime regardless of behavior. Because of the disparity in input-output properties between these two regimes, this fraction may reflect a fine trade-off between stability and sensitivity that maintains flexibility across behaviors. DOI: http://dx.doi.org/10.7554/eLife.18805.00
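    The lognormal firing-rate description above can be illustrated with a minimal sketch: fitting a lognormal distribution by taking the mean and standard deviation of the log-rates (the maximum-likelihood fit for a lognormal). The data here are synthetic, not the paper's recordings, and all parameter values are illustrative:

```python
import numpy as np

def fit_lognormal(rates):
    """Fit a lognormal distribution to firing rates by matching the mean
    and standard deviation of log-rates (the MLE for a lognormal)."""
    log_rates = np.log(rates)
    return log_rates.mean(), log_rates.std()

# Hypothetical population: log-rates drawn from a normal distribution,
# so the rates themselves are lognormally distributed.
rng = np.random.default_rng(0)
rates = np.exp(rng.normal(loc=1.0, scale=0.8, size=5000))  # spikes/s

mu, sigma = fit_lognormal(rates)
print(round(mu, 2), round(sigma, 2))  # close to (1.0, 0.8) by construction
```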

    Power-Law Inter-Spike Interval Distributions Infer a Conditional Maximization of Entropy in Cortical Neurons

    The brain is thought to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. The MMI attempts to send information as accurately as possible, which usually requires a sufficient energy supply to establish clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints, one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between the energy cost and noise in neuronal responses. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.
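    The power-law ISI tails described above can be quantified with a standard tail-index estimator. A minimal sketch using the Hill estimator on synthetic Pareto-distributed ISIs; the choice of estimator and all parameter values are illustrative, not the paper's method:

```python
import numpy as np

def hill_exponent(isis, tail_frac=0.1):
    """Hill estimator of the power-law tail index alpha for the largest
    tail_frac of inter-spike intervals, assuming P(ISI > x) ~ x^-alpha."""
    x = np.sort(isis)
    k = max(2, int(len(x) * tail_frac))
    tail = x[-k:]
    return 1.0 / np.mean(np.log(tail / tail[0]))

rng = np.random.default_rng(1)
# Hypothetical ISIs with a Pareto (power-law) tail, true alpha = 1.5.
isis = rng.pareto(1.5, size=20000) + 1.0  # seconds

alpha_hat = hill_exponent(isis)
print(round(alpha_hat, 2))  # should land near the true exponent 1.5
```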

    The impact of spike timing variability on the signal-encoding performance of neural spiking models

    It remains unclear whether the variability of neuronal spike trains in vivo arises from biological noise sources or represents highly precise encoding of temporally varying synaptic input signals. Determining the variability of spike timing can provide fundamental insights into the nature of strategies used in the brain to represent and transmit information in the form of discrete spike trains. In this study, we employ a signal estimation paradigm to determine how variability in spike timing affects encoding of random time-varying signals. We assess this for two types of spiking models: an integrate-and-fire model with random threshold and a more biophysically realistic stochastic ion channel model. Using the coding fraction and mutual information as information-theoretic measures, we quantify the efficacy of optimal linear decoding of random inputs from the model outputs and study the relationship between efficacy and variability in the output spike train. Our findings suggest that variability does not necessarily hinder signal decoding for the biophysically plausible encoders examined, and that the functional role of spiking variability depends intimately on the nature of the encoder and the signal-processing task: variability can either enhance or impede decoding performance.
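    The first encoder type named above can be sketched minimally: an integrate-and-fire model whose threshold is redrawn from a Gaussian after each spike, so the threshold noise sets the spike-timing variability. All parameter values here are illustrative, not the paper's:

```python
import numpy as np

def if_random_threshold(input_current, dt=1e-3, tau=0.02,
                        theta_mean=1.0, theta_std=0.1, rng=None):
    """Leaky integrate-and-fire with a threshold redrawn after each spike.
    Returns spike times in seconds; theta_std sets timing variability."""
    if rng is None:
        rng = np.random.default_rng()
    v, theta = 0.0, rng.normal(theta_mean, theta_std)
    spikes = []
    for i, cur in enumerate(input_current):
        v += dt * (-v / tau + cur)  # forward-Euler membrane update
        if v >= theta:
            spikes.append(i * dt)
            v = 0.0                 # reset and redraw the threshold
            theta = rng.normal(theta_mean, theta_std)
    return np.array(spikes)

rng = np.random.default_rng(2)
I = np.full(5000, 80.0)  # constant drive, 5 s at dt = 1 ms
spikes = if_random_threshold(I, rng=rng)
isis = np.diff(spikes)
cv = isis.std() / isis.mean()  # irregularity induced by threshold noise
print(len(spikes), round(cv, 2))
```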

    Effects of random inputs and short-term synaptic plasticity in a LIF conductance model for working memory applications

    Working memory (WM) enables the temporary storage of information for processing purposes and plays an important role in the execution of various cognitive tasks. Recent studies have shown that information in WM is not only maintained through persistent recurrent activity but can also be stored in activity-silent states, such as in short-term synaptic plasticity (STSP). Motivated by important applications of STSP mechanisms in WM, the main focus of the present work is the analysis of the effects of random inputs on a leaky integrate-and-fire (LIF) synaptic conductance neuron under STSP. Furthermore, the irregularity of spike trains can carry information about previous stimulation of a neuron; we therefore characterize the output of a LIF conductance neuron with multiple inputs by the coefficient of variation (CV) of its inter-spike intervals (ISIs). Our numerical results show that increasing the standard deviations of the random input current and the random refractory period leads to increased irregularity of the spike trains of the output neuron.
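    The reported effect of input noise on spike-train irregularity can be sketched with a plain current-driven LIF model (a deliberate simplification of the paper's conductance-based model with STSP): the CV of the ISIs grows as the standard deviation of the input noise grows. All parameter values are illustrative:

```python
import numpy as np

def lif_cv(i_mean, i_std, dt=1e-4, tau=0.02, v_th=1.0, t_max=20.0, seed=3):
    """Simulate a current-driven LIF neuron with Gaussian input noise and
    return the coefficient of variation (CV) of its inter-spike intervals."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    noise = rng.normal(0.0, i_std / np.sqrt(dt), n)  # white-noise scaling
    v, spike_steps = 0.0, []
    for i in range(n):
        v += dt * (-v / tau + i_mean + noise[i])  # forward-Euler update
        if v >= v_th:
            spike_steps.append(i)
            v = 0.0  # reset after each spike
    isis = np.diff(spike_steps) * dt
    return isis.std() / isis.mean()

cv_low, cv_high = lif_cv(80.0, 0.5), lif_cv(80.0, 3.0)
print(round(cv_low, 2), round(cv_high, 2))  # CV grows with input noise
```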

    Probabilistic identification of cerebellar cortical neurones across species.

    Despite our fine-grained anatomical knowledge of the cerebellar cortex, electrophysiological studies of circuit information processing over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cell types. We approached this problem by assessing the spontaneous activity signatures of identified cerebellar cortical neurones. A range of statistics describing firing frequency and irregularity were then used, individually and in combination, to build Gaussian Process Classifiers (GPC), leading to a probabilistic classification of each neurone type and the computation of equi-probable decision boundaries between cell classes. Firing frequency statistics were useful for separating Purkinje cells from granular layer units, whilst firing irregularity measures proved most useful for distinguishing cells within granular layer cell classes. Using single statistics, we achieved classification accuracies of 72.5% and 92.7% for granular layer and molecular layer units, respectively. Combining statistics to form twin-variate GPC models substantially improved classification accuracies, with the combination of mean spike frequency and log-interval entropy offering classification accuracies of 92.7% and 99.2% for our molecular and granular layer models, respectively. A cross-species comparison was performed using data drawn from anaesthetised mice and decerebrate cats, where our models offered 80% and 100% classification accuracy. We then used our models to assess non-identified data from awake monkeys and rabbits in order to highlight subsets of neurones with the greatest degree of similarity to identified cell classes. In this way, our GPC-based approach for tentatively identifying neurones from their spontaneous activity signatures, in the absence of an established ground truth, nonetheless affords the experimenter a statistically robust means of grouping cells with properties matching known cell classes. Our approach may therefore have broad application to a variety of future cerebellar cortical investigations, particularly in awake animals, where opportunities for definitive cell identification are limited.
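    Probabilistic classification from two firing statistics can be sketched as follows. Note this stand-in uses a simple diagonal-Gaussian (naive-Bayes) classifier rather than a Gaussian Process Classifier, and the class statistics are synthetic, not the paper's data; the equi-probable decision boundary is where the two posteriors are equal:

```python
import numpy as np

def gaussian_class_posteriors(X, means, stds, priors):
    """Per-class posterior probabilities under a diagonal-Gaussian model of
    two firing statistics (a naive-Bayes stand-in for the paper's GPC)."""
    log_lik = np.stack([
        -0.5 * np.sum(((X - m) / s) ** 2 + np.log(2 * np.pi * s ** 2), axis=1)
        for m, s in zip(means, stds)
    ], axis=1)
    log_post = log_lik + np.log(priors)
    log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

# Hypothetical class statistics: (mean rate in Hz, log-interval entropy).
means = np.array([[40.0, 1.0], [10.0, 2.0]])  # class 0, class 1
stds = np.array([[5.0, 0.2], [5.0, 0.2]])
priors = np.array([0.5, 0.5])

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(means[0], stds[0], (100, 2)),
               rng.normal(means[1], stds[1], (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)

post = gaussian_class_posteriors(X, means, stds, priors)
acc = np.mean(post.argmax(axis=1) == y_true)
print(round(acc, 2))  # well-separated classes -> near-perfect accuracy
```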