
    A comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation

Stochastic integrate-and-fire (IF) neuron models have found widespread applications in computational neuroscience. Here we present results on the white-noise-driven perfect, leaky, and quadratic IF models, focusing on the spectral statistics (power spectra, cross spectra, and coherence functions) in different dynamical regimes (noise-induced and tonic firing regimes with low or moderate noise). We make the models comparable by tuning parameters such that the mean value and the coefficient of variation of the interspike interval match for all of them. We find that, under these conditions, the power spectrum under white-noise stimulation is often very similar across models, while the response characteristics, described by the cross spectrum between a fraction of the input noise and the output spike train, can differ drastically. We also investigate how the spike trains of two neurons of the same kind (e.g. two leaky IF neurons) correlate if they share a common noise input. We show that, depending on the dynamical regime, either two quadratic IF models or two leaky IF models are more strongly correlated. Our results suggest that, when choosing among simple IF models for network simulations, the details of the model have a strong effect on the correlation and regularity of the output.
Comment: 12 pages
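As a concrete illustration of how such a comparison can be set up, here is a minimal simulation sketch (not the authors' code; all parameter values, and the finite spiking threshold used for the quadratic model in place of its usual divergence to infinity, are illustrative assumptions). It drives perfect, leaky, and quadratic IF neurons with the same white-noise parameters and reports the firing rate and ISI coefficient of variation, the two statistics the paper matches across models:

```python
# Minimal sketch (not the authors' code): Euler-Maruyama simulation of
# white-noise-driven integrate-and-fire neurons. Thresholds, reset, mu,
# and D are illustrative assumptions; the finite QIF threshold stands in
# for the canonical divergence to infinity at a spike.
import numpy as np

def simulate_if(drift, mu, D, v_reset=0.0, v_thresh=1.0,
                dt=1e-3, t_max=200.0, seed=0):
    """Interspike intervals of dv = (drift(v) + mu) dt + sqrt(2 D) dW."""
    rng = np.random.default_rng(seed)
    v, t_last, isis = v_reset, 0.0, []
    noise = rng.standard_normal(int(t_max / dt)) * np.sqrt(2 * D * dt)
    for i, xi in enumerate(noise):
        v += (drift(v) + mu) * dt + xi
        if v >= v_thresh:                  # fire-and-reset rule
            isis.append(i * dt - t_last)
            t_last, v = i * dt, v_reset
    return np.asarray(isis)

models = {
    "PIF": lambda v: 0.0,          # perfect integrator
    "LIF": lambda v: -v,           # leaky integrator
    "QIF": lambda v: v * v - 0.5,  # quadratic (theta-neuron-like) dynamics
}
for name, drift in models.items():
    isis = simulate_if(drift, mu=1.2, D=0.1)
    print(f"{name}: rate = {1 / isis.mean():.2f}, "
          f"CV = {isis.std() / isis.mean():.2f}")
```

Matching rate and CV across models, as the paper does, amounts to retuning mu and D per model until these two printed statistics agree.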

    Detecting and Estimating Signals over Noisy and Unreliable Synapses: Information-Theoretic Analysis

The temporal precision with which neurons respond to synaptic inputs has a direct bearing on the nature of the neural code. A characterization of the neuronal noise sources associated with different sub-cellular components (synapse, dendrite, soma, axon, and so on) is needed to understand the relationship between noise and information transfer. Here we study the effect of the unreliable, probabilistic nature of synaptic transmission on information transfer in the absence of interaction among presynaptic inputs. We derive theoretical lower bounds on the capacity of a simple model of a cortical synapse under two different paradigms. In signal estimation, the signal is assumed to be encoded in the mean firing rate of the presynaptic neuron, and the objective is to estimate the continuous input signal from the postsynaptic voltage. In signal detection, the input is binary, and the presence or absence of a presynaptic action potential is to be detected from the postsynaptic voltage. The efficacy of information transfer in synaptic transmission is characterized by deriving optimal strategies under these two paradigms. On the basis of parameter values derived from neocortex, we find that single cortical synapses cannot transmit information reliably, but redundancy obtained using a small number of multiple synapses leads to a significant improvement in the information capacity of synaptic transmission.
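To make the signal-detection paradigm tangible, here is a minimal Monte Carlo sketch under assumed parameters (release probability, quantal amplitude and variability, and voltage noise are placeholders, not the neocortical values used in the paper). It estimates the error of a simple threshold detector that declares a presynaptic spike whenever the postsynaptic voltage exceeds a threshold theta:

```python
# Minimal Monte Carlo sketch (assumed parameters, not the paper's values):
# detecting the presence of a presynaptic spike from the postsynaptic
# voltage across an unreliable synapse.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000          # trials
p = 0.5              # release probability (assumption)
q, cv_q = 1.0, 0.3   # mean quantal EPSP amplitude and its CV (assumptions)
sigma = 0.2          # membrane voltage noise SD (assumption)

spike = rng.random(n) < 0.5                    # equiprobable binary input
released = spike & (rng.random(n) < p)         # probabilistic vesicle release
amp = np.where(released, rng.normal(q, cv_q * q, n), 0.0)
v = amp + rng.normal(0.0, sigma, n)            # observed postsynaptic voltage

# Threshold detector: declare "spike" if v >= theta; scan theta for min error.
thetas = np.linspace(0.0, q, 101)
errors = [0.5 * np.mean(v[spike] < th) + 0.5 * np.mean(v[~spike] >= th)
          for th in thetas]
best = int(np.argmin(errors))
print(f"best theta = {thetas[best]:.2f}, error = {errors[best]:.3f}")
```

Because transmission fails with probability 1 - p even when a spike arrives, the error of a single synapse cannot drop below (1 - p)/2 in this setup, which is one way to see why a small amount of synaptic redundancy improves capacity so markedly.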

    Are the input parameters of white-noise-driven integrate-and-fire neurons uniquely determined by rate and CV?

Integrate-and-fire (IF) neurons have found widespread applications in computational neuroscience. Particularly important are stochastic versions of these models in which the driving consists of a synaptic input modeled as white Gaussian noise with mean μ and noise intensity D. Different IF models have been proposed, the firing statistics of which depend nontrivially on the input parameters μ and D. In order to compare these models among each other, one must first specify the correspondence between their parameters. This can be done by determining which set of parameters (μ, D) of each model is associated with a given set of basic firing statistics, for instance, the firing rate and the coefficient of variation (CV) of the interspike interval (ISI). However, it is not clear a priori whether for a given firing rate and CV there is only one unique choice of input parameters for each model. Here we review the dependence of rate and CV on the input parameters for the perfect, leaky, and quadratic IF neuron models and show analytically that in these three models the firing rate and the CV indeed uniquely determine the input parameters.
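The practical counterpart of this uniqueness result is a well-posed fitting problem: given a measured rate and CV, one can search for the matching (μ, D). Below is a minimal brute-force sketch for the leaky IF model (grid ranges, target statistics, and discretization are illustrative assumptions; the paper's analytical result is what guarantees at most one true solution exists):

```python
# Minimal sketch: recovering (mu, D) of a leaky IF neuron from a target
# firing rate and CV by grid search over simulated statistics. Grid,
# targets, and integration step are illustrative assumptions.
import numpy as np

def lif_rate_cv(mu, D, dt=1e-3, t_max=50.0, seed=0):
    rng = np.random.default_rng(seed)
    v, t_last, isis = 0.0, 0.0, []
    noise = rng.standard_normal(int(t_max / dt)) * np.sqrt(2 * D * dt)
    for i, xi in enumerate(noise):
        v += (mu - v) * dt + xi
        if v >= 1.0:                      # threshold-and-reset
            isis.append(i * dt - t_last)
            t_last, v = i * dt, 0.0
    if len(isis) < 2:
        return 0.0, np.inf                # too few spikes to estimate stats
    isis = np.asarray(isis)
    return 1 / isis.mean(), isis.std() / isis.mean()

target_rate, target_cv = 0.5, 0.3         # hypothetical measurements
best, best_loss = None, np.inf
for mu in np.linspace(0.8, 2.0, 7):
    for D in (0.02, 0.05, 0.1, 0.2, 0.4):
        r, cv = lif_rate_cv(mu, D)
        loss = (r - target_rate) ** 2 + (cv - target_cv) ** 2
        if loss < best_loss:
            best, best_loss = (mu, D), loss
print("best (mu, D) on the grid:", best)
```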

    The complexity of dynamics in small neural circuits

Mean-field theory is a powerful tool for studying large neural networks. However, when the system is composed of only a few neurons, macroscopic differences between the mean-field approximation and the real behavior of the network can arise. Here we introduce a study of the dynamics of a small firing-rate network with excitatory and inhibitory populations, in terms of local and global bifurcations of the neural activity. Our approach is analytically tractable in many respects and sheds new light on the finite-size effects of the system. In particular, we focus on the formation of multiple branching solutions of the neural equations through spontaneous symmetry-breaking, since this phenomenon considerably increases the complexity of the dynamical behavior of the network. For these reasons, branching points may reveal important mechanisms through which neurons interact and process information that are not accounted for by the mean-field approximation.
Comment: 34 pages, 11 figures. Supplementary materials added, colors of figures 8 and 9 fixed, results unchanged.
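For readers who want to experiment, a minimal firing-rate model of this type is easy to integrate numerically. The sketch below (weights, inputs, and gain function are illustrative assumptions, not the paper's equations) sets up a two-population excitatory/inhibitory network whose fixed points can then be traced while varying parameters:

```python
# Minimal sketch (illustrative parameters, not the paper's equations):
# a two-population excitatory/inhibitory firing-rate network.
import numpy as np
from scipy.integrate import solve_ivp

W = np.array([[ 10.0, -10.0],    # to E: from E, from I
              [ 10.0,  -0.5]])   # to I: from E, from I
I_ext = np.array([0.5, 0.2])     # external drive (assumption)
phi = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal gain function

def rhs(t, nu):
    """Rate dynamics tau dnu/dt = -nu + phi(W nu + I_ext), with tau = 1."""
    return -nu + phi(W @ nu + I_ext)

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.1])
print("final rates (E, I):", sol.y[:, -1])
```

Sweeping entries of W or I_ext and recording the attractors reached from different initial conditions is a simple way to hunt for the branching points the paper analyzes.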

    Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and one which relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise, using the precise timings of individual spikes, as an indication of their storage capacity. Our results demonstrate the high performance of FILT in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find FILT to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of FILT to be consistent with that of the highly efficient E-learning Chronotron, but with the distinct advantage that FILT is also implementable as an online method for increased biological realism.
Comment: 26 pages, 10 figures; this version is published in PLoS ONE and incorporates reviewer comments.
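As a rough illustration of the idea behind error filtering, and explicitly not the paper's INST or FILT definitions, the following sketch low-pass filters the difference between target and actual output spike trains before correlating it with a presynaptic trace to produce a weight update (kernel shape, time constants, learning rate, and spike trains are all assumptions):

```python
# Rough sketch of a filtered-error weight update; NOT the paper's exact
# INST/FILT rules. Time constants, learning rate, and spike trains are
# illustrative assumptions.
import numpy as np

dt, tau_f, eta, T = 1e-3, 20e-3, 0.01, 0.5
n = int(T / dt)

target = np.zeros(n); target[[100, 300]] = 1.0   # desired output spikes
actual = np.zeros(n); actual[[120, 280]] = 1.0   # observed output spikes
rng = np.random.default_rng(2)
presyn = (rng.random(n) < 5.0 * dt).astype(float)  # ~5 Hz Poisson input

def lowpass(x, tau):
    """Leaky integration of a spike train with time constant tau."""
    y = np.zeros_like(x)
    for i in range(1, x.size):
        y[i] = y[i - 1] * (1.0 - dt / tau) + x[i]
    return y

err_f = lowpass(target - actual, tau_f)   # smoothed error signal
pre_f = lowpass(presyn, tau_f)            # presynaptic trace

dw = eta * np.sum(err_f * pre_f) * dt     # accumulated update for one trial
print(f"weight change this trial: {dw:+.5f}")
```

The point of the smoothing is visible here: an instantaneous error is nonzero only at exact spike bins, whereas the filtered error still rewards near-miss output spikes, which is what makes convergence smoother.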

    Fast global oscillations in networks of integrate-and-fire neurons with low firing rates

We study analytically the dynamics of a network of sparsely connected inhibitory integrate-and-fire neurons in a regime where individual neurons emit spikes irregularly and at a low rate. In the limit when the number of neurons N tends to infinity, the network exhibits a sharp transition between a stationary and an oscillatory global activity regime in which neurons are weakly synchronized. The activity becomes oscillatory when the inhibitory feedback is strong enough. The period of the global oscillation is found to be mainly controlled by synaptic times, but it also depends on the characteristics of the external input. In large but finite networks, the analysis shows that global oscillations of finite coherence time generically exist both above and below the critical inhibition threshold. Their characteristics are determined as functions of system parameters in these two different regimes. The results are found to be in good agreement with numerical simulations.
Comment: 45 pages, 11 figures, to be published in Neural Computation
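A minimal simulation sketch of this regime (network size, coupling, delay, and drive are illustrative assumptions, far smaller than the networks the paper analyzes) can already exhibit a fast rhythm in the population rate alongside irregular single-neuron firing:

```python
# Minimal sketch (illustrative parameters): sparsely connected inhibitory
# LIF network with a synaptic delay; the population rate can oscillate
# while single neurons fire irregularly.
import numpy as np

rng = np.random.default_rng(3)
N, C, J = 1000, 100, 0.2            # neurons, out-degree, IPSP size
tau, v_th, v_r = 20e-3, 1.0, 0.0    # membrane time constant, threshold, reset
mu_ext, sig_ext = 1.5, 0.5          # suprathreshold external drive + noise
dt, delay_steps, T = 1e-4, 20, 0.5  # 2 ms synaptic delay

targets = [rng.choice(N, size=C, replace=False) for _ in range(N)]
v = rng.uniform(v_r, v_th, N)
buf = np.zeros((delay_steps, N))    # queue of delayed inhibitory inputs
rate = []

for step in range(int(T / dt)):
    slot = step % delay_steps
    I_syn = buf[slot].copy()        # inputs scheduled delay_steps ago
    buf[slot] = 0.0
    v += (dt / tau * (mu_ext - v)
          + sig_ext * np.sqrt(2 * dt / tau) * rng.standard_normal(N)
          + I_syn)
    spikers = np.flatnonzero(v >= v_th)
    v[spikers] = v_r
    for j in spikers:               # deliver inhibition after the delay
        buf[slot, targets[j]] -= J
    rate.append(spikers.size / (N * dt))

rate = np.asarray(rate)
print(f"mean population rate: {rate.mean():.1f} Hz")
# A power spectrum of `rate` reveals the fast global oscillation, if present.
```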

    A unique method for stochastic models in computational and cognitive neuroscience

We review applications of the Fokker–Planck equation for the description of systems with event trains in computational and cognitive neuroscience. The most prominent example is the spike trains generated by integrate-and-fire neurons when driven by correlated (colored) fluctuations, by adaptation currents, and/or by other neurons in a recurrent network. We discuss how a general Gaussian colored noise and an adaptation current can be incorporated into a multidimensional Fokker–Planck equation by Markovian embedding for systems with a fire-and-reset condition, and how, in particular, the spike-train power spectrum can be determined from this equation. We then review how this framework can be used to determine the self-consistent correlation statistics in a recurrent network in which the colored fluctuations arise from the spike trains of statistically similar neurons. Finally, we turn to the popular drift-diffusion models for binary decisions in cognitive neuroscience and demonstrate that very similar Fokker–Planck equations (with two thresholds instead of only one) can be used to study the statistics of sequences of decisions. Specifically, we present a novel two-dimensional model that includes an evidence variable and an expectancy variable and that can reproduce salient features of key experiments in sequential decision making.
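For the decision-making part, the basic object is a diffusion between two absorbing thresholds. Here is a minimal sketch (drift, noise intensity, and threshold are illustrative assumptions; the review's two-dimensional evidence/expectancy model is not reproduced) that simulates choices and reaction times and checks the choice probability against the standard analytical result for a symmetric two-threshold diffusion:

```python
# Minimal sketch (illustrative parameters): drift-diffusion model with two
# absorbing thresholds, simulating choices and reaction times.
import numpy as np

rng = np.random.default_rng(4)
mu, D, a = 0.5, 0.5, 1.0      # drift, noise intensity, thresholds at +/- a
dt, n_trials = 1e-3, 2000

choices, rts = [], []
for _ in range(n_trials):
    x, t = 0.0, 0.0
    while abs(x) < a:         # accumulate evidence until a threshold is hit
        x += mu * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        t += dt
    choices.append(x >= a)    # upper threshold = "correct" choice
    rts.append(t)

print(f"P(correct) = {np.mean(choices):.3f}  "
      f"(theory: {1 / (1 + np.exp(-mu * a / D)):.3f})")
print(f"mean decision time = {np.mean(rts):.3f}")
```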

Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.