
    Response variability in balanced cortical networks

    We study the spike statistics of neurons in a network with dynamically balanced excitation and inhibition. Our model, intended to represent a generic cortical column, comprises randomly connected excitatory and inhibitory leaky integrate-and-fire neurons, driven by excitatory input from an external population. The high connectivity permits a mean-field description in which synaptic currents can be treated as Gaussian noise, the mean and autocorrelation function of which are calculated self-consistently from the firing statistics of single model neurons. Within this description, we find that the irregularity of spike trains is controlled mainly by the strength of the synapses relative to the difference between the firing threshold and the post-firing reset level of the membrane potential. For moderately strong synapses we find spike statistics very similar to those observed in primary visual cortex. Comment: 22 pages, 7 figures, submitted to Neural Computation
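    The irregularity discussed here is typically summarized by the coefficient of variation (CV) of the interspike intervals. A minimal sketch of the dependence, assuming a single leaky integrate-and-fire neuron driven by Gaussian white noise rather than the paper's full self-consistent mean-field network (all parameter values are illustrative assumptions):

    import numpy as np

    def lif_isi_cv(mu=1.1, sigma=0.5, theta=1.0, v_reset=0.0, tau_m=0.02,
                   dt=1e-4, t_max=50.0, seed=0):
        """CV of interspike intervals for dv/dt = (mu - v)/tau_m + noise."""
        rng = np.random.default_rng(seed)
        v, t_last, isis = v_reset, 0.0, []
        for i in range(int(t_max / dt)):
            # Euler-Maruyama step with a Gaussian white-noise current
            v += (mu - v) * dt / tau_m \
                 + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
            if v >= theta:                  # threshold crossing -> spike
                t = i * dt
                isis.append(t - t_last)
                t_last, v = t, v_reset      # reset after firing
        isis = np.asarray(isis)
        return isis.std() / isis.mean()

    # A larger noise amplitude relative to the threshold-reset gap
    # (theta - v_reset) stands in for stronger synapses and should push
    # the CV toward Poisson-like irregularity (CV ~ 1).
    for sigma in (0.1, 0.5, 1.0):
        print(f"sigma={sigma}: CV ~ {lif_isi_cv(sigma=sigma):.2f}")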

    The Effect of synchronized inputs at the single neuron level

    It is commonly assumed that temporal synchronization of excitatory synaptic inputs onto a single neuron increases its firing rate. We investigate here the role of synaptic synchronization for the leaky integrate-and-fire neuron as well as for a biophysically and anatomically detailed compartmental model of a cortical pyramidal cell. We find that if the number of excitatory inputs, N, is on the same order as the number of fully synchronized inputs necessary to trigger a single action potential, N_t, synchronization always increases the firing rate (for both constant and Poisson-distributed input). However, for large values of N compared to N_t, "overcrowding" occurs and temporal synchronization is detrimental to firing frequency. This behavior is caused by the conflicting influences of the low-pass nature of the passive dendritic membrane on the one hand and the refractory period on the other. If both the degree of temporal synchronization and the fraction of synchronized inputs (Murthy and Fetz 1993) are varied, synchronization is advantageous only if either N or the average input frequency, f_in, is small enough.
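    A hedged illustration of the N versus N_t trade-off, using a current-based leaky integrate-and-fire neuron with an absolute refractory period instead of the paper's detailed compartmental model (all names and parameter values are assumptions):

    import numpy as np

    def output_rate(n_inputs, synchronized, w=0.03, theta=1.0, tau_m=0.02,
                    f_in=20.0, t_ref=0.002, dt=1e-4, t_max=50.0, seed=1):
        """Firing rate of a current-based LIF neuron driven by N excitatory
        Poisson inputs, either independent or arriving in synchronous volleys."""
        rng = np.random.default_rng(seed)
        v, refrac, spikes = 0.0, 0.0, 0
        for _ in range(int(t_max / dt)):
            if synchronized:
                # all N inputs fire together, in volleys at rate f_in
                drive = w * n_inputs * (rng.random() < f_in * dt)
            else:
                # N independent Poisson inputs at the same per-input rate
                drive = w * rng.binomial(n_inputs, f_in * dt)
            if refrac > 0.0:               # absolute refractory period:
                refrac -= dt               # incoming drive is discarded
                continue
            v += -v * dt / tau_m + drive
            if v >= theta:
                spikes += 1
                v, refrac = 0.0, t_ref
        return spikes / t_max

    # Here roughly N_t = theta / w ~ 33 synchronous inputs trigger one spike.
    # For N near N_t synchrony helps; for N >> N_t the refractory period wastes
    # the oversized volleys and the synchronized rate saturates near f_in.
    for n in (40, 400):
        print(f"N={n}: async ~ {output_rate(n, False):.1f} Hz, "
              f"sync ~ {output_rate(n, True):.1f} Hz")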

    Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that uses synaptic stochasticity as a means of Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and of a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. Synaptic Sampling Machines perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking-neuron-based Synaptic Sampling Machines outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
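    A minimal sketch of the sampling idea, assuming a small Boltzmann-machine-style network in which every synapse transmits with probability p, i.e. a fresh DropConnect-like Bernoulli mask is drawn at each update. This is not the authors' event-driven implementation; all names and parameters are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, p = 8, 16, 0.5     # p: synaptic transmission probability (assumed)
    W = rng.normal(0.0, 0.1, (n_vis, n_hid))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_hidden(v):
        mask = rng.random(W.shape) < p       # unreliable synapses: fresh mask
        h_prob = sigmoid(v @ (W * mask) / p) # 1/p rescaling keeps expected input
        return (rng.random(n_hid) < h_prob).astype(float)

    def sample_visible(h):
        mask = rng.random(W.shape) < p
        v_prob = sigmoid((W * mask) @ h / p)
        return (rng.random(n_vis) < v_prob).astype(float)

    # One block-Gibbs step with stochastic synapses, followed by a simplified
    # (clock-driven, not event-driven) contrastive-divergence weight update.
    v0 = rng.integers(0, 2, n_vis).astype(float)
    h0 = sample_hidden(v0)
    v1 = sample_visible(h0)
    h1 = sample_hidden(v1)
    eta = 0.01
    W += eta * (np.outer(v0, h0) - np.outer(v1, h1))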

    Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes. Comment: 20 pages, 6 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
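    A hedged sketch of the resolution-dependence analysis, assuming a plug-in block-entropy estimator on a binary discretization of the spike train rather than the paper's exact calculations (the gamma-ISI process and all parameter values are illustrative):

    import numpy as np
    from collections import Counter

    def block_entropy(seq, L):
        """Plug-in Shannon entropy (bits) of length-L blocks of a sequence."""
        counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
        prob = np.array(list(counts.values()), float)
        prob /= prob.sum()
        return -(prob * np.log2(prob)).sum()

    def entropy_rate(spike_times, tau, L=6):
        """Entropy-rate estimate H(L+1) - H(L) of the tau-discretized train."""
        n_bins = int(spike_times[-1] / tau)
        binary = np.zeros(n_bins, int)
        binary[np.minimum((spike_times / tau).astype(int), n_bins - 1)] = 1
        return block_entropy(binary, L + 1) - block_entropy(binary, L)

    # Renewal spike train with gamma-distributed ISIs (shape > 1: more regular
    # than Poisson), mean rate about 20 Hz.
    rng = np.random.default_rng(0)
    spikes = np.cumsum(rng.gamma(shape=4.0, scale=0.0125, size=20000))

    # How the estimate scales as the time resolution tau is refined is the
    # diagnostic quantity discussed in the abstract.
    for tau in (0.001, 0.005, 0.02):
        h = entropy_rate(spikes, tau)
        print(f"tau={tau}s: {h:.3f} bits/bin, {h / tau:.1f} bits/s")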