    A reaction diffusion-like formalism for plastic neural networks reveals dissipative solitons at criticality

    Self-organized structures in networks with spike-timing-dependent plasticity (STDP) are likely to play a central role in information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate neurons with a correlation-sensitive learning rule inspired by, and qualitatively similar to, STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the non-linearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that, close to the transition to the unstable regime, meta-stable solutions appear. The form of these dissipative solitons is determined analytically, and the evolution and interaction of several such coexisting objects are investigated.

    A unified view on weakly correlated recurrent networks

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
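
    The Ornstein-Uhlenbeck process mentioned here as a special case has a textbook closed form that serves as a reference point for the covariances discussed above. As a sketch (a standard result, not taken from the paper; sigma and tau are generic parameters), a single linear unit with input noise obeys

        \frac{dx}{dt} = -\frac{x}{\tau} + \xi(t), \qquad
        \langle \xi(t)\,\xi(t') \rangle = \sigma^2\,\delta(t - t'),

    with stationary autocovariance and power spectrum

        c(\Delta) = \frac{\sigma^2 \tau}{2}\, e^{-|\Delta|/\tau}, \qquad
        S(\omega) = \frac{\sigma^2 \tau^2}{1 + \omega^2 \tau^2}.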

    The perfect integrator driven by Poisson input and its approximation in the diffusion limit

    In this note we consider the perfect integrator driven by Poisson process input. We derive its equilibrium and response properties and contrast them with those obtained in the diffusion approximation. In particular, the probability density in the vicinity of the threshold differs, which leads to altered response properties of the system in equilibrium.
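
    The gap between the exact treatment and the diffusion limit can be seen in a few lines of simulation. The following is a minimal sketch, not the paper's calculation: a perfect (non-leaky) integrator with reset to zero, driven by excitatory Poisson spikes; the rate, weight, threshold, and duration are arbitrary assumptions, chosen so that the threshold is not an integer multiple of the weight.

        import numpy as np

        rng = np.random.default_rng(1)

        lam, w, theta, T = 8000.0, 0.1, 0.95, 100.0   # input rate (1/s), weight, threshold, duration (assumed)

        V, n_out = 0.0, 0
        for _ in range(rng.poisson(lam * T)):  # each input spike adds w; no leak
            V += w
            if V >= theta:                     # fire and reset to zero
                V = 0.0
                n_out += 1

        rate_sim = n_out / T
        rate_exact = lam / np.ceil(theta / w)  # one output spike per ceil(theta/w) inputs
        rate_diff = lam * w / theta            # diffusion approximation: drift / threshold
        print(rate_sim, rate_exact, rate_diff) # the last two differ whenever theta/w is not an integer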

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information in their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and of systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully understood by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
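
    The macroscopic version of the effect can be reproduced with a toy linear rate model. The sketch below is an illustration under assumed parameters, not the network studied in the paper: N linear units receive a common noise source plus private noise and are optionally coupled by inhibitory feedback through their population mean; the feedback shrinks the common fluctuations and, with them, the average pairwise correlation.

        import numpy as np

        rng = np.random.default_rng(0)
        N, tau, dt = 100, 10e-3, 1e-3   # units, time constant, step (assumed)

        def avg_pairwise_corr(g, steps=50_000):
            """Average pairwise correlation of N linear rate units with shared
            plus private input noise and feedback gain g via the population mean."""
            x = np.zeros(N)
            samples = []
            for t in range(steps):
                noise = rng.normal() + rng.normal(size=N)     # shared + private
                x += dt / tau * (-x + g * x.mean()) + np.sqrt(dt / tau) * noise
                if t > 1000 and t % 10 == 0:
                    samples.append(x.copy())
            C = np.cov(np.array(samples).T)
            return C[~np.eye(N, dtype=bool)].mean() / np.diag(C).mean()

        print(avg_pairwise_corr(0.0))    # feed-forward: shared input correlates the units
        print(avg_pairwise_corr(-5.0))   # inhibitory feedback suppresses the correlations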

    Equilibrium and Response Properties of the Integrate-and-Fire Neuron in Discrete Time

    The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still offer the highest performance at moderate numerical error, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at the threshold, taking both the discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below the threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula for calculating the neuron's equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near the threshold enables us to derive the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white noise input.
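
    The object of study can be set up in a few lines: a leaky integrate-and-fire neuron updated on a fixed time grid, with Poisson input arriving in finite-weight packets per step and the threshold checked once per step. This is a minimal sketch with assumed parameter values, not the paper's analytical treatment; histogramming the recorded potentials exposes the accumulation of density just below threshold that the continuous-time diffusion theory misses.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, tau_m = 0.1e-3, 20e-3               # grid step and membrane time constant (assumed)
        w, theta, V_reset = 0.1, 20.0, 0.0      # finite weight, threshold, reset in mV (assumed)
        nu_in = 9500.0                          # total Poisson input rate in 1/s (assumed)
        decay = np.exp(-dt / tau_m)

        n_steps = 1_000_000
        Vs = np.empty(n_steps)
        V = 0.0
        for i in range(n_steps):
            V = V * decay + w * rng.poisson(nu_in * dt)   # spikes are constrained to the grid
            if V >= theta:                                # threshold test once per time step
                V = V_reset
            Vs[i] = V

        # density near theta shows the excess probability mass below threshold
        density, edges = np.histogram(Vs, bins=200, range=(0.0, theta), density=True)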

    Fundamental activity constraints lead to specific interpretations of the connectome

    The continuous integration of experimental data into coherent models of the brain is a growing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained, even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. Here we employ a mean-field reduction of the dynamics, which allows us to include activity constraints in the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.
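
    A scalar caricature shows the kind of self-consistency and stability reasoning involved. The gain function, coupling, and drive below are illustrative assumptions; the paper applies the same logic with the known gain function of its spiking neuron model to a multi-area network.

        import numpy as np

        def phi(mu):
            """Generic sigmoidal gain function (assumption), max 50 spikes/s."""
            return 50.0 / (1.0 + np.exp(-(mu - 15.0) / 2.0))

        w, mu_ext = -0.4, 25.0        # net inhibitory self-coupling and external drive (assumed)

        nu = 0.0
        for _ in range(1000):         # damped iteration of nu = phi(mu_ext + w * nu)
            nu += 0.1 * (phi(mu_ext + w * nu) - nu)

        # Linear stability of the rate dynamics tau*dnu/dt = -nu + phi(mu_ext + w*nu):
        # the fixed point is stable when the loop gain s = w * phi'(mu) satisfies s < 1.
        mu = mu_ext + w * nu
        s = w * 50.0 * np.exp(-(mu - 15.0) / 2.0) / (2.0 * (1.0 + np.exp(-(mu - 15.0) / 2.0)) ** 2)
        print(f"fixed point nu = {nu:.2f} spikes/s, loop gain = {s:.2f}")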

    Structural Plasticity Controlled by Calcium Based Correlation Detection

    Hebbian learning in cortical networks during development and adulthood relies on the presence of a mechanism to detect correlation between the presynaptic and the postsynaptic spiking activity. Recently, the calcium concentration in spines was experimentally shown to be a correlation-sensitive signal with the necessary properties: it is confined to the spine volume, it depends on the relative timing of pre- and postsynaptic action potentials, and it is independent of the spine's location along the dendrite. NMDA receptors are a candidate mediator for the correlation-dependent calcium signal. Here, we present a quantitative model of correlation detection in synapses based on the calcium influx through NMDA receptors under realistic conditions of irregular pre- and postsynaptic spiking activity with pairwise correlation. Our analytical framework captures the interaction of the learning rule and the correlation dynamics of the neurons. We find that a simple thresholding mechanism can act as a sensitive and reliable correlation detector at physiological firing rates. Furthermore, the mechanism is sensitive to correlation among afferent synapses through cooperation and competition. In our model, this mechanism controls synapse formation and elimination. We explain how synapse elimination leads to firing-rate homeostasis and show that the connectivity structure is shaped by the correlations between neighboring inputs.
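
    The thresholding idea can be illustrated with a toy model (not the paper's NMDA receptor model; the rates, time constants, and coincidence window below are assumptions): pre- and postsynaptic Poisson trains share a common source that sets their pairwise correlation, a calcium-like variable integrates near-coincident pre-post events, and its time average is compared against a threshold.

        import numpy as np

        rng = np.random.default_rng(2)

        def trains(rate, c, T, dt):
            """Two Poisson trains with pairwise correlation c via a shared source."""
            n = int(T / dt)
            shared = rng.random(n) < c * rate * dt
            pre = shared | (rng.random(n) < (1 - c) * rate * dt)
            post = shared | (rng.random(n) < (1 - c) * rate * dt)
            return pre, post

        def mean_calcium(c, rate=10.0, T=200.0, dt=1e-3, tau_ca=0.2, window=0.01):
            """Time-averaged calcium driven by pre-post coincidences within +/- window."""
            pre, post = trains(rate, c, T, dt)
            win = int(window / dt)
            post_near = np.convolve(post.astype(float), np.ones(2 * win + 1), "same") > 0
            coinc = pre & post_near
            return tau_ca * coinc.sum() / T   # mean of exponentially filtered events

        ca0, ca1 = mean_calcium(0.0), mean_calcium(0.2)
        theta_ca = 0.5 * (ca0 + ca1)          # a threshold separating the two levels
        print(ca0, ca1, ca0 > theta_ca, ca1 > theta_ca)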

    The Local Field Potential Reflects Surplus Spike Synchrony

    The oscillatory nature of the cortical local field potential (LFP) is commonly interpreted as a reflection of synchronized network activity, but its relationship to observed transient coincident firing of neurons on the millisecond time-scale remains unclear. Here we present experimental evidence to reconcile the notions of synchrony at the level of neuronal spiking and at the mesoscopic scale. We demonstrate that only in time intervals of excess spike synchrony are coincident spikes better entrained to the LFP than predicted by the locking of the individual spikes. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations but contribute only a fraction of their spikes to temporally precise spike configurations, suggesting a dual coding scheme of rate and synchrony. This finding provides direct evidence for the hypothesized relation that precise spike synchrony constitutes a major temporally and spatially organized component of the LFP. Revealing that transient spike synchronization correlates not only with behavior, but also with a mesoscopic brain signal, corroborates its relevance in cortical processing.

    PyNEST: A Convenient Interface to the NEST Simulator

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a wide range of architectures, from single-core laptops and multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in computational neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
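
    A minimal session in the spirit of the examples in the paper. The syntax below follows the current NEST 3.x Python interface, which differs in details from the 2008-era API described in the paper (e.g. spike_recorder was formerly spike_detector); the parameter values are plausible choices, not taken from the paper.

        import nest

        nest.ResetKernel()

        neuron = nest.Create("iaf_psc_alpha")                   # one point neuron
        noise = nest.Create("poisson_generator", params={"rate": 80000.0})
        recorder = nest.Create("spike_recorder")                # records output spikes

        nest.Connect(noise, neuron, syn_spec={"weight": 1.2, "delay": 1.0})
        nest.Connect(neuron, recorder)

        nest.Simulate(1000.0)                                   # simulate 1000 ms
        print(recorder.get("n_events"), "spikes recorded")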

    Surrogate Spike Train Generation Through Dithering in Operational Time

    Detecting excess spike synchrony and testing its significance cannot be done analytically for many types of spike trains and therefore relies on adequate surrogate methods. The main challenge for these methods is to conserve certain features of the spike trains, the two most important being the firing rate and the inter-spike interval statistics. In this study we make use of operational time to introduce generalizations of spike dithering and propose two novel surrogate methods that conserve both features with high accuracy. Compared to earlier approaches, the methods show improved robustness in detecting excess synchrony between spike trains.
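
    The operational-time construction can be sketched as follows (an illustration of the general idea, not of the two specific methods proposed; uniform dithering and the rate profile are assumptions): spike times are mapped to operational time Lambda(t), the integral of the firing rate, dithered there, and mapped back, so the rate profile is respected by construction. The dither width delta is measured in units of the expected spike count.

        import numpy as np

        rng = np.random.default_rng(3)

        def dither_operational(spikes, t_grid, rate, delta):
            """Dither spikes uniformly in operational time Lambda(t) = int_0^t rate ds.
            Requires rate > 0 on the grid so that Lambda is invertible."""
            Lam = np.concatenate(([0.0], np.cumsum(rate[:-1] * np.diff(t_grid))))
            s = np.interp(spikes, t_grid, Lam)                  # real -> operational time
            s = np.clip(s + rng.uniform(-delta, delta, s.size), 0.0, Lam[-1])
            return np.sort(np.interp(s, Lam, t_grid))           # operational -> real time

        # Example: inhomogeneous Poisson spikes via thinning, then one surrogate train.
        t = np.linspace(0.0, 1.0, 1001)
        lam = 20.0 + 15.0 * np.sin(2 * np.pi * 3.0 * t)         # rate profile (assumed), always > 0
        cand = np.sort(rng.uniform(0.0, 1.0, rng.poisson(35.0)))
        spikes = cand[rng.random(cand.size) < np.interp(cand, t, lam) / 35.0]
        surrogate = dither_operational(spikes, t, lam, delta=1.0)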