The Role of Degree Distribution in Shaping the Dynamics in Networks of Sparsely Connected Spiking Neurons
Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions, which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) and for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases, increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while the CC of the total current is essentially unaffected, as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.
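The interpolation between a binomial and a truncated power-law degree distribution can be sketched as follows. The mixture construction, the power-law exponent, and the cutoff below are illustrative assumptions for this sketch, not necessarily the parametrization used in the paper.

```python
import numpy as np

def sample_in_degrees(n, k_mean, alpha, gamma=2.5, rng=None):
    """Draw one in-degree per neuron from a mixture that interpolates
    between a binomial distribution (alpha = 0, fixed connection
    probability) and a truncated power law (alpha = 1). Exponent gamma
    and the cutoff at n-1 are illustrative choices."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Binomial case: each of the n-1 potential presynaptic partners
    # connects independently with probability p = k_mean / (n - 1).
    binom = rng.binomial(n - 1, k_mean / (n - 1), size=n)
    # Truncated power law on {1, ..., n-1}, sampled by table lookup.
    ks = np.arange(1, n)
    pk = ks.astype(float) ** (-gamma)
    pk /= pk.sum()
    plaw = rng.choice(ks, size=n, p=pk)
    # Interpolate by mixing the two samples neuron by neuron.
    use_plaw = rng.random(n) < alpha
    return np.where(use_plaw, plaw, binom)

degrees = sample_in_degrees(n=1000, k_mean=50, alpha=0.5)
```

The same recipe applied to out-degrees controls how many shared inputs pairs of neurons receive, which is the quantity the abstract links to current correlations.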
Efficient Partitioning of Memory Systems and Its Importance for Memory Consolidation
Long-term memories are likely stored in the synaptic weights of neuronal networks in the brain. The storage capacity of such networks depends on the degree of plasticity of their synapses. Highly plastic synapses allow for strong memories, but these are quickly overwritten. On the other hand, less labile synapses result in long-lasting but weak memories. Here we show that the trade-off between memory strength and memory lifetime can be overcome by partitioning the memory system into multiple regions characterized by different levels of synaptic plasticity and transferring memory information from the more to less plastic region. The improvement in memory lifetime is proportional to the number of memory regions, and the initial memory strength can be orders of magnitude larger than in a non-partitioned memory system. This model provides a fundamental computational reason for memory consolidation processes at the systems level.
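A deliberately reduced caricature of this partitioning idea: each region holds a memory trace that decays with its own time constant (more plastic regions decay faster) while continuously feeding its trace forward into the next, less plastic, region. The linear cascade, time constants, and transfer rule below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def total_trace(n_regions, t_max=1000.0, dt=0.1, tau0=10.0, ratio=4.0):
    """Toy partitioned memory: region i decays with time constant
    tau0 * ratio**i and receives a continuous copy of region i-1's
    trace (forward consolidation). Returns the total remaining
    memory strength at time t_max."""
    taus = tau0 * ratio ** np.arange(n_regions)
    s = np.zeros(n_regions)
    s[0] = 1.0                          # memory encoded in the most plastic region
    for _ in range(int(t_max / dt)):
        ds = -s / taus                  # passive decay in every region
        ds[1:] += s[:-1] / taus[1:]     # forward transfer to the slower region
        s += dt * ds
    return float(s.sum())
```

At times long compared with the fastest time constant, the partitioned system retains a trace many orders of magnitude larger than a single fast region would, which is the qualitative point of the abstract.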
Oscillations in the bistable regime of neuronal networks
Bistability between attracting fixed points in neuronal networks has been hypothesized to underlie persistent activity observed in several cortical areas during working memory tasks. In network models this kind of bistability arises due to strong recurrent excitation, sufficient to generate a state of high activity created in a saddle-node (SN) bifurcation. On the other hand, canonical network models of excitatory and inhibitory neurons (E-I networks) robustly produce oscillatory states via a Hopf (H) bifurcation due to the E-I loop. This mechanism for generating oscillations has been invoked to explain the emergence of brain rhythms in the β to γ bands. Although both bistability and oscillatory activity have been intensively studied in network models, there has not been much focus on the coincidence of the two. Here we show that when oscillations emerge in E-I networks in the bistable regime, their phenomenology can be explained to a large extent by considering coincident SN and H bifurcations, known as a codimension-two Takens-Bogdanov bifurcation. In particular, we find that such oscillations are not composed of a stable limit cycle, but rather are due to noise-driven oscillatory fluctuations. Furthermore, oscillations in the bistable regime can, in principle, have arbitrarily low frequency.
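The distinction between a stable limit cycle and noise-driven oscillatory fluctuations can be illustrated with a minimal caricature: a linearly damped focus driven by white noise has no limit cycle, yet its trajectory shows a sustained spectral peak near the focus frequency. The oscillator form and all parameters below are illustrative assumptions, not the paper's network model.

```python
import numpy as np

def noisy_focus(f0=1.0, zeta=0.1, sigma=1.0, dt=2e-3, t_max=200.0, seed=2):
    """Damped harmonic oscillator driven by white noise
    (semi-implicit Euler-Maruyama). Deterministically the origin is a
    stable focus, so any oscillation in x is purely noise-sustained."""
    omega = 2 * np.pi * f0
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    x = np.empty(n)
    xi = vi = 0.0
    for i in range(n):
        vi += dt * (-2 * zeta * omega * vi - omega**2 * xi) + noise[i]
        xi += dt * vi
        x[i] = xi
    return x

x = noisy_focus()
freqs = np.fft.rfftfreq(len(x), d=2e-3)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
# smooth the raw periodogram before locating the spectral peak
kernel = np.ones(21) / 21
peak_freq = freqs[np.argmax(np.convolve(power, kernel, mode="same"))]
```

Despite the absence of a limit cycle, the smoothed power spectrum peaks near the focus frequency f0; weakening the damping sharpens the peak without ever creating true periodic orbits.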
Many Attractors, Long Chaotic Transients, and Failure in Small-World Networks of Excitable Neurons
We study the dynamical states that emerge in a small-world network of recurrently coupled excitable neurons through both numerical and analytical methods. These dynamics depend in large part on the fraction of long-range connections or 'short-cuts' and the delay in the neuronal interactions. Persistent activity arises for a small fraction of 'short-cuts', while a transition to failure occurs at a critical value of the 'short-cut' density. The persistent activity consists of multi-stable periodic attractors, the number of which is at least on the order of the number of neurons in the network. For long enough delays, network activity at high 'short-cut' densities is shown to exhibit exceedingly long chaotic transients whose failure-times averaged over many network configurations follow a stretched exponential. We show how this functional form arises in the ensemble-averaged activity if each network realization has a characteristic failure-time which is exponentially distributed.
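The final claim of the abstract is easy to check numerically: if each realization fails at an exponentially distributed characteristic time T, the ensemble-averaged survival probability S(t) = ⟨exp(-t/T)⟩ decays (by a saddle-point estimate) roughly as exp(-2√(t/T₀)), i.e. a stretched exponential with exponent 1/2. The time scale and sample size below are illustrative.

```python
import numpy as np

# Per-realization failure times, exponentially distributed with mean 1.
rng = np.random.default_rng(3)
T = rng.exponential(scale=1.0, size=200_000)

# Ensemble-averaged survival probability at a few late times.
t = np.linspace(5.0, 40.0, 8)
S = np.array([np.mean(np.exp(-ti / T)) for ti in t])

# For a stretched exponential ~exp(-2*sqrt(t)), log S is approximately
# linear in sqrt(t) with slope near -2 (up to algebraic prefactors).
slope = np.polyfit(np.sqrt(t), np.log(S), 1)[0]
```

The fitted slope sits close to -2 rather than showing the constant-rate decay a single exponential would give, reproducing the mechanism described in the abstract.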
Theta-modulation drives the emergence of connectivity patterns underlying replay in a network model of place cells
Place cells of the rodent hippocampus fire action potentials when the animal traverses a particular spatial location in any environment. Therefore, for any given trajectory, one observes a repeatable sequence of place cell activations. When the animal is quiescent or sleeping, one can observe similar sequences of activation known as replay, which underlie the process of memory consolidation. However, it remains unclear how replay is generated. Here we show how a temporally asymmetric plasticity rule during spatial exploration gives rise to spontaneous replay in a model network by shaping the recurrent connectivity to reflect the topology of the learned environment. Crucially, the rate of this encoding is strongly modulated by ongoing rhythms. Oscillations in the theta range optimize learning by generating repeated pre-post pairings on a time-scale commensurate with the window for plasticity, while lower and higher frequencies generate learning rates which are lower by orders of magnitude.
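The frequency dependence can be sketched by integrating a temporally asymmetric plasticity window against the cross-correlation of two sinusoidally modulated rates, with the postsynaptic cell lagging the presynaptic one (as for sequentially visited place fields). The exponential kernel, its width, and the lag below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def plasticity_drift(f, tau=0.020, lag=0.010):
    """Mean drift of a synapse under a temporally asymmetric
    (STDP-like) window of width tau (seconds) when pre- and
    postsynaptic rates are modulated at frequency f (Hz), with the
    post cell lagging the pre cell by `lag` seconds."""
    s = np.linspace(-0.5, 0.5, 20001)       # pre-post time lag axis (s)
    ds = s[1] - s[0]
    # potentiation for pre-before-post (s > 0), depression otherwise
    K = np.where(s > 0, np.exp(-s / tau), -np.exp(s / tau))
    # cross-correlation of two sinusoidally modulated rates
    C = 1.0 + 0.5 * np.cos(2 * np.pi * f * (s - lag))
    return float(np.sum(K * C) * ds)
```

With these illustrative values the drift is maximal for modulation near 8 Hz (the theta range) and markedly smaller at both lower and higher frequencies, consistent with the resonance the abstract describes.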
Exact firing rate model reveals the differential effects of chemical versus electrical synapses in spiking networks
Chemical and electrical synapses shape the dynamics of neuronal networks. Numerous theoretical studies have investigated how each of these types of synapses contributes to the generation of neuronal oscillations, but their combined effect is less understood. This limitation is further magnified by the inability of traditional neuronal mean-field models (also known as firing rate models or firing rate equations) to account for electrical synapses. Here, we introduce a firing rate model that exactly describes the mean-field dynamics of heterogeneous populations of quadratic integrate-and-fire (QIF) neurons with both chemical and electrical synapses. The mathematical analysis of the firing rate model reveals a well-established bifurcation scenario for networks with chemical synapses, characterized by a codimension-2 cusp point and persistent states for strong recurrent excitatory coupling. The inclusion of electrical coupling generally implies neuronal synchrony by virtue of a supercritical Hopf bifurcation. This transforms the cusp scenario into a bifurcation scenario characterized by three codimension-2 points (cusp, Takens-Bogdanov, and saddle-node separatrix loop), which greatly reduces the possibility for persistent states. This is generic for heterogeneous QIF networks with both chemical and electrical couplings. Our results agree with several numerical studies on the dynamics of large networks of heterogeneous spiking neurons with electrical and chemical couplings.
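Firing rate equations of this kind (obtained via a Lorentzian ansatz over heterogeneous QIF neurons, with time in units of the membrane time constant) can be integrated directly. The sketch below assumes the electrical coupling of strength g enters the rate equation as a -g·r term, consistent with published reductions of this type; the parameter values (center η̄, width Δ of the input distribution, chemical coupling J) are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def integrate_fre(r0, v0, eta=-5.0, J=15.0, delta=1.0, g=0.0,
                  dt=1e-3, t_max=80.0):
    """Euler integration of QIF firing-rate equations of the
    Lorentzian-ansatz type:
        dr/dt = delta/pi + 2*r*v - g*r
        dv/dt = v**2 + eta + J*r - (pi*r)**2
    Returns the state (r, v) at t_max."""
    r, v = r0, v0
    for _ in range(int(t_max / dt)):
        dr = delta / np.pi + 2.0 * r * v - g * r
        dv = v * v + eta + J * r - (np.pi * r) ** 2
        r += dt * dr
        v += dt * dv
    return r, v
```

With strong recurrent excitation and no electrical coupling (g = 0) these equations are bistable: a low-rate and a high-rate fixed point coexist, so different initial conditions settle into different persistent states, matching the cusp/persistent-state scenario described in the abstract.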
UP-DOWN cortical dynamics reflect state transitions in a bistable network.
In the idling brain, neuronal circuits transition between periods of sustained firing (UP state) and quiescence (DOWN state), a pattern whose mechanisms remain unclear. Here we analyzed spontaneous cortical population activity from anesthetized rats and found that UP and DOWN durations were highly variable and that population rates showed no significant decay during UP periods. We built a network rate model with excitatory (E) and inhibitory (I) populations exhibiting a novel bistable regime between a quiescent and an inhibition-stabilized state of arbitrarily low rate. Fluctuations triggered state transitions, while adaptation in E cells paradoxically caused a marginal decay of E-rate but a marked decay of I-rate in UP periods, a prediction that we validated experimentally. A spiking network implementation further predicted that DOWN-to-UP transitions must be caused by synchronous high-amplitude events. Our findings provide evidence of bistable cortical networks that exhibit non-rhythmic state transitions when the brain rests.
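The core ingredient (fluctuation-triggered transitions between coexisting quiescent and active states, yielding variable UP and DOWN durations) can be caricatured with a one-dimensional bistable rate model driven by noise. This is a deliberately reduced sketch with illustrative parameters, not the paper's E-I model with adaptation.

```python
import numpy as np

def count_transitions(sigma=0.3, J=10.0, theta=5.0,
                      dt=5e-3, t_max=400.0, seed=4):
    """1-D bistable rate model dx = (-x + phi(J*x - theta))*dt + noise,
    with a logistic transfer function phi. The deterministic system has
    a low (DOWN) and a high (UP) stable fixed point; noise drives
    irregular switching between them. Returns the number of switches,
    counted with hysteresis thresholds to avoid double-counting."""
    phi = lambda u: 1.0 / (1.0 + np.exp(-u))
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    x, state, n_switch = 0.0, 0, 0
    for i in range(n):
        x += dt * (-x + phi(J * x - theta)) + noise[i]
        if state == 0 and x > 0.8:      # DOWN -> UP
            state, n_switch = 1, n_switch + 1
        elif state == 1 and x < 0.2:    # UP -> DOWN
            state, n_switch = 0, n_switch + 1
    return n_switch
```

Because the switching is noise-driven rather than rhythmic, the resulting UP and DOWN durations are broadly distributed, in the spirit of the non-rhythmic state transitions reported in the abstract.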
The role of fixed delays in neuronal rate models
Fixed delays in neuronal interactions arise through synaptic and dendritic processing. Previous work has shown that such delays, which play an important role in shaping the dynamics of networks of large numbers of spiking neurons with continuous synaptic kinetics, can be taken into account with a rate model through the addition of an explicit, fixed delay. Here we extend this work to account for arbitrary symmetric patterns of synaptic connectivity and generic nonlinear transfer functions. Specifically, we conduct a weakly nonlinear analysis of the dynamical states arising via primary instabilities of the stationary uniform state. In this way we determine analytically how the nature and stability of these states depend on the choice of transfer function and connectivity. While this dependence is, in general, nontrivial, we make use of the smallness of the ratio of the delay in neuronal interactions to the effective time constant of integration to arrive at two general observations of physiological relevance. These are: (1) fast oscillations are always supercritical for realistic transfer functions, and (2) traveling waves are preferred over standing waves given plausible patterns of local connectivity.
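A rate model with an explicit fixed delay is straightforward to simulate with a history buffer. The sketch below uses delayed inhibitory feedback with a logistic transfer function: below a critical feedback gain the uniform state is stable, above it a delay-induced oscillation emerges. The single-population form and all parameter values are illustrative assumptions, not the paper's general connectivity analysis.

```python
import numpy as np

def simulate_delay_rate(J, D=0.5, tau=1.0, dt=1e-3, t_max=100.0):
    """Delayed rate equation tau * dr/dt = -r + phi(I - J * r(t - D)),
    with logistic phi and I = J/2 so that r = 1/2 is always a fixed
    point. The initial history is the fixed point plus a small
    perturbation; returns the trajectory r(t) for t in [0, t_max]."""
    phi = lambda u: 1.0 / (1.0 + np.exp(-u))
    n = int(t_max / dt)
    d = int(round(D / dt))              # delay in time steps
    I = J / 2.0
    r = np.full(n + d, 0.5)
    r[:d] += 0.01                       # perturb the initial history
    for i in range(d, n + d):
        r[i] = r[i-1] + (dt / tau) * (-r[i-1] + phi(I - J * r[i-d]))
    return r[d:]
```

For weak feedback the perturbation decays back to the uniform state, while for strong feedback the same equation settles onto a sustained delay-induced oscillation, the kind of primary instability the weakly nonlinear analysis in the abstract classifies.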