
    Synchronization and oscillatory dynamics in heterogeneous mutually inhibited neurons

    We study some mechanisms responsible for synchronous oscillations and loss of synchrony at physiologically relevant frequencies (10-200 Hz) in a network of heterogeneous inhibitory neurons. We focus on the factors that determine the level of synchrony and frequency of the network response, as well as the effects of mild heterogeneity on network dynamics. With mild heterogeneity, synchrony is never perfect and is relatively fragile. In addition, the effects of inhibition are more complex in mildly heterogeneous networks than in homogeneous ones. In the former, synchrony is broken in two distinct ways, depending on the ratio of the synaptic decay time to the period of repetitive action potentials ($\tau_s/T$), where $T$ can be determined either from the network or from a single, self-inhibiting neuron. With $\tau_s/T > 2$, corresponding to large applied current, small synaptic strength, or large synaptic decay time, the effects of inhibition are largely tonic and heterogeneous neurons spike relatively independently. With $\tau_s/T < 1$, synchrony breaks when faster cells begin to suppress their less excitable neighbors; the cells that do fire remain nearly synchronous. We show numerically that the behavior of mildly heterogeneous networks can be related to the behavior of single, self-inhibiting cells, which can be studied analytically. Comment: 17 pages, 6 figures, Kluwer.sty. Journal of Computational Neuroscience (in press). Originally submitted to the neuro-sys archive, which was never publicly announced (was 9802001).
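    The role of the ratio $\tau_s/T$ can be made concrete with a single self-inhibiting cell, the reduced object the authors use to explain the network. The sketch below is a minimal stand-in, assuming a leaky integrate-and-fire caricature with an exponentially decaying inhibitory synapse rather than the conductance-based model of the paper; the function name, units, and parameter values are all illustrative.

        import numpy as np

        def self_inhibiting_lif(I=1.5, tau_m=10.0, tau_s=10.0, g_inh=1.0,
                                v_reset=0.0, v_thresh=1.0, dt=0.01, t_max=2000.0):
            """Single LIF cell driven by current I through its own inhibitory synapse.

            Returns the mean inter-spike interval T (ms) after discarding a transient.
            Units and parameter values are illustrative, not taken from the paper.
            """
            v, s, spikes = v_reset, 0.0, []
            for step in range(int(t_max / dt)):
                v += dt * (-v + I - g_inh * s) / tau_m
                s += dt * (-s / tau_s)              # exponentially decaying inhibition
                if v >= v_thresh:                   # spike: reset and kick own synapse
                    v = v_reset
                    s += 1.0
                    spikes.append(step * dt)
            isis = np.diff(spikes[len(spikes) // 2:])
            return isis.mean() if len(isis) else np.inf

        for tau_s in (5.0, 10.0, 30.0):
            T = self_inhibiting_lif(tau_s=tau_s)
            print(f"tau_s = {tau_s:5.1f} ms -> T = {T:6.2f} ms, tau_s/T = {tau_s / T:4.2f}")

    Raising the drive or weakening the synapse shortens $T$ and pushes the printed ratio upward, in the direction of the tonic regime described above.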

    Statistical-Mechanical Measure of Stochastic Spiking Coherence in A Population of Inhibitory Subthreshold Neurons

    By varying the noise intensity, we study stochastic spiking coherence (i.e., collective coherence between noise-induced neural spikings) in an inhibitory population of subthreshold neurons (which cannot fire spontaneously without noise). This stochastic spiking coherence may be well visualized in the raster plot of neural spikes. For a coherent case, partially occupied "stripes" (composed of spikes and indicating collective coherence) are formed in the raster plot. This partial occupation occurs due to "stochastic spike skipping", which is clearly seen in the multi-peaked interspike interval histogram. The main purpose of our work is to quantitatively measure the degree of stochastic spiking coherence seen in the raster plot. We introduce a new spike-based coherence measure $M_s$ by considering the occupation pattern and the pacing pattern of spikes in the stripes. In particular, the pacing degree between spikes is determined in a statistical-mechanical way by quantifying the average contribution of (microscopic) individual spikes to the (macroscopic) ensemble-averaged global potential. This "statistical-mechanical" measure $M_s$ is in contrast to conventional measures such as the "thermodynamic" order parameter (which concerns the time-averaged fluctuations of the macroscopic global potential), the "microscopic" correlation-based measure (based on the cross-correlation between the microscopic individual potentials), and measures of precise spike timing (based on the peri-stimulus time histogram). In terms of $M_s$ we quantitatively characterize the stochastic spiking coherence, and find that $M_s$ reflects the degree of collective spiking coherence seen in the raster plot very well. Hence, the spike-based measure $M_s$ may be useful for quantifying stochastic spiking coherence in a statistical-mechanical way. Comment: 16 pages, 5 figures, to appear in J. Comput. Neurosci.
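    As a rough illustration of the kind of spike-based measure described here (not the authors' exact definition), the sketch below builds a synthetic raster with stripe structure and scores each stripe by an occupation fraction (how many neurons participate) times a pacing score (how tightly the spikes cluster around the stripe centre, read off an assumed global phase). The Gaussian jitter, the cosine pacing kernel, and all parameter values are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic raster: N neurons, stripes centred every T_g ms; in each stripe a
        # random subset of neurons fires, jittered around the stripe centre.
        N, n_stripes, T_g, jitter, p_occupy = 100, 20, 25.0, 2.0, 0.6
        centres = T_g * (np.arange(n_stripes) + 1)
        spike_times, spike_ids = [], []
        for c in centres:
            ids = np.where(rng.random(N) < p_occupy)[0]
            spike_ids.extend(ids)
            spike_times.extend(c + jitter * rng.standard_normal(ids.size))
        spike_times, spike_ids = np.array(spike_times), np.array(spike_ids)

        # Score each stripe: occupation O_i = fraction of neurons that fired;
        # pacing P_i = mean of cos(phase), with the phase growing linearly across the
        # stripe (an assumed proxy for the phase of the ensemble-averaged potential).
        scores = []
        for c in centres:
            in_stripe = np.abs(spike_times - c) < T_g / 2
            occupation = np.unique(spike_ids[in_stripe]).size / N
            phase = 2 * np.pi * (spike_times[in_stripe] - c) / T_g
            pacing = np.cos(phase).mean() if in_stripe.any() else 0.0
            scores.append(occupation * pacing)

        print(f"stripe-averaged coherence score ~ {np.mean(scores):.3f}")

    Fully occupied, tightly paced stripes push the score toward 1, while sparse or jittered stripes pull it toward 0, which is the qualitative behaviour the abstract ascribes to $M_s$.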

    Frequency control in synchronized networks of inhibitory neurons

    We analyze the control of frequency for a synchronized inhibitory neuronal network. The analysis is done for a reduced membrane model with a biophysically based synaptic influence. We argue that such a reduced model can quantitatively capture the frequency behavior of a larger class of neuronal models. We show that in different parameter regimes, the network frequency depends in different ways on the intrinsic and synaptic time constants. Only in one portion of the parameter space, called "phasic", is the network period proportional to the synaptic decay time. These results are discussed in connection with previous work of the authors, which showed that for mildly heterogeneous networks the synchrony breaks down, but coherence is preserved much better for systems in the phasic regime than in the other regimes. These results imply that for mildly heterogeneous networks, the existence of a coherent rhythm implies a linear dependence of the network period on the synaptic decay time, and a much weaker dependence on the drive to the cells. We give experimental evidence for this conclusion. Comment: 18 pages, 3 figures, Kluwer.sty. J. Comp. Neurosci. (in press). Originally submitted to the neuro-sys archive, which was never publicly announced (was 9803001).
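    A back-of-the-envelope version of the "phasic" scaling: if a cell is held below threshold by strong inhibition that decays as $e^{-t/\tau_s}$, it next fires roughly when $g e^{-T/\tau_s}$ has dropped to the excess drive $I-\theta$, so $T \approx \tau_s \ln\big(g/(I-\theta)\big)$, linear in the synaptic decay time and only logarithmic in the drive. The sketch below checks this numerically with an assumed leaky integrate-and-fire caricature, not the reduced membrane model of the paper; all values are illustrative.

        import numpy as np

        def period(I, tau_s, g=4.0, tau_m=10.0, theta=1.0, dt=0.01, t_max=4000.0):
            """Firing period of one LIF cell inhibited by its own synapse (illustrative units)."""
            v, s, spikes = 0.0, 0.0, []
            for step in range(int(t_max / dt)):
                v += dt * (-v + I - g * s) / tau_m
                s -= dt * s / tau_s
                if v >= theta:
                    v, s = 0.0, s + 1.0
                    spikes.append(step * dt)
            return np.diff(spikes[len(spikes) // 2:]).mean()

        for I in (1.2, 2.0):                                    # two drive levels
            Ts = [period(I, tau_s) for tau_s in (20.0, 40.0, 80.0)]
            print(f"I = {I}: T(tau_s = 20, 40, 80) = " + ", ".join(f"{T:6.1f}" for T in Ts))

    At a fixed drive the printed periods grow roughly in proportion to $\tau_s$, while the drive enters only through the logarithm, which is the weak dependence referred to above.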

    Phase locking in networks of synaptically coupled McKean relaxation oscillators

    We use geometric dynamical systems methods to derive phase equations for networks of weakly connected McKean relaxation oscillators. We derive an explicit formula for the connection function when the oscillators are coupled with chemical synapses modeled as the convolution of some input spike train with an appropriate synaptic kernel. The theory allows the systematic investigation of the way in which a slow recovery variable can interact with synaptic time scales to produce phase-locked solutions in networks of pulse coupled neural relaxation oscillators. The theory is exact in the singular limit that the fast and slow time scales of the neural oscillator become effectively independent. By focusing on a pair of mutually coupled McKean oscillators with alpha-function synaptic kernels, we clarify the role that fast and slow synapses of excitatory and inhibitory type can play in producing stable phase-locked rhythms. In particular we show that for fast excitatory synapses there is coexistence of a stable synchronous, a stable anti-synchronous, and one stable asynchronous solution. For slower synapses the anti-synchronous solution can lose stability, whilst for even slower synapses it can regain stability. The case of inhibitory synapses is similar up to a reversal of the stability of solution branches. Using a return-map analysis, the case of strong pulsatile coupling is also considered. In this case it is shown that the synchronous solution can co-exist with a continuum of asynchronous states.
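    The weak-coupling recipe behind such phase equations is: take the oscillator's phase response curve $Z$, form the periodic synaptic waveform produced by convolving the spike train with the kernel, and average their product to obtain the interaction function $H(\phi)$; for a symmetric pair, phase-locked states are zeros of $G(\phi) = H(-\phi) - H(\phi)$ and are stable where the slope of $G$ is negative. The sketch below runs this recipe with an assumed sinusoidal PRC and an alpha-function kernel, illustrative stand-ins rather than the McKean adjoint derived in the paper.

        import numpy as np

        T = 1.0                                         # oscillator period (arbitrary units)
        t = np.linspace(0.0, T, 2001, endpoint=False)
        Z = 1.0 - np.cos(2 * np.pi * t / T)             # assumed type-I phase response curve

        def synaptic_waveform(shift, tau, n_hist=20):
            """Periodic alpha-kernel waveform driven by one spike per period, at times t + shift."""
            s = np.zeros_like(t)
            for k in range(n_hist):
                u = (t + shift) % T + k * T
                s += (u / tau**2) * np.exp(-u / tau)
            return s

        def H(shift, tau, sign):
            """Interaction function: period-average of the PRC times the partner's input."""
            return sign * np.mean(Z * synaptic_waveform(shift, tau))

        def locked_states(tau, sign):
            """Zeros of G(phi) = H(-phi) - H(phi); stable where the slope of G is negative."""
            phis = np.linspace(0.0, T, 400, endpoint=False)
            G = np.array([H(-p, tau, sign) - H(p, tau, sign) for p in phis])
            states = []
            for i in range(len(phis)):
                j = (i + 1) % len(phis)
                if G[i] == 0.0 or G[i] * G[j] < 0.0:    # sign change marks a fixed point
                    slope = (G[j] - G[i]) / (T / len(phis))
                    states.append((round(float(phis[i]), 3), "stable" if slope < 0 else "unstable"))
            return states

        for tau in (0.02, 0.2):                         # fast vs slow synapse; sign=+1 is excitatory
            print(f"tau = {tau}: {locked_states(tau, sign=+1)}")

    The branches printed here reflect the assumed PRC, not the McKean-specific results quoted above; the point is only how fast versus slow kernels reshape $H$, and how flipping the sign (inhibition) reverses the stability of the branches, in the spirit of the remark above.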

    Stochastic Representations of Ion Channel Kinetics and Exact Stochastic Simulation of Neuronal Dynamics

    In this paper we provide two representations for stochastic ion channel kinetics, and compare the performance of exact simulation with a commonly used numerical approximation strategy. The first representation we present is a random time change representation, popularized by Thomas Kurtz, with the second being analogous to a "Gillespie" representation. Exact stochastic algorithms are provided for the different representations, which are preferable to either (a) fixed time step or (b) piecewise constant propensity algorithms, which still appear in the literature. As examples, we provide versions of the exact algorithms for the Morris-Lecar conductance-based model, and detail the error induced, in both a weak and a strong sense, by the use of approximate algorithms on this model. We include ready-to-use implementations of the random time change algorithm in both XPP and Matlab. Finally, through the consideration of parametric sensitivity analysis, we show how the representations presented here are useful in the development of further computational methods. The general representations and simulation strategies provided here are known in other parts of the sciences, but less so in the present setting. Comment: 39 pages, 6 figures, appendix with XPP and Matlab code.
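    The simplest concrete instance of an exact, "Gillespie"-type simulation is a population of two-state channels at a clamped voltage, where the propensities really are constant between events. The sketch below implements the direct method for that case; the rate functions and numbers are illustrative, and it deliberately sidesteps the harder problem treated in the paper, namely exact simulation while the voltage itself evolves between channel events.

        import numpy as np

        rng = np.random.default_rng(1)

        # Two-state channel, C <-> O, with voltage-dependent rates (illustrative forms,
        # not the Morris-Lecar gating functions used in the paper).
        def alpha(V):            # opening rate, 1/ms
            return 0.1 * np.exp((V + 30.0) / 20.0)

        def beta(V):             # closing rate, 1/ms
            return 0.1 * np.exp(-(V + 30.0) / 20.0)

        def gillespie_channels(N=100, V=-20.0, t_end=50.0, n_open=0):
            """Exact (direct-method) simulation of N two-state channels at a clamped voltage.

            Returns the event times and the piecewise-constant open-channel count."""
            t, times, opens = 0.0, [0.0], [n_open]
            while True:
                a_open = alpha(V) * (N - n_open)        # propensity of a C -> O transition
                a_close = beta(V) * n_open              # propensity of an O -> C transition
                a_total = a_open + a_close
                t += rng.exponential(1.0 / a_total)     # exact exponential waiting time
                if t > t_end:
                    break
                n_open += 1 if rng.random() < a_open / a_total else -1
                times.append(t)
                opens.append(n_open)
            return np.array(times), np.array(opens)

        times, opens = gillespie_channels()
        p_inf = alpha(-20.0) / (alpha(-20.0) + beta(-20.0))
        print(f"final open fraction {opens[-1] / 100:.2f} (steady-state prediction {p_inf:.2f})")

    The random time change representation writes the same jump process as $X(t) = X(0) + \sum_k \zeta_k Y_k\big(\int_0^t a_k(X(s))\,ds\big)$ with independent unit-rate Poisson processes $Y_k$, the form the paper exploits for its exact algorithms and for parametric sensitivity analysis.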

    Stimulus competition by inhibitory interference

    When two stimuli are present in the receptive field of a V4 neuron, the firing-rate response lies between the weakest and strongest responses elicited by each of the stimuli alone (Reynolds et al., 1999, Journal of Neuroscience 19:1736-1753). When attention is directed towards the stimulus eliciting the strongest response (the preferred stimulus), the response to the pair is increased, whereas the response decreases when attention is directed to the other stimulus (the poor stimulus). These experimental results were reproduced in a model of a V4 neuron under the assumption that attention modulates the activity of local interneuron networks. The V4 model neuron received stimulus-specific asynchronous excitation from V2 and synchronous inhibitory inputs from two local interneuron networks in V4. Each interneuron network was driven by stimulus-specific excitatory inputs from V2 and was modulated by a projection from the frontal eye fields. Stimulus competition was present because of a delay in the arrival time of synchronous volleys from each interneuron network. For small delays, the firing rate was close to the rate elicited by the preferred stimulus alone, whereas for larger delays it approached the firing rate of the poor stimulus. When either stimulus was presented alone, the neuron's response was not altered by the change in delay. The model suggests that top-down attention biases the competition between V2 columns for control of V4 neurons by changing the relative timing of inhibition rather than by changing the degree of synchrony of the interneuron networks. The mechanism proposed here for attentional modulation of firing rate - gain modulation by inhibitory interference - is likely to have more general applicability to cortical information processing. Comment: 20 pages, 7 figures, 1 table.
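    The timing mechanism can be illustrated with a toy calculation, shown below: two interneuron networks each deliver one synchronous inhibitory volley per gamma cycle, the second delayed by d relative to the first, and what matters for the downstream V4 cell is how much of each cycle is left with weak inhibition. The pulse shape, the gamma period, and the "effective inhibition" threshold are all assumptions for the example, not parameters of the published model.

        import numpy as np

        # One gamma cycle; each volley is an alpha-like conductance pulse (illustrative).
        P, tau_d, g_peak, g_block = 25.0, 3.0, 1.0, 0.2      # ms, ms, arb. units, arb. units
        t = np.linspace(0.0, P, 2501)

        def volley(t0):
            """Conductance from one volley starting at t0, wrapped onto a single cycle."""
            u = (t - t0) % P
            return g_peak * (u / tau_d) * np.exp(1.0 - u / tau_d)

        for d in (0.0, 2.0, 5.0, 10.0):
            g_total = volley(0.0) + volley(d)
            window = np.mean(g_total < g_block) * P          # ms per cycle with weak inhibition
            print(f"delay {d:4.1f} ms -> disinhibited window ~ {window:4.1f} ms per cycle")

    As the delay grows the two volleys stop overlapping and the disinhibited window shrinks; in the full conductance-based model this is what slides the firing rate from the preferred toward the poor response, while a single volley per cycle (one stimulus alone) is unaffected by the delay.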

    Minimal Size of Cell Assemblies Coordinated by Gamma Oscillations

    In networks of excitatory and inhibitory neurons with mutual synaptic coupling, specific drive to sub-ensembles of cells often leads to gamma-frequency (25–100 Hz) oscillations. When the number of driven cells is too small, however, the synaptic interactions may not be strong or homogeneous enough to support the mechanism underlying the rhythm. Using a combination of computational simulation and mathematical analysis, we study the breakdown of gamma rhythms as the driven ensembles become too small, or the synaptic interactions become too weak and heterogeneous. Heterogeneities in drives or synaptic strengths play an important role in the breakdown of the rhythms; nonetheless, we find that the analysis of homogeneous networks yields insight into the breakdown of rhythms in heterogeneous networks. In particular, if parameter values are such that, in a homogeneous network, it takes several gamma cycles to converge to synchrony, then in a similar, but realistically heterogeneous network, synchrony breaks down altogether. This leads to the surprising conclusion that in a network with realistic heterogeneity, gamma rhythms based on the interaction of excitatory and inhibitory cell populations must arise either rapidly, or not at all. For given synaptic strengths and heterogeneities, there is a (soft) lower bound on the possible number of cells in an ensemble oscillating at gamma frequency, based simply on the requirement that synaptic interactions between the two cell populations be strong enough. This observation suggests explanations for recent experimental results concerning the modulation of gamma oscillations in macaque primary visual cortex by varying spatial stimulus size or attention level, and for our own experimental results, reported here, concerning the optogenetic modulation of gamma oscillations in kainate-activated hippocampal slices. We make specific predictions about the behavior of pyramidal cells and fast-spiking interneurons in these experiments. Funding: Collaborative Research in Computational Neuroscience; National Institutes of Health (U.S.) (grant 1R01 NS067199); National Institutes of Health (U.S.) (grant DMS 0717670); National Institutes of Health (U.S.) (grant 1R01 DA029639); National Institutes of Health (U.S.) (grant 1RC1 MH088182); National Institutes of Health (U.S.) (grant DP2OD002002); Paul G. Allen Family Foundation; Google (Firm).
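    The "soft" lower bound can be illustrated with arithmetic alone: if each driven pyramidal cell contacts a given fast-spiking interneuron with probability p through a unitary conductance g_unit, the interneuron sees a summed drive of roughly N * p * g_unit, and requiring this to exceed some recruitment level g_min gives N of order g_min / (p * g_unit). The numbers in the sketch below are placeholders, not values from the paper.

        import math

        # Back-of-the-envelope version of the soft lower bound on ensemble size.
        p = 0.1        # connection probability, driven pyramidal cell -> interneuron
        g_unit = 0.05  # unitary E->I peak conductance (illustrative units)
        g_min = 2.0    # summed E->I conductance needed to recruit the interneurons

        N_min = math.ceil(g_min / (p * g_unit))
        print(f"minimal ensemble size on this estimate: about {N_min} driven cells")

        # Heterogeneity only makes things worse: if just a fraction f of the nominal
        # drive is effective, the bound scales up by 1/f.
        for f in (1.0, 0.5, 0.25):
            print(f"effective fraction {f:4.2f} -> about {math.ceil(g_min / (f * p * g_unit))} cells")

    Weakening or thinning the excitatory-to-inhibitory synapses (smaller g_unit or p) raises the minimal ensemble size, which is the sense in which the bound is set by the strength of the interaction between the two populations.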

    Uncertainty propagation in neuronal dynamical systems

    One of the most notorious characteristics of neuronal electrical activity is its variability, whose origin is not just instrumentation noise but mainly the intrinsically stochastic nature of neural computations. Neuronal models based on deterministic differential equations cannot account for such variability, but they can be extended to do so by incorporating random components. However, the computational cost of this strategy and its storage requirements grow exponentially with the number of stochastic parameters, quickly exceeding the capacities of current supercomputers. This issue is critical in neurodynamics, where mechanistic interpretation of large, complex, nonlinear systems is essential. In this paper we present accurate and computationally efficient methods to introduce and analyse variability in neurodynamic models that depend on multiple uncertain parameters. Their use is illustrated with relevant examples.
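    For orientation, the brute-force way to propagate parameter uncertainty is plain Monte Carlo: sample the uncertain parameters, integrate the model once per sample, and summarise the outputs. The sketch below does this for a FitzHugh-Nagumo cell with an uncertain drive current; the model choice, parameter range, and output feature are assumptions for illustration, and this sampling baseline is the kind of costly computation the paper's methods are meant to make more efficient.

        import numpy as np

        rng = np.random.default_rng(2)

        # Monte Carlo propagation of an uncertain drive current through a
        # FitzHugh-Nagumo model; the summarised feature is the spike count.
        n, dt, t_end = 200, 0.01, 200.0
        I = rng.uniform(0.3, 0.7, size=n)      # uncertain parameter (illustrative range)
        v = np.full(n, -1.0)
        w = np.zeros(n)
        spikes = np.zeros(n)
        above = np.zeros(n, dtype=bool)

        for _ in range(int(t_end / dt)):       # forward Euler, all samples in parallel
            dv = v - v**3 / 3.0 - w + I
            dw = 0.08 * (v + 0.7 - 0.8 * w)
            v += dt * dv
            w += dt * dw
            now_above = v > 1.0
            spikes += (~above) & now_above     # count upward crossings of v = 1
            above = now_above

        print(f"spike count over {t_end:g} time units: mean {spikes.mean():.1f}, "
              f"std {spikes.std():.1f}, range [{spikes.min():.0f}, {spikes.max():.0f}]")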