18 research outputs found

    Dynamics of Coupled Noisy Neural Oscillators with Heterogeneous Phase Resetting Curves

    Pulse-coupled phase oscillators have been utilized in a variety of contexts. Motivated by neuroscience, we study a network of pulse-coupled phase oscillators receiving independent and correlated noise. An additional physiological attribute, heterogeneity, is incorporated in the phase resetting curve (PRC), which is a vital entity for modeling the biophysical dynamics of oscillators. An accurate probability density or mean field description is high-dimensional, requiring reduction methods for tractability. We present a reduction method to capture the pairwise synchrony via the probability density of the phase differences, and explore the robustness of the method. We find the reduced method can capture some of the synchronous dynamics in these networks. The variance of the noisy period (or spike times) in this network is also considered. In particular, we find phase oscillators with predominantly positive PRCs (type 1) have larger variance with inhibitory pulse-coupling than PRCs with a larger negative region (type 2), but with excitatory pulse-coupling the opposite holds: type 1 oscillators have lower variability than type 2. Analysis of this phenomenon is provided via an asymptotic approximation with weak noise and weak coupling, where we demonstrate how the individual PRC alters variability with pulse-coupling. We make comparisons of the phase oscillators to full oscillator networks and discuss the utility and shortcomings of the phase oscillator approximation.
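
    A minimal simulation sketch of this model class may help fix ideas. It is not the paper's exact model: the PRC shapes, noise strengths, and coupling constant below are illustrative assumptions.

    import numpy as np

    # Sketch: N pulse-coupled noisy phase oscillators with heterogeneous PRCs.
    # Each phase obeys dtheta_i = omega*dt + Z_i(theta_i)*(common + independent noise),
    # plus a pulse through Z_i whenever another oscillator crosses 2*pi.
    rng = np.random.default_rng(0)

    N, dt, T = 20, 1e-3, 200.0
    omega = 1.0                       # intrinsic frequency
    sigma_c, sigma_u = 0.1, 0.05      # common / independent noise strengths
    g = 0.02                          # pulse-coupling strength (excitatory if > 0)

    # Heterogeneity: each oscillator interpolates between a type 1 PRC (1 - cos,
    # mostly positive) and a type 2 PRC (sin, with a negative region).
    kappa = rng.uniform(0.0, 1.0, N)

    def prc(theta, k):
        return k * (1.0 - np.cos(theta)) + (1.0 - k) * np.sin(theta)

    theta = rng.uniform(0.0, 2 * np.pi, N)
    for _ in range(int(T / dt)):
        dWc = np.sqrt(dt) * rng.standard_normal()     # shared noise increment
        dWu = np.sqrt(dt) * rng.standard_normal(N)    # independent noise increments
        theta = theta + omega * dt + prc(theta, kappa) * (sigma_c * dWc + sigma_u * dWu)
        spikes = theta >= 2 * np.pi                   # oscillators that fired this step
        if spikes.any():
            kick = g * spikes.sum() * prc(theta, kappa)
            kick[spikes] = 0.0                        # a spiking oscillator does not kick itself
            theta = np.mod(theta + kick, 2 * np.pi)

    # Pairwise synchrony summarized by the Kuramoto order parameter
    R = np.abs(np.mean(np.exp(1j * theta)))
    print(f"order parameter R = {R:.3f}")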

    Dispersal and noise: Various modes of synchrony in ecological oscillators

    We use the theory of noise-induced phase synchronization to analyze the effects of dispersal on the synchronization of a pair of predator-prey systems within a fluctuating environment (Moran effect). Assuming that each isolated local population acts as a limit cycle oscillator in the deterministic limit, we use phase reduction and averaging methods to derive a Fokker–Planck equation describing the evolution of the probability density for pairwise phase differences between the oscillators. In the case of common environmental noise, the oscillators ultimately synchronize. However, the approach to synchrony depends on whether or not dispersal in the absence of noise supports any stable asynchronous states. We also show how the combination of correlated (shared) and uncorrelated (unshared) noise with dispersal can lead to a multistable steady-state probability density.
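
    For context, a schematic form from the standard theory (not the paper's exact equation) is the following: for two identical phase-reduced oscillators with phase sensitivity (PRC) $Z$, noise intensity $\sigma^2$, and a fraction $c$ of shared environmental noise, averaging yields a Fokker–Planck equation for the density $\rho(\phi,t)$ of the phase difference $\phi = \theta_1 - \theta_2$,

    \partial_t \rho(\phi,t) = \partial_\phi^2\!\left[ D(\phi)\,\rho(\phi,t) \right],
    \qquad D(\phi) = \sigma^2\left[ h(0) - c\,h(\phi) \right],
    \qquad h(\phi) = \frac{1}{2\pi}\int_0^{2\pi} Z(\theta)\,Z(\theta+\phi)\,d\theta,

    whose stationary solution $\rho_{\mathrm{ss}}(\phi) \propto 1/D(\phi)$ is peaked at $\phi = 0$ whenever $c > 0$; weak dispersal coupling adds a drift term $-\partial_\phi[\Gamma(\phi)\rho]$ that can support additional asynchronous states.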

    Synchronization of stochastic hybrid oscillators driven by a common switching environment

    Many systems in biology, physics and chemistry can be modeled through ordinary differential equations which are piecewise smooth but switch between different states according to a Markov jump process. In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper we suppose that this limit ODE supports a stable limit cycle. We demonstrate that a set of such oscillators can synchronize when they are uncoupled but share the same switching Markov jump process, the latter representing the effect of a common randomly switching environment. We determine the leading order of the Lyapunov coefficient governing the rate of decay of the phase difference in the fast switching limit. The analysis bears some similarities to the classical analysis of synchronization of stochastic oscillators subject to common white noise. However, the discrete nature of the Markov jump process raises some difficulties: in fact we find that the Lyapunov coefficient from the quasi-steady-state approximation differs from the one obtained from a second-order perturbation expansion in the waiting time between jumps. Finally, we demonstrate synchronization numerically in the radial isochron clock model and show that the latter Lyapunov exponent is more accurate.
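
    The setup can be sketched numerically as follows, under illustrative assumptions: a generic phase sensitivity Z is used in place of the paper's radial isochron clock, and the decay rate of the phase difference is estimated from the slope of its logarithm.

    import numpy as np

    # Sketch: two identical phase oscillators driven by the SAME dichotomous Markov
    # switching environment s(t) in {+1, -1}; the slope of log|phase difference|
    # versus time estimates a Lyapunov-type exponent (negative => synchronization).
    rng = np.random.default_rng(1)

    omega, eps = 1.0, 0.2        # intrinsic frequency, environmental modulation strength
    rate = 50.0                  # switching rate of the environment (fast switching)
    dt, T = 1e-3, 500.0
    Z = np.sin                   # assumed phase sensitivity to the switching input

    theta1, theta2, s = 0.0, 0.5, 1.0
    times, log_diffs = [], []
    t = 0.0
    for _ in range(int(T / dt)):
        if rng.random() < rate * dt:          # Markov jump of the shared environment
            s = -s
        theta1 += (omega + eps * s * Z(theta1)) * dt
        theta2 += (omega + eps * s * Z(theta2)) * dt
        t += dt
        d = np.angle(np.exp(1j * (theta1 - theta2)))   # wrapped phase difference
        if abs(d) > 1e-12:
            times.append(t)
            log_diffs.append(np.log(abs(d)))

    lam = np.polyfit(times, log_diffs, 1)[0]
    print(f"estimated decay rate of the phase difference: {lam:.4f}")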

    Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise

    We extend the theory of noise-induced phase synchronization to the case of a neural master equation describing the stochastic dynamics of an ensemble of uncoupled neuronal population oscillators with intrinsic and extrinsic noise. The master equation formulation of stochastic neurodynamics represents the state of each population by the number of currently active neurons, and the state transitions are chosen so that deterministic Wilson-Cowan rate equations are recovered in the mean-field limit. We apply phase reduction and averaging methods to a corresponding Langevin approximation of the master equation in order to determine how intrinsic noise disrupts synchronization of the population oscillators driven by a common extrinsic noise source. We illustrate our analysis by considering one of the simplest networks known to generate limit cycle oscillations at the population level, namely, a pair of mutually coupled excitatory (E) and inhibitory (I) subpopulations. We show how the combination of intrinsic independent noise and extrinsic common noise can lead to clustering of the population oscillators due to the multiplicative nature of both noise sources under the Langevin approximation. Finally, we show how a similar analysis can be carried out for another simple population model that exhibits limit cycle oscillations in the deterministic limit, namely, a recurrent excitatory network with synaptic depression; inclusion of synaptic depression into the neural master equation now generates a stochastic hybrid system.
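
    A generic sketch of such a Langevin-type approximation is given below. It is not the paper's exact system: the parameters are illustrative and may need tuning to sit in the oscillatory (limit cycle) regime, and the intrinsic noise is given the usual 1/sqrt(N) system-size scaling alongside a shared extrinsic noise source.

    import numpy as np

    # Sketch: Wilson-Cowan E-I rate dynamics with multiplicative intrinsic noise that
    # scales like 1/sqrt(N) and a common extrinsic noise source shared by all M
    # (otherwise uncoupled) population oscillators.
    rng = np.random.default_rng(2)

    def f(x):
        return 1.0 / (1.0 + np.exp(-x))        # population activation function

    N, M = 1000, 5                             # population size; number of oscillators
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
    hE, hI = 1.25, 0.0                         # external drives
    tau, dt, T = 1.0, 1e-3, 100.0
    sig_ext = 0.05                             # common extrinsic noise strength

    E = rng.uniform(0.2, 0.4, M)
    I = rng.uniform(0.2, 0.4, M)
    for _ in range(int(T / dt)):
        xc = np.sqrt(dt) * rng.standard_normal()        # common extrinsic increment
        xE = np.sqrt(dt) * rng.standard_normal(M)       # intrinsic increments (independent)
        xI = np.sqrt(dt) * rng.standard_normal(M)
        FE = f(wEE * E - wEI * I + hE)
        FI = f(wIE * E - wII * I + hI)
        # drift + multiplicative intrinsic noise + common (multiplicative) extrinsic noise
        E += (dt / tau) * (FE - E) + np.sqrt((FE + E) / (N * tau)) * xE + sig_ext * E * xc
        I += (dt / tau) * (FI - I) + np.sqrt((FI + I) / (N * tau)) * xI + sig_ext * I * xc
        E, I = np.clip(E, 0.0, 1.0), np.clip(I, 0.0, 1.0)   # keep rates in [0, 1]

    print("final E activities:", np.round(E, 3))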

    Stochastic Synchrony and Phase Resetting Curves: Theory and Applications.

    In this thesis, our goal was to study the phase synchronization between two uncoupled oscillators receiving partially correlated input. Using perturbation methods we obtain a closed-form solution for the steady-state density of phase differences between the two oscillators. Order parameters for synchrony and cross-correlation are used to quantify the degree of stochastic synchronization. We show that oscillators with Type-II phase resetting curves (PRCs) are more prone to stochastic synchronization than those with Type-I PRCs, and that the synchrony in the system can be described by a closed-form expression for the probability distribution of phase differences between the two uncoupled oscillators. We also study the Morris-Lecar, leaky integrate-and-fire, and Wang-Buzsaki interneuron models. Motivated by our theoretical developments, we study synchronization in simple neuronal network models of the olfactory bulb by applying the results from the theoretical studies to spiking neuron models with feedback to qualitatively demonstrate the emergence of self-organized synchrony. Here we again use an abstract model to obtain an expression for the averaged dynamics and compare our predicted solutions with Monte Carlo simulations. We also show that an arbitrary mechanism with a finite-time memory of correlated inputs can cause bistability in such a system. Furthermore, we investigate the rate at which such systems approach their steady-state distribution and show how this rate depends on the shape of the PRC. We obtain an expression for the rate of convergence to the steady-state density of phase differences in a two-oscillator system receiving partially correlated inputs without feedback. To this end, we study the closed-form expression to obtain an approximation using a perturbation technique suited for computing large eigenvalues. It is shown that Type-II PRCs converge to their steady-state density faster than Type-I PRCs and that the rate of convergence depends on the input correlation. Our theoretical and numerical studies suggest a potential mechanism by which asynchronous inhibition may promote and amplify synchronization in systems where the individual actors can be described as general oscillators, at least in certain regimes of their activity, with a possible source of activity-dependent and partially correlated feedback.
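
    The basic Monte Carlo comparison described above can be sketched as follows, with illustrative parameters and the Type-I and Type-II PRCs taken as 1 - cos and sin respectively (assumptions, not the thesis's exact choices).

    import numpy as np

    # Sketch: two UNCOUPLED phase oscillators receiving partially correlated white noise;
    # the synchrony order parameter |<exp(i*(theta1 - theta2))>| is computed after a
    # transient, and is expected to be larger for the Type-II PRC.
    rng = np.random.default_rng(3)

    omega, sigma, c = 1.0, 0.2, 0.5     # frequency, noise strength, input correlation
    dt, T = 1e-3, 500.0

    def order_parameter(prc):
        th1, th2 = 0.0, np.pi           # start far apart
        vals = []
        n_steps = int(T / dt)
        for k in range(n_steps):
            xc = rng.standard_normal()                                # shared input
            n1 = np.sqrt(c) * xc + np.sqrt(1 - c) * rng.standard_normal()
            n2 = np.sqrt(c) * xc + np.sqrt(1 - c) * rng.standard_normal()
            th1 += omega * dt + sigma * np.sqrt(dt) * prc(th1) * n1
            th2 += omega * dt + sigma * np.sqrt(dt) * prc(th2) * n2
            if k > n_steps // 2:        # discard the transient half
                vals.append(np.exp(1j * (th1 - th2)))
        return abs(np.mean(vals))

    print("Type-I  PRC order parameter:", round(order_parameter(lambda t: 1.0 - np.cos(t)), 3))
    print("Type-II PRC order parameter:", round(order_parameter(lambda t: np.sin(t)), 3))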

    Noise enhanced coupling between two oscillators with long-term plasticity

    Spike-timing-dependent plasticity is a fundamental adaptation mechanism of the nervous system. It induces structural changes of synaptic connectivity by regulating the coupling strengths between individual cells depending on their spiking behavior. As a biophysical process, its functioning is constantly subjected to natural fluctuations. We study theoretically the influence of noise on a microscopic level by considering only two coupled neurons. Adopting a phase description for the neurons, we derive a two-dimensional system which describes the averaged dynamics of the coupling strengths. We show that multistability of several coupling configurations is possible, where some configurations are not found in systems without noise. Intriguingly, a strong bidirectional coupling, which is not present in the noise-free situation, can be stabilized by the noise. This means that increased noise, which is normally expected to desynchronize the neurons, can produce an antagonistic response in which the system organizes itself into a state of stronger coupling and counteracts the impact of noise. This mechanism, as well as a high potential for multistability, is also demonstrated numerically for a coupled pair of Hodgkin-Huxley neurons.
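
    A minimal sketch of this setting, using an assumed pair-based STDP rule and illustrative parameters rather than the paper's model, is:

    import numpy as np

    # Sketch: two noisy phase neurons with STDP acting on the two directed coupling
    # strengths g12 (synapse 2 -> 1) and g21 (synapse 1 -> 2). When a neuron spikes
    # (phase crosses 2*pi), the synapse onto it is potentiated and the reverse synapse
    # is depressed, with exponential dependence on the time since the other neuron's
    # last spike.
    rng = np.random.default_rng(4)

    omega1, omega2 = 1.00, 1.05           # slightly detuned intrinsic frequencies
    sigma = 0.3                           # phase noise strength
    A_plus, A_minus, tau_stdp = 0.01, 0.012, 0.5
    dt, T = 1e-3, 300.0
    Z = np.sin                            # assumed PRC mediating the pulse coupling

    th1, th2 = 0.0, 1.0
    g12, g21 = 0.05, 0.05
    last1, last2 = -np.inf, -np.inf       # last spike times
    t = 0.0
    for _ in range(int(T / dt)):
        t += dt
        th1 += omega1 * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        th2 += omega2 * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if th1 >= 2 * np.pi:              # neuron 1 spikes
            th1 -= 2 * np.pi
            last1 = t
            th2 += g12 * 0.0 + g21 * Z(th2)                    # pulse onto neuron 2 via 1 -> 2
            g12 += A_plus * np.exp(-(t - last2) / tau_stdp)    # pre(2)-before-post(1): potentiate 2 -> 1
            g21 -= A_minus * np.exp(-(t - last2) / tau_stdp)   # post(2)-before-pre(1): depress 1 -> 2
        if th2 >= 2 * np.pi:              # neuron 2 spikes
            th2 -= 2 * np.pi
            last2 = t
            th1 += g12 * Z(th1)                                # pulse onto neuron 1 via 2 -> 1
            g21 += A_plus * np.exp(-(t - last1) / tau_stdp)    # pre(1)-before-post(2): potentiate 1 -> 2
            g12 -= A_minus * np.exp(-(t - last1) / tau_stdp)   # post(1)-before-pre(2): depress 2 -> 1
        g12, g21 = np.clip(g12, 0.0, 0.2), np.clip(g21, 0.0, 0.2)

    print(f"final coupling strengths: g12 = {g12:.3f}, g21 = {g21:.3f}")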

    Effects of Spike-Driven Feedback on Neural Gain and Pairwise Correlation

    Both single neuron and neural population spiking statistics, such as firing rate or temporal patterning, are critical aspects of many neural codes. Tremendous experimental and theoretical effort has been devoted to understanding how nonlinear membrane dynamics and ambient synaptic activity determine the gain of single neuron firing rate responses. Furthermore, there is increasing experimental evidence that the same manipulations that affect firing rate gain also modulate the pairwise correlation between neurons. However, there is little understanding of the mechanistic links between rate and correlation modulation. In this thesis, we explore how spike-driven intrinsic feedback co-modulates firing rate gain and spike train correlation. Throughout our study, we focus on excitable LIF neurons subject to Gaussian white noise fluctuations. We first review prior work which develops linear response theory for studying spectral properties of LIF neurons. This theory is used to capture the influence of weak spike-driven feedback in single neuron responses. We introduce a concept of "dynamic spike count gain" and study how this property is affected by intrinsic feedback, comparing theoretical results to simulations of stochastic ODE models. We then expand our scope to a pair of such neurons receiving weakly correlated noisy inputs. Extending previous work, we study the correlation between the spike trains of these neurons, comparing theoretical and simulation results. We observe that firing rate gain modulation from feedback is largely time-scale invariant, while correlation modulation exhibits marked temporal dependence. To discern whether these effects can be solely attributed to firing rate changes, we perform a perturbative analysis to derive conditions for correlation modulation over small time scales beyond that expected from rate modulation. We find that correlation is not purely a function of firing rate change; rather, it is also influenced by sufficiently fast feedback inputs. These results offer a glimpse into the connections between gain and correlation, indicating that attempts to manipulate either property via firing rates will affect both, and that achievability of modulation targets is constrained by the time scale of spike feedback.
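
    A baseline sketch of the correlated-input LIF pair and the windowed spike count correlation is given below; the parameters are illustrative assumptions, and the spike-driven feedback studied in the thesis is not included.

    import numpy as np

    # Sketch: two excitable LIF neurons driven by weakly correlated Gaussian white noise;
    # the spike count correlation is computed over counting windows of different lengths.
    rng = np.random.default_rng(5)

    tau, v_th, v_reset = 20.0, 1.0, 0.0    # membrane time constant (ms), threshold, reset
    mu, sigma, c = 0.9, 0.1, 0.1           # subthreshold mean drive, noise strength, input correlation
    dt, T = 0.1, 50_000.0                  # time step and duration (ms)

    n = int(T / dt)
    spikes1 = np.zeros(n, dtype=bool)
    spikes2 = np.zeros(n, dtype=bool)
    v1 = v2 = 0.0
    for k in range(n):
        xc = rng.standard_normal()                                     # shared input
        x1 = np.sqrt(c) * xc + np.sqrt(1 - c) * rng.standard_normal()
        x2 = np.sqrt(c) * xc + np.sqrt(1 - c) * rng.standard_normal()
        v1 += (mu - v1) * dt / tau + sigma * np.sqrt(dt) * x1
        v2 += (mu - v2) * dt / tau + sigma * np.sqrt(dt) * x2
        if v1 >= v_th:
            v1, spikes1[k] = v_reset, True
        if v2 >= v_th:
            v2, spikes2[k] = v_reset, True

    def count_correlation(window_ms):
        w = int(window_ms / dt)
        m = (n // w) * w
        c1 = spikes1[:m].reshape(-1, w).sum(axis=1)
        c2 = spikes2[:m].reshape(-1, w).sum(axis=1)
        return np.corrcoef(c1, c2)[0, 1]

    for win in (10.0, 100.0, 1000.0):
        print(f"window {win:6.0f} ms: spike count correlation = {count_correlation(win):.3f}")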