
    Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics

    We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N → ∞, where N determines the size of each population, the dynamics is described by deterministic Wilson–Cowan equations. On the other hand, for finite N the dynamics is described by a master equation that determines the probability of spiking activity within each population. We first consider a single excitatory population that exhibits bistability in the deterministic limit. The steady–state probability distribution of the stochastic network has maxima at points corresponding to the stable fixed points of the deterministic network; the relative weighting of the two maxima depends on the system size. For large but finite N, we calculate the exponentially small rate of noise–induced transitions between the resulting metastable states using a Wentzel–Kramers–Brillouin (WKB) approximation and matched asymptotic expansions. We then consider a two-population excitatory/inhibitory network that supports limit cycle oscillations. Using a diffusion approximation, we reduce the dynamics to a neural Langevin equation, and show how the intrinsic noise amplifies subthreshold oscillations (quasicycles).
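
The master equation described above can be sampled directly with Gillespie's stochastic simulation algorithm. The sketch below is a minimal single-population version, assuming an illustrative logistic gain and made-up parameters (N, w, h, alpha are not taken from the paper); its N → ∞ limit is the bistable rate equation dx/dt = -alpha x + f(w x + h):

```python
import math
import random

def gillespie_wc(N=50, w=8.0, h=-3.6, alpha=1.0, T=200.0, seed=1):
    """Gillespie simulation of a birth-death master equation whose
    N -> infinity limit is a bistable Wilson-Cowan rate equation.
    Transitions: k -> k+1 at rate N*f(w*k/N + h) (activation) and
    k -> k-1 at rate alpha*k (deactivation), where k is the number of
    active neurons. The logistic gain f and all parameters are
    illustrative assumptions, not the paper's."""
    rng = random.Random(seed)
    f = lambda u: 1.0 / (1.0 + math.exp(-u))
    t, k = 0.0, 0
    traj = [(t, k)]
    while t < T:
        birth = N * f(w * k / N + h)   # activation propensity
        death = alpha * k              # deactivation propensity
        total = birth + death
        t += rng.expovariate(total)    # waiting time to next event
        k += 1 if rng.random() * total < birth else -1
        traj.append((t, k))
    return traj
```

Long runs of this process dwell near one of the two metastable activity levels and make rare noise-induced jumps between them, at a rate that is exponentially small in N, as in the WKB analysis of the abstract.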

    Stochastic neural field theory and the system-size expansion

    We analyze a master equation formulation of stochastic neurodynamics for a network of synaptically coupled homogeneous neuronal populations each consisting of N identical neurons. The state of the network is specified by the fraction of active or spiking neurons in each population, and transition rates are chosen so that in the thermodynamic or deterministic limit (N → ∞) we recover standard activity–based or voltage–based rate models. We derive the lowest order corrections to these rate equations for large but finite N using two different approximation schemes, one based on the Van Kampen system-size expansion and the other based on path integral methods. Both methods yield the same series expansion of the moment equations, which at O(1/N) can be truncated to form a closed system of equations for the first and second order moments. Taking a continuum limit of the moment equations whilst keeping the system size N fixed generates a system of integrodifferential equations for the mean and covariance of the corresponding stochastic neural field model. We also show how the path integral approach can be used to study large deviation or rare event statistics underlying escape from the basin of attraction of a stable fixed point of the mean–field dynamics; such an analysis is not possible using the system-size expansion since the latter cannot accurately determine exponentially small transitions.
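
For a single population with birth rate N f(w x + h) and death rate N x (x the active fraction), the O(1/N)-truncated moment equations produced by the system-size expansion can be integrated directly. The sketch below assumes this illustrative one-population birth-death scheme with a logistic gain; it is not the paper's general multi-population model:

```python
import math

def moment_closure(x0=0.9, sigma0=0.0, N=100, w=8.0, h=-3.6,
                   dt=1e-3, T=20.0):
    """Euler integration of the O(1/N)-truncated moment equations for
    a single population with birth rate N*f(w*x + h) and death rate
    N*x (illustrative assumptions):
        dx/dt     = f(w x + h) - x
        dSigma/dt = 2 (w f'(w x + h) - 1) Sigma + (f(w x + h) + x)/N
    where x is the mean active fraction and Sigma its variance."""
    f = lambda u: 1.0 / (1.0 + math.exp(-u))
    df = lambda u: f(u) * (1.0 - f(u))   # logistic derivative
    x, sig = x0, sigma0
    for _ in range(int(T / dt)):
        u = w * x + h
        dx = f(u) - x
        dsig = 2.0 * (w * df(u) - 1.0) * sig + (f(u) + x) / N
        x += dt * dx
        sig += dt * dsig
    return x, sig
```

Near a stable fixed point the variance relaxes to a finite O(1/N) value, which is why the truncation closes; near the unstable fixed point it would grow, signalling the breakdown the abstract attributes to exponentially small transitions.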

    Synchronization of stochastic hybrid oscillators driven by a common switching environment

    Many systems in biology, physics and chemistry can be modeled through ordinary differential equations, which are piecewise smooth, but switch between different states according to a Markov jump process. In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper we suppose that this limit ODE supports a stable limit cycle. We demonstrate that a set of such oscillators can synchronize when they are uncoupled but share the same switching Markov jump process. The latter is taken to represent the effect of a common randomly switching environment. We determine the leading order of the Lyapunov coefficient governing the rate of decay of the phase difference in the fast switching limit. The analysis bears some similarities to the classical analysis of synchronization of stochastic oscillators subject to common white noise. However, the discrete nature of the Markov jump process raises some difficulties: in fact we find that the Lyapunov coefficient from the quasi-steady-state approximation differs from the Lyapunov coefficient one obtains from a second order perturbation expansion in the waiting time between jumps. Finally, we demonstrate synchronization numerically in the radial isochron clock model and show that the latter Lyapunov exponent is more accurate.
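
A toy illustration of the mechanism, assuming a pair of phase oscillators with sensitivity function sin(θ) driven by one shared telegraph signal (none of this is the paper's model): two uncoupled oscillators that share the same randomly switching input tend to pull their phases together.

```python
import math
import random

def common_switching_phases(sigma=1.0, omega=1.0, gamma=5.0,
                            dt=1e-3, T=200.0, seed=4):
    """Two UNCOUPLED phase oscillators driven by the SAME telegraph
    (Markov jump) signal s(t) in {-1, +1}, switching at rate gamma:
        dtheta_i/dt = omega + sigma * sin(theta_i) * s(t).
    Returns the wrapped absolute phase difference sampled along the
    run. The sin(theta) sensitivity and all parameters are
    illustrative assumptions, not the paper's radial isochron clock."""
    rng = random.Random(seed)
    th1, th2, s = 0.0, 1.0, 1
    diffs = []
    for n in range(int(T / dt)):
        if rng.random() < gamma * dt:   # Markov jump of the environment
            s = -s
        th1 += (omega + sigma * math.sin(th1) * s) * dt
        th2 += (omega + sigma * math.sin(th2) * s) * dt
        if n % 1000 == 0:
            # wrap the difference to [0, pi]
            d = abs(math.atan2(math.sin(th1 - th2), math.cos(th1 - th2)))
            diffs.append(d)
    return diffs
```

Because both oscillators feel identical jumps, the phase difference contracts on average; the rate of contraction is the Lyapunov coefficient whose two competing approximations the abstract compares.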

    A variational method for analyzing stochastic limit cycle oscillators

    We introduce a variational method for analyzing limit cycle oscillators in $\mathbb{R}^d$ driven by Gaussian noise. This allows us to derive exact stochastic differential equations (SDEs) for the amplitude and phase of the solution, which are accurate over times of order $\exp(Cb\epsilon^{-1})$, where $\epsilon$ is the amplitude of the noise and $b$ the magnitude of decay of transverse fluctuations. Within the variational framework, different choices of the amplitude-phase decomposition correspond to different choices of the inner product on $\mathbb{R}^d$. For concreteness, we take a weighted Euclidean norm, so that the minimization scheme determines the phase by projecting the full solution on to the limit cycle using Floquet vectors. Since there is coupling between the amplitude and phase equations, even in the weak noise limit, there is a small but non-zero probability of a rare event in which the stochastic trajectory makes a large excursion away from a neighborhood of the limit cycle. We use the amplitude and phase equations to bound the probability of it doing this, finding that the typical time the system takes to leave a neighborhood of the oscillator scales as $\exp(Cb\epsilon^{-1})$.
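
The weak-noise regime discussed above can be seen in the simplest concrete oscillator with an explicit amplitude-phase decomposition, the Stuart-Landau model (chosen here as an illustration; it is not the paper's general setup):

```python
import cmath
import math
import random

def stuart_landau_residence(eps=0.05, omega=1.0, beta=0.5,
                            dt=1e-3, T=50.0, seed=2):
    """Euler-Maruyama simulation of a noisy Stuart-Landau oscillator,
        dz = [(1 + i*omega) z - (1 + i*beta)|z|^2 z] dt + eps dW,
    whose deterministic limit cycle is |z| = 1. For small eps the
    trajectory stays in a neighborhood of the cycle for very long
    times; psi = arg(z) - beta*log|z| plays the role of the projected
    (isochronal) phase. Oscillator choice and parameters are
    illustrative assumptions."""
    rng = random.Random(seed)
    z = complex(1.0, 0.0)
    amp = eps * math.sqrt(dt)
    rmin = rmax = 1.0
    for _ in range(int(T / dt)):
        drift = (1 + 1j * omega) * z - (1 + 1j * beta) * abs(z) ** 2 * z
        z += drift * dt + complex(rng.gauss(0.0, amp), rng.gauss(0.0, amp))
        r = abs(z)
        rmin, rmax = min(rmin, r), max(rmax, r)
    psi = cmath.phase(z) - beta * math.log(abs(z))
    return rmin, rmax, psi
```

Over the whole run the radial coordinate stays close to 1: escapes to a macroscopic distance from the cycle are the exponentially rare events the variational bounds control.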

    Front propagation in stochastic neural fields

    We analyse the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive–like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein–Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive.
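
The qualitative difference between the two cases can be reproduced with a minimal Euler-Maruyama simulation, assuming illustrative values for the effective diffusivity D and locking strength kappa (these effective coefficients stand in for the ones the analysis derives from the full neural field):

```python
import math
import random

def wander_variance(kappa=1.0, D=0.1, dt=1e-2, T=10.0,
                    n_paths=1000, seed=7):
    """Compare the variance of front position for a freely propagating
    front (Wiener process: dX = sqrt(2D) dW, variance 2*D*t grows
    linearly) and a stimulus-locked front (Ornstein-Uhlenbeck process:
    dX = -kappa*X dt + sqrt(2D) dW, variance saturates at D/kappa).
    Parameters are illustrative, not derived from the model."""
    rng = random.Random(seed)
    free = [0.0] * n_paths
    locked = [0.0] * n_paths
    amp = math.sqrt(2.0 * D * dt)
    for _ in range(int(T / dt)):
        for i in range(n_paths):
            dw = rng.gauss(0.0, 1.0)          # shared noise increment
            free[i] += amp * dw
            locked[i] += -kappa * locked[i] * dt + amp * dw
    var = lambda xs: sum(x * x for x in xs) / len(xs)
    return var(free), var(locked)
```

At T = 10 the free-front variance is near 2DT = 2, while the locked-front variance has already saturated near D/kappa = 0.1.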

    Neural field model of binocular rivalry waves

    We present a neural field model of binocular rivalry waves in visual cortex. For each eye we consider a one–dimensional network of neurons that respond maximally to a particular feature of the corresponding image such as the orientation of a grating stimulus. Recurrent connections within each one-dimensional network are assumed to be excitatory, whereas connections between the two networks are inhibitory (cross-inhibition). Slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We derive an analytical expression for the speed of a binocular rivalry wave as a function of various neurophysiological parameters, and show how properties of the wave are consistent with the wave–like propagation of perceptual dominance observed in recent psychophysical experiments. In addition to providing an analytical framework for studying binocular rivalry waves, we show how neural field methods provide insights into the mechanisms underlying the generation of the waves. In particular, we highlight the important role of slow adaptation in providing a “symmetry breaking mechanism” that allows waves to propagate.

    The effects of noise on binocular rivalry waves: a stochastic neural field model

    We analyse the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how multiplicative noise in the activity variables leads to a diffusive–like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. The multiplicative noise also renormalizes the mean speed of the wave. We use our analysis to calculate the first passage time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation leads to quenched disorder in the neural fields during propagation of a wave.
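
The inverse Gaussian result can be checked against a toy drift-diffusion model of the wave position (illustrative parameters, not the renormalized speed of the full model): for dX = c dt + sqrt(2D) dW, the first passage time to X = L is inverse Gaussian with mean L/c and variance 2DL/c^3.

```python
import math
import random

def fpt_samples(c=1.0, D=0.25, L=5.0, dt=1e-3, n=400, seed=3):
    """Monte Carlo first-passage times for a drift-diffusion caricature
    of the wave position, dX = c dt + sqrt(2D) dW, to reach X = L.
    The exact FPT density is inverse Gaussian with mean L/c and
    variance 2*D*L/c**3. All parameters are illustrative."""
    rng = random.Random(seed)
    amp = math.sqrt(2.0 * D * dt)
    out = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < L:                       # run until the level is hit
            x += c * dt + amp * rng.gauss(0.0, 1.0)
            t += dt
        out.append(t)
    return out
```

With these parameters the sample mean should sit near L/c = 5, up to Monte Carlo error of order (std of the IG) / sqrt(n) ≈ 0.08.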

    Random intermittent search and the tug-of-war model of motor-driven transport

    We formulate the tug-of-war model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor-complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off of the microtubule so that the state of the motor-complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman-Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target somewhere on the microtubule track. Using a quasi-steady state (QSS) reduction technique, we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model), and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model).
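
A heavily simplified stand-in for this search process: Monte Carlo estimation of the MFPT for a cargo that alternates ballistic phases with stationary search phases and can only detect the hidden target while searching inside it. The three-state kinetics and all rates below are illustrative assumptions, not the tug-of-war CK equation of the paper:

```python
import random

def search_mfpt(v=1.0, alpha=1.0, beta=2.0, k=5.0,
                L=10.0, target=(4.0, 4.5), dt=2e-3,
                n=100, seed=11):
    """Monte Carlo MFPT for a minimal intermittent search on [0, L]
    with reflecting boundaries. The cargo moves at +v or -v, leaves a
    ballistic phase at rate alpha, leaves the stationary search phase
    at rate beta (choosing a fresh direction), and detects the target
    at rate k only while searching inside [target[0], target[1]].
    A toy caricature, not the paper's QSS-reduced model."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        state = rng.choice((1, -1))            # start in a moving phase
        while True:
            if state == 0:                     # stationary search phase
                if target[0] <= x <= target[1] and rng.random() < k * dt:
                    break                      # target found
                if rng.random() < beta * dt:
                    state = rng.choice((1, -1))
            else:                              # ballistic phase
                x += state * v * dt
                if x < 0.0:
                    x, state = -x, 1           # reflect at left end
                elif x > L:
                    x, state = 2 * L - x, -1   # reflect at right end
                if rng.random() < alpha * dt:
                    state = 0
            t += dt
        times.append(t)
    return sum(times) / len(times)
```

Sweeping the rates (which in the full model depend on ATP concentration through the motor kinetics) traces out the MFPT landscape in which the optimum described in the abstract appears.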

    A theory for the alignment of cortical feature maps during development

    We present a developmental model of ocular dominance column formation that takes into account the existence of an array of intrinsically specified cytochrome oxidase blobs. We assume that there is some molecular substrate for the blobs early in development, which generates a spatially periodic modulation of experience–dependent plasticity. We determine the effects of such a modulation on a competitive Hebbian mechanism for the modification of the feedforward afferents from the left and right eyes. We show how alternating left and right eye dominated columns can develop, in which the blobs are aligned with the centers of the ocular dominance columns and receive a greater density of feedforward connections, thus becoming defined extrinsically. More generally, our results suggest that the presence of periodically distributed anatomical markers early in development could provide a mechanism for the alignment of cortical feature maps.
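
A minimal sketch of the competitive mechanism, assuming a Mexican-hat lateral kernel, a sinusoidal plasticity modulation m(x) standing in for the blob array, and hard saturation in place of a proper normalization constraint (none of these choices are the paper's):

```python
import math
import random

def dominance_columns(n=128, gamma=0.5, blob_period=13,
                      dt=0.05, steps=400, seed=5):
    """Toy competitive Hebbian development on a 1D cortex with
    periodic boundaries. d(x) = w_L(x) - w_R(x) is the ocular
    dominance field; it grows under a Mexican-hat lateral kernel J
    (short-range excitation, longer-range inhibition), modulated by
    the periodic plasticity factor m(x) = 1 + gamma*cos(2*pi*x/P),
    and is clipped to [-1, 1] as a crude synaptic saturation.
    All kernels and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    offs = range(-20, 21)
    J = [math.exp(-i * i / 8.0) - 0.5 * math.exp(-i * i / 72.0)
         for i in offs]
    m = [1.0 + gamma * math.cos(2 * math.pi * x / blob_period)
         for x in range(n)]
    d = [0.01 * rng.uniform(-1.0, 1.0) for _ in range(n)]  # small seed
    for _ in range(steps):
        conv = [sum(J[j + 20] * d[(x + j) % n] for j in offs)
                for x in range(n)]
        d = [max(-1.0, min(1.0, d[x] + dt * m[x] * conv[x]))
             for x in range(n)]
    return d
```

The uniform mode decays (the kernel integrates to a negative value) while a finite-wavelength mode grows, so d develops alternating left- and right-eye dominated stripes; the modulation m(x) biases where the stripe centers settle, which is the alignment effect the abstract describes.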
