3 research outputs found

    Macroscopic coherent structures in a stochastic neural network: from interface dynamics to coarse-grained bifurcation analysis

    We study coarse pattern formation in a cellular automaton modelling a spatially extended stochastic neural network. The model, originally proposed by Gong and Robinson [36], is known to support stationary and travelling bumps of localised activity. We pose the model on a ring and study the existence and stability of these patterns in various limits using a combination of analytical and numerical techniques. In a purely deterministic version of the model, posed on a continuum, we construct bumps and travelling waves analytically using standard interface methods from neural field theory. In a stochastic version with Heaviside firing rate, we construct approximate analytical probability mass functions associated with bumps and travelling waves. In the full stochastic model posed on a discrete lattice, where a coarse analytic description is unavailable, we compute patterns and their linear stability using equation-free methods. The lifting procedure used in the coarse time-stepper is informed by the analysis in the deterministic and stochastic limits. In all settings, we identify the synaptic profile as a mesoscopic variable, and the width of the corresponding activity set as a macroscopic variable. Stationary and travelling bumps have similar meso- and macroscopic profiles but different microscopic structure; hence we propose lifting operators which use microscopic motifs to disambiguate between them. We provide numerical evidence that waves are supported by a combination of high synaptic gain and long refractory times, while meandering bumps are elicited by short refractory times.
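The interface construction mentioned in this abstract can be illustrated in the classical Amari neural field with a Heaviside firing rate (an assumption for illustration; the paper's Gong-Robinson automaton differs). There, a stationary bump of width Δ satisfies W(Δ) = θ, where W is the integral of the synaptic kernel and θ the threshold; for the wizard-hat kernel w(x) = (1 - |x|)e^{-|x|} this reduces to Δe^{-Δ} = θ, which has a narrow (unstable) and a wide (stable) root. A minimal sketch, with an illustrative threshold:

```python
import math

def W(delta):
    """Integral of the wizard-hat kernel w(x) = (1 - |x|) exp(-|x|)
    from 0 to delta; closed form: delta * exp(-delta)."""
    return delta * math.exp(-delta)

def bisect(f, a, b, tol=1e-10):
    """Simple bisection root finder for f on [a, b] with a sign change."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m          # root lies in [a, m]
        else:
            a, fa = m, fm  # root lies in [m, b]
        if b - a < tol:
            break
    return 0.5 * (a + b)

theta = 0.2  # firing threshold (illustrative value, not from the paper)
f = lambda d: W(d) - theta
# W peaks at delta = 1, so the two bump widths bracket that point.
narrow = bisect(f, 1e-6, 1.0)  # unstable bump width
wide = bisect(f, 1.0, 10.0)    # stable bump width
print(narrow, wide)
```

The same interface idea (tracking only the points where activity crosses threshold) is what reduces the pattern to the macroscopic width variable described in the abstract.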

    Firing rate and spatial correlation in a stochastic neural field model

    This paper studies a stochastic neural field model that extends the one in our previous paper [14]. The neural field model consists of many heterogeneous local populations of neurons. Rigorous results on stochastic stability are proved, which in turn imply the well-definedness of quantities including the mean firing rate and the spike count correlation. We then address two main topics: the comparison with mean-field approximations and the spatial correlation of spike counts. We show that partial synchronization of spiking activity is a main cause of discrepancies in mean-field approximations. Furthermore, the spike count correlation between local populations is studied; we find that it decays quickly with the distance between the corresponding populations. Mathematical justification of the mechanism behind this phenomenon is also provided.
    Comment: second draft
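The distance-dependent decay of spike count correlation can be sketched with a toy shared-input model (an illustrative assumption, not the paper's mechanism): two populations receive a common drive whose weight decays exponentially with their separation, and the empirical Pearson correlation of their count fluctuations decays accordingly.

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Empirical Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

def count_correlation(distance, n_trials=20000, lam=1.0):
    """Correlation of spike count fluctuations of two populations that
    share a common drive weighted by exp(-distance / lam).
    The expected correlation is rho**2 = exp(-2 * distance / lam)."""
    rho = math.exp(-distance / lam)
    priv = math.sqrt(1 - rho ** 2)  # weight of private fluctuations
    xs, ys = [], []
    for _ in range(n_trials):
        shared = random.gauss(0, 1)  # common input fluctuation
        xs.append(rho * shared + priv * random.gauss(0, 1))
        ys.append(rho * shared + priv * random.gauss(0, 1))
    return pearson(xs, ys)

near = count_correlation(1.0)  # populations one length constant apart
far = count_correlation(5.0)   # populations five length constants apart
print(near, far)
```

Here `lam` plays the role of a spatial correlation length; nearby populations inherit more common input and hence show larger count correlation.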

    Autonomous learning of nonlocal stochastic neuron dynamics

    Neuronal dynamics is driven by externally imposed or internally generated random excitations/noise, and is often described by systems of stochastic ordinary differential equations. A solution to these equations is the joint probability density function (PDF) of neuron states. It can be used to calculate such information-theoretic quantities as the mutual information between the stochastic stimulus and various internal states of the neuron (e.g., membrane potential), as well as various spiking statistics. When random excitations are modeled as Gaussian white noise, the joint PDF of neuron states satisfies a Fokker-Planck equation exactly. However, most biologically plausible noise sources are correlated (colored). In this case, the resulting PDF equations require a closure approximation. We propose two methods for closing such equations: a modified nonlocal large-eddy-diffusivity closure and a data-driven closure relying on sparse regression to learn relevant features. The closures are tested for stochastic leaky integrate-and-fire (LIF) and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual information and total correlation between the random stimulus and the internal states of the neuron are calculated for the FHN neuron.
    Comment: 26 pages, 12 figures, First author: Tyler E. Maltba, Corresponding author: Daniel M. Tartakovsky
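A minimal Euler-Maruyama sketch of the kind of system studied here: a LIF neuron driven by sine-Wiener noise, the bounded colored noise η(t) = B sin(√(2/τ) W(t)) built from a Wiener path W(t). All parameter values are illustrative, not taken from the paper, and the paper's closure methods are not reproduced.

```python
import math
import random

random.seed(42)

# Illustrative parameters (assumed, not from the paper)
dt, T = 1e-4, 1.0            # time step and horizon [s]
tau_m, v_rest = 0.02, 0.0    # membrane time constant [s], rest potential
v_th, v_reset = 1.0, 0.0     # spike threshold and reset potential
I = 1.5                      # constant suprathreshold drive
B, tau_n = 0.3, 0.1          # sine-Wiener amplitude and correlation time

v, w = v_rest, 0.0           # membrane potential and driving Wiener path
spikes = []
for k in range(int(T / dt)):
    w += math.sqrt(dt) * random.gauss(0, 1)         # Wiener increment
    eta = B * math.sin(math.sqrt(2.0 / tau_n) * w)  # bounded colored noise
    v += dt * (-(v - v_rest) + I + eta) / tau_m     # Euler-Maruyama step
    if v >= v_th:                                   # threshold crossing
        spikes.append(k * dt)
        v = v_reset
print(len(spikes))
```

Because |η| ≤ B, the total drive stays suprathreshold here, so the neuron fires regularly while the noise jitters the spike times; it is exactly this colored, non-Gaussian character of η that forces the closure approximations discussed in the abstract.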