
    Coarse-grained dynamics of an activity bump in a neural field model

    We study a stochastic nonlocal PDE, arising in the context of modelling spatially distributed neural activity, which is capable of sustaining stationary and moving spatially localized "activity bumps". This system is known to undergo a pitchfork bifurcation in bump speed as a parameter (the strength of adaptation) is changed; increasing the noise intensity, however, effectively slows the motion of the bump. Here we revisit the system from the point of view of describing the high-dimensional stochastic dynamics in terms of the effective dynamics of a single scalar "coarse" variable. We show that such a reduced description, in the form of an effective Langevin equation characterized by a double-well potential, is quantitatively successful. The effective potential can be extracted using short, appropriately initialized bursts of direct simulation. We demonstrate this approach in terms of (a) an experience-based "intelligent" choice of the coarse observable and (b) an observable obtained through data-mining direct simulation results, using a diffusion map approach.
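The effective Langevin equation with a double-well potential described above can be sketched with a minimal Euler-Maruyama integration. The quartic potential and all parameter values here are illustrative assumptions, not quantities extracted from the neural field model:

```python
import numpy as np

# Euler-Maruyama sketch of the reduced description: a scalar coarse variable X
# moves in a double-well potential V(x) = x**4/4 - a*x**2/2, so drift = -V'(x).
# Parameters are illustrative, not fitted to the neural field model.
def simulate_langevin(a=1.0, sigma=0.3, dt=1e-3, n_steps=50_000, x0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    traj = np.empty(n_steps + 1)
    traj[0] = x0
    for k in range(n_steps):
        drift = a * traj[k] - traj[k] ** 3   # -V'(x)
        traj[k + 1] = (traj[k] + drift * dt
                       + sigma * np.sqrt(dt) * rng.standard_normal())
    return traj

traj = simulate_langevin()  # for weak noise, settles near one well at x = ±sqrt(a)
```

In the reverse direction, the paper's procedure estimates the drift and diffusion (and hence the potential) from many such short bursts started at prescribed values of the coarse variable.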

    Stochastic models of evidence accumulation in changing environments

    Organisms and ecological groups accumulate evidence to make decisions. Classic experiments and theoretical studies have explored this process when the correct choice is fixed during each trial. However, we live in a constantly changing world. What effect does such impermanence have on classical results about decision making? To address this question we use sequential analysis to derive a tractable model of evidence accumulation when the correct option changes in time. Our analysis shows that ideal observers discount prior evidence at a rate determined by the volatility of the environment, and the dynamics of evidence accumulation are governed by the information gained over an average environmental epoch. A plausible neural implementation of an optimal observer in a changing environment shows that, in contrast to previous models, neural populations representing alternate choices are coupled through excitation. Our work builds a bridge between statistical decision making in volatile environments and stochastic nonlinear dynamics.
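The discounting of prior evidence by an ideal observer can be illustrated with the standard normative update for a two-alternative task whose correct option switches with hazard rate h per observation. The function names and the Gaussian-evidence demo below are illustrative, not taken from the paper:

```python
import numpy as np

def discounted_prior(L, h):
    # Nonlinear discounting of the prior log odds L: with h = 0 it is the
    # identity, and for large |L| it saturates at +/- log((1 - h) / h),
    # so stale evidence stops accumulating in a volatile environment.
    return L + np.log((1 - h) + h * np.exp(-L)) - np.log((1 - h) + h * np.exp(L))

def accumulate(observations, llr, h):
    # llr maps one observation to its log-likelihood ratio.
    L = 0.0
    for obs in observations:
        L = llr(obs) + discounted_prior(L, h)
    return L

# Gaussian evidence with means +/-1 and unit variance has llr(x) = 2x.
L_static = accumulate([1.0, -0.5, 2.0], lambda x: 2 * x, h=0.0)  # plain sum of LLRs
L_volatile = accumulate([1.0] * 100, lambda x: 2 * x, h=0.2)     # belief saturates
```

With h = 0 the rule reduces to perfect integration; with h > 0 the posterior log odds saturate, which is the discounting behavior the abstract describes.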

    Similarity Effect and Optimal Control of Multiple-Choice Decision Making

    Decision making with several choice options is central to cognition. To elucidate the neural mechanisms of such decisions, we investigated a recurrent cortical circuit model in which fluctuating spiking neural dynamics underlie trial-by-trial stochastic decisions. The model encodes a continuous analog stimulus feature and is thus applicable to multiple-choice decisions. Importantly, the continuous network captures similarity between alternatives and possible overlaps in their neural representation. Model simulations accounted for behavioral as well as single-unit neurophysiological data from a recent monkey experiment and revealed testable predictions about the patterns of error rate as a function of the similarity between the correct and actual choices. We also found that the similarity and number of options affect speed and accuracy of responses. A mechanism is proposed for flexible control of speed-accuracy tradeoff, based on a simple top-down signal to the decision circuit that may vary nonmonotonically with the number of choice alternatives.
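The speed-accuracy tradeoff under top-down control can be caricatured with a simple racing-accumulator model. This is not the paper's spiking circuit, and all parameters are illustrative; the point is only that a top-down signal raising the decision threshold slows responses while improving accuracy:

```python
import numpy as np

def race_trial(n_options=4, drift_correct=1.0, drift_other=0.8,
               sigma=1.0, threshold=3.0, dt=0.01, seed=None):
    # Option 0 is "correct" and has the largest drift; the first
    # accumulator to reach threshold determines the choice and the RT.
    rng = np.random.default_rng(seed)
    drifts = np.full(n_options, drift_other)
    drifts[0] = drift_correct
    x = np.zeros(n_options)
    t = 0.0
    while x.max() < threshold:
        x += drifts * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_options)
        t += dt
    return int(np.argmax(x)), t

fast = [race_trial(threshold=3.0, seed=s) for s in range(200)]
slow = [race_trial(threshold=12.0, seed=s) for s in range(200)]
acc_fast = np.mean([w == 0 for w, _ in fast])
acc_slow = np.mean([w == 0 for w, _ in slow])
rt_fast = np.mean([t for _, t in fast])
rt_slow = np.mean([t for _, t in slow])
```

In the circuit model the control signal is an input to the decision network rather than an explicit threshold, but the behavioral consequence sketched here is the same.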

    Distinct Sources of Deterministic and Stochastic Components of Action Timing Decisions in Rodent Frontal Cortex

    The selection and timing of actions are subject to determinate influences such as sensory cues and internal state as well as to effectively stochastic variability. Although stochastic choice mechanisms are assumed by many theoretical models, their origin and mechanisms remain poorly understood. Here we investigated this issue by studying how neural circuits in the frontal cortex determine action timing in rats performing a waiting task. Electrophysiological recordings from two regions necessary for this behavior, medial prefrontal cortex (mPFC) and secondary motor cortex (M2), revealed an unexpected functional dissociation. Both areas encoded deterministic biases in action timing, but only M2 neurons reflected stochastic trial-by-trial fluctuations. This differential coding was reflected in distinct timescales of neural dynamics in the two frontal cortical areas. These results suggest a two-stage model in which stochastic components of action timing decisions are injected by circuits downstream of those carrying deterministic bias signals.

    Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
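The kind of sampling-based inference being approximated can be illustrated with standard Gibbs sampling on a Boltzmann distribution over binary units. Note that the paper's contribution is precisely to replace this reversible chain with a non-reversible one compatible with spiking dynamics; this sketch only shows the target computation:

```python
import numpy as np

def gibbs_sample(W, b, n_steps=20_000, burn_in=2_000, seed=0):
    # Gibbs sampler for p(z) ∝ exp(b·z + z·W·z / 2) over z in {0,1}^n,
    # with W symmetric and zero on the diagonal.
    rng = np.random.default_rng(seed)
    n = len(b)
    z = rng.integers(0, 2, size=n)
    samples = []
    for step in range(n_steps):
        for i in range(n):
            # Conditional probability that unit i is on, given the others.
            p_on = 1.0 / (1.0 + np.exp(-(b[i] + W[i] @ z)))
            z[i] = rng.random() < p_on
        if step >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

# Two mutually excitatory units: marginals stay near 1/2 by symmetry, but the
# coupling makes the units co-active more often than independent units would be.
S = gibbs_sample(np.array([[0.0, 2.0], [2.0, 0.0]]), np.array([-1.0, -1.0]))
```

The objection raised in the abstract is that this update requires each unit to freeze while another is resampled, which spiking neurons cannot do; the non-reversible construction removes that requirement.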

    Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity-reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.
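The shielding idea can be sketched on a toy 3-state channel C1 <-> C2 <-> O with unit rates, where the measurement functional is the fraction of channels in the conducting state O. In a chemical-Langevin simulation, shielding drops the noise term of the C1<->C2 edge because both of its endpoints are non-conducting; the graph, rates, and population size here are illustrative, not from the paper:

```python
import numpy as np

edges = [(0, 1), (1, 2)]   # (i, j) pairs, unit rates in both directions
directions = [np.array([-1.0, 1.0, 0.0]), np.array([0.0, -1.0, 1.0])]
Q = np.array([[-1.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])   # generator of the 3-state chain

def simulate(noisy_edges, N=1000, dt=0.005, n_steps=100_000, burn=4_000, seed=1):
    # Euler-Maruyama for the occupancy fractions x of N channels; only the
    # edges listed in noisy_edges contribute fluctuation terms.
    rng = np.random.default_rng(seed)
    x = np.full(3, 1.0 / 3.0)          # stationary distribution is uniform
    trace = np.empty(n_steps - burn)
    for k in range(n_steps):
        dx = Q.T @ x * dt
        for e in noisy_edges:
            i, j = edges[e]
            amp = np.sqrt(max(x[i] + x[j], 0.0) / N)
            dx += amp * np.sqrt(dt) * rng.standard_normal() * directions[e]
        x = x + dx
        if k >= burn:
            trace[k - burn] = x[2]      # observable: fraction in state O
    return trace

var_full = simulate([0, 1]).var()       # fluctuations on every edge
var_shielded = simulate([1]).var()      # only the edge touching state O
```

For this symmetric toy graph the shielded process recovers most of the stationary variance of the observable (analytically 7/8 of it), which is the kind of edge-importance accounting the paper quantifies.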

    Mean-field equations for stochastic firing-rate neural fields with delays: Derivation and noise-induced transitions

    In this manuscript we analyze the collective behavior of mean-field limits of large-scale, spatially extended stochastic neuronal networks with delays. Rigorously, the asymptotic regime of such systems is characterized by a very intricate stochastic delayed integro-differential McKean-Vlasov equation that remains impenetrable, leaving the stochastic collective dynamics of such networks poorly understood. In order to study these macroscopic dynamics, we analyze networks of firing-rate neurons, i.e. with linear intrinsic dynamics and sigmoidal interactions. In that case, we prove that the solution of the mean-field equation is Gaussian, hence characterized by its first two moments, and that these two quantities satisfy a set of coupled delayed integro-differential equations. These equations are similar to usual neural field equations, and incorporate noise levels as a parameter, allowing analysis of noise-induced transitions. We identify through bifurcation analysis several qualitative transitions due to noise in the mean-field limit. In particular, stabilization of spatially homogeneous solutions, synchronized oscillations, bumps, chaotic dynamics, wave or bump splitting are exhibited and arise from static or dynamic Turing-Hopf bifurcations. These surprising phenomena invite further exploration of the role of noise in the nervous system.
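How a Gaussian mean-field reduction turns noise level into a bifurcation parameter can be seen in a one-population caricature (no space, no delays, so much simpler than the system studied above). With linear intrinsic dynamics, an erf-shaped sigmoid, and x ~ N(mu, v), the identity E[erf(x)] = erf(mu / sqrt(1 + 2v)) closes the moment equations; v relaxes to sigma^2/2, so the mean obeys a scalar ODE in which noise lowers the effective gain. The coupling strength and noise values are illustrative:

```python
from math import erf, sqrt

def stationary_mean(J=2.0, sigma=0.5, mu0=0.5, dt=0.01, n_steps=10_000):
    # Euler integration of d(mu)/dt = -mu + J * erf(mu / sqrt(1 + sigma**2)),
    # the closed mean equation after substituting v = sigma**2 / 2.
    mu = mu0
    for _ in range(n_steps):
        mu += dt * (-mu + J * erf(mu / sqrt(1.0 + sigma**2)))
    return mu

mu_low_noise = stationary_mean(sigma=0.5)   # bistable regime: mu settles away from 0
mu_high_noise = stationary_mean(sigma=3.0)  # noise lowers the gain: mu -> 0
```

Increasing sigma past the point where the effective slope at mu = 0 drops below one destroys the nontrivial fixed points, a noise-induced stabilization of the homogeneous state of the same flavor as the transitions reported in the abstract.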

    Integration of continuous-time dynamics in a spiking neural network simulator

    Contemporary modeling approaches to the dynamics of neural networks consider two main classes of models: biologically grounded spiking neurons and functionally inspired rate-based units. The unified simulation framework presented here supports the combination of the two for multi-scale modeling approaches, the quantitative validation of mean-field approaches by spiking network simulations, and an increase in reliability through use of the same simulation code and the same network model specifications for both model classes. While most efficient spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. We further demonstrate the broad applicability of the framework by considering various examples from the literature ranging from random networks to neural field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
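The core bookkeeping for delayed continuous interactions, buffering each unit's recent activity so that connections read values from time t - d, can be sketched with a ring buffer. This is a didactic sketch of the idea, not the reference implementation described above:

```python
import numpy as np

def delayed_rates(w=0.9, d=1.0, tau=1.0, dt=0.1, T=100.0, x0=1.0):
    # Two linear rate units with mutual delayed coupling:
    #   tau dx/dt = -x + w * y(t - d),   tau dy/dt = -y + w * x(t - d).
    # A ring buffer of length d/dt plays the role of the simulator's
    # per-connection history of transmitted rate values.
    n_delay = int(round(d / dt))
    n_steps = int(round(T / dt))
    buf_x = np.zeros(n_delay)       # history buffers, zero initial history
    buf_y = np.zeros(n_delay)
    x, y = x0, 0.0
    for k in range(n_steps):
        idx = k % n_delay
        x_del, y_del = buf_x[idx], buf_y[idx]   # values from time t - d
        buf_x[idx], buf_y[idx] = x, y           # overwrite with current values
        x += dt / tau * (-x + w * y_del)
        y += dt / tau * (-y + w * x_del)
    return x, y

x_sub, y_sub = delayed_rates(w=0.9)              # subcritical coupling: decay
x_sup, _ = delayed_rates(w=1.2, T=50.0)          # supercritical: growth
```

In a distributed spiking simulator the same buffers are exchanged once per minimum-delay communication interval rather than every step, which is what makes the gap-junction infrastructure a natural fit for rate models.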