
    Neurobiological Models of Two-Choice Decision Making Can Be Reduced to a One-Dimensional Nonlinear Diffusion Equation

    The response behaviors in many two-alternative choice tasks are well described by so-called sequential sampling models. In these models, the evidence for each of the two alternatives accumulates over time until it reaches a threshold, at which point a response is made. At the neurophysiological level, single-neuron data recorded while monkeys are engaged in two-alternative choice tasks are well described by winner-take-all network models in which the two choices are represented in the firing rates of separate populations of neurons. Here, we show that such nonlinear network models can generally be reduced to a one-dimensional nonlinear diffusion equation, which bears functional resemblance to standard sequential sampling models of behavior. This reduction gives the functional dependence of performance and reaction times on external inputs in the original system, irrespective of the system's details. What is more, the nonlinear diffusion equation can provide excellent fits to behavioral data from two-choice decision making tasks by varying these external inputs. This suggests that changes in behavior under various experimental conditions, e.g. changes in stimulus coherence or response deadline, are driven by internal modulation of afferent inputs to putative decision making circuits in the brain. For certain model systems one can analytically derive the nonlinear diffusion equation, thereby mapping the original system parameters onto the diffusion equation coefficients. Here, we illustrate this with three model systems, including coupled rate equations and a network of spiking neurons.
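
    As an illustration of the kind of reduced process described above, the following minimal sketch integrates a one-dimensional nonlinear diffusion (Langevin) equation to one of two decision bounds and reads off choices and reaction times. The cubic drift term, the parameter values, and the function name simulate_trial are illustrative assumptions, not the coefficients or code of the paper.

```python
# Minimal sketch: Euler-Maruyama integration of dx = f(x) dt + sigma dW to one
# of two decision bounds. The cubic (double-well style) drift and all
# parameter values are illustrative assumptions, not the paper's coefficients.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(mu=0.3, sigma=0.6, bound=1.0, dt=1e-3, t_max=5.0):
    """Run one trial; return (choice, reaction_time)."""
    x, t = 0.0, 0.0
    while t < t_max:
        drift = mu + x - x**3                      # assumed nonlinear drift term
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return 1, t                            # choice A
        if x <= -bound:
            return 0, t                            # choice B
    return None, t_max                             # no decision before the deadline

trials = [simulate_trial() for _ in range(1000)]
choices = np.array([c for c, t in trials if c is not None])
rts = np.array([t for c, t in trials if c is not None])
print(f"P(choice A) = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s")
```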

    Time-varying perturbations can distinguish among integrate-to-threshold models for perceptual decision making in reaction time tasks

    Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varying perturbations during integration may distinguish among some of these models. Here, we present computer simulations and mathematical proofs that provide more rigorous comparisons among one-dimensional stochastic differential equation models. Using two perturbation protocols and focusing on the resulting changes in the means and standard deviations of decision times, we show that, for high signal-to-noise ratios, drift-diffusion models with constant and time-varying drift rates can be distinguished from Ornstein-Uhlenbeck processes, but not necessarily from each other. The protocols can also distinguish stable from unstable Ornstein-Uhlenbeck processes, and we show that a nonlinear integrator can be distinguished from these linear models by changes in standard deviations. The protocols can be implemented in behavioral experiments. Comment: 32 pages, 9 figures, 3 tables, accepted for publication in Neural Computation.
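
    The sketch below illustrates one possible perturbation protocol of the kind discussed above: a brief additive pulse is injected into the drift of a one-dimensional stochastic integrator, and the resulting shifts in the mean and standard deviation of decision times are compared between a constant-drift diffusion and an unstable Ornstein-Uhlenbeck process. The pulse timing, amplitudes, and all other parameters are illustrative assumptions rather than the values used in the paper.

```python
# Sketch of a perturbation protocol: inject a brief additive pulse into the
# drift and compare the resulting changes in mean and SD of decision times for
# a constant-drift diffusion versus an unstable Ornstein-Uhlenbeck process.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def decision_times(lam, pulse, n=500, mu=0.4, sigma=0.4,
                   bound=0.8, dt=1e-3, t_max=5.0):
    """lam = 0 gives a drift-diffusion model; lam > 0 an unstable OU process."""
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while t < t_max and abs(x) < bound:
            p = pulse if 0.2 <= t < 0.3 else 0.0   # 100 ms perturbation pulse
            x += (mu + lam * x + p) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return np.array(times)

for name, lam in [("drift-diffusion (lam=0)", 0.0), ("unstable OU (lam=+1)", 1.0)]:
    base, pert = decision_times(lam, 0.0), decision_times(lam, 0.8)
    print(f"{name}: delta mean DT = {pert.mean() - base.mean():+.3f} s, "
          f"delta SD DT = {pert.std() - base.std():+.3f} s")
```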

    Nonparametric Bayesian methods for one-dimensional diffusion models

    In this paper we review recently developed methods for nonparametric Bayesian inference for one-dimensional diffusion models. We discuss different possible prior distributions, computational issues, and asymptotic results.

    One dimensional Fokker-Planck reduced dynamics of decision making models in Computational Neuroscience

    We study a Fokker-Planck equation modelling the firing rates of two interacting populations of neurons. This model arises in computational neuroscience when considering, for example, bistable visual perception problems, and is based on a stochastic Wilson-Cowan system of differential equations. In previous work, the slow-fast behavior of the solution of the Fokker-Planck equation was highlighted. Our aim is to demonstrate that the complexity of the model can be drastically reduced by exploiting this slow-fast structure. In fact, we can derive a one-dimensional Fokker-Planck equation that describes the evolution of the solution along the so-called slow manifold. This permits a direct and efficient determination of the equilibrium state and its effective potential, and thus makes it possible to investigate how they depend on the various parameters of the model. It also gives access to the escape-time behavior. The results obtained for the reduced 1D equation are validated against those of the original 2D equation, both for the equilibrium and for the transient behavior.
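
    As a pointer to what the reduced description buys, the sketch below computes the equilibrium density that a one-dimensional Fokker-Planck equation with drift -V'(x) and constant noise intensity D admits in closed form, p_eq(x) proportional to exp(-V(x)/D). The double-well potential and the value of D are illustrative assumptions, not the effective potential derived from the Wilson-Cowan model in the paper.

```python
# Sketch: the stationary solution of a reduced 1D Fokker-Planck equation with
# drift -V'(x) and constant noise intensity D is p_eq(x) ~ exp(-V(x)/D).
# The double-well potential and D below are illustrative assumptions.
import numpy as np

def equilibrium_density(V, x, D):
    """Normalised stationary density for dp/dt = d/dx[V'(x) p] + D d2p/dx2."""
    p = np.exp(-V(x) / D)
    return p / np.trapz(p, x)

x = np.linspace(-2.0, 2.0, 2001)
V = lambda s: s**4 / 4 - s**2 / 2                  # assumed effective potential
p_eq = equilibrium_density(V, x, D=0.05)
print(f"probability mass in the left well: {np.trapz(p_eq[x < 0], x[x < 0]):.2f}")
```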

    Identifying the neural mechanisms of perceptual decision-making in a rat auditory task

    Bachelor's thesis in Biomedical Engineering. Faculty of Medicine and Health Sciences, Universitat de Barcelona. Academic year: 2020-2021. Directors: Jaime de la Rocha and Genís Prat. Tutor: Agustín Gutiérrez Gálvez. Perceptual decision-making involves the accumulation of sensory information over time. The classical view is that sensory neurons first transform the physical stimulus into evidence, and the decision-making areas of the brain then use this evidence to make a categorical choice. Here, we show that this process is not optimal: in general, rats do not weigh the stimulus uniformly and they underweight extreme stimulus values. These results indicate that either the transformation of the stimulus into evidence is nonlinear or the accumulation of evidence is imperfect. To identify the underlying mechanisms causing these behaviours, we fitted an extended neurobiological model composed of a stimulus-to-evidence module and a decision module. Using model comparison methods, we found that a nonlinear stimulus-to-evidence transformation is needed to explain the psychophysical kernels of the experimental task, but that time adaptation of the sensory neurons has no impact in this case. We also found that, at least for a subset of rats, nonlinear dynamics in the decision module explain the experimental data better than a linear perfect-integration model. Our work shows that model fitting is a powerful tool for investigating different brain mechanisms, and that nonlinear dynamics may matter not only during the accumulation of evidence but also during the stimulus-to-evidence transformation.
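
    A minimal sketch of the two-module structure described above might look as follows: a saturating stimulus-to-evidence transformation feeding a simple accumulator. The tanh nonlinearity, the perfect-integration decision rule, and all parameter values are illustrative stand-ins, not the fitted model from the thesis.

```python
# Sketch: a saturating stimulus-to-evidence transformation followed by a
# simple (perfect-integration) decision module. The tanh nonlinearity and all
# parameters are illustrative stand-ins for the fitted model in the thesis.
import numpy as np

rng = np.random.default_rng(2)

def transform(stimulus_frames, gain=1.5):
    """Nonlinear stimulus -> evidence mapping; saturates for extreme values,
    so extreme frames are underweighted relative to a linear transformation."""
    return np.tanh(gain * stimulus_frames)

def decide(stimulus_frames, noise_sd=0.5):
    """Accumulate noisy transformed evidence and report the sign of the sum."""
    evidence = transform(stimulus_frames) + noise_sd * rng.standard_normal(len(stimulus_frames))
    return int(evidence.sum() > 0)

frames = rng.normal(0.2, 1.0, size=(1000, 20))     # 20 stimulus frames per trial
choices = np.array([decide(f) for f in frames])
print(f"P(rightward choice) = {choices.mean():.2f}")
```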

    The Leaky Integrating Threshold and its impact on evidence accumulation models of choice RT

    A common assumption in choice response time (RT) modeling is that once evidence accumulation reaches a certain decision threshold, the choice is categorically communicated to the motor system, which then executes the response. However, neurophysiological findings suggest that motor preparation partly overlaps with evidence accumulation and is not independent of stimulus difficulty. We propose to model this entanglement by changing the nature of the decision criterion from a simple threshold to an actual process. More specifically, we propose a secondary, motor-preparation-related, leaky accumulation process that takes the accumulated evidence of the original decision process as a continuous input and triggers the actual response when it reaches its own threshold. We develop this Leaky Integrating Threshold (LIT) analytically, apply it to a simple constant-drift diffusion model, and show how its parameters can be estimated with the D*M method. Reanalyzing three different data sets, the LIT extension is shown to outperform a standard drift-diffusion model under multiple statistical approaches. Further, the LIT leak parameter is shown to explain the speed/accuracy trade-off manipulation better than the commonly used boundary-separation parameter. These improvements can also be verified using traditional diffusion-model analyses, for which the LIT predicts the violation of several common selective-parameter-influence assumptions. These predictions are consistent with what is found in the data and with what is reported experimentally in the literature. Crucially, this work offers a new benchmark against which to compare neural data, offering neurobiological validation for the proposed processes.
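
    The following sketch illustrates the LIT idea under stated assumptions: a primary constant-drift diffusion feeds a secondary leaky accumulator, and the response is triggered only when the secondary process reaches its own threshold. Parameter values and the Euler discretisation are illustrative, not the analytical development or the D*M estimation procedure of the paper.

```python
# Sketch of the Leaky Integrating Threshold (LIT) idea: a primary diffusion
# process x accumulates evidence; a secondary leaky accumulator y integrates x
# and triggers the response when it crosses its own (motor) threshold.
# Parameter values and the Euler discretisation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def lit_trial(drift=0.5, sigma=0.4, leak=5.0, gain=5.0,
              motor_bound=1.0, dt=1e-3, t_max=5.0):
    x = 0.0                                        # primary evidence accumulator
    y = 0.0                                        # secondary leaky (motor) accumulator
    t = 0.0
    while t < t_max:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y += (gain * x - leak * y) * dt            # leaky integration of the evidence
        t += dt
        if abs(y) >= motor_bound:
            return int(y > 0), t
    return None, t_max

trials = [lit_trial() for _ in range(1000)]
rts = np.array([t for c, t in trials if c is not None])
print(f"mean RT = {rts.mean():.2f} s over {len(rts)} decided trials")
```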

    Generative Models of Cortical Oscillations: Neurobiological Implications of the Kuramoto Model

    Understanding the fundamental mechanisms governing fluctuating oscillations in large-scale cortical circuits is a crucial prelude to a proper knowledge of their role in both adaptive and pathological cortical processes. Neuroscience research in this area has much to gain from understanding the Kuramoto model, a mathematical model that speaks to the very nature of coupled oscillating processes, and which has elucidated the core mechanisms of a range of biological and physical phenomena. In this paper, we provide a brief introduction to the Kuramoto model in its original, rather abstract, form and then focus on modifications that increase its neurobiological plausibility by incorporating topological properties of local cortical connectivity. The extended model elicits elaborate spatial patterns of synchronous oscillations that exhibit persistent dynamical instabilities reminiscent of cortical activity. We review how the Kuramoto model may be recast from an ordinary differential equation to a population-level description using the nonlinear Fokker–Planck equation. We argue that such formulations are able to provide a mechanistic and unifying explanation of oscillatory phenomena in the human cortex, such as fluctuating beta oscillations, and their relationship to basic computational processes including multistability, criticality, and information capacity.
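
    For readers unfamiliar with the model, the sketch below simulates the Kuramoto model in its original mean-field form and reports the order parameter r that quantifies synchrony. The number of oscillators, the coupling strength, and the frequency distribution are illustrative choices; the cortically structured extensions discussed in the paper add local connectivity on top of this globally coupled form.

```python
# Sketch of the original (mean-field) Kuramoto model:
#   d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i),
# where r * exp(i*psi) is the complex order parameter. N, K and the frequency
# distribution are illustrative choices.
import numpy as np

rng = np.random.default_rng(4)

N, K, dt, steps = 200, 2.0, 0.01, 5000
theta = rng.uniform(0.0, 2.0 * np.pi, N)           # initial phases
omega = rng.normal(0.0, 1.0, N)                    # natural frequencies

for _ in range(steps):
    z = np.mean(np.exp(1j * theta))                # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    theta += (omega + K * r * np.sin(psi - theta)) * dt

print(f"order parameter r after transient: {np.abs(np.mean(np.exp(1j * theta))):.2f}")
```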

    Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making

    Learning and decision making in the brain are key processes critical to survival, yet they are implemented by non-ideal biological building blocks that can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular, we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that the error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. A range of situations in which the mechanisms we model arise in brain science are discussed, and we draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight. Comment: Preprint, accepted for publication in Neural Computation.
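
    The sketch below is a toy version of the trade-off described above, under assumptions of our own: several redundant copies of a noisy gradient flow on a quadratic loss are diffusively coupled through a ring-graph Laplacian, and the spread of the learned parameter across copies shrinks as the coupling strength grows.

```python
# Sketch: redundant copies of a noisy gradient flow on a quadratic loss,
# diffusively coupled through a ring-graph Laplacian. Stronger coupling pulls
# the copies together, reducing the spread of the learned parameter. The loss,
# topology, and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

def final_spread(n_copies=10, coupling=0.0, noise=0.5, dt=1e-2, steps=5000):
    # ring-graph Laplacian L = D - A
    A = np.zeros((n_copies, n_copies))
    for i in range(n_copies):
        A[i, (i + 1) % n_copies] = A[i, (i - 1) % n_copies] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    x = rng.normal(0.0, 1.0, n_copies)             # one scalar parameter per copy
    for _ in range(steps):
        grad = x - 1.0                             # gradient of 0.5*(x - 1)^2
        x += (-grad - coupling * (L @ x)) * dt + noise * np.sqrt(dt) * rng.standard_normal(n_copies)
    return x.std()                                 # disagreement across the copies

for k in (0.0, 2.0, 10.0):
    print(f"coupling {k:>4}: spread across redundant copies = {final_spread(coupling=k):.3f}")
```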

    Effective Reduced Diffusion-Models: A Data Driven Approach to the Analysis of Neuronal Dynamics

    We introduce in this paper a new method for reducing neurodynamical data to an effective diffusion equation, obtained either experimentally or from simulations of biophysically detailed models. The dimensionality of the data is first reduced to the first principal component, which is then fitted by the stationary solution of a mean-field-like one-dimensional Langevin equation describing the motion of a Brownian particle in a potential. The advantage of such a description is that the stationary probability density of the dynamical variable can be easily derived. We applied this method to the analysis of cortical network dynamics during up and down states in an anesthetized animal. During deep anesthesia, intracellularly recorded transitions between up and down states occurred with high regularity and could not be adequately described by a one-dimensional diffusion equation. Under lighter anesthesia, however, the distributions of the times spent in the up and down states were better fitted by such a model, suggesting a role for noise in determining the time spent in a particular state.
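
    A rough sketch of the two steps described above, on synthetic data: project multichannel activity onto its first principal component, then read off an effective potential from the stationary density of that component via V(x) proportional to -log p(x) (up to the noise intensity). The synthetic bistable process, the mixing into channels, and the histogram-based density estimate are illustrative assumptions, not the recorded data or fitting procedure of the paper.

```python
# Sketch on synthetic data: (1) reduce multichannel activity to its first
# principal component, (2) estimate an effective potential from the stationary
# density of that component, V(x) = -log p(x) up to the noise scale and an
# additive constant. The bistable toy process and the histogram estimate are
# illustrative assumptions, not the recordings or fit used in the paper.
import numpy as np

rng = np.random.default_rng(6)

# synthetic "recordings": a bistable scalar process mixed into 5 channels
T, dt = 200_000, 1e-3
s = np.zeros(T)
for t in range(1, T):
    s[t] = s[t-1] + (s[t-1] - s[t-1]**3) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
data = np.outer(s, rng.normal(size=5)) + 0.1 * rng.standard_normal((T, 5))

# step 1: project onto the first principal component
centred = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]

# step 2: effective potential from the stationary density of the component
hist, edges = np.histogram(pc1, bins=60, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
keep = hist > 0
c, potential = centres[keep], -np.log(hist[keep])
print("left-well minimum near pc1 =", round(c[c < 0][np.argmin(potential[c < 0])], 2))
print("right-well minimum near pc1 =", round(c[c > 0][np.argmin(potential[c > 0])], 2))
```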