
    Types of approximation for probabilistic cognition: sampling and variational

    A basic challenge for probabilistic models of cognition is explaining how probabilistically correct solutions are approximated by the limited brain, and how mismatches with human behavior are to be explained. An emerging approach to this problem is to use the same approximation algorithms that were developed in computer science and statistics for working with complex probabilistic models. Two types of approximation algorithm have been used for this purpose: sampling algorithms, such as importance sampling and Markov chain Monte Carlo, and variational algorithms, such as mean-field approximations and assumed density filtering. Here I briefly review this work, outlining how the algorithms work, how they can explain behavioral biases, and how they might be implemented in the brain. There are characteristic differences in how these two types of approximation are applied to brain and behavior, which point to how they could be combined in future research.
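As an illustration of the first family of algorithms the abstract mentions (not drawn from the paper itself), self-normalized importance sampling can be sketched in a few lines: samples drawn from the prior are reweighted by the likelihood, and the weighted average approximates a posterior expectation. The model and numbers below are assumptions chosen so the answer is checkable analytically.

```python
import random, math

random.seed(0)

# Assumed toy model: prior theta ~ N(0, 1), observation x ~ N(theta, 0.5^2).
def prior_sample():
    return random.gauss(0.0, 1.0)

def likelihood(theta, x=1.2):
    # unnormalized N(x; theta, 0.25) density
    return math.exp(-((x - theta) ** 2) / (2 * 0.25))

# Importance sampling with the prior as the proposal distribution.
samples = [prior_sample() for _ in range(10000)]
weights = [likelihood(t) for t in samples]

# Self-normalized estimate of the posterior mean E[theta | x].
posterior_mean = sum(w * t for w, t in zip(weights, samples)) / sum(weights)

# For this conjugate case the exact posterior mean is x / (1 + 0.25) = 0.96,
# so the estimate should land close to that value.
print(round(posterior_mean, 2))
```

The same skeleton turns into a behavioral model by limiting the number of samples, which is where the biases the abstract refers to come from.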

    Beyond single-level accounts: the role of cognitive architectures in cognitive scientific explanation

    We consider approaches to explanation within the cognitive sciences that begin with Marr’s computational level (e.g., purely Bayesian accounts of cognitive phenomena) or Marr’s implementational level (e.g., reductionist accounts of cognitive phenomena based only on neural level evidence) and argue that each is subject to fundamental limitations which impair their ability to provide adequate explanations of cognitive phenomena. For this reason, it is argued, explanation cannot proceed at either level without tight coupling to the algorithmic and representational level. Even at this level, however, we argue that additional constraints relating to the decomposition of the cognitive system into a set of interacting subfunctions (i.e., a cognitive architecture) are required. Integrated cognitive architectures that permit abstract specification of the functions of components and that make contact with the neural level provide a powerful bridge for linking the algorithmic and representational level to both the computational level and the implementational level.

    Clinical Applications of Stochastic Dynamic Models of the Brain, Part I: A Primer

    Biological phenomena arise through interactions between an organism's intrinsic dynamics and stochastic forces: random fluctuations due to external inputs, thermal energy, or other exogenous influences. Dynamic processes in the brain derive from neurophysiology and anatomical connectivity; stochastic effects arise through sensory fluctuations, brainstem discharges, and random microscopic states such as thermal noise. The dynamic evolution of systems composed of both dynamic and random effects can be studied with stochastic dynamic models (SDMs). This article, Part I of a two-part series, offers a primer of SDMs and their application to large-scale neural systems in health and disease. The companion article, Part II, reviews the application of SDMs to brain disorders. SDMs generate a distribution of dynamic states, which (we argue) represent ideal candidates for modeling how the brain represents states of the world. When augmented with variational methods for model inversion, SDMs represent a powerful means of inferring neuronal dynamics from functional neuroimaging data in health and disease. Together with deeper theoretical considerations, this work suggests that SDMs will play a unique and influential role in computational psychiatry, unifying empirical observations with models of perception and behavior.
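The combination of intrinsic dynamics and stochastic forces described here can be made concrete with the simplest possible SDM, simulated by the Euler-Maruyama scheme. This is a generic one-dimensional sketch under assumed parameters, not a model from the article: the drift term pulls the state toward zero (intrinsic dynamics) while the diffusion term injects random fluctuations, and the result is a distribution of states rather than a single trajectory.

```python
import random, math

random.seed(1)

# Assumed toy SDM: dx = -a*x dt + s dW  (an Ornstein-Uhlenbeck process).
a, s, dt, steps = 1.0, 0.5, 0.01, 5000

x, traj = 2.0, []
for _ in range(steps):
    # Euler-Maruyama step: deterministic drift plus scaled Gaussian noise.
    x += -a * x * dt + s * math.sqrt(dt) * random.gauss(0.0, 1.0)
    traj.append(x)

# After the initial transient decays, the state samples its stationary
# distribution N(0, s^2 / (2a)) (variance 0.125 for these parameters).
late = traj[len(traj) // 2:]
mean = sum(late) / len(late)
var = sum((v - mean) ** 2 for v in late) / len(late)
print(round(mean, 2), round(var, 2))
```

The "distribution of dynamic states" the abstract refers to is exactly this stationary density, here recovered empirically from the late part of the trajectory.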

    Redundancy in synaptic connections enables neurons to learn optimally

    Recent experimental studies suggest that, in cortical microcircuits of the mammalian brain, the majority of neuron-to-neuron connections are realized by multiple synapses. However, it is not known whether such redundant synaptic connections provide any functional benefit. Here, we show that redundant synaptic connections enable near-optimal learning in cooperation with synaptic rewiring. By constructing a simple dendritic neuron model, we demonstrate that with multisynaptic connections, synaptic plasticity approximates a sample-based Bayesian filtering algorithm known as particle filtering, and wiring plasticity implements its resampling process. Extending the proposed framework to a detailed single-neuron model of perceptual learning in the primary visual cortex, we show that the model accounts for many experimental observations. In particular, the proposed model reproduces the dendritic position dependence of spike-timing-dependent plasticity and the functional synaptic organization on the dendritic tree based on the stimulus selectivity of presynaptic neurons. Our study provides a conceptual framework for synaptic plasticity and rewiring.
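The particle-filtering analogy can be sketched generically: each synapse in a redundant connection acts as one particle carrying a hypothesis about the weight, likelihood weighting plays the role of synaptic plasticity, and resampling plays the role of rewiring. The following is a standard bootstrap particle filter tracking a fixed hidden weight under assumed noise parameters, not the paper's neuron model.

```python
import random, math

random.seed(2)

true_w = 0.7                                  # hidden weight to be learned
particles = [random.uniform(-1, 1) for _ in range(200)]  # "synapses"

def observe(w):
    # noisy evidence about the weight (assumed observation model)
    return w + random.gauss(0.0, 0.2)

for _ in range(50):
    y = observe(true_w)
    # "plasticity" step: weight each particle by its likelihood under y
    raw = [math.exp(-((y - p) ** 2) / (2 * 0.04)) for p in particles]
    total = sum(raw)
    weights = [w / total for w in raw]
    # "rewiring" step: resample particles in proportion to their weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    # small diffusion so the particle set does not collapse to one value
    particles = [p + random.gauss(0.0, 0.02) for p in particles]

estimate = sum(particles) / len(particles)    # posterior mean over "synapses"
print(round(estimate, 2))
```

After a few dozen observations the particle cloud concentrates near the true weight, which is the sense in which redundant connections support near-optimal learning.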

    Where do hypotheses come from?

    Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available result in close to rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinite samples, we assume only a small number of samples, since the number of samples humans can take is limited by time pressure and cognitive resource constraints. We show that this model recreates several well-documented experimental findings, such as anchoring and adjustment, subadditivity, superadditivity, and the crowd within, as well as the self-generation, weak evidence, and dud alternative effects. Additionally, we confirm the model's prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses, in 2 experiments. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
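The core mechanism described here, a small-sample Monte Carlo approximation of a posterior over hypotheses, can be illustrated directly. In this sketch (the hypothesis space and probabilities are made up, not taken from the paper), hypotheses that happen not to be sampled receive zero estimated probability, so judgments based on a handful of samples are systematically distorted, while large samples converge to the true posterior.

```python
import random
from collections import Counter

random.seed(3)

# Assumed true posterior over four candidate hypotheses.
posterior = {"h1": 0.5, "h2": 0.3, "h3": 0.15, "h4": 0.05}
hyps, probs = zip(*posterior.items())

def mc_posterior(n_samples):
    # Monte Carlo approximation: normalized sampling frequencies.
    counts = Counter(random.choices(hyps, weights=probs, k=n_samples))
    return {h: counts[h] / n_samples for h in hyps}

few = mc_posterior(5)      # resource-limited: rare hypotheses often missed
many = mc_posterior(5000)  # converges toward the true posterior

print(few)
print({h: round(p, 2) for h, p in many.items()})
```

The few-sample estimate assigns zero probability to any unsampled hypothesis, which is one route to the subadditivity-style effects the abstract lists.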

    Hierarchical and Nonlinear Dynamics in Prefrontal Cortex Regulate the Precision of Perceptual Beliefs

    Actions are shaped not only by the content of our percepts but also by our confidence in them. To study the cortical representation of perceptual precision in decision making, we acquired functional imaging data whilst participants performed two vibrotactile forced-choice discrimination tasks: a fast-slow judgment, and a same-different judgment. The first task requires a comparison of the perceived vibrotactile frequencies to decide which one is faster. However, the second task requires that the estimated difference between those frequencies is weighed against the precision of each percept: if both stimuli are very precisely perceived, then any slight difference is more likely to be identified than if the percepts are uncertain. We additionally presented either pure sinusoidal or temporally degraded “noisy” stimuli, whose frequency/period differed slightly from cycle to cycle. In this way, we were able to manipulate the perceptual precision. We report a constellation of cortical regions in the rostral prefrontal cortex (PFC), dorsolateral PFC (DLPFC) and superior frontal gyrus (SFG) associated with the perception of stimulus difference, the presence of stimulus noise and the interaction between these factors. Dynamic causal modeling (DCM) of these data suggested a nonlinear, hierarchical model, whereby activity in the rostral PFC (evoked by the presence of stimulus noise) mutually interacts with activity in the DLPFC (evoked by stimulus differences). This model of effective connectivity outperformed competing models with serial and parallel interactions, hence providing a unique insight into the hierarchical architecture underlying the representation and appraisal of perceptual belief and precision in the PFC.

    Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
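The "explaining away" pattern with converging arrows can be demonstrated in a tiny Bayesian network, here inferred by plain rejection sampling as a crude stand-in for the spiking-network sampler (the network and its probabilities are illustrative assumptions, not from the paper). Two independent causes A and B can each produce effect E; observing E raises belief in A, but additionally learning that B is present "explains away" the effect and lowers belief in A again.

```python
import random

random.seed(4)

def sample_world():
    a = random.random() < 0.1                       # cause A, prior 0.1
    b = random.random() < 0.1                       # cause B, prior 0.1
    e = random.random() < (0.9 if (a or b) else 0.01)  # noisy-OR-style effect
    return a, b, e

def p_a_given(require_b, n=100000):
    # Rejection sampling: keep worlds consistent with the evidence.
    hit = kept = 0
    for _ in range(n):
        a, b, e = sample_world()
        if e and (b or not require_b):
            kept += 1
            hit += a
    return hit / kept

p_e = p_a_given(False)   # P(A | E=1);      analytically about 0.50
p_eb = p_a_given(True)   # P(A | E=1, B=1); analytically exactly 0.10
print(round(p_e, 2), round(p_eb, 2))
```

The drop from roughly 0.50 to 0.10 once B is observed is the explaining-away effect that makes such networks hard for purely feedforward schemes and natural for sampling-based inference.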

    Neural Plausibility of Bayesian Inference

    Behavioral studies have shown that humans account for uncertainty in a way that is nearly optimal in the Bayesian sense. Probabilistic models based on Bayes' theorem have been widely used for understanding human cognition, and have been applied to behaviors that range from perception and motor control to higher level reasoning and inference. However, whether the brain actually uses Bayesian reasoning, or whether such reasoning is merely an approximate description of human behavior, is an open question. In this thesis, I aim to address this question by exploring the neural plausibility of Bayesian inference. I first present a spiking neural model for learning priors (beliefs) from experiences of the natural world. Through this model, I address the question of how humans might learn the priors needed for the inferences they make in their daily lives. I propose neural mechanisms for continuous learning and updating of priors, cognitive processes that are critical for many aspects of higher-level cognition. Next, I propose neural mechanisms for performing Bayesian inference by combining the learned prior with the likelihood that is based on the observed information. Through the process of building these models, I address the issue of representing probability distributions in neural populations by deploying an efficient neural coding scheme. I show how these representations can be used in meaningful ways to learn beliefs (priors) over time and to perform inference using those beliefs. The final model is generalizable to various psychological tasks, and I show that it converges to near-optimal priors with very few training examples. The model is validated using a life span inference task, and the results from the model match human performance on this task better than an ideal Bayesian model due to the use of neuron tuning curves. This provides an initial step towards better understanding how Bayesian computations may be implemented in a biologically plausible neural network. Finally, I discuss the limitations and suggest future work on both theoretical and experimental fronts.
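The inference step this thesis builds neural mechanisms for, combining a learned prior with a likelihood via Bayes' rule, can be sketched over a discretized hypothesis space (the grid, prior, and Bernoulli observation model below are illustrative assumptions, not the thesis's life-span task).

```python
# Assumed setup: hypotheses are values of theta on a grid, with a flat
# "learned" prior and Bernoulli observations (heads/tails-style evidence).
thetas = [i / 10 for i in range(11)]        # discretized hypothesis space
prior = [1 / len(thetas)] * len(thetas)     # flat prior over the grid

def likelihood(theta, heads, tosses):
    # probability of the observed successes under hypothesis theta
    return (theta ** heads) * ((1 - theta) ** (tosses - heads))

def bayes_update(prior, heads, tosses):
    # Bayes' rule: posterior proportional to prior times likelihood.
    unnorm = [p * likelihood(t, heads, tosses) for p, t in zip(prior, thetas)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

posterior = bayes_update(prior, heads=7, tosses=10)
map_theta = thetas[posterior.index(max(posterior))]
print(map_theta)   # MAP estimate on this grid: 0.7
```

In the thesis the prior is itself learned from experience and the distributions live in neural population codes, but the underlying computation is this prior-times-likelihood normalization.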