
    Rhythmic inhibition allows neural networks to search for maximally consistent states

    Gamma-band rhythmic inhibition is a ubiquitous phenomenon in neural circuits, yet its computational role remains elusive. We show that a model of gamma-band rhythmic inhibition allows networks of coupled cortical circuit motifs to search for network configurations that best reconcile external inputs with an internal consistency model encoded in the network connectivity. We show that Hebbian plasticity allows the networks to learn the consistency model by example. The search dynamics driven by rhythmic inhibition enable the described networks to solve difficult constraint satisfaction problems without making assumptions about the form of stochastic fluctuations in the network. We show that the search dynamics are well approximated by a stochastic sampling process. We use the described networks to reproduce perceptual multi-stability phenomena with switching times that closely match experimental data, and show that they provide a general neural framework for modeling other 'perceptual inference' phenomena.
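    A minimal sketch of the search-by-rhythmic-inhibition idea, not the paper's cortical circuit model: a Hopfield-style binary network whose global inhibition oscillates over time. High-inhibition phases perturb the current state and low-inhibition phases let it settle, so the network explores configurations of a toy constraint problem and keeps the most consistent (lowest-energy) one it has visited. The constraint graph, coupling strengths, and schedule below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy constraint problem: connected nodes should end up in opposite states.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = -1.0          # "must differ" coupling for each constraint

def energy(state):
    return -0.5 * state @ W @ state   # lower energy = more constraints satisfied

state = rng.choice([-1.0, 1.0], size=n)
best_state, best_energy = state.copy(), energy(state)

steps, period = 400, 20
for t in range(steps):
    # Rhythmic inhibition: a slow sinusoid that periodically raises the effective
    # threshold, perturbing the configuration so the search can escape poor local optima.
    inhibition = 1.5 * (1 + np.sin(2 * np.pi * t / period)) / 2
    i = rng.integers(n)
    drive = W[i] @ state - inhibition * state[i] + 0.3 * rng.standard_normal()
    state[i] = 1.0 if drive > 0 else -1.0
    if energy(state) < best_energy:
        best_state, best_energy = state.copy(), energy(state)

print("most consistent configuration found:", best_state, "energy:", best_energy)
```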

    Bayesian brains without probabilities

    Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
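    A minimal sketch of the sampling idea (illustrative, not the paper's full Bayesian sampler model): each probability judgment is read off a handful of mental samples plus a small prior pseudo-count. Because a conjunction and its conjunct are judged from separate noisy samples, the conjunction is sometimes judged more probable than the conjunct. The probabilities, sample size, and pseudo-count below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

p_A, p_A_and_B = 0.30, 0.05     # true probabilities; the conjunction is necessarily smaller
n_samples, pseudo = 5, 1.0      # few mental samples, symmetric prior pseudo-count

def judged_probability(p_true, trials=100_000):
    # Each simulated judgment: draw a few samples, then report a regularized estimate.
    successes = rng.binomial(n_samples, p_true, size=trials)
    return (successes + pseudo) / (n_samples + 2 * pseudo)

judged_A = judged_probability(p_A)
judged_A_and_B = judged_probability(p_A_and_B)

print("mean judged P(A)        :", judged_A.mean())
print("mean judged P(A and B)  :", judged_A_and_B.mean())
print("conjunction-fallacy rate:", (judged_A_and_B > judged_A).mean())
```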

    Global model analysis by parameter space partitioning

    To model behavior, scientists need to know how models behave. This means learning what other behaviors a model can produce besides the one generated by participants in an experiment. This is a difficult problem because of the complexity of psychological models (e.g., their many parameters) and because the behavioral precision of models (e.g., interval-scale performance) often mismatches their testable precision in experiments, where qualitative, ordinal predictions are the norm. Parameter space partitioning is a solution that evaluates model performance at a qualitative level: it identifies the partition of the model's parameter space into regions that correspond to each qualitatively distinct data pattern. Three application examples demonstrate its potential and versatility for studying the global behavior of psychological models.
    Mark A. Pitt, Woojae Kim, Daniel J. Navarro, and Jay I. Myung
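    A minimal sketch of the underlying idea (illustrative; the paper searches the parameter space with MCMC rather than the brute-force grid scan used here). A hypothetical two-parameter model predicts accuracy in three conditions; each grid point is labelled by the ordinal pattern of its predictions, and the share of the grid producing each pattern estimates the size of the corresponding region of parameter space.

```python
import itertools
import numpy as np

def toy_model(a, b):
    # Hypothetical model: predicted accuracy in three experimental conditions.
    return np.array([a, b, 0.25 + a * b])

def ordinal_pattern(predictions):
    # Qualitative data pattern = ranking of the predicted values (0 = smallest).
    return tuple(int(r) for r in np.argsort(np.argsort(predictions)))

counts = {}
grid = np.linspace(0.05, 0.95, 60)
for a, b in itertools.product(grid, grid):
    pattern = ordinal_pattern(toy_model(a, b))
    counts[pattern] = counts.get(pattern, 0) + 1

total = sum(counts.values())
for pattern, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"pattern {pattern}: {100 * count / total:.1f}% of parameter space")
```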

    An application of minimum description length clustering to partitioning learning curves

    © 2005 IEEE. We apply a Minimum Description Length–based clustering technique to the problem of partitioning a set of learning curves. The goal is to partition experimental data collected from different sources into groups of sources that are statistically the same. We solve this problem by defining statistical models for the data-generating processes, then partitioning them using the Normalized Maximum Likelihood criterion. Unlike many alternative model selection methods, this approach is optimal (in a minimax coding sense) for data of any sample size. We present an application of the method to the cognitive modeling problem of partitioning human learning curves from different categorization tasks.
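    A minimal sketch of partitioning with the Normalized Maximum Likelihood (NML) criterion (illustrative; the paper's learning-curve models and search procedure are more elaborate). Each source is a learning curve of per-block success counts; a candidate partition pools the sources within each group, every group is scored by the Bernoulli NML code length in each block, and the partition with the shortest total code length is preferred. The data and block sizes below are invented for the example.

```python
from math import comb, log

# Toy data: per-block success counts (out of 20 trials per block) for three sources.
trials_per_block = 20
curves = {
    "s1": [8, 12, 15, 18],
    "s2": [7, 13, 16, 18],
    "s3": [4, 5, 7, 8],     # a slower learner, plausibly its own group
}

def bernoulli_nml(k, n):
    """NML code length (nats) for k successes in n Bernoulli trials."""
    def max_loglik(kk, nn):
        if kk in (0, nn):
            return 0.0
        p = kk / nn
        return kk * log(p) + (nn - kk) * log(1 - p)
    # Parametric complexity: log of the sum of maximized likelihoods over all outcomes.
    complexity = log(sum(comb(n, j) * (j / n) ** j * ((n - j) / n) ** (n - j)
                         for j in range(n + 1)))
    return -max_loglik(k, n) + complexity

def code_length(partition):
    """Total NML code length of the data under a given grouping of sources."""
    n_blocks = len(next(iter(curves.values())))
    total = 0.0
    for group in partition:
        for b in range(n_blocks):
            k = sum(curves[s][b] for s in group)
            n = len(group) * trials_per_block
            total += bernoulli_nml(k, n)
    return total

# All partitions of {s1, s2, s3}; the shortest code length identifies the best grouping.
partitions = [
    [["s1"], ["s2"], ["s3"]],
    [["s1", "s2"], ["s3"]],
    [["s1", "s3"], ["s2"]],
    [["s2", "s3"], ["s1"]],
    [["s1", "s2", "s3"]],
]
for p in sorted(partitions, key=code_length):
    print(f"{code_length(p):8.2f} nats  {p}")
```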