
    The Bayesian sampler: generic Bayesian inference causes incoherence in human probability judgments

    Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. Perhaps, however, the brain does not represent probabilities explicitly, but instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in two new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
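The generic-prior idea described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the symmetric Beta(β, β) prior, and the resulting (hits + β)/(N + 2β) estimate, are assumptions chosen to match the description of regularizing small-sample relative frequencies.

```python
import random

def naive_estimate(event_prob, n_samples, rng):
    """Relative frequency of the event within a small mental sample."""
    hits = sum(rng.random() < event_prob for _ in range(n_samples))
    return hits / n_samples

def bayesian_sampler_estimate(event_prob, n_samples, beta, rng):
    """Sample-based estimate regularized by a symmetric Beta(beta, beta)
    prior: the posterior mean (hits + beta) / (n_samples + 2 * beta)."""
    hits = sum(rng.random() < event_prob for _ in range(n_samples))
    return (hits + beta) / (n_samples + 2 * beta)
```

With small samples the regularized estimate is pulled toward 1/2, one route to conservatism: extreme probabilities are moderated, trading coherence for accuracy under sampling noise.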

    Types of approximation for probabilistic cognition: sampling and variational

    A basic challenge for probabilistic models of cognition is explaining how probabilistically correct solutions are approximated by the limited brain, and how to explain mismatches with human behavior. An emerging approach to solving this problem is to use the same approximation algorithms that have been developed in computer science and statistics for working with complex probabilistic models. Two types of approximation algorithms have been used for this purpose: sampling algorithms, such as importance sampling and Markov chain Monte Carlo, and variational algorithms, such as mean-field approximations and assumed density filtering. Here I briefly review this work, outlining how the algorithms work, how they can explain behavioral biases, and how they might be implemented in the brain. There are characteristic differences between how these two types of approximation are applied in brain and behavior, which point to how they could be combined in future research.
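As a minimal concrete instance of the sampling family named above, the sketch below estimates an expectation under a target distribution via self-normalized importance sampling. The function names and the toy Gaussian target/proposal pair are my own illustrative choices, not from the paper.

```python
import math
import random

def snis_expectation(target_logpdf, draw_proposal, proposal_logpdf, f, n, rng):
    """Estimate E_target[f(X)] by self-normalized importance sampling:
    draw from the proposal, weight each draw by target/proposal density."""
    xs = [draw_proposal(rng) for _ in range(n)]
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in xs]
    m = max(logw)                          # subtract max for numerical stability
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * f(xi) for wi, xi in zip(w, xs)) / sum(w)

# Toy setup: standard normal target, wider normal (sd = 2) proposal.
std_normal_logpdf = lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi)
wide_logpdf = lambda x: -x * x / 8 - math.log(2) - 0.5 * math.log(2 * math.pi)
```

With finitely many samples the estimate is noisy and biased, which is exactly the property such models exploit to explain behavioral deviations from probabilistic norms.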

    Bayesian brains without probabilities

    Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
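The claim that finite samples alone generate errors such as the conjunction fallacy can be illustrated with a small simulation. This is a hypothetical toy using raw relative frequencies from independent samples, not the authors' full account: even though the true conjunction probability is lower, noisy small-sample estimates will sometimes rank it above its conjunct.

```python
import random

def frequency_estimate(p, n_samples, rng):
    """Relative frequency of an event with true probability p in a sample."""
    return sum(rng.random() < p for _ in range(n_samples)) / n_samples

def conjunction_fallacy_rate(p_a, p_ab, n_samples, trials, rng):
    """Fraction of trials on which an independent small-sample estimate of
    P(A and B) exceeds the estimate of P(A), despite p_ab < p_a."""
    fallacies = 0
    for _ in range(trials):
        if frequency_estimate(p_ab, n_samples, rng) > frequency_estimate(p_a, n_samples, rng):
            fallacies += 1
    return fallacies / trials
```

With only a handful of samples per estimate, the conjunction is ranked above its conjunct on a noticeable fraction of trials; as the sample size grows, the rate falls toward zero.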

    The sampling brain

    Alday, Schlesewsky, and Bornkessel-Schlesewsky [1] provide a stimulating commentary on the issues discussed in our paper [2], highlighting important connections between sampling, Bayesian inference, neural networks, free energy, and basins of attraction. We trace here some relevant history of computational theories of the brain.

    What does the mind learn? A comparison of human and machine learning representations

    We present a brief review of modern machine learning techniques and their use in models of human mental representations, detailing three notable branches: spatial methods, logical methods and artificial neural networks. Each of these branches contains an extensive set of systems and demonstrates accurate emulation of human learning of categories, concepts and language, despite substantial differences in operation. We suggest that continued applications will allow cognitive researchers to model the complex real-world problems where machine learning has recently been successful, providing more complete behavioural descriptions. This will, however, also require careful consideration of appropriate algorithmic constraints alongside these methods in order to find a combination which captures both the strengths and weaknesses of human cognition.

    Temporal variability in moral value judgement

    Moral judgments are known to change in response to changes in external conditions. But how variable are moral judgments over time in the absence of environmental variation? The moral domain has been described in terms of five moral foundations, categories that appear to capture moral judgment across cultures. We examined the temporal consistency of repeated responses to the moral foundations questionnaire over short time periods, fitted a set of mixed effects models to the data and compared them. We found correlations between changes in participant responses for different foundations over time, suggesting a structure with at least two underlying stochastic processes: one for moral judgments involving harm and fairness, and another for moral judgments related to loyalty, authority, and purity.

    A rational approach to stereotype change

    Existing theories of stereotype change have often made use of categorisation principles in order to provide qualitative explanations for both the revision and maintenance of stereotypical beliefs. The present paper examines the quantitative methods underlying these explanations, contrasting both rational and heuristic models of stereotype change using participant data and model fits. In a comparison of three models, each simulating an existing description of stereotype change, both empirical data and model fits suggest that stereotypes are updated using rational categorisation processes. This presents stereotype use as a more rational behaviour than may commonly be assumed, and provides new avenues for encouraging stereotype change according to rational principles.

    Cumulative weighing of time in intertemporal tradeoffs

    We examine preferences for sequences of delayed monetary gains. In the experimental literature, two prominent models have been advanced as psychological descriptions of preferences for sequences. In one model, the instantaneous utilities of the outcomes in a sequence are discounted as a function of their delays, and assembled into a discounted utility of the sequence. In the other model, the accumulated utility of the outcomes in a sequence is considered along with utility or disutility from improvement in outcome utilities and utility or disutility from the spreading of outcome utilities. Drawing on three threads of evidence concerning preferences for sequences of monetary gains, we propose that the accumulated utility of the outcomes in a sequence is traded off against the duration of utility accumulation. In our first experiment, aggregate choice behavior provides qualitative support for the tradeoff model. In three subsequent experiments, one of which was incentivized, disaggregate choice behavior provides quantitative support for the tradeoff model in Bayesian model contests. The third experiment addresses one thread of evidence that motivated the tradeoff model: when, in the choice between two single dated outcomes, it is conveyed that receiving less sooner means receiving nothing later, preference for receiving more later increases, but when it is conveyed that receiving more later means receiving nothing sooner, preference is left unchanged. Our results show that this asymmetric hidden-zero effect is indeed driven by those participants whose choices support the tradeoff model. The tradeoff model also accommodates all remaining evidence on preferences for sequences of monetary gains.
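The first model described above, in which outcome utilities are discounted by delay and summed, can be sketched as follows. Exponential discounting and linear utility are illustrative assumptions here; the abstract does not commit to a specific discount function or utility function.

```python
def discounted_utility(outcomes, delays, delta=0.9, u=lambda x: x):
    """Discounted utility of a sequence: each outcome's instantaneous
    utility u(x) is discounted by delta**delay, then the results are summed."""
    return sum(u(x) * delta ** t for x, t in zip(outcomes, delays))
```

For example, two payments of 10 at delays 0 and 1 with delta = 0.9 yield 10 + 9 = 19, whereas the tradeoff model described in the abstract would instead weigh the sequence's total accumulated utility against the duration over which it accumulates.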

    Probabilistic biases meet the Bayesian brain

    Bayesian cognitive science sees the mind as a spectacular probabilistic inference machine. But Judgment and Decision Making (JDM) research has spent half a century uncovering how dramatically and systematically people depart from rational norms. This paper outlines recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities, but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, and offer the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.