
    Probabilistic biases meet the Bayesian brain

    Bayesian cognitive science sees the mind as a spectacular probabilistic inference machine. But Judgment and Decision Making (JDM) research has spent half a century uncovering how dramatically and systematically people depart from rational norms. This paper outlines recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities, but instead approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, and they offer the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
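
    As a minimal illustration of the sampling idea (a hypothetical Python sketch, not the paper's model; the event probability of 0.05 and the sample size of 5 are arbitrary assumptions), estimating a probability from a handful of simulated instances produces the coarse, variable judgments that sampling accounts predict:

```python
import random

rng = random.Random(1)

def sampled_probability(event_occurs, n_samples=5):
    # Approximate P(event) from a small number of simulated instances,
    # as sampling accounts assume the mind does.
    hits = sum(event_occurs() for _ in range(n_samples))
    return hits / n_samples

# A rare event with true probability 0.05: with only 5 samples the estimate
# can only take the values 0.0, 0.2, 0.4, ..., so judgments are coarse,
# vary across repetitions, and rare events are often judged impossible.
rare_event = lambda: rng.random() < 0.05
print([sampled_probability(rare_event) for _ in range(10)])
```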

    The autocorrelated Bayesian sampler: a rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times

    Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have achieved high empirical corroboration only by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process model, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects in probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times, as well as the relationships among them. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the “Bayesian brain” operates using samples, not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
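
    The abstract does not give the algorithm's details, but the core idea of serially correlated hypothesis sampling can be sketched with a generic Metropolis random walk (an assumption of this illustration; the ABS itself uses a more sophisticated sampler, and the posterior and parameter values here are arbitrary):

```python
import math
import random

rng = random.Random(7)

def autocorrelated_samples(log_posterior, start, n_samples, proposal_sd=0.3):
    # Draw hypothesis samples with a Metropolis random walk: each proposal is a
    # local perturbation of the current hypothesis, so successive samples are
    # serially correlated, as the ABS assumes for human hypothesis generation.
    samples, current = [], start
    for _ in range(n_samples):
        proposal = current + rng.gauss(0.0, proposal_sd)
        if math.log(rng.random()) < log_posterior(proposal) - log_posterior(current):
            current = proposal  # accept the new hypothesis
        samples.append(current)  # on rejection, the old hypothesis repeats
    return samples

# Posterior over a hypothesized magnitude, here a standard normal for illustration.
log_post = lambda x: -0.5 * x * x
chain = autocorrelated_samples(log_post, start=0.0, n_samples=20)
print(chain)  # neighboring samples are similar, i.e. autocorrelated
```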

    The Neglected Importance of Auxiliary Assumptions when Applying Probability Theory

    Although probability theory defines how probability measures are expressed and used, it is agnostic about where these measures come from. To apply probability theory, one must therefore make a number of auxiliary assumptions regarding the assignment and interpretation of probability. In this paper I demonstrate that these assumptions can lead to radically different conclusions that are nevertheless mathematically and philosophically coherent. I further argue that behavioral science, as a rule, does not take these assumptions into consideration, but rather conflates the conceptual interpretation of probability with its mathematical application. This creates the implicit assumption that there is one, and only one, way of correctly applying probability theory to any given situation, when in reality probability theory can usually be applied in a number of different ways, all equally correct from both a mathematical and a philosophical perspective. In order for behavioral science to progress, these auxiliary assumptions must be taken into consideration and must themselves become subjects of research in their own right.

    The Cognitive Basis of Joint Probability Judgments: Processes, Ecology, and Adaption

    When navigating an uncertain world, it is often necessary to judge the probability of a conjunction of events, that is, their joint probability. The subject of this thesis is how people infer joint probabilities from the probabilities of individual events. Study I explored such joint probability judgment tasks in conditions with independent events and in conditions with systematic risk that could be inferred through feedback. Results indicated that participants tended to approach the tasks using additive combinations of the individual probabilities, but switched to multiplication (or, to a lesser extent, exemplar memory) when events were independent and additive strategies were therefore less accurate. Consequently, participants were initially more accurate in the task with high systematic risk, despite that task being more complex from the perspective of probability theory. Study II simulated the performance of models of joint probability judgment in tasks based on both computer-generated data and real-world datasets, to evaluate which cognitive processes are accurate in which ecological contexts. Models used in Study I and other models inspired by current research were explored. The results confirmed that, by virtue of their robustness, additive models are reasonable general-purpose algorithms, although when one is familiar with the task it is preferable to switch to other strategies more specifically adapted to it. After Study I found that people adapt strategy choice to the dependence between events and Study II confirmed that these adaptations are justified in terms of accuracy, Study III investigated whether adapting to stochastic dependence implied thinking according to stochastic principles. Results indicated that this was not the case; participants instead worked from the weak assumption that events were independent, regardless of the actual state of the world. In conclusion, this thesis demonstrates that people generally do not combine individual probabilities into joint probability judgments in ways consistent with the basic principles of probability theory, nor do they think of the task in such terms, but neither does there appear to be much reason to do so. Rather, simpler heuristics can often yield judgments that are equally accurate or more accurate.
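
    To make the contrast concrete, here is a minimal Python sketch of an additive heuristic versus the normative multiplicative rule for independent events (the specific probabilities and the equal weighting are assumptions of this illustration, not values from the thesis):

```python
def multiplicative_joint(p_a, p_b):
    # Normative rule for independent events: P(A and B) = P(A) * P(B).
    return p_a * p_b

def additive_joint(p_a, p_b, w=0.5):
    # A weighted-average heuristic of the broad kind described in the thesis;
    # the equal weighting w = 0.5 is an assumption of this sketch.
    return w * p_a + (1 - w) * p_b

p_a, p_b = 0.3, 0.4
print(multiplicative_joint(p_a, p_b))  # approx. 0.12
print(additive_joint(p_a, p_b))        # approx. 0.35, which exceeds min(p_a, p_b) = 0.3
                                       # and so violates the conjunction rule
```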

    Precise/not precise (PNP): A Brunswikian model that uses judgment error distributions to identify cognitive processes

    In 1956, Brunswik proposed a definition of what he called intuitive and analytic cognitive processes, not in terms of verbally specified properties, but operationally, based on their observable error distributions. In the decades since, the diagnostic value of error distributions has generally been overlooked, arguably because of a long tradition of treating the error as exogenous (and irrelevant) to the process. Building on Brunswik’s ideas, we develop the precise/not precise (PNP) model, which uses a mixture distribution to model the proportion of error-perturbed versus error-free executions of an algorithm, to determine whether Brunswik’s claims can be replicated and extended. In Experiment 1, we demonstrate that the PNP model recovers Brunswik’s distinction between perceptual and conceptual tasks. In Experiment 2, we show that the PNP model identifies both types of processes from their error distributions even in symbolic tasks that involve no perceptual noise. In Experiment 3, we apply the PNP model to confirm the often-assumed “quasi-rational” nature of the rule-based processes involved in multiple-cue judgment. The results demonstrate that the PNP model reliably identifies the two cognitive processes proposed by Brunswik, and often recovers the parameters of the process more effectively than a standard regression model with homogeneous Gaussian error, suggesting that the standard Gaussian assumption misspecifies the error distribution in many tasks. We discuss the untapped potential of using error distributions to identify cognitive processes and how the PNP model relates to, and can inform, debates on intuition and analysis in dual-systems theories.
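
    The mixture idea can be sketched as a likelihood in a few lines of Python (a hypothetical illustration; the tolerance, the Gaussian error component, the parameter values, and the toy data are assumptions, and the published model's exact formulation may differ):

```python
import math

def pnp_log_likelihood(responses, predictions, lam, sigma, tol=1e-3):
    # Mixture idea behind the PNP model (a sketch, not the published fitting code):
    # with probability lam an execution is error-free, so the response falls within
    # a small tolerance of the algorithm's prediction; with probability 1 - lam it
    # is perturbed by Gaussian error with standard deviation sigma.
    total = 0.0
    for r, m in zip(responses, predictions):
        precise = (1.0 / (2.0 * tol)) if abs(r - m) <= tol else 0.0
        noisy = math.exp(-0.5 * ((r - m) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += math.log(lam * precise + (1.0 - lam) * noisy)
    return total

# Toy data: three error-free executions and two error-perturbed ones.
preds = [10.0, 12.0, 8.0, 15.0, 11.0]
resps = [10.0, 12.0, 9.1, 15.0, 13.4]
print(pnp_log_likelihood(resps, preds, lam=0.6, sigma=1.5))
```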

    A unified explanation of variability and bias in human probability judgments: How computational noise explains the mean-variance signature

    Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of the bias, and a noise process is then added to generate variability. But these accounts do not explain the characteristic inverse-U-shaped signature linking the mean and variance of probability judgments. By contrast, models based on sampling generate the mean and variance of judgments in a unified way: the variability in the response is an inevitable consequence of basing probability judgments on a small sample of remembered or simulated instances of events. We consider two recent sampling models, in which biases are explained either by the sample accumulation being further corrupted by retrieval noise (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While the mean predictions of these accounts closely mimic one another, they differ in the predicted relationship between mean and variance. We show that the models can be distinguished by a novel linear regression method that analyses this crucial mean-variance signature. First, the efficacy of the method is established using model recovery, demonstrating that it recovers parameters more accurately than more complex approaches. Second, the method is applied to the mean and variance of both existing and new probability judgment data, confirming that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler.
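
    For intuition about the mean-variance signature, here is a small Python sketch assuming the commonly presented form of the Bayesian sampler, in which the judgment is (k + beta) / (N + 2*beta) with k ~ Binomial(N, p); the sample size and beta values are hypothetical:

```python
def bayesian_sampler_moments(p, n_samples, beta):
    # Mean and variance of a probability judgment under the assumed formulation
    # judgment = (k + beta) / (N + 2 * beta), with k ~ Binomial(N, p).
    mean = (n_samples * p + beta) / (n_samples + 2 * beta)
    variance = n_samples * p * (1 - p) / (n_samples + 2 * beta) ** 2
    return mean, variance

# The variance traces an inverse U over the probability scale, peaking at p = 0.5:
# the mean-variance signature that the regression method exploits.
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(p, bayesian_sampler_moments(p, n_samples=5, beta=1.0))
```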