    Probabilistic biases meet the Bayesian brain

    Bayesian cognitive science sees the mind as a spectacular probabilistic inference machine. But Judgment and Decision Making (JDM) research has spent half a century uncovering how dramatically and systematically people depart from rational norms. This paper outlines recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities, but instead approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, and they offer the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
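
    As a hedged illustration of the sampling hypothesis described above (a minimal sketch of my own, not the paper's model), the Python snippet below approximates a probability by counting outcomes over a handful of simulated samples; with few samples, a rare event is often estimated as impossible, which is the kind of systematic divergence sampling accounts appeal to. The function names and the true probability of 0.05 are illustrative assumptions.

        import random

        def sample_based_probability(event, simulate, n_samples=10):
            # Approximate P(event) by "mental simulation": draw n_samples
            # outcomes and count how often the event occurs (a plain
            # Monte Carlo estimate).
            hits = sum(event(simulate()) for _ in range(n_samples))
            return hits / n_samples

        random.seed(1)
        simulate = lambda: random.random() < 0.05   # one simulated outcome
        event = lambda outcome: outcome             # the event is that outcome itself

        # With only a handful of samples, the estimate is frequently exactly 0
        # or far from the true value of 0.05.
        print([sample_based_probability(event, simulate, n_samples=10) for _ in range(5)])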

    The autocorrelated Bayesian sampler: a rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times

    Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior but can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects in probability judgments, estimates, confidence intervals, choices, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the “Bayesian brain” operates using samples, not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
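
    The abstract does not specify the ABS algorithm itself, but the general point that sample-based hypothesis generation is autocorrelated can be illustrated with a minimal random-walk Metropolis sketch (an assumption of mine for illustration; the ABS grounds this in a more sophisticated sampler). The one-dimensional standard-normal posterior and the step size are arbitrary choices.

        import math
        import random

        def metropolis_samples(log_post, x0=0.0, n=2000, step=0.5):
            # Random-walk Metropolis: each proposal is a local move from the
            # current hypothesis, so successive samples are serially correlated.
            x, chain = x0, []
            for _ in range(n):
                proposal = x + random.gauss(0.0, step)
                # Accept with probability min(1, posterior ratio).
                if math.log(random.random()) < log_post(proposal) - log_post(x):
                    x = proposal
                chain.append(x)
            return chain

        random.seed(3)
        log_post = lambda x: -0.5 * x * x   # assumed illustrative posterior

        chain = metropolis_samples(log_post)

        # Lag-1 autocorrelation of the chain is typically well above zero,
        # mirroring the serially correlated hypothesis generation described above.
        mean = sum(chain) / len(chain)
        num = sum((a - mean) * (b - mean) for a, b in zip(chain, chain[1:]))
        den = sum((a - mean) ** 2 for a in chain)
        print(num / den)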

    Human behavior in the context of low-probability high-impact events

    Events with very low a priori probability but very high impact shape our lives to a significant degree, on an individual as well as a global level. Unfortunately, people have difficulty understanding and processing the prospects of such events, leading to idiosyncratic behavior. In this article I summarize the main findings regarding human behavior in the context of low-probability high-impact events and identify the main sources of bias and other idiosyncrasies, specifically: [1] ignorance of critical events due to biased information search, [2] a false sense of security due to reinforcement learning and reliance on small samples, [3] biased evaluation of likelihood due to mental availability and affective content, and [4] inaccurate interpretation of risks due to the format by which they are communicated. I further suggest ways to mitigate these problems and areas where additional research is needed. Lastly, I emphasize that, in order to create useful interventions, more research is needed on the interplay and dynamics of these effects, as well as research based on practical rather than laboratory contexts.
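
    As a hedged illustration of point [2] above (not taken from the article), a short simulation shows why small personal samples foster a false sense of security: a rare event with an assumed probability of 0.01 is usually never encountered at all in a sample of 20 experiences, so experience alone suggests it "never happens".

        import random

        random.seed(0)
        p_rare = 0.01      # assumed probability of the high-impact event
        sample_size = 20   # a small personal sample of experiences
        n_people = 10_000

        # Fraction of simulated "people" whose sample contains no occurrence
        # of the rare event; analytically (1 - 0.01) ** 20 is about 0.82.
        never_saw_it = sum(
            all(random.random() >= p_rare for _ in range(sample_size))
            for _ in range(n_people)
        ) / n_people

        print(never_saw_it)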

    The Neglected Importance of Auxiliary Assumptions when Applying Probability Theory

    Although probability theory defines how probability measures are expressed and used, it is agnostic to where these measures come from. To apply probability theory, one must therefore make a number of auxiliary assumptions regarding the assignment and interpretation of probability. In this paper I demonstrate that these assumptions can lead to radically different conclusions that are nevertheless mathematically and philosophically coherent. I further argue that behavioral science, as a rule, does not take these assumptions into consideration, but rather conflates the conceptual interpretation of probability with its mathematical application. This creates the implicit assumption that there is one, and only one, way of correctly applying probability theory to any given situation, when in reality probability theory can usually be applied in a number of different ways, all equally correct from both a mathematical and a philosophical perspective. In order for behavioral science to progress, these auxiliary assumptions must be taken into consideration and made the subject of research in their own right.
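
    One concrete way to see how auxiliary assumptions change conclusions while every option stays mathematically coherent (an illustration of my own, not an example from the paper): the same marginal probabilities are compatible with very different joint probabilities depending on the dependence assumption one adds.

        # Two events with fixed marginal probabilities.
        p_a, p_b = 0.6, 0.5

        # Auxiliary assumption 1: independence.
        joint_independent = p_a * p_b                      # 0.30

        # Auxiliary assumption 2: maximal positive dependence (upper Fréchet bound).
        joint_comonotone = min(p_a, p_b)                   # 0.50

        # Auxiliary assumption 3: maximal negative dependence (lower Fréchet bound).
        joint_countermonotone = max(p_a + p_b - 1.0, 0.0)  # 0.10

        # All three joint probabilities are consistent with the same marginals;
        # which is "correct" depends entirely on the auxiliary assumption.
        print(joint_independent, joint_comonotone, joint_countermonotone)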

    The Cognitive Basis of Joint Probability Judgments: Processes, Ecology, and Adaption

    When navigating an uncertain world, it is often necessary to judge the probability of a conjunction of events, that is, their joint probability. The subject of this thesis is how people infer joint probabilities from probabilities of individual events. Study I explored such joint probability judgment tasks in conditions with independent events and conditions with systematic risk that could be inferred through feedback. Results indicated that participants tended to approach the tasks using additive combinations of the individual probabilities, but switched to multiplication (or, to a lesser extent, exemplar memory) when events were independent and additive strategies therefore were less accurate. Consequently, participants were initially more accurate in the task with high systematic risk, despite that task being more complex from the perspective of probability theory. Study II simulated the performance of models of joint probability judgment in tasks based both on computer-generated data and real-world data sets, to evaluate which cognitive processes are accurate in which ecological contexts. Models used in Study I, as well as other models inspired by current research, were explored. The results confirmed that, by virtue of their robustness, additive models are reasonable general-purpose algorithms, although when one is familiar with the task it is preferable to switch to other strategies more specifically adapted to it. After Study I found that people adapt strategy choice according to dependence between events and Study II confirmed that these adaptations are justified in terms of accuracy, Study III investigated whether adapting to stochastic dependence implied thinking according to stochastic principles. Results indicated that this was not the case; instead, participants worked according to the weak assumption that events were independent, regardless of the actual state of the world. In conclusion, this thesis demonstrates that people generally do not combine individual probabilities into joint probability judgments in ways consistent with the basic principles of probability theory, nor do they think of the task in such terms, but neither does there appear to be much reason to do so. Rather, simpler heuristics can often yield judgments that are equally accurate or more so.
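
    As a hedged sketch of the kind of comparison Study II describes (the dependence structure, the averaging heuristic, and all numbers below are my own assumptions, not the thesis's actual models or data), the snippet contrasts a multiplicative rule with a simple additive (averaging) heuristic for two events, once when the events are independent and once when they are strongly dependent.

        import random

        def simulate_joint(p_a, p_b, dependence, n=100_000):
            # With probability `dependence` the second event simply copies the
            # first (strong positive dependence); otherwise it is drawn
            # independently. Returns the simulated joint probability.
            hits = 0
            for _ in range(n):
                a = random.random() < p_a
                b = a if random.random() < dependence else (random.random() < p_b)
                hits += a and b
            return hits / n

        random.seed(2)
        p_a, p_b = 0.6, 0.5

        for dependence in (0.0, 0.9):
            true_joint = simulate_joint(p_a, p_b, dependence)
            multiplicative = p_a * p_b        # normative under independence
            additive = (p_a + p_b) / 2        # a simple averaging heuristic
            # Under independence the multiplicative rule is accurate (0.30) and the
            # average overestimates; under strong dependence the average is close
            # to the true joint probability while multiplication underestimates.
            print(dependence, round(true_joint, 3), multiplicative, additive)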
