
    The Bayesian sampler: generic Bayesian inference causes incoherence in human probability judgments

    Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. Perhaps, however, the brain does not represent probabilities explicitly but instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in two new experiments that this model better captures mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
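
    The model's core estimate is simple enough to sketch. Below is a minimal illustration in Python (our sketch, not the authors' code; the function names and the choice of a symmetric Beta(β, β) prior with β = 1 are ours): the naïve estimate is the relative frequency of the event in a small mental sample, while the Bayesian sampler reports the posterior mean under the generic prior, shrinking small-sample estimates away from the extremes.

        import random

        def sample_hits(p_true, n, rng):
            # Count how often the event occurs in n mental samples.
            return sum(rng.random() < p_true for _ in range(n))

        def naive_estimate(hits, n):
            # Relative frequency: unbiased, but often exactly 0 or 1 when n is small.
            return hits / n

        def bayesian_sampler_estimate(hits, n, beta=1.0):
            # Posterior mean under a symmetric Beta(beta, beta) prior:
            # (hits + beta) / (n + 2 * beta) shrinks the estimate toward 1/2,
            # trading coherence for improved average accuracy.
            return (hits + beta) / (n + 2 * beta)

        rng = random.Random(0)
        n = 5  # a small "cognitive sample"
        hits = sample_hits(0.9, n, rng)
        print(naive_estimate(hits, n))             # 1.0 with this seed: an extreme value
        print(bayesian_sampler_estimate(hits, n))  # (5 + 1) / (5 + 2) ≈ 0.857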

    Probabilistic biases meet the Bayesian brain

    Bayesian cognitive science sees the mind as a spectacular probabilistic inference machine. But Judgment and Decision Making (JDM) research has spent half a century uncovering how dramatically and systematically people depart from rational norms. This paper outlines recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities, but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, and offer the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
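
    As a toy illustration of how sampling alone can generate a classic bias (our sketch, not a model from the paper): if the probability of a conjunction and the probability of one of its conjuncts are each estimated from separate small samples, sampling noise alone makes the conjunction estimate exceed the conjunct estimate on a sizable fraction of trials, a conjunction-fallacy-like incoherence.

        import random

        rng = random.Random(1)

        def estimate(p_true, n):
            # Relative-frequency estimate of an event from n independent samples.
            return sum(rng.random() < p_true for _ in range(n)) / n

        # True probabilities: a conjunction can never exceed its conjunct.
        p_a, p_a_and_b = 0.4, 0.3
        n, trials = 5, 10_000

        fallacies = sum(estimate(p_a_and_b, n) > estimate(p_a, n) for _ in range(trials))
        print(f"conjunction 'fallacy' rate: {fallacies / trials:.2%}")  # roughly a quarter of trials with these settings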

    There Is No Pure Empirical Reasoning

    The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
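
    The probabilistic core of the argument can be made explicit with Bayes' theorem (our gloss of the abstract's claim):

        P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

    Without substantive constraints on the prior P(H) and the likelihoods P(E | H) and P(E | ¬H), the right-hand side can be made to take any value in [0, 1] for the same premises E, which is just the claim that all possible degrees of belief in the conclusion are left open.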

    Literal Perceptual Inference

    In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it is defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.

    Sampling as a resource-rational constraint

    Resource rationality is useful for choosing between models with the same cognitive constraints, but cannot settle fundamental disagreements about what those constraints are. We argue that sampling is an especially compelling constraint: optimizing the accumulation of evidence or hypotheses minimizes the cost of time, and there are well-established models for doing so that have had tremendous success in explaining human behavior.
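
    A standard example of such a model is Wald's sequential probability ratio test, which minimizes the expected number of samples (and hence time) for given error rates. A minimal sketch, with illustrative parameters of our choosing:

        import math
        import random

        def sprt(p0, p1, alpha=0.05, beta=0.05, p_true=0.7, seed=2):
            # Accumulate log-likelihood evidence one Bernoulli sample at a time,
            # stopping as soon as either decision threshold is crossed.
            rng = random.Random(seed)
            upper = math.log((1 - beta) / alpha)  # cross upward: accept H1
            lower = math.log(beta / (1 - alpha))  # cross downward: accept H0
            llr, n = 0.0, 0
            while lower < llr < upper:
                x = rng.random() < p_true         # one more sample, one more unit of time
                llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
                n += 1
            return ("H1" if llr >= upper else "H0"), n

        decision, samples = sprt(p0=0.5, p1=0.7)
        print(decision, samples)  # typically far fewer samples than a fixed-size test needs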

    Explanatory Judgment, Probability, and Abductive Inference

    Abductive reasoning assigns special status to the explanatory power of a hypothesis. But how do people make explanatory judgments? Our study clarifies this issue by asking: (i) How does the explanatory power of a hypothesis cohere with other cognitive factors? (ii) How does probabilistic information affect explanatory judgments? In order to answer these questions, we conducted an experiment with 671 participants. Their task was to make judgments about a potentially explanatory hypothesis and its cognitive virtues. In the responses, we isolated three constructs: Explanatory Value, Rational Acceptability, and Entailment. Explanatory judgments strongly cohered with judgments of causal relevance and with a sense of understanding. Furthermore, we found that Explanatory Value was sensitive to manipulations of statistical relevance relations between hypothesis and evidence, but not to explicit information about the prior probability of the hypothesis. These results indicate that probabilistic information about statistical relevance is a strong determinant of Explanatory Value. More generally, our study suggests that abductive and probabilistic reasoning are two distinct modes of inference.
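
    For reference, "statistical relevance" here is the standard probabilistic notion (our gloss, not the authors' formulation): evidence E is positively relevant to hypothesis H exactly when conditioning raises the probability,

        P(E \mid H) > P(E) \quad\Longleftrightarrow\quad P(H \mid E) > P(H),

    so manipulating relevance changes how strongly the evidence bears on the hypothesis without fixing the hypothesis's prior probability.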

    Towards an epistemic theory of probability.

    The main concern of this thesis is to develop an epistemic conception of probability. In chapter one we look at Ramsey's work. In addition to his claim that the axioms of probability are laws of consistency for partial beliefs, we focus attention on his view that the reasonableness of our probability statements does not consist merely in such coherence, but is to be assessed through the vindication of the habits which give rise to them. In chapter two we examine de Finetti's account, and compare it with Ramsey's. One significant point of divergence is de Finetti's claim that coherence is the only valid form of appraisal for probability statements. His arguments for this position depend heavily on the implementation of a Bayesian model for belief change; we argue that such an approach fails to give a satisfactory account of the relation between probabilities and objective facts. In chapter three we stake out the ground for our own positive proposals: an account which is non-objective in so far as it does not require the postulation of probabilistic facts, but non-subjective in the sense that probability statements are open to objective forms of appraisal. We suggest that a certain class of probability statements are best interpreted as recommendations of partial belief, these being measurable by the betting quotients that one judges to be fair. Moreover, we argue that these probability statements are open to three main forms of appraisal (each quantifiable through the use of proper scoring rules), namely: (i) Coherence, (ii) Calibration, and (iii) Refinement. The latter two forms of appraisal are applicable both in an ex ante sense (relative to the information known by the forecaster) and in an ex post one (relative to the results of the events forecast). In chapters four and five we consider certain problems which confront theories of partial belief; in particular, (1) difficulties surrounding the justification of the rule to maximise one's information, and (2) problems with the ascription of probabilities to mathematical propositions. Both of these issues seem resolvable: the first through the principle of maximising subjective expected utility (SEU), and the second either by amending the axioms of probability, or by making use of the notion that probabilities are appraisable via scoring rules. There do remain, however, various difficulties with SEU, in particular with respect to its application in real-life situations. These are discussed, but no final conclusion is reached, except that an epistemic theory such as ours is not undermined by the inapplicability of SEU in certain situations.
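
    The kind of appraisal the thesis quantifies via proper scoring rules is easy to illustrate with the Brier score (our example, not drawn from the thesis):

        def brier_score(forecasts, outcomes):
            # Mean squared error of probability forecasts against 0/1 outcomes.
            # A proper scoring rule: its expected value is minimized by reporting
            # one's true degree of belief.
            return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

        # A calibrated forecaster: events forecast at 0.8 occur about 80% of the time.
        outcomes = [1, 1, 1, 1, 0]
        print(brier_score([0.8] * 5, outcomes))  # 0.16
        # An extreme forecaster on the same outcomes scores worse on average.
        print(brier_score([1.0] * 5, outcomes))  # 0.20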

    Of Miracles and Evidential Probability: Hume’s “Abject Failure” Vindicated

    This paper defends David Hume's "Of Miracles" from John Earman's (2000) Bayesian attack by showing that Earman misrepresents Hume's argument against believing in miracles and misunderstands Hume's epistemology of probable belief. It argues, moreover, that Hume's account of evidence is fundamentally non-mathematical and thus cannot be properly represented in a Bayesian framework. Hume's account of probability is shown to be consistent with a long and laudable tradition of evidential reasoning going back to ancient Roman law.