
    Beliefs about the unobserved

    What should one believe about the unobserved? My thesis is a collection of four papers, each of which addresses this question. In the first paper, “Why Subjectivism?”, I consider the standing of a position called radical subjective Bayesianism, or subjectivism. The view is composed of two claims—that agents ought to be logically omniscient, and that there is no further norm of rationality—both of which are subject to seemingly conclusive objections. In this paper, I seek, if not to rehabilitate subjectivism, at least to show its critics what is attractive about the position. I show that the critics of subjectivism assume a particular view about justification, which I call the telic view, and that there exists an alternative view, the poric view, on which subjectivism is appealing. I conclude by noting that the tension between telic and poric conceptions of justification might not be an easy one to resolve. In the second paper, “Bayesianism and the Problem of Induction”, I examine and reject the two existing Bayesian takes on Hume’s problem of induction, and propose my own in their stead. In the third paper, “The Nature of Awareness Growth”, I consider the question of how to model an agent who comes to entertain a new proposition about the unobserved. I argue that, contrary to what is typically thought, awareness growth occurs by refinement of the algebra, on both the poric and the telic pictures of Bayesianism. Finally, in the fourth paper, “Objectivity and the Method of Arbitrary Functions”, I consider whether, as is widely believed, a mathematical theorem known as the method of arbitrary functions can establish that it is in virtue of systems’ dynamics that (some) scientific probabilities are objective. I differentiate between three ways in which authors have claimed that dynamics objectivise probabilities (they putatively render them: ontically interpreted, objectively evaluable, and high-level robust); and I argue that the method of arbitrary functions can establish no such claims, thus dampening the hope that constraints on what to believe about the unobserved can emerge from dynamical facts in the world.
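
    To make the final claim concrete: the method of arbitrary functions (in the tradition of Poincaré's analysis of the roulette wheel) turns on the fact that, for suitable dynamics, the outcome probability is nearly the same for a wide class of smooth initial distributions. The sketch below only illustrates that mathematical point, using a toy wheel and made-up initial densities; it is not code from, and does not reproduce any argument of, the thesis.

```python
# Toy illustration of the method of arbitrary functions: a wheel with many
# thin, alternately coloured sectors is spun with a random initial speed.
# As the sectors get thinner, the probability of landing on 'red' approaches
# 1/2 for very different choices of the initial-speed density.
import numpy as np

rng = np.random.default_rng(0)

def p_red(initial_speeds, n_sectors, spin_time=0.5):
    """Fraction of spins ending on a 'red' sector.

    The wheel turns through speed * spin_time revolutions; sectors alternate
    red/black, so the outcome is the parity of the sector reached."""
    revolutions = initial_speeds * spin_time
    return np.mean(np.floor(revolutions * n_sectors) % 2 == 0)

n = 100_000
speed_densities = {                     # three 'arbitrary' initial densities
    "uniform":     rng.uniform(1.0, 3.0, n),
    "exponential": 1.0 + rng.exponential(1.0, n),
    "lognormal":   rng.lognormal(0.5, 0.3, n),
}

for sectors in (2, 10, 100):
    row = {name: round(p_red(v, sectors), 3) for name, v in speed_densities.items()}
    print(sectors, row)
# With 2 coarse sectors the three densities disagree; with 100 thin sectors
# they all give roughly 0.5 - the dynamics wash out the initial distribution.
```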

    The Bayesian and the realist: Friends or foes?

    The main purpose of my thesis is to bring together two seemingly unrelated topics in the philosophy of science and extract the philosophical consequences of this exercise. The first topic is Bayesianism, a well-developed and popular probabilistic theory of confirmation. The second topic is Scientific Realism, the thesis that we have good reason to believe that our best scientific theories are (approximately) true. It seems natural to assume that a sophisticated probabilistic theory of confirmation is the most appropriate framework for treating the issue of scientific realism. Despite this intuition, however, the bulk of the literature is conspicuous for its failure to apply the Bayesian apparatus when discussing scientific realism. Furthermore, on the rare occasions that this has been attempted, the outcomes have been strikingly negative. In my thesis I systematise and critically examine the segmented literature in order to investigate whether, and how, Bayesianism and scientific realism can be reconciled. I argue for the following claims: 1) that those realists who claim that Bayesians lack a proper notion of 'theory acceptance' have misunderstood the nature of Bayesianism as a reductive account of 'theory acceptance'; 2) that it is possible to reconstruct most of the significant alternative positions involved in the realism debate using this new account of 'theory acceptance'; 3) that Bayesianism is best seen as a general framework within which the standard informal arguments for and against realism become transparent, thus greatly clarifying the force of the realist argument; 4) that a Bayesian reconstruction does not commit one to any particular position as ultimately the right one; and 5) that this result does not amount to succumbing to relativism. I conclude that the attempt to apply Bayesianism to the realism issue enjoys a considerable amount of success, though not enough to resolve the dispute definitively.

    To Thine Own Self Be Untrue: A Diagnosis of the Cable Guy Paradox

    Hájek has recently presented the following paradox. You are certain that a cable guy will visit you tomorrow between 8 a.m. and 4 p.m., but you have no further information about when, and you agree to a bet on whether he will come in the morning interval (8, 12] or in the afternoon interval (12, 4). At first, you have no reason to prefer one possibility over the other. But you soon realise that, simply because time elapses, there will definitely be a future time at which you will (rationally) assign a higher probability to an afternoon arrival than to a morning one. You are also aware that there may never be a future time at which you will (rationally) assign a higher probability to a morning arrival than to an afternoon one. It would therefore appear that you ought to bet on an afternoon arrival. The paradox rests on the apparent incompatibility of the principle of expected utility with prima facie plausible principles of diachronic rationality. Hájek concludes that the latter are false, but does not provide a clear diagnosis of why. We endeavour to further our understanding of the paradox by providing such a diagnosis.
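
    The asymmetry driving the paradox can be made explicit with a small calculation. On the natural uniform-prior reading of the setup (an assumption made here for illustration, not the authors' own formalism), your conditional credence in a morning arrival, at any time after 8 a.m. at which the cable guy has not yet arrived, is always below one half:

```python
# Minimal sketch of the probabilistic asymmetry, assuming a uniform prior over
# arrival times between 8:00 and 16:00 (24-hour clock).  At any time t after 8
# with no arrival yet, the remaining morning window (12 - t hours) is shorter
# than the full afternoon window (4 hours).

def credence_morning(t):
    """P(arrival in the morning | no arrival by time t), for 8 < t < 12."""
    remaining_morning = 12 - t
    afternoon = 4                       # 12:00 to 16:00
    return remaining_morning / (remaining_morning + afternoon)

for t in (8.25, 9, 10, 11, 11.9):
    print(f"t = {t:5}:  P(morning arrival) = {credence_morning(t):.3f}")
# Every value is below 0.5, and you know in advance that some such time will
# come - whereas a time favouring a morning arrival is not guaranteed.
```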

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true and to 0 otherwise under general conditions. Comment: the confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest, and the derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
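
    The convergence claim in the closing sentence can be illustrated with a deliberately simple stand-in for the paper's machinery: the textbook confidence distribution for a normal mean with known variance (an assumed example; the paper itself works with pairs of frequentist posteriors derived from general confidence set estimators).

```python
# Illustrative sketch (not the paper's code): the confidence level assigned to
# an interval hypothesis H: theta in [a, b] by the N(xbar, sigma^2/n)
# confidence distribution converges to 1 when H is true and to 0 when it is
# false as the sample size grows - unlike a p-value.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def confidence_level(data, a, b, sigma=1.0):
    """Confidence assigned to H: theta in [a, b]."""
    xbar = np.mean(data)
    se = sigma / np.sqrt(len(data))
    return norm.cdf(b, xbar, se) - norm.cdf(a, xbar, se)

theta_true = 0.3
for n in (10, 100, 1_000, 10_000):
    data = rng.normal(theta_true, 1.0, n)
    print(n,
          round(confidence_level(data, 0.0, 0.5), 3),   # true hypothesis
          round(confidence_level(data, 0.5, 1.0), 3))   # false hypothesis
# The first column tends to 1 and the second to 0, the sense in which the
# confidence level estimates the indicator of hypothesis truth.
```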

    Making decisions with evidential probability and objective Bayesian calibration inductive logics

    Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short run? Under what circumstances do they do better or worse? What is their performance relative to traditional Bayesianism? In this article, we develop an agent-based model of a classic binomial decision problem, including players based on variations of Evidential Probability and Objective Bayesianism. We compare the performances of these players, including against a benchmark player who uses standard Bayesian inductive logic. We find that the calibrated players can match the performance of the Bayesian player, but only with particular acceptance thresholds and decision rules. Among other points, our discussion raises some challenges for characterising “cautious” reasoning using imprecise probabilities. Thus, we demonstrate a new way of systematically comparing imprecise probability systems, and we conclude that calibration inductive logics are surprisingly promising for making decisions.
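
    The abstract does not spell out the model, so the following is only an illustrative sketch of the kind of comparison described: a repeated binomial betting problem in which a standard Bayesian player acts on a posterior mean, while a cautious, calibration-style player acts only once an interval estimate of the frequency clears the decision threshold. The priors, payoffs and acceptance rule below are assumptions made for this sketch, not the paper's specification.

```python
# Toy comparison on a repeated binomial decision problem.  Each round, a
# player may accept a bet paying +1 if the next Bernoulli(p) trial succeeds
# and -1 otherwise; both players then observe the outcome and update.
import numpy as np

rng = np.random.default_rng(2)

def bayes_decision(successes, trials):
    """Bayesian player: Beta(1,1) prior; accept iff the posterior mean of p
    exceeds 1/2 (positive expected payoff)."""
    return (successes + 1) / (trials + 2) > 0.5

def calibrated_decision(successes, trials, z=1.96):
    """Cautious, calibration-style player: accept only if the Wilson lower
    confidence bound for p exceeds 1/2."""
    if trials == 0:
        return False
    p_hat = successes / trials
    centre = p_hat + z**2 / (2 * trials)
    margin = z * np.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return (centre - margin) / (1 + z**2 / trials) > 0.5

def run(p, n_rounds=200, n_agents=200):
    """Average total payoff per player over many independent runs."""
    payoffs = {"bayes": 0.0, "calibrated": 0.0}
    for _ in range(n_agents):
        s = t = 0
        for _ in range(n_rounds):
            outcome = rng.random() < p          # the next Bernoulli(p) trial
            if bayes_decision(s, t):
                payoffs["bayes"] += 1 if outcome else -1
            if calibrated_decision(s, t):
                payoffs["calibrated"] += 1 if outcome else -1
            s += outcome                        # both players see the same data
            t += 1
    return {k: round(v / n_agents, 1) for k, v in payoffs.items()}

for p in (0.4, 0.55, 0.7):
    print(p, run(p))
```

    With these toy settings the cautious player avoids losses when the true frequency is below one half but forgoes gains until the evidence is strong, which is the kind of short-run trade-off, sensitive to acceptance thresholds and decision rules, that the paper's agent-based comparison investigates.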

    Permission to believe: descriptive and prescriptive beliefs in the Clifford/James debate

    William Clifford’s ‘The Ethics of Belief’ proposes an ‘evidence principle’: ‘…it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ (1877, 1879: 186). Its universal, absolutist language seems to hide something fundamentally correct. We first argue for excluding prescriptive beliefs, and then consider further apparent counter-examples, culminating in a more restricted, qualified wording: If anything is morally wrong, then it is morally wrong within the category of descriptive belief to believe anything knowingly or irresponsibly on insufficient evidence in the absence of any conflicting and overriding moral imperative except when the unjustified believing is outside the believer’s voluntary control. We test this against William James’s counter-claim for qualified legitimate overbelief (‘The Will To Believe’, 1896, 2000), and suggest additional benefits of adopting an evidence principle in relation to the structured combinations of descriptive and prescriptive components common to religious belief. In search of criteria for ‘sufficient’ and ‘insufficient’ evidence, we then consider an ‘enriched’ Bayesianism within normative decision theory, which helps explain good doxastic practice under risk. ‘Lottery paradox’ cases, however, undermine the idea of an evidence threshold: we would say we justifiably believe one hypothesis while saying another, at the same credence level, is only very probably true. We consider approaches to ‘pragmatic encroachment’, suggesting a parallel between ‘practical interest’ and the ‘personal utility’ denominating the stakes of the imaginary gambles in terms of which Bayesian credences can be illustrated. But personal utility seems inappropriately agent-relative for a moral principle. We return to Clifford’s conception of our shared responsibilities to our shared epistemic asset. This ‘practical interest we ought to have’ offers an explanation for our duty, as members of an epistemic community, to get and evaluate evidence, and for the ‘utility’ stakes of Bayesian imaginary gambles. Helped by Edward Craig’s (1990, 1999) ‘state-of-nature’ theory of knowledge, it provides a minimum threshold to avoid insufficient evidence and suggests an aspirational criterion of sufficient evidence: wherever possible, a level of evidence sufficient to support the level of justification required to be a good informant, whatever the particular circumstances of the inquirer.
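
    The lottery-paradox worry about a fixed evidence threshold can be put numerically; the figures below are ours, chosen purely for illustration.

```python
# Toy illustration of the lottery paradox's pressure on a fixed evidence
# threshold for justified belief (numbers are illustrative only).
tickets = 1000
threshold = 0.99                    # suppose 'sufficient evidence' = credence >= 0.99

credence_ticket_i_loses = 1 - 1 / tickets          # 0.999, for every single ticket
print(credence_ticket_i_loses >= threshold)        # True: the threshold licenses
                                                   # believing 'ticket i loses' for each i
# Yet those beliefs jointly entail 'no ticket wins', to which you give
# credence 0 - so the threshold licenses a jointly inconsistent belief set.
```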

    Scientific uncertainty and decision making

    It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. For instance, flood defences must be designed before we know the future distribution of flood events. It is standardly assumed that probability theory offers the best model of uncertain information. I think there are reasons to be sceptical of this claim. I criticise some arguments for the claim that probability theory is the only adequate model of uncertainty; in particular, I critique Dutch book arguments, representation theorems, and accuracy-based arguments. Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation. I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature, including Levi’s E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult. I finish with a case study of scientific uncertainty. Climate modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times. Indeed, scientific uncertainty is multi-dimensional, and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas of science. I claim that this requires that we look for a better framework for modelling uncertainty.
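
    For concreteness, here is a toy sketch of the two kinds of decision rule mentioned: Levi's E-admissibility and the more permissive maximality rule associated with Walley, applied to a small flood-defence style choice. The acts, utilities and credal set are invented for illustration and are not taken from the thesis.

```python
# Two decision rules for imprecise probabilities over a finite credal set.
# An act is E-admissible (Levi) if it maximises expected utility under at
# least one member of the credal set; it is maximal (the more permissive rule
# associated with Walley) if no single act has higher expected utility under
# every member.

utilities = {                           # utilities u[act][state]
    "do_nothing":    {"flood": -100, "no_flood":   0},
    "modest_levee":  {"flood":  -30, "no_flood": -10},
    "buy_insurance": {"flood":  -60, "no_flood":  -6},
    "major_defence": {"flood":  -15, "no_flood": -25},
}

credal_set = [0.05, 0.15, 0.30]         # admissible probabilities of a flood

def eu(act, p_flood):
    u = utilities[act]
    return p_flood * u["flood"] + (1 - p_flood) * u["no_flood"]

# E-admissibility: optimal under at least one probability in the credal set.
e_admissible = {max(utilities, key=lambda a: eu(a, p)) for p in credal_set}

# Maximality: not strictly beaten by one act under every probability.
def dominated(act):
    return any(
        all(eu(other, p) > eu(act, p) for p in credal_set)
        for other in utilities if other != act
    )

maximal = {a for a in utilities if not dominated(a)}

print("E-admissible:", sorted(e_admissible))   # do_nothing, modest_levee
print("Maximal:     ", sorted(maximal))        # adds buy_insurance
```

    In this toy case the rules come apart in the expected direction: 'buy_insurance' is never expectedly best under any single member of the credal set, so it is not E-admissible, yet no single act beats it under every member, so it remains maximal; 'major_defence' is excluded by both rules.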

    Permission to believe: Descriptive and prescriptive beliefs in the Clifford/James debate

    This thesis modifies the wording of William Clifford’s 1877 evidence principle (that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’) to propose an explicitly moral principle, restricted to descriptive beliefs (about what is or is not the case) and excluding prescriptive beliefs (about what ought or ought not to be the case). It considers potential counter-examples, particularly William James’s 1896 defence of religious belief; and concludes that the modified principle survives unscathed. It then searches for suitable criteria for ‘sufficient’ and ‘insufficient’ evidence, first within a Bayesian framework enriched by pragmatic considerations, and finally by returning to Clifford’s conception of our shared responsibilities to our shared epistemic asset. Combined with elements of Edward Craig’s (1990, 1999) ‘state-of-nature’ theory of knowledge, this gets us to a minimum threshold to avoid insufficient evidence and an aspirational criterion of sufficient evidence.