
    Belief Revision for Growing Awareness

    The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case in which we learn that some proposition we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or concepts of which one was previously unaware? The economists Karni and Vierø (2013, 2015) make a proposal in this spirit. Philosophers have adopted effectively the same rule: revision in response to growing awareness should not affect the relative probabilities of propositions in one’s ‘old’ epistemic state. The rule is compelling, but only under the assumptions that its advocates introduce. It is not a general requirement of rationality, or so we argue. We provide informal counterexamples. And we show that, when awareness grows, the boundary between one’s ‘old’ and ‘new’ epistemic commitments is blurred. Accordingly, there is no general notion of conservative change in this setting.
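    The rule at issue, Karni and Vierø’s so-called reverse Bayesianism, can be sketched in a few lines. The sketch below is illustrative only: the propositions, the credence values, and the helper name `expand_awareness` are our assumptions, not drawn from the abstract above.

    ```python
    def expand_awareness(old_credences, new_prop, new_mass):
        """Reverse-Bayesian revision for growing awareness: give the newly
        entertained proposition credence new_mass and rescale the old
        credences by (1 - new_mass), so that the *relative* probabilities
        of the old propositions are preserved."""
        revised = {prop: p * (1 - new_mass) for prop, p in old_credences.items()}
        revised[new_prop] = new_mass
        return revised

    old = {"rain": 0.6, "snow": 0.3, "clear": 0.1}
    new = expand_awareness(old, "hail", 0.2)
    # The ratio of credences among the old propositions is unchanged:
    # new["rain"] / new["snow"] still equals 0.6 / 0.3.
    ```

    The paper’s informal counterexamples target precisely the claim that such ratio preservation is a general requirement of rationality.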

    Beliefs about the unobserved

    What should one believe about the unobserved? My thesis is a collection of four papers, each of which addresses this question. In the first paper, “Why Subjectivism?”, I consider the standing of a position called radical subjective Bayesianism, or subjectivism. The view is composed of two claims—that agents ought to be logically omniscient, and that there is no further norm of rationality—both of which are subject to seemingly conclusive objections. In this paper, I seek, if not to rehabilitate subjectivism, at least to show its critic what is attractive about the position. I show that the critics of subjectivism assume a particular view about justification, which I call the telic view, and that there exists an alternative view, the poric view, on which subjectivism is appealing. I conclude by noting that the tension between telic and poric conceptions of justification might not be an easy one to resolve. In the second paper, “Bayesianism and the Problem of Induction”, I examine and reject the two existing Bayesian takes on Hume’s problem of induction, and propose my own in their stead. In the third paper, “The Nature of Awareness Growth”, I consider the question of how to model an agent who comes to entertain a new proposition about the unobserved. I argue that, contrary to what is typically thought, awareness growth occurs by refinement of the algebra, on both the poric and the telic pictures of Bayesianism. Finally, in the fourth paper, “Objectivity and the Method of Arbitrary Functions”, I consider whether, as is widely believed, a mathematical theorem known as the method of arbitrary functions can establish that it is in virtue of systems’ dynamics that (some) scientific probabilities are objective. I differentiate between three ways in which authors have claimed that dynamics objectivise probabilities (they putatively render them: ontically interpreted, objectively evaluable, and high-level robust); and I argue that the method of arbitrary functions can establish no such claims, thus dampening the hope that constraints on what to believe about the unobserved can emerge from dynamical facts in the world.

    Scientific uncertainty and decision making

    It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. For instance, flood defences must be designed before we know the future distribution of flood events. It is standardly assumed that probability theory offers the best model of uncertain information. I think there are reasons to be sceptical of this claim. I criticise some arguments for the claim that probability theory is the only adequate model of uncertainty. In particular, I critique Dutch book arguments, representation theorems, and accuracy-based arguments. Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation. I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature, including Levi’s E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult. I finish with a case study of scientific uncertainty. Climate modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times. Indeed, scientific uncertainty is multi-dimensional, and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas of science. I claim that this requires that we look for a better framework for modelling uncertainty.
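    Levi’s E-admissibility, mentioned above, has a simple computational statement: given a credal set (a set of probability measures), an act is E-admissible iff it maximises expected utility under at least one measure in the set. The sketch below is a minimal illustration with toy utilities and a two-member credal set of our own devising; none of the numbers come from the thesis.

    ```python
    def e_admissible(acts, credal_set):
        """Return the E-admissible acts.
        acts: {act name: list of utilities, one per state}
        credal_set: list of probability distributions over the states."""
        admissible = set()
        for p in credal_set:
            # Expected utility of each act under this particular measure.
            eus = {a: sum(pi * u for pi, u in zip(p, us)) for a, us in acts.items()}
            best = max(eus.values())
            admissible |= {a for a, eu in eus.items() if abs(eu - best) < 1e-12}
        return admissible

    # Hypothetical flood-defence example: states are (no flood, flood).
    acts = {"build_defence": [-2, 8], "do_nothing": [0, 0]}
    credal = [[0.9, 0.1], [0.5, 0.5]]
    admissible_acts = e_admissible(acts, credal)
    # Each act is best under some measure, so both are E-admissible --
    # the rule leaves the choice unresolved, illustrating why decision
    # making under severe uncertainty is difficult.
    ```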

    Expected utility theory, Jeffrey’s decision theory, and the paradoxes

    From Richard Bradley’s book, Decision Theory with a Human Face, we have selected two themes for discussion. The first is the Bolker-Jeffrey theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley’s foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley’s conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.

    Policymaking under scientific uncertainty

    Policymakers who seek to make scientifically informed decisions are constantly confronted by scientific uncertainty and expert disagreement. This thesis asks: how can policymakers rationally respond to expert disagreement and scientific uncertainty? This is a work of nonideal theory, which applies formal philosophical tools developed by ideal theorists to more realistic cases of policymaking under scientific uncertainty. I start with Bayesian approaches to expert testimony and the problem of expert disagreement, arguing that two popular approaches—supra-Bayesianism and the standard model of expert deference—are insufficient. I develop a novel model of expert deference and show how it deals with many of the problems raised for these approaches. I then turn to opinion pooling, a popular method for dealing with disagreement. I show that various theoretical motivations for pooling functions are irrelevant to realistic policymaking cases. This leads to a cautious recommendation of linear pooling. However, I then show that any pooling method relies on value judgements that are hidden in the selection of the scoring rule. My focus then narrows to a more specific case of scientific uncertainty: multiple models of the same system. I introduce a particular case study involving hurricane models developed to support insurance decision-making. I recapitulate my analysis of opinion pooling in the context of model ensembles, confirming that my hesitations apply. This motivates a shift of perspective, to viewing the problem as a decision theoretic one. I rework a recently developed ambiguity theory, called the confidence approach, to take input from model ensembles. I show how it facilitates the resolution of the policymaker’s problem in a way that avoids the issues encountered in previous chapters. This concludes my main study of the problem of expert disagreement. In the final chapter, I turn to methodological reflection. I argue that philosophers who employ the mathematical methods of the prior chapters are modelling. Employing results from the philosophy of scientific models, I develop the theory of normative modelling. I argue that it has important methodological conclusions for the practice of formal epistemology, ruling out popular moves such as searching for counterexamples.
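    Linear pooling, the method cautiously recommended above, takes the group credence in each proposition to be a weighted average of the experts’ credences. A minimal sketch, with hypothetical experts, propositions, and weights of our own choosing:

    ```python
    def linear_pool(expert_credences, weights):
        """Linear opinion pooling: the pooled credence in each proposition
        is the weights-weighted average of the experts' credences."""
        assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
        props = expert_credences[0].keys()
        return {q: sum(w * e[q] for w, e in zip(weights, expert_credences))
                for q in props}

    # Two hypothetical experts disagreeing about one proposition.
    experts = [{"warming_exceeds_2C": 0.8}, {"warming_exceeds_2C": 0.4}]
    pooled = linear_pool(experts, [0.5, 0.5])  # pooled credence is about 0.6
    ```

    Note that the weights themselves encode a value judgement about how much each expert counts, which is one face of the hidden-value-judgement worry raised above.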

    The Epistemic Value of Conceptualizing the Possible


    The double life of probability: A philosophical study of chance and credence

    This dissertation is a philosophical study of two concepts of probability and the relation between them: a subjective/personal concept called ‘credence’ and an objective/physical concept called ‘chance’. The dissertation introduces and puts to work various principles and conditions concerning the relations between chances and credences, between a priori and a posteriori chances, and between a priori and a posteriori credences. The main aim is to show that a study of these principles is a fruitful way of thinking about chances and credences. The secondary aim is to show how these principles can be combined with some established argumentative strategies, so as to shed light on both concepts of probability.

    A study of risk-aware program transformation

    In the trend towards tolerating hardware unreliability, accuracy is exchanged for cost savings. Running on less reliable machines, functionally correct code becomes risky, and one needs to know how risk propagates so as to mitigate it. Risk estimation, however, seems to live outside the average programmer’s technical competence and core practice. In this paper we propose that program design by source-to-source transformation be risk-aware, in the sense of making probabilistic faults visible and supporting equational reasoning about the probabilistic behaviour of programs caused by faults. This reasoning is carried out in a linear-algebra extension to the standard, à la Bird–de Moor algebra of programming. This paper studies, in particular, the propagation of faults across the standard program transformation techniques known as tupling and fusion, enabling the fault of the whole to be expressed in terms of the faults of its parts.
    Fundação para a Ciência e a Tecnologia, Portugal, under grant number BI1-2012 PTDC/EIA-CCO/122240/2010 UMINHO
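    Tupling and fusion are the two transformation techniques the paper tracks faults across. For readers unfamiliar with them, here is a minimal illustration in Python rather than in the paper’s linear-algebra formalism; the function names and examples are ours, not the paper’s.

    ```python
    # Tupling: compute two folds over the same list in a single traversal,
    # returning a pair, instead of walking the list twice.
    def sum_and_length(xs):
        s, n = 0, 0
        for x in xs:
            s, n = s + x, n + 1
        return s, n

    # Fusion: collapse a map followed by a sum into one traversal,
    # eliminating the intermediate list that map would build
    # (i.e. sum(map(square, xs)) fused into a single loop).
    def sum_of_squares(xs):
        total = 0
        for x in xs:
            total += x * x
        return total
    ```

    On unreliable hardware each elementary step may fail with some probability, and the paper’s contribution is to express how the fault probability of such a transformed whole derives from the faults of its parts.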