    Probabilities with Gaps and Gluts


    Policymaking under scientific uncertainty

    Policymakers who seek to make scientifically informed decisions are constantly confronted by scientific uncertainty and expert disagreement. This thesis asks: how can policymakers rationally respond to expert disagreement and scientific uncertainty? This is a work of nonideal theory, which applies formal philosophical tools developed by ideal theorists to more realistic cases of policymaking under scientific uncertainty. I start with Bayesian approaches to expert testimony and the problem of expert disagreement, arguing that two popular approaches, supra-Bayesianism and the standard model of expert deference, are insufficient. I develop a novel model of expert deference and show how it deals with many of the problems raised for those approaches. I then turn to opinion pooling, a popular method for dealing with disagreement. I show that various theoretical motivations for pooling functions are irrelevant to realistic policymaking cases. This leads to a cautious recommendation of linear pooling. However, I then show that any pooling method relies on value judgements that are hidden in the selection of the scoring rule. My focus then narrows to a more specific case of scientific uncertainty: multiple models of the same system. I introduce a case study involving hurricane models developed to support insurance decision-making. I recapitulate my analysis of opinion pooling in the context of model ensembles, confirming that my hesitations apply. This motivates a shift of perspective: viewing the problem as a decision-theoretic one. I rework a recently developed ambiguity theory, called the confidence approach, to take input from model ensembles. I show how it facilitates the resolution of the policymaker’s problem in a way that avoids the issues encountered in previous chapters. This concludes my main study of the problem of expert disagreement. In the final chapter, I turn to methodological reflection. I argue that philosophers who employ the mathematical methods of the prior chapters are modelling. Employing results from the philosophy of scientific models, I develop the theory of normative modelling. I argue that it has important methodological consequences for the practice of formal epistemology, ruling out popular moves such as searching for counterexamples.
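    For readers unfamiliar with the method the thesis cautiously recommends, linear pooling takes a weighted average of the experts' probability functions. A minimal sketch in Python; the experts, outcomes, and weights are illustrative placeholders, not values from the thesis:

```python
# Minimal sketch of linear opinion pooling: the group credence is a
# weighted average of the experts' probability functions.

def linear_pool(expert_probs, weights):
    """expert_probs: list of dicts mapping each outcome to a probability.
    weights: non-negative expert weights summing to 1."""
    outcomes = expert_probs[0].keys()
    return {o: sum(w * p[o] for w, p in zip(weights, expert_probs))
            for o in outcomes}

# Two hypothetical hurricane-model "experts" over the same outcome space.
expert_a = {"landfall": 0.30, "no_landfall": 0.70}
expert_b = {"landfall": 0.50, "no_landfall": 0.50}
print(linear_pool([expert_a, expert_b], weights=[0.6, 0.4]))
# -> landfall: 0.38, no_landfall: 0.62 (up to floating-point rounding)
```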

    Lüders' and quantum Jeffrey's rules as entropic projections

    We prove that the standard quantum mechanical description of a quantum state change due to measurement, given by Lüders' rules, is a special case of the constrained maximisation of a quantum relative entropy functional. This result is a quantum analogue of the derivation of the Bayes-Laplace rule as a special case of the constrained maximisation of relative entropy. The proof is provided for the Umegaki relative entropy of density operators over a Hilbert space as well as for the Araki relative entropy of normal states over a W*-algebra. We also introduce a quantum analogue of Jeffrey's rule, derive it in the same way as above, and discuss the meaning of these results for quantum Bayesianism.
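    For orientation, the classical template that this result generalises can be written out as follows; this is a sketch using the Kullback-Leibler form of relative entropy, and the abstract's "constrained maximisation" corresponds to the minimisation below under the opposite sign convention.

```latex
% Sketch of the classical analogue: among all distributions q satisfying
% the evidential constraint, the Bayes-Laplace posterior is the one
% closest to the prior p in relative entropy.
\[
  p(\,\cdot \mid E) \;=\; \arg\min_{q \,:\, q(E) = 1} D(q \,\|\, p),
  \qquad
  D(q \,\|\, p) \;=\; \sum_{x} q(x) \log \frac{q(x)}{p(x)}.
\]
% The weaker Jeffrey-style constraints q(E_i) = \lambda_i on a partition
% {E_i} yield q(x) = \sum_i \lambda_i \, p(x \mid E_i), i.e. Jeffrey's rule.
```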

    Imaging Uncertainty

    The technique of imaging was first introduced by Lewis (1976) in order to provide a novel account of the probability of conditional propositions. In the intervening years, imaging has been the object of significant interest in both AI and philosophy, and has come to be seen as a philosophically important approach to probabilistic updating and belief revision. In this paper, we consider the possibility of generalising imaging to deal with uncertain evidence and partial belief revision. In particular, we introduce a new logical criterion that any update rule should satisfy, and use it to evaluate a range of different approaches to generalising imaging to situations involving uncertain evidence. We show that none of the currently prevalent approaches to imaging allows for such a generalisation, although a lesser-known version of imaging, introduced by Joyce (2010), can be generalised in a way that mitigates these problems.
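    As a rough illustration of the standard Lewis-style operation the paper generalises: imaging on a proposition A moves each world's probability mass to its closest A-world, rather than renormalising as conditionalisation does. A minimal sketch, assuming finitely many worlds and a selection function that returns a unique closest A-world; the worlds and distances below are illustrative only:

```python
# Minimal sketch of Lewis-style imaging on a proposition A over finitely
# many worlds: each world's probability mass is transferred to its
# closest A-world (A-worlds keep their own mass).

def image(prior, A, closest):
    """prior: dict world -> probability; A: set of worlds where the
    proposition holds; closest(w, A): the A-world nearest to w,
    assumed unique here for simplicity."""
    posterior = {w: 0.0 for w in prior}
    for w, p in prior.items():
        target = w if w in A else closest(w, A)
        posterior[target] += p
    return posterior

# Toy model: worlds 0..3, A holds at worlds 0 and 3, and "closest" is
# measured by numerical distance.
prior = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
A = {0, 3}
closest = lambda w, A: min(A, key=lambda a: abs(a - w))
print(image(prior, A, closest))
# -> roughly {0: 0.3, 1: 0.0, 2: 0.0, 3: 0.7} (up to rounding)
```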

    Updating: A Psychologically Basic Situation of Probability Revision

    The Bayesian model has been used in psychology as the standard reference for the study of probability revision. In the first part of this paper we show that this traditional choice restricts the scope of the experimental investigation of revision to a stable universe. This situation is technically known as focusing. We argue that it is essential for a better understanding of human probability revision to consider another situation, called updating (Katsuno & Mendelzon, 1992), in which the universe is evolving. In that case the structure of the universe has definitely been transformed, and the revision message conveys information about the resulting universe. The second part of the paper presents four experiments based on the Monty Hall puzzle that aim to show that updating is a natural frame for individuals to revise their beliefs.
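    For concreteness, the standard Bayesian conditioning analysis of the Monty Hall puzzle, against which the updating experiments are framed, can be computed by straightforward enumeration; the sketch below uses the usual setup in which the contestant has picked door 1.

```python
# Standard Bayesian (conditioning) analysis of the Monty Hall puzzle.
# The contestant picks door 1; the host, who knows where the car is,
# opens an empty door other than door 1, choosing at random on a tie.
from fractions import Fraction

# Joint probability of (car location, door opened by the host).
joint = {}
for car in (1, 2, 3):
    p_car = Fraction(1, 3)
    if car == 1:
        for opened in (2, 3):            # host may open either empty door
            joint[(car, opened)] = p_car * Fraction(1, 2)
    else:
        opened = ({2, 3} - {car}).pop()  # host must open the empty door
        joint[(car, opened)] = p_car

# Condition on the host opening door 3; switching wins if the car is at 2.
evidence = sum(p for (c, o), p in joint.items() if o == 3)
print(joint[(2, 3)] / evidence)  # 2/3
```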

    Bayesian Argumentation and the Value of Logical Validity

    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that (i) utilizes a new class of Bayesian learning methods that are better suited to modelling dynamic and conditional inferences than standard Bayesian conditionalization, (ii) is able to characterise the special value of logically valid argument schemes in uncertain reasoning contexts, (iii) greatly extends the range of inferences and argumentative phenomena that can be adequately described in a Bayesian framework, and (iv) undermines some influential theoretical motivations for dual-function models of human cognition. We conclude that the probabilistic norms given by the Bayesian approach to rationality are not necessarily at odds with the norms given by classical logic. Rather, the Bayesian theory of argumentation can be seen as justifying and enriching the argumentative norms of classical logic.
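    As a baseline illustration of the standard Bayesian approach that the paper extends: an argument from premise P to conclusion C is evaluated by how much conditionalising on P raises the probability of C. The joint distribution below is a toy example chosen for illustration, not taken from the paper.

```python
# Baseline Bayesian evaluation of an argument: compare the prior
# probability of the conclusion C with its probability after
# conditionalising on the premise P.

joint = {                  # toy joint distribution over P and C
    (True, True): 0.45,    # P and C
    (True, False): 0.05,   # P and not-C
    (False, True): 0.20,   # not-P and C
    (False, False): 0.30,  # not-P and not-C
}

p_C = sum(p for (P, C), p in joint.items() if C)
p_P = sum(p for (P, C), p in joint.items() if P)
p_C_given_P = joint[(True, True)] / p_P

print(f"P(C) = {p_C:.2f}, P(C | P) = {p_C_given_P:.2f}")
# P(C) = 0.65, P(C | P) = 0.90: conditionalising on the premise raises
# the probability of the conclusion, so the argument has evidential force.
```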