12 research outputs found

    A Verisimilitude Framework for Inductive Inference, with an Application to Phylogenetics

    Bayesianism and likelihoodism are two of the most important frameworks philosophers of science use to analyse scientific methodology. However, both frameworks face a serious objection: much scientific inquiry takes place in highly idealized frameworks where all the hypotheses are known to be false. Yet, both Bayesianism and likelihoodism seem to be based on the assumption that the goal of scientific inquiry is always truth rather than closeness to the truth. Here, I argue in favor of a verisimilitude framework for inductive inference. In the verisimilitude framework, scientific inquiry is conceived of, in part, as a process where inference methods ought to be calibrated to appropriate measures of closeness to the truth. To illustrate the verisimilitude framework, I offer a reconstruction of parsimony evaluations of scientific theories, and I give a reconstruction and extended analysis of the use of parsimony inference in phylogenetics. By recasting phylogenetic inference in the verisimilitude framework, it becomes possible to both raise and address objections to phylogenetic methods that rely on parsimony.
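
    To make concrete what parsimony inference in phylogenetics amounts to, here is a minimal sketch (not taken from the paper) of Fitch's small-parsimony count for a single character; the tree encoding and taxon names are purely illustrative.

    def fitch_score(tree, states):
        """Minimum number of character-state changes a rooted binary tree needs
        to explain the observed leaf states (Fitch's algorithm, one character)."""
        changes = 0
        def post(node):
            nonlocal changes
            if isinstance(node, str):          # leaf: its state set is the observed state
                return {states[node]}
            left, right = (post(child) for child in node)
            if left & right:                   # children agree on some state: no change here
                return left & right
            changes += 1                       # disjoint state sets: one extra change needed
            return left | right
        post(tree)
        return changes

    # Parsimony favors the tree that requires fewer changes:
    # fitch_score((("human", "chimp"), ("mouse", "rat")),
    #             {"human": "A", "chimp": "A", "mouse": "G", "rat": "G"})   -> 1
    # fitch_score((("human", "mouse"), ("chimp", "rat")),
    #             {"human": "A", "chimp": "A", "mouse": "G", "rat": "G"})   -> 2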

    Confirmation and the ordinal equivalence thesis

    According to a widespread but implicit thesis in Bayesian confirmation theory, two confirmation measures are considered equivalent if they are ordinally equivalent; call this the "ordinal equivalence thesis" (OET). I argue that adopting OET has significant costs. First, adopting OET renders one incapable of determining whether a piece of evidence substantially favors one hypothesis over another. Second, OET must be rejected if merely ordinal conclusions are to be drawn from the expected value of a confirmation measure. Furthermore, several arguments and applications of confirmation measures given in the literature already rely on a rejection of OET. I also contrast OET with stronger equivalence theses and show that they do not have the same costs as OET. On the other hand, adopting a thesis stronger than OET has costs of its own, since a rejection of OET ostensibly implies that people's epistemic states have a very fine-grained quantitative structure. However, I suggest that the normative upshot of the paper in fact has a conditional form, and that other Bayesian norms can also fruitfully be construed as having a similar conditional form.
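
    For readers unfamiliar with the terminology, the relevant notion can be stated as follows (a standard definition, not specific to this paper). Two confirmation measures $c$ and $c'$ are ordinally equivalent just in case they always agree on comparative judgments:

    \[
      c \sim c' \iff \text{for all } (H,E),\,(H',E'):\ c(H,E) \ge c(H',E') \Leftrightarrow c'(H,E) \ge c'(H',E').
    \]

    For instance, the ratio measure $r(H,E) = P(H \mid E)/P(H)$ and its logarithm $\log r(H,E)$ are ordinally equivalent, since $\log$ is strictly increasing; OET treats them as the same measure, whereas the paper argues that the choice between such measures can still matter, for example when expected values of a confirmation measure are taken.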

    Sometimes it is better to do nothing: A new argument for causal decision theory

    It is often thought that the main significant difference between evidential decision theory and causal decision theory is that they recommend different acts in Newcomb-style examples (broadly construed), where acts and states are correlated in peculiar ways. However, this paper presents a class of non-Newcombian examples that evidential decision theory cannot adequately model whereas causal decision theory can. Briefly, the examples involve situations where it is clearly best to perform an act that will not influence the desired outcome. On evidential decision theory, but not causal decision theory, this situation turns out to be impossible: acts that an agent does not think influence the desired outcome are never optimal. Typically, sophisticated versions of evidential decision theory emulate causal decision theoretic reasoning by (implicitly) conditioning on causal confounders, but in the kind of example considered here, this trick does not work. The upshot is that there is more to causal reasoning than has so far been appreciated.
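
    As a point of reference (standard textbook formulations, not the paper's own notation), the two theories rank an act $A$ by different value functions; on one common formulation of causal decision theory,

    \[
      V_{\mathrm{EDT}}(A) = \sum_S P(S \mid A)\, U(A, S), \qquad
      V_{\mathrm{CDT}}(A) = \sum_S P(A \mathbin{\Box\rightarrow} S)\, U(A, S),
    \]

    where $P(A \mathbin{\Box\rightarrow} S)$ is the probability that state $S$ would obtain if $A$ were performed. When the agent is sure that $A$ has no causal influence on $S$, $P(A \mathbin{\Box\rightarrow} S)$ reduces to $P(S)$, while $P(S \mid A)$ can still vary with $A$ whenever acts and states are merely correlated.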

    New Semantics for Bayesian Inference: The Interpretive Problem and Its Solutions

    Scientists and Bayesian statisticians often study hypotheses that they know to be false. This creates an interpretive problem because the Bayesian probability assigned to a hypothesis is typically interpreted as the probability that the hypothesis is true. I argue that solving the interpretive problem requires coming up with a new semantics for Bayesian inference. I present and contrast two solutions to the interpretive problem, both of which involve giving a new interpretation of probability. I argue that both of these new interpretations of Bayesian inference have the same advantages that the standard interpretation has, but that they have the added benefit of being applicable in a wider set of circumstances. I furthermore show that the two new interpretations are inter-translatable and I explore the conditions under which they are co-extensive with the standard Bayesian interpretation. Finally, I argue that the solutions to the interpretive problem support the claim that there is pervasive pragmatic encroachment on whether a given Bayesian probability assignment is rational.

    Justifying the Norms of Inductive Inference

    Bayesian inference is limited in scope because it cannot be applied in idealized contexts where none of the hypotheses under consideration is true, and because it is committed to always using the likelihood as a measure of evidential favouring, even when that is inappropriate. The purpose of this article is to study inductive inference in a very general setting where finding the truth is not necessarily the goal and where the measure of evidential favouring is not necessarily the likelihood. I use an accuracy argument to argue for probabilism, and I develop a new kind of argument for two general updating rules, both of which are reasonable in different contexts. One of the updating rules has standard Bayesian updating, Bissiri et al.'s ([2016]) general Bayesian updating, Douven's ([2016]) IBE-based updating, and my own quasi-Bayesian updating (Vassend [forthcoming]) as special cases. The other updating rule is novel.
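
    For reference, the general Bayesian updating rule of Bissiri et al. ([2016]) mentioned here replaces the likelihood with a loss function $\ell(\theta, x)$; in its standard statement, with a learning-rate parameter $w > 0$,

    \[
      \pi(\theta \mid x) \propto \exp\{-w\, \ell(\theta, x)\}\, \pi(\theta),
    \]

    which reduces to standard Bayesian conditioning, $\pi(\theta \mid x) \propto p(x \mid \theta)\, \pi(\theta)$, when $\ell(\theta, x) = -\log p(x \mid \theta)$ and $w = 1$.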

    Goals and the Informativeness of Prior Probabilities

    I argue that information is a goal-relative concept for Bayesians. More precisely, I argue that how much information (or confirmation) is provided by a piece of evidence depends on whether the goal is to learn the truth or to rank actions by their expected utility, and that different confirmation measures should therefore be used in different contexts. I then show how information measures may reasonably be derived from confirmation measures, and I show how to derive goal-relative non-informative and informative priors given background information. Finally, I argue that my arguments have important implications for both objective and subjective Bayesianism. In particular, the Uniqueness Thesis is either false or must be modified. Moreover, objective Bayesians must concede that pragmatic factors systematically influence which priors are rational, and subjective Bayesians must concede that pragmatic factors sometimes partly determine which prior distribution most accurately represents an agent's epistemic state.
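
    A standard illustration of how "non-informative" is relative to a choice of measure (this example is not taken from the paper): for a Bernoulli parameter $\theta$, the flat prior and the Jeffreys prior,

    \[
      \pi_{\mathrm{flat}}(\theta) = 1, \qquad
      \pi_{\mathrm{Jeffreys}}(\theta) \propto \theta^{-1/2}(1-\theta)^{-1/2}, \qquad \theta \in (0,1),
    \]

    each count as "non-informative" by a different standard, so which prior deserves the label depends on the measure of information one adopts, which the paper argues is itself goal-relative.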