6,079 research outputs found

    Approaching the truth via belief change in propositional languages

    Since the 1960s, theory change has been a main concern of the philosophy of science. Two of the best-known formal accounts of theory change are the post-Popperian theories of verisimilitude (PPV for short) and the AGM theory of belief change (AGM for short). In this paper, we will investigate the conceptual relations between PPV and AGM and, in particular, ask whether the AGM rules for theory change are effective means for approaching the truth, i.e., for achieving the cognitive aim of science pointed out by PPV. First, the key ideas of PPV and AGM and their application to a particular kind of propositional theories - the so-called "conjunctive propositions" - will be illustrated. Afterwards, we will prove that, as far as conjunctive propositions are concerned, AGM belief change is an effective tool for approaching the truth.
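
    To make the abstract's claim concrete, here is a minimal sketch (my own illustration, not the paper's formal apparatus) of how a conjunctive proposition can be scored for closeness to the truth and how an AGM-style expansion by a true conjunct raises that score. The contrast measure used below — true conjuncts minus false conjuncts, normalised by the number of atoms — is one common choice in this literature; the names and data are assumptions.

```python
# Illustrative sketch: verisimilitude of "conjunctive propositions" (conjunctions
# of literals over a fixed set of atoms) and the effect of AGM-style expansion.

def verisimilitude(theory, truth, atoms):
    """theory and truth map atoms to True/False; truth is complete."""
    t = sum(1 for a, v in theory.items() if truth[a] == v)   # true conjuncts
    f = sum(1 for a, v in theory.items() if truth[a] != v)   # false conjuncts
    return (t - f) / len(atoms)

def expand(theory, literal):
    """AGM expansion for conjunctive theories: simply add the new conjunct."""
    atom, value = literal
    new = dict(theory)
    new[atom] = value
    return new

atoms = ["p", "q", "r"]
truth = {"p": True, "q": True, "r": False}   # the (unknown) true state
theory = {"p": True, "q": False}             # current conjunctive theory

print(verisimilitude(theory, truth, atoms))                         # 0.0
print(verisimilitude(expand(theory, ("r", False)), truth, atoms))   # ~0.33
```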

    Keep Changing Your Beliefs, Aiming for the Truth

    A Verisimilitude Framework for Inductive Inference, with an Application to Phylogenetics

    Bayesianism and likelihoodism are two of the most important frameworks philosophers of science use to analyse scientific methodology. However, both frameworks face a serious objection: much scientific inquiry takes place in highly idealized frameworks where all the hypotheses are known to be false. Yet, both Bayesianism and likelihoodism seem to be based on the assumption that the goal of scientific inquiry is always truth rather than closeness to the truth. Here, I argue in favor of a verisimilitude framework for inductive inference. In the verisimilitude framework, scientific inquiry is conceived of, in part, as a process where inference methods ought to be calibrated to appropriate measures of closeness to the truth. To illustrate the verisimilitude framework, I offer a reconstruction of parsimony evaluations of scientific theories, and I give a reconstruction and extended analysis of the use of parsimony inference in phylogenetics. By recasting phylogenetic inference in the verisimilitude framework, it becomes possible to both raise and address objections to phylogenetic methods that rely on parsimony.
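
    Parsimony inference itself — the score that the verisimilitude framework is asked to make sense of — can be computed with the standard Fitch small-parsimony algorithm. The sketch below is purely illustrative background, not part of the paper: it counts the minimum number of character-state changes that a rooted binary tree requires for a single character.

```python
# Background sketch: Fitch's small-parsimony count for one character on a
# rooted binary tree. Nodes are ("leaf", state) or ("node", left, right).

def fitch(tree):
    """Return (possible ancestral states, minimum number of state changes)."""
    if tree[0] == "leaf":
        return {tree[1]}, 0
    _, left, right = tree
    ls, lc = fitch(left)
    rs, rc = fitch(right)
    common = ls & rs
    if common:                      # children can agree on a state: no extra change
        return common, lc + rc
    return ls | rs, lc + rc + 1     # disagreement: count one additional change

tree = ("node",
        ("node", ("leaf", "A"), ("leaf", "C")),
        ("node", ("leaf", "A"), ("leaf", "A")))
print(fitch(tree))   # ({'A'}, 1): one change suffices to explain these tip states
```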

    Approaching probabilistic truths: introduction to the Topical Collection

    After Karl Popper’s original work, several approaches were developed to provide a sound explication of the notion of verisimilitude. With few exceptions, these contributions have assumed that the truth to be approximated is deterministic. This collection of ten papers addresses the more general problem of approaching probabilistic truths. They include attempts to find appropriate measures for the closeness to probabilistic truth and to evaluate claims about such distances on the basis of empirical evidence. The papers employ multiple analytical approaches, and connect the research to related issues in the philosophy of science.

    Models, postulates, and generalized nomic truth approximation

    The qualitative theory of nomic truth approximation, presented by Kuipers in his From Instrumentalism to Constructive Realism (2000), in which ‘the truth’ concerns the distinction between nomic, e.g. physical, possibilities and impossibilities, rests on a very restrictive assumption, viz. that theories always claim to characterize the boundary between nomic possibilities and impossibilities. Fully recognizing two different functions of theories, viz. excluding and representing, this paper drops this assumption by conceiving theories in development as tuples of postulates and models, where the postulates claim to exclude nomic impossibilities and the (not-excluded) models claim to represent nomic possibilities. Revising theories then becomes a matter of adding or revising models and/or postulates in the light of increasing evidence, captured by a special kind of theories, viz. ‘data-theories’. Under the assumption that the data-theory is true, achieving empirical progress in this way provides good reasons for the abductive conclusion that truth approximation has been achieved as well. Here, the notions of truth approximation and empirical progress are formally direct generalizations of the earlier ones. However, truth approximation is now explicitly defined in terms of increasing truth-content and decreasing falsity-content of theories, whereas empirical progress is defined in terms of a lasting increase in accepted content and decrease in rejected content in the light of increasing evidence. These definitions are strongly inspired by a paper of Gustavo Cevolani, Vincenzo Crupi and Roberto Festa, viz. “Verisimilitude and belief change for conjunctive theories” (Cevolani et al., Erkenntnis 75(2): 183–222, 2011).
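
    The contrast between ‘representing’ models and ‘excluding’ postulates can be pictured with a toy set-theoretic example. The sketch below is my own simplification under stated assumptions, not Kuipers' official definitions: possibilities are plain set elements, a theory is a pair (M, P), and ‘truth-content’ and ‘falsity-content’ are counted as correct versus incorrect claims about what is nomically possible.

```python
# Toy sketch (assumptions mine, not the paper's formal definitions): U is the set
# of conceptual possibilities, T the set of nomic possibilities ("the truth"), and
# a theory is a pair (M, P): M are models claimed to be nomically possible, while
# P postulates that everything outside P is nomically impossible.

def content(theory, T, U):
    M, P = theory
    truth_content = len(M & T) + len((U - P) - T)     # correct claims
    falsity_content = len(M - T) + len((U - P) & T)   # incorrect claims
    return truth_content, falsity_content

U = set(range(10))
T = {0, 1, 2, 3}                       # the nomic possibilities
old = ({0, 5}, {0, 1, 2, 5, 6})        # misrepresents 5, wrongly excludes 3
new = ({0, 1, 5}, {0, 1, 2, 3, 5})     # adds a true model, stops excluding 3

print(content(old, T, U))   # (5, 2)
print(content(new, T, U))   # (7, 1): truth-content up, falsity-content down
```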

    Tracking probabilistic truths: a logic for statistical learning

    We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or, more generally, truth in all the worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule) but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities without changing their plausibility ordering. We look at the stability of beliefs under either of these types of learning, defining two related notions (safe belief and statistical knowledge), as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.
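
    The ‘plausibilistic’ version of Bayes’ Rule — re-ranking candidate distributions by how well they fit a growing sample, and reading beliefs off the most plausible candidates — can be sketched in a few lines. The example below is a deliberate simplification under my own assumptions (a finite set of coin biases and numeric plausibility weights), not the paper's model.

```python
# Illustrative sketch: a plausibility map over candidate coin biases, re-weighted
# by the likelihood of an observed sample; "belief" holds in the most plausible
# candidates.

from math import prod

# candidate biases for a coin, all initially equally plausible
candidates = {0.2: 1.0, 0.5: 1.0, 0.8: 1.0}

def update(plausibility, sample):
    """Plausibilistic analogue of Bayes' Rule: re-weight by likelihood of the data."""
    new = {}
    for p, w in plausibility.items():
        likelihood = prod(p if x == 1 else (1 - p) for x in sample)
        new[p] = w * likelihood
    return new

def believed(plausibility):
    """Beliefs are read off the most plausible candidates."""
    best = max(plausibility.values())
    return [p for p, w in plausibility.items() if w == best]

sample = [1, 1, 0, 1, 1, 1, 0, 1]            # repeated sampling from the unknown coin
print(believed(update(candidates, sample)))  # [0.8] — belief tracks the likeliest bias
```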