
    Science, Assertion, and the Common Ground

    I argue that the appropriateness of an assertion is sensitive to context—or, really, the “common ground”—in a way that hasn’t previously been emphasized by philosophers. This kind of context-sensitivity explains why some scientific conclusions seem to be appropriately asserted even though they are not known, believed, or justified on the available evidence. I then consider other recent attempts to account for this phenomenon and argue that if they are to be successful, they need to recognize the kind of context-sensitivity that I argue for.

    How to Do Things with Theory: The Instrumental Role of Auxiliary Hypotheses in Testing

    Pierre Duhem's influential argument for holism relies on a view of the role that background theory plays in testing: according to this still common account of "auxiliary hypotheses," elements of background theory serve as truth-apt premises in arguments for or against a hypothesis. I argue that this view is mistaken. Rather than serving as truth-apt premises in arguments, auxiliary hypotheses are employed as (reliability-apt) "epistemic tools": instruments that perform specific tasks in connecting our theoretical questions with the world but that are not (or not usually) premises in arguments. On the resulting picture, the acceptability of an auxiliary hypothesis depends not on its truth but on contextual factors such as the task or purpose it is put to and the other tools employed alongside it.

    Accuracy, Probabilism, and the Insufficiency of the Alethic

    The best and most popular argument for probabilism is the accuracy-dominance argument, which purports to show that alethic considerations alone support the view that an agent's degrees of belief should always obey the axioms of probability. I argue that extant versions of the accuracy-dominance argument face a problem. In order for the mathematics of the argument to function as advertised, we must assume that every omniscient credence function is classically consistent; there can be no worlds in the set of dominance-relevant worlds that obey some other logic. This restriction cannot be motivated on alethic grounds unless we're also willing to accept that rationality requires belief in every metaphysical necessity, as the distinction between a priori logical necessities and a posteriori metaphysical ones is not an alethic distinction. To justify the restriction to classically consistent worlds, non-alethic motivation is required. And thus, if there is a version of the accuracy-dominance argument in support of probabilism, it isn't one that is grounded in alethic considerations alone.
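The dominance phenomenon this argument turns on, and the way it breaks once nonclassical worlds are admitted, can be illustrated with a toy computation. All numbers here are invented, and the Brier score stands in for a generic inaccuracy measure; this is a sketch of the general technique, not an excerpt from the paper.

```python
# Toy accuracy-dominance computation over a single proposition H and its
# negation. An incoherent credence function (credences sum to less than 1)
# is Brier-dominated by its projection onto the probability simplex --
# but only so long as every "omniscient" credence function is classical.

def brier(credences, truth_values):
    """Brier inaccuracy: sum of squared distances from the truth values."""
    return sum((c - v) ** 2 for c, v in zip(credences, truth_values))

# Incoherent credences in (H, not-H): they sum to 0.5 rather than 1.
c = (0.2, 0.3)

# Nearest coherent credence function: shift both credences equally so
# they sum to 1 (orthogonal projection onto the line x + y = 1).
shift = (1 - sum(c)) / 2
p = (c[0] + shift, c[1] + shift)  # (0.45, 0.55)

# The two classically consistent omniscient credence functions:
classical_worlds = [(1, 0), (0, 1)]  # H true; H false

# p is strictly more accurate than c at every classical world.
for w in classical_worlds:
    assert brier(p, w) < brier(c, w)

# Admit a nonclassical "gap" world where neither H nor not-H is true,
# and dominance fails: the incoherent c is more accurate than p there.
gap_world = (0, 0)
assert brier(c, gap_world) < brier(p, gap_world)
```

The final assertion is the abstract's point in miniature: whether the dominance result goes through depends entirely on which omniscient credence functions are admitted as worlds, and that restriction calls for motivation beyond accuracy alone.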

    Interpreting the Probabilistic Language in IPCC Reports

    The Intergovernmental Panel on Climate Change (IPCC) often qualifies its statements by use of probabilistic “likelihood” language. In this paper, I show that this language is not properly interpreted in either frequentist or Bayesian terms—simply put, the IPCC uses both kinds of statistics to calculate these likelihoods. I then offer a deflationist interpretation: the probabilistic language expresses nothing more than how compatible the evidence is with the given hypothesis according to some method that generates normalized scores. I end by drawing some tentative normative conclusions.
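For readers unfamiliar with the scale at issue, the IPCC's calibrated likelihood terms correspond to fixed probability ranges. A minimal sketch of the mapping follows; the thresholds track the commonly cited IPCC uncertainty guidance but are simplified here to non-overlapping cut-offs, and the function name is mine, not the IPCC's. Consult the reports themselves for the authoritative scale.

```python
# Simplified version of the IPCC calibrated likelihood scale: each term
# is attached to a normalized score in [0, 1], regardless of whether
# that score was produced by frequentist or Bayesian methods.

IPCC_LIKELIHOOD = [
    (0.99, "virtually certain"),
    (0.90, "very likely"),
    (0.66, "likely"),
    (0.33, "about as likely as not"),
    (0.10, "unlikely"),
    (0.01, "very unlikely"),
]

def likelihood_term(p):
    """Return the IPCC likelihood term for a probability-like score p.

    The official guidance states overlapping ranges; the descending
    cut-off lookup here is a simplification for illustration.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    for threshold, term in IPCC_LIKELIHOOD:
        if p >= threshold:
            return term
    return "exceptionally unlikely"
```

On the deflationist reading sketched in the abstract, the term attached to a score carries no commitment about how the score was generated: a frequentist detection statistic and a Bayesian posterior that land on the same normalized value receive the same vocabulary.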

    Contrast Classes and Agreement in Climate Modeling

    It's widely argued that agreement—or “robustness”—across climate models isn't a useful marker of confirmation: that the models agree on a hypothesis does not indicate that that hypothesis should be accepted. The present paper argues against pinning the failure of agreement-based reasoning on the models. Instead, the problem is that agreement is a reliable marker of confirmation only when the hypotheses under consideration are mutually exclusive. Since most cutting-edge questions in climate modeling require making distinctions between mutually consistent hypotheses, agreement across models is unlikely to help answer these questions. Because the problem here is agreement (and not the models), we should expect that there are other ways of using the models that are more informative and reliable.

    When is an Ensemble like a Sample?

    Climate scientists often apply statistical tools to a set of different estimates generated by an “ensemble” of models. In this paper, I argue that the resulting inferences are justified in the same way as any other statistical inference: what must be demonstrated is that the statistical model that licenses the inferences accurately represents the probabilistic relationship between data and target. This view of statistical practice is appropriately termed “model-based,” and I examine the use of statistics in climate fingerprinting to show how the difficulties that climate scientists encounter in applying statistics to ensemble-generated data are the practical difficulties of normal statistical practice. The upshot is that whether the application of statistics to ensemble-generated data yields trustworthy results should be expected to vary from case to case.
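The point about model-based justification can be made concrete with a sketch. The familiar interval calculation below is licensed only by the assumption, flagged in the comments, that the ensemble behaves like an independent sample from a distribution centred on the target quantity; the numbers and that assumption are invented for illustration, and are exactly what the paper says must be defended case by case.

```python
import statistics
from statistics import NormalDist

# Hypothetical ensemble of model-generated estimates of some climate
# quantity (values invented). Treating these as an i.i.d. sample from a
# distribution centred on the true value is a substantive statistical-model
# assumption, not a free lunch: if the models share biases, the interval
# below is not trustworthy, however routine the arithmetic.
ensemble = [2.1, 3.0, 2.6, 3.4, 2.8, 2.3, 3.1, 2.7]

mean = statistics.fmean(ensemble)
se = statistics.stdev(ensemble) / len(ensemble) ** 0.5

# 95% interval under the assumed model (normal approximation; with only
# eight members a t-based interval would be somewhat wider).
z = NormalDist().inv_cdf(0.975)
interval = (mean - z * se, mean + z * se)
```

Nothing in the computation checks the assumption that does the justificatory work, which is why, as the abstract concludes, the trustworthiness of such results varies from case to case.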

    Ensembles as Evidence, Not Experts: On the Value and Interpretation of Climate Models

    Climate scientists frequently interpret climate models as providing probabilistic information, a practice that has come under substantial criticism from philosophers of science. In this paper, I argue that this practice has (previously unacknowledged) advantages. In particular, though the literature has focused on the use of probabilities in communicating results, climate scientists regularly treat probabilities generated by models not as the final products of research but instead as evidence or as an intermediate step in a longer reasoning process. In these cases, inter-model variation provides important information about the amount of uncertainty that is warranted by the evidence—information that can only be captured in some sort of probability distribution. Even if we accept extant arguments against the probabilistic interpretation of climate models in the context of communication, therefore, the advantages of the probabilistic interpretation of climate models in other areas make it a substantive question whether those arguments can be extended to the more general case.

    Supposition and (Statistical) Models

    In a recent paper, Sprenger advances what he calls a “suppositional” answer to the question of why a Bayesian agent's credences should align with the probabilities found in statistical models. We show that Sprenger's account trades on an ambiguity between hypothetical and subjunctive suppositions and cannot succeed once we distinguish between the two.