    A Question of Nonsense

    In Praise of the Representation Theorem

    This paper will take up three of Patrick Suppes’s favourite topics: representation, invariance and causality. I begin not immediately with Suppes’s own work but with that of his Stanford colleague, Michael Friedman. Friedman argues that various high-level claims of physics theories are not empirical laws at all but rather constitutive principles, principles without which the concepts of the theory would lack empirical content. I do not disagree about the need for constitutive principles. Rather, I think Friedman has mislocated them, placing them at entirely the wrong end of the scale of abstraction. It is representation theorems, as Pat pictures them, that are the true constitutive principles, and that is true for theories far beyond physics.

    Models: The Blueprints for Laws

    In this paper the claim that laws of nature are to be understood as claims about what necessarily or reliably happens is disputed. Laws can characterize what happens in a reliable way, but they do not do this easily. We do not have laws for everything occurring in the world, but only for those situations where what happens in nature is represented by a model: models are blueprints for nomological machines, which in turn give rise to laws. An example from economics shows, in particular, how we use, and how we need to use, models to get probabilistic laws.

    Evidence, External Validity and Explanatory Relevance

    When does one fact speak for another? That is the problem of evidential relevance. Peter Achinstein’s answer, in brief: Evidential relevance = explanatory relevance. My own recent work investigates evidence for effectiveness predictions, which are at the core of the currently heavily mandated evidence-based policy and practice (EBPP): predictions of the form ‘Policy treatment T implemented as, when and how it would be implemented by us will result in targeted outcome O.’ RCTs, or randomized controlled trials, for T and O are taken to be the gold standard for evidence for effectiveness predictions. I question this: not just whether they are gold-standard evidence but, more fundamentally, how they can be evidence at all. What makes them relevant to the truth of the prediction that T will work for us?

    Philosophy of Social Technology: Get on Board

    Presidential Address: Will This Policy Work for You? Predicting Effectiveness Better: How Philosophy Helps

    There is a takeover movement fast gaining influence in development economics, a movement that demands that predictions about development outcomes be based on randomized controlled trials. The problem it takes up—of using evidence of efficacy from good studies to predict whether a policy will be effective if we implement it—is a general one, and it affects us all. My discussion is the result of a long struggle to develop the right concepts for warranting effectiveness predictions. Whether I have it right or not, these are questions of vast social importance that philosophers of science can, and should, help answer.

    Are RCTs the Gold Standard?

    The claims of randomized controlled trials (RCTs) to be the gold standard rest on the fact that the ideal RCT is a deductive method: if the assumptions of the test are met, a positive result implies the appropriate causal conclusion. This is a feature that RCTs share with a variety of other methods, which thus have equal claim to being a gold standard. This article describes some of these other deductive methods and also some useful non-deductive methods, including the hypothetico-deductive method. It argues that with all deductive methods, the benefit that the conclusions follow deductively in the ideal case comes with a great cost: narrowness of scope. This is an instance of the familiar trade-off between internal and external validity. RCTs have high internal validity, but the formal methodology puts severe constraints on the assumptions a target population must meet to justify exporting a conclusion from the test population to the target. The article reviews one such set of assumptions to show the kind of knowledge required. The overall conclusion is that to draw causal inferences about a target population, which method is best depends case by case on what background knowledge we have or can come to obtain. There is no gold standard.

    The Dethronement of Laws in Science

    Loose Talk Kills: What’s Worrying about Unity of Method

    There is danger in stressing commonalities among methods, because the differences matter in fixing the meaning of our claims. Different methods can, and often do, test the same claim, but it takes a strong network of theory and empirical results to ensure that. Failing that, we are likely to fall into inference by pun: we use one set of methods to establish a claim, then draw inferences licensed by a similar-sounding claim that calls for different methods of test. Our inferences fail, and the bridges we build (or policies we set) that depend on them fall down.