    On Causal Inferences in the Humanities and Social Sciences: Actual Causation

    The last forty years have seen an explosion of research directed at causation and causal inference. Statisticians developed techniques for drawing inferences about the likely effects of proposed interventions: techniques that have been applied most noticeably in the social and life sciences. Computer scientists, economists, and methodologists merged graph theory and structural equation modeling in order to develop a mathematical formalism that underwrites automated search for causal structure from data. Analytic metaphysicians and philosophers of science produced an array of theories about the nature of causation and its relationship to scientific theory and practice.

    Causal reasoning problems come in three varieties: effects-of-causes problems, causes-of-effects problems, and structure-learning or search problems. Causes-of-effects problems are the least well understood of the three, in part because of confusion about exactly what problem is supposed to be solved. I claim that the problem everyone is implicitly trying to solve is the problem of identifying the actual cause(s) of a given effect, which I will call simply the problem of actual causation. My dissertation is a contribution to the search for a satisfying solution to the problem of actual causation.

    Towards such a solution, I clarify the nature of the problem. I argue that the only serious treatment of the problem of actual causation in the statistical literature fails because it confuses actual causation with simple difference-making. Current treatments of the problem by philosophers and computer scientists are better but also ultimately unsatisfying. After pointing out that the best current theories fail to capture intuitions about some simple voting cases, I step back and ask a methodological question: how is the correct theory of actual causation to be discovered? I argue that intuition-fitting, whether by experimentation or from the armchair, is misguided, and I recommend an alternative, pragmatic approach. I show by experiments that ordinary causal judgments are closely connected to broadly moral judgments, and I argue that actual causal inferences presuppose normative, not merely descriptive, information. I suggest that the way forward in solving the problem of actual causation is to focus on norms of proper functioning.
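    The contrast the abstract draws between actual causation and simple difference-making can be made concrete with a toy overdetermined voting scenario (an illustrative sketch only, not code or an example taken from the dissertation; the voter names and majority rule are hypothetical):

```python
# Toy overdetermined vote: three voters under majority rule.
# All three vote "yes", so the measure passes 3-0.
def outcome(votes):
    """Majority rule: the measure passes iff most votes are yes (1)."""
    return sum(votes.values()) > len(votes) / 2

def makes_a_difference(voter, votes):
    """Simple difference-making: would flipping this one vote
    change the outcome?"""
    flipped = dict(votes)
    flipped[voter] = 1 - flipped[voter]
    return outcome(votes) != outcome(flipped)

actual = {"alice": 1, "bob": 1, "carol": 1}

# With a 3-0 vote, no single voter makes a difference...
print([makes_a_difference(v, actual) for v in actual])  # [False, False, False]
# ...yet intuitively each "yes" vote is an actual cause of passage,
# which is the kind of case a pure difference-making account misses.
```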

    Counting Experiments

    In this paper, I show how one might resist two influential arguments for the Likelihood Principle by appealing to the ontological significance of creative intentions. The first argument for the Likelihood Principle that I consider is the argument from intentions. After clarifying the argument, I show how the key premiss in the argument may be resisted by maintaining that creative intentions sometimes independently matter to what experiments exist. The second argument that I consider is Gandenberger’s (2015) rehabilitation of Birnbaum’s (1962) proof of the Likelihood Principle from the (supposedly) more intuitively obvious principles of conditionality and sufficiency. As with the argument from intentions, I show how Gandenberger’s argument for his Experimental Conditionality Principle may be resisted by maintaining that creative intentions sometimes independently matter to what experiments exist.

    Actual Causation and Compositionality

    Many theories of actual causation implicitly endorse the claim that if c is an actual cause of e, then either c causes e directly or every intermediary by which c indirectly causes e is itself both an actual cause of e and an actual effect of c. We think this compositionality constraint is plausible. However, as we show, it is not always satisfied by the causal attributions ordinary people make. Given this divergence, we step back to consider what philosophers working on causation should do when the deliverances of their theories diverge from what ordinary people say.

    Empirical Investigations: Reflecting on Turing and Wittgenstein on Thinking Machines

    In the opening paragraph of “Computing Machinery and Intelligence” Alan Turing (1950, 433) famously notes that “if the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll.” He then immediately responds, “But this is absurd.” But why is this absurd, if indeed it is? We think that the suggested method is absurd because the answer to the question might not follow from the meaning of the words alone—it might critically depend on what machines can be built to do. Further, the ordinary use of the terms might display shallow biases or superstitions that we want to set aside, or overcome, in pursuing a science or philosophy of mind. However, we do not think that the method is absurd insofar as we are interested in getting clear on “the normal use of the words,” as Turing puts it. And we believe that getting clear on the normal use of words like “machine” and “think” is relevant even if we then want to move beyond the ordinary usage in our theorizing. But the best way to figure out how ordinary people use language is via empirical investigation.

    On Experimental Philosophy and the History of Philosophy: A Reply to Sorell

    In this paper, we reply to Tom Sorell’s criticism of our engagement with the history of philosophy in our book, The Theory and Practice of Experimental Philosophy. We explain why our uses of the history of philosophy are not undermined by Sorell’s criticism and why our position is not threatened by the dilemma Sorell advances. We argue that Sorell has mischaracterized the dialectical context of our discussion of the history of philosophy and that he has mistakenly treated our use of the history of philosophy as univocal, when in fact we called on the history of philosophy in several different ways in our text.

    Calibrating Chromatography: How Tswett Broke the Experimenters' Regress

    We propose a new account of calibration according to which calibrating a technique shows that the technique does what it is supposed to do. To motivate our account, we examine an early 20th century debate about chlorophyll chemistry and Mikhail Tswett’s use of chromatographic adsorption analysis to study it. We argue that Tswett’s experiments established that his technique was reliable in the special case of chlorophyll without relying on either a theory or a standard calibration experiment. We suggest that Tswett broke the Experimenters’ Regress by appealing to material facts in the common ground for chemists at the time.

    On Experimental Philosophy and the History of Philosophy: Extended Remarks

    In this paper, we reply to Tom Sorell’s criticism of our engagement with the history of philosophy in our book, The Theory and Practice of Experimental Philosophy. We explain why our uses of the history of philosophy are not undermined by Sorell’s criticism and why our position is not threatened by the dilemma Sorell advances. We argue that Sorell has mischaracterized the dialectical context of our discussion of the history of philosophy and that he has mistakenly treated our use of the history of philosophy as univocal, when in fact we called on the history of philosophy in several different ways in our text. We conclude with some remarks about the scope of philosophy generally and experimental philosophy specifically.

    Behavior Genetic Frameworks of Causal Reasoning for Personality Psychology

    Identifying causal relations from correlational data is a fundamental challenge in personality psychology. In most cases, random assignment is not feasible, leaving observational studies as the primary methodological tool. Here, we document several techniques from behavior genetics that attempt to demonstrate causality. Although no one method is conclusive at ruling out all possible confounds, combining techniques can triangulate on causal relations. Behavior genetic tools leverage information gained by sampling pairs of individuals with assumed genetic and environmental relatedness or by measuring genetic variants in unrelated individuals. These designs can find evidence consistent with causality, while simultaneously providing strong controls against common confounds. We conclude by discussing several potential problems that may limit the utility of these techniques when applied to personality. Ultimately, genetically informative designs can aid in drawing causal conclusions from correlational studies.
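    One design in the family the abstract describes, the co-twin control, can be sketched with made-up numbers (an illustrative toy, not an analysis from the paper): comparing monozygotic twins who are discordant for an exposure holds shared genes and rearing environment fixed within each pair.

```python
# Co-twin control sketch with hypothetical data: each tuple is one
# monozygotic twin pair discordant for some exposure, recorded as
# (exposed twin's outcome, unexposed co-twin's outcome).
pairs = [(6.1, 5.2), (5.8, 5.5), (6.4, 5.1)]

# Shared genetic and family-environment confounds are matched within
# a pair, so the within-pair difference is a cleaner (though still
# not conclusive) signal of the exposure's effect than a comparison
# of unrelated exposed and unexposed individuals.
within_pair_diffs = [exposed - unexposed for exposed, unexposed in pairs]
mean_diff = sum(within_pair_diffs) / len(within_pair_diffs)
print(round(mean_diff, 2))  # 0.83
```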
