    Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning

    We propose to explain the behavior of black-box prediction methods (e.g., deep neural networks trained on image pixel data) using causal graphical models. Specifically, we explore learning the structure of a causal graph where the nodes represent prediction outcomes along with a set of macro-level "interpretable" features, while allowing for arbitrary unmeasured confounding among these variables. The resulting graph may indicate which of the interpretable features, if any, are possible causes of the prediction outcome and which may be merely associated with it due to confounding. The approach is motivated by a counterfactual theory of causal explanation wherein good explanations point to factors which are "difference-makers" in an interventionist sense. The resulting analysis may be useful in algorithm auditing and evaluation, by identifying features which make a causal difference to the algorithm's output.
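    The abstract's central distinction — features that are difference-makers versus features that merely correlate with the prediction via unmeasured confounding — can be illustrated with a small simulation. This is a minimal sketch, not the paper's method: the variable names (`a`, `b`, `u`) and the linear data-generating process are invented for illustration. Feature `a` causally drives the stand-in "black-box" output `y`, while feature `b` is associated with `y` only through an unmeasured confounder `u`; both correlate with `y` observationally, but only intervening on `a` shifts it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000

    def simulate(do_a=None, do_b=None):
        # Unmeasured confounder U influences both feature B and the output Y.
        u = rng.normal(size=n)
        # A is a genuinely causal "interpretable" feature; B is confounded only.
        a = rng.normal(size=n) if do_a is None else np.full(n, do_a)
        b = u + rng.normal(size=n) if do_b is None else np.full(n, do_b)
        # Stand-in for the black-box prediction: depends on A and U, not on B.
        y = 2.0 * a + 1.5 * u + rng.normal(size=n)
        return a, b, y

    # Observationally, both A and B are clearly correlated with Y ...
    a, b, y = simulate()
    print(np.corrcoef(a, y)[0, 1], np.corrcoef(b, y)[0, 1])

    # ... but only intervening on A makes a difference to Y.
    _, _, y_a0 = simulate(do_a=0.0)
    _, _, y_a1 = simulate(do_a=1.0)
    _, _, y_b0 = simulate(do_b=0.0)
    _, _, y_b1 = simulate(do_b=1.0)
    print(y_a1.mean() - y_a0.mean())  # near 2.0: A is a difference-maker
    print(y_b1.mean() - y_b0.mean())  # near 0.0: B is association via confounding
    ```

    A structure-learning method that tolerates latent confounding (as the abstract proposes) would aim to recover exactly this asymmetry from observational data alone, without access to the interventional runs shown here.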

    Proportionality, Determinate Intervention Effects, and High-Level Causation

    Stephen Yablo’s notion of proportionality, despite controversies surrounding it, has played a significant role in philosophical discussions of mental causation and of high-level causation more generally. In particular, it is invoked in James Woodward’s interventionist account of high-level causation and explanation, and is implicit in a novel approach to constructing variables for causal modeling in the machine learning literature, known as causal feature learning (CFL). In this article, we articulate an account of proportionality inspired by both Yablo’s account of proportionality and the CFL account of variable construction. The resulting account has at least three merits. First, it illuminates an important feature of the notion of proportionality when it is adapted to a probabilistic and interventionist framework: at the center of the notion of proportionality lies the concept of “determinate intervention effects.” Second, it makes manifest a virtue of (common types of) high-level causal/explanatory statements over low-level ones, when relevant intervention effects are determinate. Third, it overcomes a limitation of the CFL framework and thereby also addresses a challenge to interventionist accounts of high-level causation.
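    The idea of "determinate intervention effects" can be made concrete with a toy CFL-style coarse-graining. The example below is a hypothetical sketch, not the article's formal account: the effect table is invented. Low-level states of a variable X are grouped into macro-states whenever their intervention effects P(Y=1 | do(X=x)) coincide, so each resulting macro-state has a single, determinate effect on Y.

    ```python
    from collections import defaultdict

    # Hypothetical low-level intervention effects P(Y=1 | do(X=x)).
    # States with identical effects can be merged into one macro-state.
    effects = {0: 0.2, 1: 0.2, 2: 0.8, 3: 0.8}

    def coarsen(effects):
        """Group low-level states whose intervention effects coincide."""
        classes = defaultdict(list)
        for x, p in effects.items():
            classes[p].append(x)
        # Label macro-states m0, m1, ... in order of increasing effect.
        return {f"m{i}": xs for i, (_, xs) in enumerate(sorted(classes.items()))}

    macro = coarsen(effects)
    print(macro)  # {'m0': [0, 1], 'm1': [2, 3]}
    # Every low-level realizer of m0 yields P(Y=1) = 0.2 and every realizer
    # of m1 yields 0.8, so the high-level variable M has determinate
    # intervention effects on Y; citing M rather than X loses no causal
    # information while abstracting away irrelevant low-level detail.
    ```

    In the article's terms, a high-level causal statement about M is proportional to Y precisely because this determinacy holds; if the effects within a macro-state diverged, the coarse-graining would be too coarse.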
