
    Surrogate Outcomes and Transportability

    Identification of causal effects is one of the most fundamental tasks of causal inference. We consider an identifiability problem where some experimental and observational data are available but neither source of data alone is sufficient for the identification of the causal effect of interest. Instead of the outcome of interest, surrogate outcomes are measured in the experiments. This problem generalizes identifiability using surrogate experiments, and we label it surrogate outcome identifiability. We show that the concept of transportability provides a sufficient criterion for determining surrogate outcome identifiability for a large class of queries.
    Comment: This is the version published in the International Journal of Approximate Reasoning
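
    A hedged illustration (an assumed example, not drawn from the paper): in the classic front-door graph X -> S -> Y with an unobserved confounder between X and Y only, an experiment that measures only the surrogate S still suffices, because the experimental quantity P(s | do(x)) can be combined with the observational back-door estimate of P(y | do(s)):

    ```latex
    % Illustrative surrogate-outcome combination in the front-door graph
    % X -> S -> Y with unobserved confounding between X and Y (assumed setup):
    P(y \mid do(x)) \;=\; \sum_{s} P(s \mid do(x)) \sum_{x'} P(y \mid s, x')\, P(x')
    ```

    Here the first factor comes from the surrogate experiment and the inner sum comes from purely observational data.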

    External Validity: From Do-Calculus to Transportability Across Populations

    The generalizability of empirical findings to new environments, settings, or populations, often called "external validity," is essential in most scientific explorations. This paper treats a particular problem of generalizability, called "transportability," defined as a license to transfer causal effects learned in experimental studies to a new population, in which only observational studies can be conducted. We introduce a formal representation called "selection diagrams" for expressing knowledge about differences and commonalities between populations of interest and, using this representation, we reduce questions of transportability to symbolic derivations in the do-calculus. This reduction yields graph-based procedures for deciding, prior to observing any data, whether causal effects in the target population can be inferred from experimental findings in the study population. When the answer is affirmative, the procedures identify which experimental and observational findings need to be obtained from the two populations, and how they can be combined to ensure bias-free transport.
    Comment: Published at http://dx.doi.org/10.1214/14-STS486 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: text overlap with arXiv:1312.748
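
    As a hedged illustration (an assumed example in the spirit of the paper's canonical case, not a result quoted from it): suppose the two populations differ only in the distribution of a pre-treatment covariate Z, which the selection diagram marks with a selection node S pointing into Z. If Z is S-admissible, that is, Y is independent of S given Z and X in the diagram with arrows into X removed, the do-calculus derivation yields the transport formula

    ```latex
    % Transport formula under S-admissibility of the pre-treatment covariate Z
    % (illustrative; Z, S, X, Y are assumed names for this sketch):
    P^{*}(y \mid do(x)) \;=\; \sum_{z} P(y \mid do(x), z)\, P^{*}(z)
    ```

    which combines the experimental estimate P(y | do(x), z) from the study population with the observational estimate P*(z) from the target population.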

    Identifiability and transportability in dynamic causal networks

    In this paper we propose a causal analog to the purely observational Dynamic Bayesian Networks, which we call Dynamic Causal Networks. We provide a sound and complete algorithm for the identification of Dynamic Causal Networks, namely, for computing the effect of an intervention or experiment from passive observations only, whenever possible. We note the existence of two types of confounder variables that affect the identification procedures in substantially different ways, a distinction with no analog in either Dynamic Bayesian Networks or standard causal graphs. We further propose a procedure for the transportability of causal effects in Dynamic Causal Network settings, where the results of causal experiments in a source domain may be used for the identification of causal effects in a target domain.
    Preprint
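
    As a rough, self-contained sketch (not the paper's algorithm; the variable names and edge lists are illustrative assumptions), a dynamic causal network over a finite horizon can be unrolled into an ordinary causal DAG, after which standard identification machinery applies to the unrolled graph:

    ```python
    # Minimal sketch: unroll a dynamic causal network into a static DAG.
    # Nodes of the unrolled graph are (variable, time) pairs.
    import networkx as nx

    def unroll(intra_edges, inter_edges, horizon):
        """Build a DAG over `horizon` time slices.

        intra_edges: edges within one time slice, e.g. [("X", "Y")]
        inter_edges: edges from slice t to slice t+1, e.g. [("Y", "X")]
        """
        g = nx.DiGraph()
        for t in range(horizon):
            for u, v in intra_edges:
                g.add_edge((u, t), (v, t))
            if t + 1 < horizon:
                for u, v in inter_edges:
                    g.add_edge((u, t), (v, t + 1))
        return g

    # Illustrative dynamics: X_t -> Y_t within each slice, Y_t -> X_{t+1} across slices.
    dag = unroll([("X", "Y")], [("Y", "X")], horizon=3)
    print(sorted(dag.edges()))
    ```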

    Causal Inference and Data-Fusion in Econometrics

    Learning about cause and effect is arguably the main goal in applied econometrics. In practice, the validity of these causal inferences is contingent on a number of critical assumptions regarding the type of data that has been collected and the substantive knowledge that is available. For instance, unobserved confounding factors threaten the internal validity of estimates, data availability is often limited to non-random, selection-biased samples, causal effects need to be learned from surrogate experiments with imperfect compliance, and causal knowledge has to be extrapolated across structurally heterogeneous populations. A powerful causal inference framework is required to tackle these challenges, which plague most data analyses to varying degrees. Building on the structural approach to causality introduced by Haavelmo (1943) and the graph-theoretic framework proposed by Pearl (1995), the artificial intelligence (AI) literature has developed a wide array of techniques for causal learning that make it possible to leverage information from various imperfect, heterogeneous, and biased data sources (Bareinboim and Pearl, 2016). In this paper, we discuss recent advances in this literature that have the potential to contribute to econometric methodology along three dimensions. First, they provide a unified and comprehensive framework for causal inference in which the aforementioned problems can be addressed in full generality. Second, due to their origin in AI, they come with sound, efficient, and complete algorithmic criteria for automating the corresponding identification tasks. And third, because of the nonparametric description of structural models that graph-theoretic approaches build on, they combine the strengths of both structural econometrics and the potential outcomes framework, and thus offer a perfect middle ground between these two competing streams of literature.
    Comment: Abstract change
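
    As a minimal, hedged illustration of the simplest identification-and-estimation task that such graph-based machinery automates (an assumed toy example, not taken from the paper): back-door adjustment for a single observed confounder W in the graph W -> X, W -> Y, X -> Y:

    ```python
    # Illustrative back-door adjustment on simulated data (all numbers are
    # arbitrary assumptions chosen for demonstration; true effect is 0.3).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    w = rng.binomial(1, 0.5, n)                   # observed confounder
    x = rng.binomial(1, 0.2 + 0.6 * w)            # treatment depends on W
    y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * w)  # outcome depends on X and W

    # Naive contrast is confounded; the adjustment formula
    #   P(y | do(x)) = sum_w P(y | x, w) P(w)
    # recovers the interventional quantity from observational data.
    naive = y[x == 1].mean() - y[x == 0].mean()
    adjusted = sum(
        (y[(x == 1) & (w == v)].mean() - y[(x == 0) & (w == v)].mean()) * (w == v).mean()
        for v in (0, 1)
    )
    print(f"naive: {naive:.3f}, back-door adjusted: {adjusted:.3f}")
    ```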

    A General Algorithm for Deciding Transportability of Experimental Results

    Generalizing empirical findings to new environments, settings, or populations is essential in most scientific explorations. This article treats a particular problem of generalizability, called "transportability," defined as a license to transfer information learned in experimental studies to a different population, on which only observational studies can be conducted. Given a set of assumptions concerning commonalities and differences between the two populations, Pearl and Bareinboim (2011) derived sufficient conditions that permit such transfer to take place. This article summarizes their findings and supplements them with an effective procedure for deciding when and how transportability is feasible. It establishes a necessary and sufficient condition for deciding when causal effects in the target population are estimable from both the statistical information available and the causal information transferred from the experiments. The article further provides a complete algorithm for computing the transport formula, that is, a way of combining observational and experimental information to synthesize a bias-free estimate of the desired causal relation. Finally, the article examines the differences between transportability and other variants of generalizability.
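
    A minimal sketch of one ingredient of such a decision procedure (illustrative, not the article's algorithm; the graph, the variable names, and the networkx d-separation helper are assumptions of this sketch): checking whether a candidate set Z is S-admissible, i.e. whether Y is d-separated from the selection nodes S given Z and X once arrows into X are removed, which licenses the transport formula shown above:

    ```python
    # Illustrative S-admissibility check on a small selection diagram.
    import networkx as nx

    def s_admissible(diagram, x, y, z, s_nodes):
        """Check S-admissibility of Z for transporting P(y | do(x))."""
        mutilated = diagram.copy()
        mutilated.remove_edges_from(list(diagram.in_edges(x)))
        # d_separated is available in networkx >= 2.8 (renamed is_d_separator in 3.3).
        return nx.d_separated(mutilated, set(s_nodes), {y}, set(z) | {x})

    # Selection diagram for the example above: X -> Y, Z -> Y, and a selection
    # node S -> Z marking that the distribution of Z differs across populations.
    g = nx.DiGraph([("X", "Y"), ("Z", "Y"), ("S", "Z")])
    print(s_admissible(g, "X", "Y", {"Z"}, {"S"}))  # True: Z is S-admissible
    ```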

    Manipulationism, Ceteris Paribus Laws, and the Bugbear of Background Knowledge

    According to manipulationist accounts of causal explanation, to explain an event is to show how it could be changed by intervening on its cause. The relevant change must be a ‘serious possibility’, claims Woodward (2003), distinct from mere logical or physical possibility, approximating something I call ‘scientific possibility’. This idea creates significant difficulties: background knowledge is necessary for judgments of possibility. Yet the primary vehicles of explanation in manipulationism are ‘invariant’ generalisations, and these are not well adapted to encoding such knowledge, especially in the social sciences, as some of it is non-causal. Ceteris paribus (CP) laws or generalisations labour under no such difficulty. A survey of research methods such as case and comparative studies, randomised controlled trials, ethnography, and structural equation modeling suggests that it would be more difficult, and in some instances impossible, to represent the output of each method in invariant generalisations; and that this is because in each method causal and non-causal background knowledge mesh in a way that cannot easily be accounted for in manipulationist terms. Ceteris paribus generalisations being superior in this regard, a theory of explanation based on the latter is a better fit for social science.