
    An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies

    The propensity score is the probability of treatment assignment conditional on observed baseline characteristics. The propensity score allows one to design and analyze an observational (nonrandomized) study so that it mimics some of the particular characteristics of a randomized controlled trial. In particular, the propensity score is a balancing score: conditional on the propensity score, the distribution of observed baseline covariates will be similar between treated and untreated subjects. I describe four different propensity score methods: matching on the propensity score, stratification on the propensity score, inverse probability of treatment weighting using the propensity score, and covariate adjustment using the propensity score. I describe balance diagnostics for examining whether the propensity score model has been adequately specified. Furthermore, I discuss differences between regression-based methods and propensity score-based methods for the analysis of observational data. I describe different causal average treatment effects and their relationship with propensity score analyses.
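    A minimal sketch of one of the four methods named above, inverse probability of treatment weighting (IPTW): fit a propensity score model, then weight each subject by the inverse probability of the treatment actually received. All data, effect sizes, and variable names here are simulated and illustrative, not from the article.

    ```python
    # Hedged IPTW sketch on simulated data: one observed confounder x drives
    # both treatment assignment and the outcome; the true treatment effect is 2.0.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    x = rng.normal(size=n)                      # observed baseline covariate
    t = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment depends on x (confounding)
    y = 2.0 * t + 3.0 * x + rng.normal(size=n)  # outcome; true effect = 2.0

    # Propensity score model: logistic regression of t on x, fit by
    # Newton-Raphson to keep the sketch dependency-free.
    X = np.column_stack([np.ones(n), x])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (t - p)
        H = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(H, grad)
    e = 1 / (1 + np.exp(-X @ beta))             # estimated propensity scores

    # IPTW (Horvitz-Thompson style) estimate of the average treatment effect.
    ate_iptw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

    # Naive treated-vs-untreated contrast, biased upward by confounding through x.
    naive = y[t == 1].mean() - y[t == 0].mean()
    ```

    The weighted contrast lands close to the true effect of 2.0, while the naive contrast absorbs the confounding and overshoots, which is exactly the gap propensity score methods are designed to close.
    
    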

    Inference with interference between units in an fMRI experiment of motor inhibition

    An experimental unit is an opportunity to randomly apply or withhold a treatment. There is interference between units if the application of the treatment to one unit may also affect other units. In cognitive neuroscience, a common form of experiment presents a sequence of stimuli or requests for cognitive activity at random to each experimental subject and measures biological aspects of brain activity that follow these requests. Each subject is then many experimental units, and interference between units within an experimental subject is likely, in part because the stimuli follow one another quickly and in part because human subjects learn or become experienced or primed or bored as the experiment proceeds. We use a recent fMRI experiment concerned with the inhibition of motor activity to illustrate and further develop recently proposed methodology for inference in the presence of interference. A simulation evaluates the power of competing procedures. Comment: Published by Journal of the American Statistical Association at http://www.tandfonline.com/doi/full/10.1080/01621459.2012.655954 . R package cin (Causal Inference for Neuroscience) implementing the proposed method is freely available on CRAN at https://CRAN.R-project.org/package=ci
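    A toy simulation of the carryover interference the abstract describes: the stimulus on trial i also affects the response on trial i+1. This is only a hedged illustration of why interference complicates the definition of "the" treatment effect; it is not the paper's proposed methodology, and all effect sizes are invented.

    ```python
    # Each trial in a within-subject sequence is an experimental unit; the
    # stimulus on one trial spills over onto the next trial's response.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 100_000

    t = rng.binomial(1, 0.5, size=n_trials)   # each trial randomized independently
    direct, carryover = 1.0, 0.5              # invented effect sizes

    t_prev = np.concatenate([[0], t[:-1]])    # previous trial's stimulus
    y = direct * t + carryover * t_prev + rng.normal(size=n_trials)

    # Unit-level contrast: under independent per-trial randomization, t_prev is
    # independent of t, so this still recovers the direct effect (about 1.0)...
    unit_contrast = y[t == 1].mean() - y[t == 0].mean()

    # ...but the policy contrast "stimulate every trial vs. stimulate none"
    # includes the spillover, so the two notions of effect diverge.
    policy_contrast = direct + carryover      # = 1.5
    ```

    With interference, the trial-level randomized contrast and the all-versus-none contrast answer different causal questions, which is why specialized inference methods are needed.
    
    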

    Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

    As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE. Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the International Conference on World Wide Web (WWW), 201
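    A hedged sketch of the distinction between the two definitions, on synthetic data where both potential outcomes are known by construction (never true in practice; the paper estimates them under the Rubin-Neyman framework). FACE averages the causal effect of the protected attribute over the whole population; FACT averages it only over the group holding the attribute. All variables here are invented.

    ```python
    # Synthetic data with known counterfactuals: the causal effect of the
    # attribute grows with a covariate x that is correlated with the attribute,
    # so FACE and FACT disagree.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000

    x = rng.normal(size=n)                       # covariate correlated with the attribute
    a = rng.binomial(1, 1 / (1 + np.exp(-x)))    # protected attribute (hypothetical)

    # Potential outcomes under a=0 and a=1.
    y0 = x + rng.normal(size=n)
    y1 = y0 + 0.5 + 0.3 * x                      # effect of the attribute grows with x

    face = np.mean(y1 - y0)                      # FACE: averaged over everyone (= 0.5 here)
    fact = np.mean((y1 - y0)[a == 1])            # FACT: averaged over the a=1 group only
    ```

    Because the a=1 group has higher x on average, FACT exceeds FACE, mirroring the paper's point that the two criteria can yield different findings of discrimination.
    
    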

    Evaluating the Impacts of Subsidies on Innovation Activities in Germany

    Innovations are a key factor to ensure the competitiveness of establishments as well as to enhance the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. As a response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on innovation activities of German establishments. We find that neglecting sample selection due to observable as well as to unobservable characteristics leads to an overestimation of the treatment effect and that there are considerable differences with regard to size class and between West and East German establishments.
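    A toy illustration of the selection problem the abstract describes: if subsidies go disproportionately to establishments that would innovate anyway, the naive treated-versus-untreated gap overstates the treatment effect, while adjusting for the observable selection variable removes that part of the bias. Numbers and variable names are invented, not from the IAB data, and this sketch handles only selection on observables.

    ```python
    # Simulated establishments: an observable trait drives both subsidy receipt
    # and innovation, so the naive contrast overestimates the true effect of 1.0.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000

    quality = rng.normal(size=n)                  # observable establishment trait
    subsidy = rng.binomial(1, 1 / (1 + np.exp(-quality)))   # selection on quality

    true_effect = 1.0
    innovation = true_effect * subsidy + 2.0 * quality + rng.normal(size=n)

    # Naive contrast: inflated by selection on the observable trait.
    naive = innovation[subsidy == 1].mean() - innovation[subsidy == 0].mean()

    # Simple regression adjustment for the observable trait
    # (least squares of innovation on [1, subsidy, quality]).
    X = np.column_stack([np.ones(n), subsidy, quality])
    beta = np.linalg.lstsq(X, innovation, rcond=None)[0]
    adjusted = beta[1]                            # close to the true effect of 1.0
    ```

    Selection on unobservables, which the paper also addresses, would additionally bias the adjusted estimate and requires stronger designs (e.g. panel or instrumental-variable methods).
    
    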

    A Nonparametric Partially Identified Estimator for Equivalence Scales

    Methods for estimating equivalence scales usually rely on rather strong identifying assumptions. This paper considers a partially identified estimator for equivalence scales derived from the potential outcomes framework and using nonparametric methods for estimation, which requires only mild assumptions. Instead of point estimates, the method yields only lower and upper bounds of equivalence scales. Results of an analysis using German expenditure data show that the range implied by these bounds is rather wide, but can be reduced using additional covariates.
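    A hedged sketch of the general idea behind partial identification: with no assumptions about the unobserved counterfactuals beyond a bounded outcome, one obtains worst-case (Manski-style) lower and upper bounds instead of a point estimate. The paper's equivalence-scale estimator is more involved; this illustrates only the bound construction, on invented data with the outcome assumed to lie in [0, 1].

    ```python
    # Worst-case bounds on an average treatment effect with a bounded outcome:
    # the unobserved counterfactual for each unit can be anywhere in [0, 1].
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10_000

    d = rng.binomial(1, 0.5, size=n)                          # observed "treatment"
    y = np.clip(0.3 + 0.2 * d + 0.15 * rng.normal(size=n), 0, 1)

    p = d.mean()
    ey1_obs = y[d == 1].mean()        # E[Y | D=1], observed
    ey0_obs = y[d == 0].mean()        # E[Y | D=0], observed

    # Bounds on E[Y(1)] and E[Y(0)]: fill in the missing counterfactuals
    # with 0 (lower) or 1 (upper).
    ey1_lo, ey1_hi = ey1_obs * p, ey1_obs * p + (1 - p)
    ey0_lo, ey0_hi = ey0_obs * (1 - p), ey0_obs * (1 - p) + p

    ate_lo, ate_hi = ey1_lo - ey0_hi, ey1_hi - ey0_lo
    # The bounds always have width 1 here, which is why additional covariates
    # or assumptions are needed to tighten them, as the abstract notes.
    ```

    The interval is guaranteed to contain the true effect but is wide without further restrictions; the paper's use of additional covariates plays exactly this tightening role.
    
    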