
    Design and Analysis of Experiments in Networks: Reducing Bias from Interference

    Estimating the effects of interventions in networks is complicated by interference: the outcomes for one experimental unit may depend on the treatment assignments of other units. Familiar statistical formalism, experimental designs, and analysis methods assume the absence of this interference and yield biased estimates of causal effects when it is present. While some assumptions can lead to unbiased estimates, these assumptions are generally unrealistic in the context of a network and often amount to assuming away the interference. In this work, we evaluate methods for designing and analyzing randomized experiments under minimal, realistic assumptions compatible with broad interference, where the aim is to reduce bias and possibly overall error in estimates of average effects of a global treatment. In design, we consider the ability to perform random assignment to treatments that is correlated in the network, such as through graph cluster randomization. In analysis, we consider incorporating information about the treatment assignment of network neighbors. We prove sufficient conditions for bias reduction through both design and analysis in the presence of potentially global interference; these conditions also give lower bounds on treatment effects. Through simulations of the entire process of experimentation in networks, we measure the performance of these methods under varied network structure and varied social behaviors, finding substantial bias reductions and, despite a bias–variance tradeoff, error reductions. These improvements are largest for networks with more clustering and for data generating processes with both stronger direct effects of the treatment and stronger interactions between units.
    Keywords: causal inference; field experiments; peer effects; spillovers; social contagion; social network analysis; graph partitioning
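
    As a rough illustration of the design and analysis ideas named in this abstract, the sketch below clusters a graph, randomizes treatment at the cluster level, and compares mean outcomes among units whose neighborhoods share their own assignment. It uses networkx; the greedy modularity clustering, the toy outcome model, and the simple exposure filter are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of graph cluster randomization and a neighborhood-exposure
# comparison. The clustering method, outcome model, and exposure condition
# are illustrative choices, not the procedure from the paper.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_randomize(G, p_treat=0.5, seed=0):
    """Assign treatment at the cluster level so that network neighbors
    tend to share the same arm, reducing exposure to the other arm."""
    rng = random.Random(seed)
    assignment = {}
    for cluster in greedy_modularity_communities(G):
        z = 1 if rng.random() < p_treat else 0
        for node in cluster:
            assignment[node] = z
    return assignment

def neighborhood_exposure_estimate(G, assignment, outcome):
    """Difference in mean outcomes, restricted to units whose neighbors
    all share the unit's own assignment (a simple exposure condition)."""
    treated, control = [], []
    for v in G.nodes:
        z = assignment[v]
        if all(assignment[u] == z for u in G.neighbors(v)):
            (treated if z == 1 else control).append(outcome[v])
    return sum(treated) / len(treated) - sum(control) / len(control)

# Example: a clustered toy graph whose outcomes include a spillover term.
G = nx.connected_caveman_graph(20, 6)
z = cluster_randomize(G, seed=1)
y = {v: 1.0 * z[v]
        + 0.5 * sum(z[u] for u in G.neighbors(v)) / G.degree(v)
        + random.gauss(0, 0.1)
     for v in G.nodes}
print(neighborhood_exposure_estimate(G, z, y))
```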

    DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning

    Causal mediation analysis can unpack the black box of causality and is therefore a powerful tool for disentangling causal pathways in the biomedical and social sciences, as well as for evaluating machine learning fairness. To reduce bias in estimating natural direct and indirect effects in mediation analysis, we propose a new method called DeepMed that uses deep neural networks (DNNs) to cross-fit the infinite-dimensional nuisance functions in the efficient influence functions. We obtain novel theoretical results showing that DeepMed (1) can achieve the semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and (2) can adapt to certain low-dimensional structures of the nuisance functions, significantly advancing the existing literature on DNN-based semiparametric causal inference. Extensive synthetic experiments are conducted to support our findings and also expose the gap between theory and practice. As a proof of concept, we apply DeepMed to analyze two real datasets on machine learning fairness and reach conclusions consistent with previous findings.
    Comment: Accepted by NeurIPS 2022
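
    The sketch below illustrates only the cross-fitting ingredient mentioned in this abstract: each unit's nuisance prediction comes from a model trained on the other folds. An sklearn MLPRegressor stands in for a deep network, the toy mediation data are made up, and the efficient-influence-function step that DeepMed builds on top of these nuisance estimates is omitted.

```python
# Minimal sketch of cross-fitting, the ingredient DNN-based semiparametric
# estimators such as DeepMed rely on: out-of-fold nuisance predictions.
# MLPRegressor is a stand-in for a deep net; the EIF-based estimator that
# would combine the fitted nuisances is not shown.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def cross_fit_predict(X, y, n_splits=5, seed=0):
    """Return out-of-fold predictions of E[y | X]."""
    preds = np.empty_like(y, dtype=float)
    for train_idx, test_idx in KFold(n_splits, shuffle=True,
                                     random_state=seed).split(X):
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=seed)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return preds

# Toy data: covariates X, treatment A, mediator M, outcome Y.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 0.5, size=n)
M = X[:, 0] + A + rng.normal(scale=0.5, size=n)
Y = X[:, 1] + 0.8 * A + 0.5 * M + rng.normal(scale=0.5, size=n)

# Cross-fitted outcome regression E[Y | X, A, M]; the other nuisances
# (propensity scores, mediator models) would be fitted the same way and
# then combined through the efficient influence function.
mu_hat = cross_fit_predict(np.column_stack([X, A, M]), Y)
print("out-of-fold explained variance:", 1 - np.var(Y - mu_hat) / np.var(Y))
```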

    Causal inference for social network data

    We describe semiparametric estimation and inference for causal effects using observational data from a single social network. Our asymptotic result is the first to allow each observation to depend on a growing number of other units as the sample size increases. While previous methods have generally, and implicitly, focused on one of two possible sources of dependence among social network observations, we allow for both: dependence due to transmission of information across network ties, and dependence due to latent similarities among nodes sharing ties. We describe estimation and inference for new causal effects that are specifically of interest in social network settings, such as interventions on network ties and network structure. Using our methods to reanalyze the Framingham Heart Study data used in one of the most influential and controversial causal analyses of social network data, we find that, after accounting for network structure, there is no evidence for the causal effects claimed in the original paper.

    The "Unfriending" Problem: The Consequences of Homophily in Friendship Retention for Causal Estimates of Social Influence

    An increasing number of scholars are using longitudinal social network data to obtain estimates of peer or social influence effects. These data may provide additional statistical leverage, but they can introduce new inferential problems. In particular, while the confounding effects of homophily in friendship formation are widely appreciated, homophily in friendship retention may also confound causal estimates of social influence in longitudinal network data. We provide evidence for this claim in a Monte Carlo analysis of the statistical model used by Christakis, Fowler, and their colleagues in numerous articles estimating "contagion" effects in social networks. Our results indicate that homophily in friendship retention induces significant upward bias and decreased coverage levels in the Christakis and Fowler model when there is non-negligible friendship attrition over time.
    Comment: 26 pages, 4 figures
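
    A simplified Monte Carlo along these lines is sketched below: outcomes depend only on each unit's own latent trait (so there is no true contagion), friendship retention favors similar dyads, and a lagged peer-outcome regression on the retained dyads nevertheless yields a positive "influence" coefficient. The dyadic setup and regression are illustrative assumptions, not the exact Christakis and Fowler specification.

```python
# Simplified Monte Carlo of the mechanism described in the abstract: with no
# true contagion, homophily in friendship *retention* still produces a
# positive coefficient on the friend's lagged outcome. The dyadic setup and
# regression are illustrative, not the exact Christakis-Fowler model.
import numpy as np

def one_replication(n_dyads=2000, seed=0):
    rng = np.random.default_rng(seed)
    trait_ego = rng.normal(size=n_dyads)
    trait_alter = rng.normal(size=n_dyads)
    # Outcomes depend only on the unit's own trait: zero social influence.
    y_ego_t0 = trait_ego + rng.normal(scale=1.0, size=n_dyads)
    y_alter_t0 = trait_alter + rng.normal(scale=1.0, size=n_dyads)
    y_ego_t1 = trait_ego + rng.normal(scale=1.0, size=n_dyads)
    # Homophilous retention: dissimilar dyads are more likely to dissolve.
    keep = rng.random(n_dyads) < np.exp(-np.abs(trait_ego - trait_alter))
    # Regress ego's outcome at t1 on the alter's and ego's lagged outcomes,
    # using only the retained friendships.
    X = np.column_stack([np.ones(keep.sum()),
                         y_alter_t0[keep], y_ego_t0[keep]])
    beta, *_ = np.linalg.lstsq(X, y_ego_t1[keep], rcond=None)
    return beta[1]  # coefficient on the alter's lagged outcome

estimates = [one_replication(seed=s) for s in range(200)]
print("mean spurious 'influence' estimate:", np.mean(estimates))
```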