    Confounding Equivalence in Causal Inference

    The paper provides a simple test for deciding, from a given causal diagram, whether two sets of variables have the same bias-reducing potential under adjustment. The test requires that one of the following two conditions holds: either (1) both sets are admissible (i.e., satisfy the back-door criterion) or (2) the Markov boundaries surrounding the manipulated variable(s) are identical in both sets. Applications to covariate selection and model testing are discussed. Comment: Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010).
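
    As an illustration of condition (1), back-door admissibility of a candidate set can be checked mechanically with a graph library. A minimal sketch using networkx (the graph and variable names are illustrative, not from the paper; networkx renamed d_separated to is_d_separator in release 3.3):

    ```python
    import networkx as nx

    def is_backdoor_admissible(G, x, y, Z):
        """Check Pearl's back-door criterion for adjustment set Z (sketch).

        Z is admissible for the effect of x on y if (1) no member of Z is
        a descendant of x, and (2) Z d-separates x and y once all edges
        *out of* x are removed, leaving only the back-door paths."""
        Z = set(Z)
        if Z & nx.descendants(G, x):
            return False
        G_bd = G.copy()
        G_bd.remove_edges_from(list(G.out_edges(x)))
        return nx.d_separated(G_bd, {x}, {y}, Z)

    # Toy diagram U -> x, U -> y, x -> y: {"U"} is admissible, {} is not.
    G = nx.DiGraph([("U", "x"), ("U", "y"), ("x", "y")])
    print(is_backdoor_admissible(G, "x", "y", {"U"}))  # True
    print(is_backdoor_admissible(G, "x", "y", set()))  # False
    ```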

    Bias amplification in the g-computation algorithm for time-varying treatments: a case study of industry payments and prescription of opioid products

    BACKGROUND: It is often challenging to determine which variables should be included in the g-computation algorithm in the time-varying setting. Conditioning on instrumental variables (IVs) is known to introduce greater bias when there is unmeasured confounding in point-treatment settings, and the same holds for near-IVs, which are weakly associated with the outcome other than through the treatment. However, it is unknown whether adjusting for (near-)IVs amplifies bias in g-computation algorithm estimators for time-varying treatments relative to estimators that ignore such variables. We therefore aimed to compare the magnitude of bias introduced by adjusting for (near-)IVs across their different relationships with the treatments in the time-varying setting. METHODS: After presenting a case study of the association between the receipt of industry payments and physicians' opioid prescribing rates in the US, we conducted a Monte Carlo simulation to investigate the extent to which bias due to unmeasured confounders is amplified by adjusting for a (near-)IV across several g-computation algorithms. RESULTS: In our simulation study, adjusting for a perfect IV of the time-varying treatments in the g-computation algorithm increased bias due to unmeasured confounding, particularly when the IV had a strong relationship with the treatment. Bias also increased when adjusting for a near-IV whose association with the unmeasured confounders between treatment and outcome was very weak relative to its association with the time-varying treatments. In contrast, this bias-amplifying behaviour was not observed (i.e., bias due to unmeasured confounders decreased) when adjusting for a near-IV with a stronger association with the unmeasured confounders (correlation coefficient ≥0.1 in our multivariate normal setting). CONCLUSION: We recommend avoiding adjustment for a perfect IV in the g-computation algorithm to obtain a less biased estimate of the time-varying treatment effect. On the other hand, it may be advisable to include a near-IV in the algorithm unless its association with the unmeasured confounders is very weak. These findings should help researchers weigh the magnitude of bias when adjusting for (near-)IVs and select variables for the g-computation algorithm in the time-varying setting when unmeasured confounding is suspected.
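
    The point-treatment mechanism behind this amplification is easy to reproduce. A minimal Monte Carlo sketch (the coefficients are illustrative, and this is the known point-treatment case, not the paper's time-varying simulation): U is an unmeasured confounder and Z a perfect IV, and adjusting for Z moves the estimate further from the truth:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500_000

    U = rng.normal(size=n)                # unmeasured confounder
    Z = rng.normal(size=n)                # perfect IV: affects A only
    A = 1.5 * Z + U + rng.normal(size=n)  # treatment, strongly driven by Z
    Y = 0.5 * A + U + rng.normal(size=n)  # outcome; true effect of A is 0.5

    def slope_on_A(y, *covs):
        """OLS coefficient on A after adjusting for the given covariates."""
        X = np.column_stack((np.ones(n), A) + covs)
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    print(slope_on_A(Y))     # ~0.74: biased upwards by U
    print(slope_on_A(Y, Z))  # ~1.00: adjusting for the IV amplifies the bias
    ```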

    Outcome modelling strategies in epidemiology: traditional methods and basic alternatives.

    Controlling for too many potential confounders can create or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large relative to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the 'change-in-estimate' (CIE) approach, for deciding which potential confounders to include in an outcome-regression model for estimating the effects of a targeted exposure. We discuss their shortcomings and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Given that most users must stay within standard software packages, this goal can be approximated using basic methods to assess, and thereby minimize, mean squared error (MSE).
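
    For concreteness, the basic CIE rule drops a covariate when its removal changes the exposure coefficient by less than a chosen cutoff (10% is the conventional default). A minimal backward-deletion sketch in plain numpy (the function names and cutoff are illustrative of the traditional strategy the paper critiques, not of its proposed refinements):

    ```python
    import numpy as np

    def exposure_coef(y, exposure, covs):
        """OLS coefficient on the exposure, adjusting for covs (1-D arrays)."""
        X = np.column_stack([np.ones(len(y)), exposure] + covs)
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    def cie_backward(y, exposure, covs, cutoff=0.10):
        """Backward change-in-estimate selection (sketch).

        Repeatedly drops the covariate whose removal changes the exposure
        coefficient the least, stopping once every remaining removal would
        shift the estimate by more than `cutoff` in relative terms."""
        covs = list(covs)
        while covs:
            base = exposure_coef(y, exposure, covs)
            changes = [abs(exposure_coef(y, exposure, covs[:i] + covs[i + 1:])
                           - base) / abs(base)
                       for i in range(len(covs))]
            i_min = int(np.argmin(changes))
            if changes[i_min] > cutoff:
                break              # every remaining covariate matters: stop
            covs.pop(i_min)        # drop the least influential covariate
        return covs                # the retained adjustment set
    ```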

    Combining multiple observational data sources to estimate causal effects

    The era of big data has witnessed an increasing availability of multiple data sources for statistical analysis. We consider estimation of causal effects by combining big main data, in which confounders are unmeasured, with smaller validation data containing supplementary information on these confounders. Under the unconfoundedness assumption with completely observed confounders, the smaller validation data allow for constructing consistent estimators of causal effects, whereas the big main data can in general yield only error-prone estimators. However, by leveraging the information in the big main data in a principled way, we can improve the estimation efficiency of the initial estimators based solely on the validation data while preserving their consistency. Our framework applies to asymptotically normal estimators, including the commonly used regression imputation, weighting, and matching estimators, and does not require a correct specification of the model relating the unmeasured confounders to the observed variables. We also propose appropriate bootstrap procedures, which make our method straightforward to implement using software routines for existing estimators.
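
    The general recipe in this setting resembles a control-variate correction: start from the consistent validation-data estimator and subtract an optimally scaled difference between the error-prone estimator computed on the validation data and on the main data. A minimal scalar sketch (all names are illustrative, and it assumes bootstrap replicates in which tau and theta are computed on the same validation resamples):

    ```python
    import numpy as np

    def combine_estimates(tau_val, theta_val, theta_main,
                          boot_tau_val, boot_theta_val, boot_theta_main):
        """Control-variate style combination of two data sources (sketch).

        tau_val    : consistent effect estimate from the validation data
        theta_val  : error-prone estimate from the same validation data
        theta_main : error-prone estimate from the big main data
        boot_*     : bootstrap replicates (tau and theta computed on the same
                     validation resamples; main data resampled separately)."""
        d = np.asarray(boot_theta_val) - np.asarray(boot_theta_main)
        gamma = np.cov(boot_tau_val, d)[0, 1] / np.var(d, ddof=1)
        return tau_val - gamma * (theta_val - theta_main)
    ```

    Because both error-prone estimators converge to the same limit, their difference vanishes asymptotically, so the correction preserves the consistency of the validation-data estimator while the variance-minimizing choice of gamma delivers the efficiency gain.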