
    Mediation Analysis Without Sequential Ignorability: Using Baseline Covariates Interacted with Random Assignment as Instrumental Variables

    In randomized trials, researchers are often interested in mediation analysis to understand how a treatment works: in particular, how much of a treatment's effect is mediated by an intermediate variable and how much the treatment directly affects the outcome not through the mediator. The standard regression approach to mediation analysis assumes sequential ignorability of the mediator, that is, that the mediator is effectively randomly assigned given baseline covariates and the randomized treatment. Since the experiment does not randomize the mediator, sequential ignorability is often not plausible. Ten Have et al. (2007, Biometrics), Dunn and Bentall (2007, Statistics in Medicine) and Albert (2008, Statistics in Medicine) presented methods that use baseline covariates interacted with random assignment as instrumental variables and do not require sequential ignorability. We make two contributions to this approach. First, previous work on the instrumental variable approach assumed that the direct effect of treatment and the effect of the mediator are constant across subjects; we allow for variation in effects across subjects and show what assumptions are needed to obtain consistent estimates in this setting. Second, we develop a method of sensitivity analysis for violations of the key assumption that the direct effect of the treatment and the effect of the mediator do not depend on the baseline covariates.
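    A minimal simulation can illustrate the instrumental-variable idea: the covariate-by-assignment interaction shifts the mediator but, under the key exclusion assumption, affects the outcome only through the mediator, so it can instrument for a mediator that is confounded with the outcome. The sketch below is illustrative only (all variable names and effect sizes are invented, and it assumes the constant-effects setting) and compares naive OLS with two-stage least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    x = rng.normal(size=n)            # baseline covariate
    z = rng.integers(0, 2, size=n)    # randomized treatment assignment

    # unmeasured confounder u affects both mediator and outcome,
    # so sequential ignorability fails
    u = rng.normal(size=n)
    m = 0.5 + 0.4 * z + 0.3 * x + 0.8 * z * x + u + rng.normal(size=n)
    beta_m, beta_dir = 1.5, 0.7       # true mediator and direct effects
    y = 1.0 + beta_dir * z + beta_m * m + 0.2 * x - 1.2 * u + rng.normal(size=n)

    # naive OLS of y on (1, z, x, m) is biased because m is confounded by u
    A = np.column_stack([np.ones(n), z, x, m])
    ols = np.linalg.lstsq(A, y, rcond=None)[0]

    # 2SLS: instrument m with the covariate-by-assignment interaction z*x
    W = np.column_stack([np.ones(n), z, x, z * x])      # instrument set
    mhat = W @ np.linalg.lstsq(W, m, rcond=None)[0]     # first stage
    A2 = np.column_stack([np.ones(n), z, x, mhat])
    tsls = np.linalg.lstsq(A2, y, rcond=None)[0]
    print(ols[3], tsls[3])   # mediator effect: biased vs. roughly the true 1.5
    ```

    Identification hinges on the interaction genuinely moving the mediator (the 0.8 coefficient above); with a weak interaction the instrument, and the 2SLS estimate, would be unstable.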

    Comment: The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation

    Comment on "The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation" [arXiv:0910.3752]. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: 10.1214/09-STS274B.

    Error-free milestones in error prone measurements

    A predictor variable or dose that is measured with substantial error may possess an error-free milestone, such that it is known with negligible error whether the value of the variable is to the left or right of the milestone. Such a milestone provides a basis for estimating a linear relationship between the true but unknown value of the error-free predictor and an outcome, because the milestone creates a strong and valid instrumental variable. The inferences are nonparametric and robust, and in the simplest cases, they are exact and distribution free. We also consider multiple milestones for a single predictor and milestones for several predictors whose partial slopes are estimated simultaneously. Examples are drawn from the Wisconsin Longitudinal Study, in which a BA degree acts as a milestone for sixteen years of education, and the binary indicator of military service acts as a milestone for years of service. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: 10.1214/08-AOAS233.
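    The milestone logic can be sketched with a simple Wald-type estimator: a binary milestone known without error (a stand-in for the BA-degree indicator) instruments an error-prone measure of the predictor, undoing the attenuation that measurement error causes in OLS. All numbers below are invented for illustration; the paper's actual inferences are nonparametric rather than this parametric sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20000
    xstar = rng.normal(14.0, 3.0, n)         # true years of education (unobserved)
    xobs = xstar + rng.normal(0.0, 2.0, n)   # error-prone self-report
    mile = (xstar >= 16.0).astype(float)     # BA milestone, known without error
    y = 2.0 + 0.5 * xstar + rng.normal(0.0, 1.0, n)

    # naive OLS on the noisy measure is attenuated toward zero
    beta_ols = np.cov(y, xobs)[0, 1] / np.var(xobs, ddof=1)

    # Wald/IV estimate: the milestone is correlated with the true predictor
    # but independent of the measurement error
    beta_iv = np.cov(y, mile)[0, 1] / np.cov(xobs, mile)[0, 1]
    print(beta_ols, beta_iv)   # attenuated vs. close to the true slope 0.5
    ```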

    Sensitivity Analysis for Multiple Comparisons in Matched Observational Studies through Quadratically Constrained Linear Programming

    A sensitivity analysis in an observational study assesses the robustness of significant findings to unmeasured confounding. While sensitivity analyses in matched observational studies have been well addressed when there is a single outcome variable, accounting for multiple comparisons through the existing methods yields overly conservative results when there are multiple outcome variables of interest. This stems from the fact that, in reality, unmeasured confounding cannot affect the probability of assignment to treatment differently depending on which outcome is being analyzed; existing methods implicitly allow this to occur by combining the results of individual sensitivity analyses to assess whether at least one hypothesis is significant, which in turn results in an overly pessimistic assessment of a study's sensitivity to unobserved biases. By solving a quadratically constrained linear program, we are able to perform a sensitivity analysis while enforcing that unmeasured confounding must have the same impact on the treatment assignment probabilities across outcomes for each individual in the study. We show that this allows for uniform improvements in the power of a sensitivity analysis not only for testing the overall null of no effect, but also for null hypotheses on specific outcome variables while strongly controlling the familywise error rate. We illustrate our method through an observational study on the effect of smoking on naphthalene exposure.
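    The single-outcome building block being sharpened here is the classical Rosenbaum-style bound for matched pairs, which the quadratically constrained program extends to several outcomes jointly. A stdlib-only sketch of that building block (the pair counts are invented for illustration):

    ```python
    from math import comb

    def worst_case_pvalue(t, d, gamma):
        """Upper bound on the one-sided sign-test p-value for d discordant
        matched pairs, t of which favor a treatment effect, when unmeasured
        confounding can shift the within-pair odds of treatment by at most a
        factor gamma (gamma = 1 recovers the usual randomization test)."""
        p = gamma / (1.0 + gamma)
        return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(t, d + 1))

    # significance erodes as the allowed hidden bias gamma grows
    for g in (1.0, 1.5, 2.0):
        print(g, worst_case_pvalue(15, 20, g))
    ```

    With several outcomes, running this bound separately per outcome lets a different worst-case confounder be chosen for each one; the paper's contribution is to force a single confounding pattern across all outcomes, which is what yields the power gains described above.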