
    Matching Methods for Causal Inference: A Review and a Look Forward

    When estimating causal effects using observational data, it is desirable to replicate a randomized experiment as closely as possible by obtaining treated and control groups with similar covariate distributions. This goal can often be achieved by choosing well-matched samples of the original treated and control groups, thereby reducing bias due to the covariates. Since the 1970s, work on matching methods has examined how to best choose treated and control subjects for comparison. Matching methods are gaining popularity in fields such as economics, epidemiology, medicine, and political science. However, until now the literature and related advice have been scattered across disciplines. Researchers who are interested in using matching methods---or developing methods related to matching---do not have a single place to turn to learn about past and current research. This paper provides a structure for thinking about matching methods and guidance on their use, coalescing the existing research (both old and new) and providing a summary of where the literature on matching methods is now and where it should be headed. Comment: Published at http://dx.doi.org/10.1214/09-STS313 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
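    As a concrete illustration (not taken from the paper), here is a minimal sketch of 1:1 nearest-neighbor matching on an estimated propensity score, one common matching method in this literature. The simulated data and the greedy without-replacement matching rule are illustrative assumptions, not the review's specific recommendations.

```python
# Minimal sketch: 1:1 nearest-neighbor propensity score matching.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                             # covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment indicator

# Step 1: estimate the propensity score P(treated = 1 | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedily match each treated unit to the nearest unmatched
# control on the propensity score (1:1, without replacement).
treated_idx = np.flatnonzero(treated == 1)
control_idx = list(np.flatnonzero(treated == 0))
pairs = []
for t in treated_idx:
    j = min(control_idx, key=lambda c: abs(ps[t] - ps[c]))
    pairs.append((t, j))
    control_idx.remove(j)

# The matched sample can then be analyzed like a randomized experiment,
# after checking covariate balance within the matched pairs.
```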

    Observational Study Design in Veterinary Pathology, Part 1: Study Design

    Observational studies are the basis for much of our knowledge of veterinary pathology and are highly relevant to the daily practice of pathology. However, recommendations for conducting pathology-based observational studies are not readily available. In part 1 of this series, we offer advice on planning and conducting an observational study, with examples from the veterinary pathology literature. Investigators should recognize the importance of creativity, insight, and innovation in devising studies that solve problems and fill important gaps in knowledge. Studies should focus on specific and testable hypotheses, questions, or objectives, and the methodology should be developed to support these goals. We consider the merits and limitations of different types of analytic and descriptive studies, as well as of prospective versus retrospective enrollment. Investigators should define clear inclusion and exclusion criteria and select adequate numbers of study subjects, including careful selection of the most appropriate controls. Studies of causality must consider the temporal relationships between variables and the advantages of measuring incident cases rather than prevalent cases. Investigators must consider unique aspects of studies based on archived laboratory case material and take particular care to consider and mitigate the potential for selection bias and information bias. We close by discussing approaches to adding value and impact to observational studies. Part 2 of the series focuses on methodology and validation of methods.

    Standardization and Control for Confounding in Observational Studies: A Historical Perspective

    Control for confounders in observational studies was generally handled through stratification and standardization until the 1960s. Standardization typically reweights the stratum-specific rates so that exposure categories become comparable. With the development first of loglinear models, and soon after of nonlinear regression techniques (logistic regression, failure-time regression) that the emerging computers could handle, regression modelling became the preferred approach, as was already the case with multiple regression analysis for continuous outcomes. Since the mid-1990s it has become increasingly obvious that weighting methods are still often useful, sometimes even necessary. Against this background, we describe the emergence of the modelling approach and the refinement of the weighting approach for confounder control. Comment: Published at http://dx.doi.org/10.1214/13-STS453 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
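    To make the reweighting concrete, here is a minimal numeric sketch of direct standardization: stratum-specific rates in each exposure group are averaged with weights from a shared standard population, so that the groups become comparable despite different age structures. All weights and rates below are invented for illustration.

```python
# Minimal sketch: directly standardized rates over age strata.
import numpy as np

# Age-stratum weights from an (assumed) standard population.
std_weights = np.array([0.4, 0.35, 0.25])        # young, middle, old

# Observed stratum-specific event rates per 1,000 person-years.
rates_exposed   = np.array([2.0, 5.0, 12.0])
rates_unexposed = np.array([1.5, 4.0, 10.0])

# Directly standardized rates: weighted averages with common weights,
# removing the effect of differing age distributions between groups.
sr_exposed = np.sum(std_weights * rates_exposed)       # 5.55
sr_unexposed = np.sum(std_weights * rates_unexposed)   # 4.50

print(sr_exposed, sr_unexposed)
```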

    A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e., contagion) and influence of one individual's covariates on another's outcome (i.e., covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
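    To make the targeting idea concrete, the sketch below implements the standard individual-level TMLE recipe for an average treatment effect with a binary outcome. This is the basic building block, not the paper's cluster-level estimators; the simulated data, model choices, and variable names are all assumptions for illustration.

```python
# Minimal sketch: TMLE for the average treatment effect (binary outcome).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
W = rng.normal(size=(n, 2))                             # confounders
A = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))         # exposure
Y = rng.binomial(1, 1 / (1 + np.exp(-(A + W[:, 1]))))   # binary outcome

# Step 1: initial outcome regression Q(A, W) = E[Y | A, W].
q_fit = LogisticRegression().fit(np.column_stack([A, W]), Y)
Q1 = q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = q_fit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]
QA = np.where(A == 1, Q1, Q0)

# Step 2: propensity score g(W) = P(A = 1 | W) and the clever covariate.
g = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]
H = A / g - (1 - A) / (1 - g)

# Step 3: targeting step. Fluctuate the initial fit with a one-parameter
# logistic regression of Y on H, using logit(QA) as a fixed offset.
logit = lambda p: np.log(p / (1 - p))
eps = sm.GLM(Y, H, family=sm.families.Binomial(),
             offset=logit(np.clip(QA, 1e-6, 1 - 1e-6))).fit().params[0]

# Step 4: update the counterfactual predictions and plug in.
expit = lambda x: 1 / (1 + np.exp(-x))
Q1s = expit(logit(np.clip(Q1, 1e-6, 1 - 1e-6)) + eps / g)
Q0s = expit(logit(np.clip(Q0, 1e-6, 1 - 1e-6)) - eps / (1 - g))
print("TMLE ATE:", np.mean(Q1s - Q0s))
```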