54 research outputs found
Transporting treatment effects from difference-in-differences studies
Difference-in-differences (DID) is a popular approach to identify the causal
effects of treatments and policies in the presence of unmeasured confounding.
DID identifies the sample average treatment effect in the treated (SATT).
However, a goal of such research is often to inform decision-making in target
populations outside the treated sample. Transportability methods have been
developed to extend inferences from study samples to external target
populations; these methods have primarily been developed and applied in
settings where identification is based on conditional independence between the
treatment and potential outcomes, such as in a randomized trial. This paper
develops identification and estimators for effects in a target population,
based on DID conducted in a study sample that differs from the target
population. We present a range of assumptions under which one may identify
causal effects in the target population and employ causal diagrams to
illustrate these assumptions. In most realistic settings, results depend
critically on the assumption that any unmeasured confounders are not effect
measure modifiers on the scale of the effect of interest. We develop several
estimators of transported effects, including a doubly robust estimator based on
the efficient influence function. Simulation results support theoretical
properties of the proposed estimators. We discuss the potential application of
our approach to a study of the effects of a US federal smoke-free housing
policy, where the original study was conducted in New York City alone and the
goal is to extend inferences to other US cities.
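The transport idea in this abstract can be sketched numerically. The following is a minimal illustration with invented numbers, not the paper's doubly robust estimator: it computes a canonical 2x2 DID within each covariate stratum of a study sample, then reweights those stratum effects by a hypothetical target population's covariate shares. The reweighting step is valid only under the abstract's key assumption that unmeasured confounders are not effect measure modifiers on the chosen scale.

```python
def did(pre_t, post_t, pre_c, post_c):
    """Canonical two-group, two-period difference-in-differences."""
    return (post_t - pre_t) - (post_c - pre_c)

# Stratum-specific DID effects estimated in the study sample
# (all values invented for illustration).
tau = {
    "A": did(pre_t=10.0, post_t=7.0, pre_c=10.0, post_c=9.0),  # -2.0
    "B": did(pre_t=8.0, post_t=7.0, pre_c=8.0, post_c=7.5),    # -0.5
}

def weighted_effect(tau, shares):
    """Average stratum effects over a given covariate distribution."""
    return sum(shares[s] * tau[s] for s in tau)

# The SATT uses the study sample's covariate shares; the transported
# effect reweights the same stratum effects to a target city's shares.
satt = weighted_effect(tau, {"A": 0.6, "B": 0.4})
transported = weighted_effect(tau, {"A": 0.3, "B": 0.7})
```

Because the stratum with the smaller effect dominates the hypothetical target population, the transported effect (-0.95) is attenuated relative to the SATT (-1.4), showing why effects estimated in one city need not carry over unchanged.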
Alternative causal inference methods in population health research: Evaluating tradeoffs and triangulating evidence.
Population health researchers from different fields often address similar substantive questions but rely on different study designs, reflecting their home disciplines. This is especially true in studies involving causal inference, for which semantic and substantive differences inhibit interdisciplinary dialogue and collaboration. In this paper, we group nonrandomized study designs into two categories: those that use confounder control (such as regression adjustment or propensity score matching) and those that rely on an instrument (such as instrumental variables, regression discontinuity, or difference-in-differences approaches). Using the Shadish, Cook, and Campbell framework for evaluating threats to validity, we contrast the assumptions, strengths, and limitations of these two approaches and illustrate the differences with examples from the literature on education and health. Across disciplines, all methods to test a hypothesized causal relationship involve unverifiable assumptions, and rarely is there clear justification for exclusive reliance on one method. Each method entails trade-offs between statistical power, internal validity, measurement quality, and generalizability. The choice between confounder-control and instrument-based methods should be guided by these trade-offs and by consideration of the most important limitations of previous work in the area. Our goals are to foster common understanding of the methods available for causal inference in population health research and the trade-offs between them; to encourage researchers to objectively evaluate what can be learned from methods outside their home discipline; and to facilitate the selection of methods that best answer the investigator's scientific questions.
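The two design families contrasted in this abstract can be illustrated with toy estimators on invented data. This is only a sketch: stratification on a measured confounder stands in for confounder-control designs, and a Wald ratio stands in for instrument-based designs; neither is code from the paper.

```python
def stratified_difference(rows):
    """Confounder-control sketch: treated-vs-control outcome difference
    within each stratum of a measured confounder, weighted by stratum
    size. rows are (confounder, treatment, outcome) triples."""
    strata = {}
    for c, x, y in rows:
        strata.setdefault(c, []).append((x, y))
    est = 0.0
    for members in strata.values():
        treated = [y for x, y in members if x == 1]
        control = [y for x, y in members if x == 0]
        est += (len(members) / len(rows)) * (
            sum(treated) / len(treated) - sum(control) / len(control)
        )
    return est

def wald_iv(rows):
    """Instrument-based sketch (Wald estimator): effect of instrument Z
    on outcome Y divided by its effect on treatment X. rows are
    (z, x, y) triples."""
    y1 = [y for z, x, y in rows if z == 1]
    y0 = [y for z, x, y in rows if z == 0]
    x1 = [x for z, x, y in rows if z == 1]
    x0 = [x for z, x, y in rows if z == 0]
    return (sum(y1) / len(y1) - sum(y0) / len(y0)) / (
        sum(x1) / len(x1) - sum(x0) / len(x0)
    )
```

The trade-off the abstract describes is visible even here: the first estimator is unbiased only if the stratifying variable captures all confounding, while the second trades that assumption for instrument validity and relevance, typically at a cost in statistical power.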
Powering population health research: Considerations for plausible and actionable effect sizes
Evidence for Action (E4A), a signature program of the Robert Wood Johnson
Foundation, funds investigator-initiated research on the impacts of social
programs and policies on population health and health inequities. Across
thousands of letters of intent and full proposals E4A has received since 2015,
one of the most common methodological challenges faced by applicants is
selecting realistic effect sizes to inform power and sample size calculations.
E4A prioritizes health studies that are both (1) adequately powered to detect
effect sizes that may reasonably be expected for the given intervention and (2)
likely to achieve intervention effect sizes that, if demonstrated, correspond
to actionable evidence for population health stakeholders. However, little
guidance exists to inform the selection of effect sizes for population health
research proposals. We draw on examples of five rigorously evaluated population
health interventions. These examples illustrate considerations for selecting
realistic and actionable effect sizes as inputs to power and sample size
calculations for research proposals to study population health interventions.
We show that plausible effect sizes for population health interventions may be
smaller than commonly cited guidelines suggest. Effect sizes achieved with
population health interventions depend on the characteristics of the
intervention, the target population, and the outcomes studied. Population
health impact depends on the proportion of the population receiving the
intervention. When adequately powered, even studies of interventions with small
effect sizes can offer valuable evidence to inform population health if such
interventions can be implemented broadly. Demonstrating the effectiveness of
such interventions, however, requires large sample sizes.
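The closing point, that small effects demand large samples, follows directly from the standard normal-approximation sample-size formula. The sketch below is a back-of-the-envelope illustration using conventional alpha and power defaults, not a calculation from the paper.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect a standardized mean difference
    d in a two-sample comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n_small = n_per_group(0.2)  # a "small" effect by common guidelines
n_tiny = n_per_group(0.1)   # plausible for broad population interventions
```

With d = 0.2 the formula gives roughly 390 per group; halving the effect size to d = 0.1 roughly quadruples the requirement to over 1,500 per group, consistent with the abstract's argument that plausible population health effects call for large studies.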
Building the evidence on Making Health a Shared Value: Insights and considerations for research.
The Robert Wood Johnson Foundation (RWJF) Culture of Health Action Framework guides a movement to improve health and advance health equity across the nation. Action Area One of the Framework, Making Health a Shared Value, highlights the role of individual and community factors in achieving a societal commitment to health and health equity, centered around three drivers: Mindset and Expectations, Sense of Community, and Civic Engagement. To stimulate research about how Action Area One and its drivers may impact health, Evidence for Action (E4A), a signature research funding program of RWJF, developed and released a national Call for Proposals (CFP). The process of formulating the CFP and reviewing proposals surfaced important challenges for research on creating and sustaining shared values to foster and maintain a Culture of Health. In this essay, we describe these considerations and provide examples from funded projects regarding how these challenges can be addressed.
Current trends in the application of causal inference methods to pooled longitudinal observational infectious disease studies-A protocol for a methodological systematic review
INTRODUCTION: Pooling (or combining) and analysing observational, longitudinal data at the individual level facilitates inference through increased sample sizes, allowing for joint estimation of study- and individual-level exposure variables, and better enabling the assessment of rare exposures and diseases. Empirical studies leveraging such methods when randomization is unethical or impractical have grown in the health sciences in recent years. The adoption of so-called causal methods to account for measured and/or unmeasured confounders is an important addition to the methodological toolkit for understanding the distribution, progression, and consequences of infectious diseases (IDs) and interventions on IDs. In the face of the COVID-19 pandemic and in the absence of systematic randomization of exposures or interventions, the value of these methods is even more apparent. Yet to our knowledge, no studies have assessed how causal methods are being applied to pooled individual-level, observational, longitudinal data in ID-related research. In this systematic review, we assess how these methods are used and reported in ID-related research over the last 10 years. Findings will facilitate evaluation of trends in causal methods for ID research and lead to concrete recommendations for how to apply these methods where gaps in methodological rigor are identified.
METHODS AND ANALYSIS: We will apply MeSH and text terms to identify relevant studies from EBSCO (Academic Search Complete, Business Source Premier, CINAHL, EconLit with Full Text, PsycINFO), EMBASE, PubMed, and Web of Science. Eligible studies are those that apply causal methods to account for confounding when assessing the effects of an intervention or exposure on an ID-related outcome using pooled, individual-level data from 2 or more longitudinal, observational studies. Titles, abstracts, and full-text articles will be independently screened by two reviewers using Covidence software. Discrepancies will be resolved by a third reviewer. This systematic review protocol has been registered with PROSPERO (CRD42020204104).
Current trends in the application of causal inference methods to pooled longitudinal non-randomised data: A protocol for a methodological systematic review
Introduction: Causal methods have been adopted and adapted across health disciplines, particularly for the analysis of single studies. However, the sample sizes necessary to best inform decision-making are often not attainable with single studies, making pooled individual-level data analysis invaluable for public health efforts. Researchers commonly implement causal methods prevailing in their home disciplines, and how these are selected, evaluated, implemented and reported may vary widely. To our knowledge, no article has yet evaluated trends in the implementation and reporting of causal methods in studies leveraging individual-level data pooled from several studies. We undertake this review to uncover patterns in the implementation and reporting of causal methods used across disciplines in research focused on health outcomes. We will investigate variations in methods to infer causality used across disciplines, time and geography and identify gaps in reporting of methods to inform the development of reporting standards and the conversation required to effect change. Methods and analysis: We will search four databases (EBSCO, Embase, PubMed, Web of Science) using a search strategy developed with librarians from three universities (Heidelberg University, Harvard University, and University of California, San Francisco). The search strategy includes terms such as 'pool*', 'harmoniz*', 'cohort*', 'observational', and variations on 'individual-level data'. Four reviewers will independently screen articles using Covidence and extract data from included articles. The extracted data will be analysed descriptively in tables and graphically to reveal patterns in methods implementation and reporting. This protocol has been registered with PROSPERO (CRD42020143148). Ethics and dissemination: No ethical approval was required as only publicly available data were used.
The results will be submitted as a manuscript to a peer-reviewed journal, disseminated at conferences if relevant, and published as part of doctoral dissertations in Global Health at Heidelberg University Hospital.
Application of causal inference methods in individual-participant data meta-analyses in medicine: addressing data handling and reporting gaps with new proposed reporting guidelines
Observational data provide invaluable real-world information in medicine, but certain methodological considerations are required to derive causal estimates. In this systematic review, we evaluated the methodology and reporting quality of individual-level patient data meta-analyses (IPD-MAs) conducted with non-randomized exposures, published in 2009, 2014, and 2019, that sought to estimate a causal relationship in medicine. We screened over 16,000 titles and abstracts, reviewed 45 full-text articles out of the 167 deemed potentially eligible, and included 29 in the analysis. Unfortunately, we found that causal methodologies were rarely implemented, and reporting was generally poor across studies. Specifically, only three of the 29 articles used quasi-experimental methods, and no study used G-methods to adjust for time-varying confounding. To address these issues, we propose stronger collaborations between physicians and methodologists to ensure that causal methodologies are properly implemented in IPD-MAs. In addition, we put forward a suggested checklist of reporting guidelines for IPD-MAs that utilize causal methods. This checklist could improve reporting and thereby potentially enhance the quality and trustworthiness of IPD-MAs, which can be considered one of the most valuable sources of evidence for health policy.