
    Selection on treatment in the target population of generalizability and transportability analyses

    Investigators are increasingly using novel methods for extending (generalizing or transporting) causal inferences from a trial to a target population. In many generalizability and transportability analyses, the trial and the observational data from the target population are separately sampled, following a non-nested trial design. In practical implementations of this design, non-randomized individuals from the target population are often identified by conditioning on the use of a particular treatment, while individuals who used other candidate treatments for the same indication or individuals who did not use any treatment are excluded. In this paper, we argue that conditioning on treatment in the target population changes the estimand of generalizability and transportability analyses and potentially introduces serious bias in the estimation of causal estimands in the target population or the subset of the target population using a specific treatment. Furthermore, we argue that the naive application of marginalization-based or weighting-based standardization methods does not produce estimates of any reasonable causal estimand. We use causal graphs and counterfactual arguments to characterize the identification problems induced by conditioning on treatment in the target population and illustrate the problems using simulated data. We conclude by considering the implications of our findings for applied work.
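
    As a point of reference for the problem described above, the following minimal sketch shows the standard non-nested analysis in which trial outcome models are standardized over the covariate distribution of an unrestricted target sample; the simulated data and variable names are illustrative assumptions, not the paper's analysis. If the target sample were restricted to users of a particular treatment, the final averaging step would be taken over a different covariate distribution and would no longer target the same estimand.

```python
# Illustrative sketch of outcome-model standardization in a non-nested design.
# All data are simulated; this is not the paper's analysis.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Trial sample (S = 1): randomized treatment A, baseline covariate X, outcome Y
n_trial = 2000
x_trial = rng.normal(size=n_trial)
a_trial = rng.integers(0, 2, size=n_trial)
y_trial = 1.0 * a_trial + 0.5 * x_trial + rng.normal(size=n_trial)

# Target-population sample (S = 0): baseline covariates only
n_target = 5000
x_target = rng.normal(loc=0.8, size=n_target)

# Fit arm-specific outcome models in the trial
m1 = LinearRegression().fit(x_trial[a_trial == 1].reshape(-1, 1), y_trial[a_trial == 1])
m0 = LinearRegression().fit(x_trial[a_trial == 0].reshape(-1, 1), y_trial[a_trial == 0])

# Standardize over the covariate distribution of the (unrestricted) target sample.
# Conditioning the target sample on use of a particular treatment would replace
# x_target with the covariates of that subset, changing what is being estimated.
effect_in_target = np.mean(
    m1.predict(x_target.reshape(-1, 1)) - m0.predict(x_target.reshape(-1, 1))
)
print(f"Transported average treatment effect estimate: {effect_in_target:.2f}")
```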

    Generalizing and transporting inferences about the effects of treatment assignment subject to non-adherence

    We discuss the identifiability of causal estimands for generalizability and transportability analyses, both under perfect and imperfect adherence to treatment assignment. We consider a setting where the trial data contain information on baseline covariates, assignment at baseline, intervention at baseline (point treatment), and outcomes; and where the data from non-randomized individuals only contain information on baseline covariates. In this setting, we review identification results under perfect adherence and study two examples in which non-adherence severely limits the ability to transport inferences about the effects of treatment assignment to the target population. In the first example, trial participation has a direct effect on treatment receipt and, through treatment receipt, on the outcome (a "trial engagement effect" via adherence). In the second example, participation in the trial has unmeasured common causes with treatment receipt. In both examples, the effect of assignment on the outcome in the target population is not identifiable. In the first example, however, the effect of joint interventions to scale-up trial activities that affect adherence and assign treatment is identifiable. We conclude that generalizability and transportability analyses should consider trial engagement effects via adherence and selection for participation on the basis of unmeasured factors that influence adherence.
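
    Under perfect adherence, the identification results reviewed here typically take the familiar transportability form below (a sketch under the usual conditional exchangeability and positivity conditions; the notation is an assumption, not the paper's). Here S indicates trial participation, A treatment assignment, X baseline covariates, Y the outcome, and Y^a the potential outcome under assignment a.

```latex
% Sketch of a standard transportability identification result under perfect
% adherence and the usual exchangeability and positivity conditions.
\[
  \mathrm{E}\!\left[ Y^{a} \mid S = 0 \right]
  = \mathrm{E}\!\left[ \, \mathrm{E}\!\left[ Y \mid X, S = 1, A = a \right] \; \middle| \; S = 0 \right].
\]
```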

    Assessing model performance for counterfactual predictions

    Counterfactual prediction methods are required when a model will be deployed in a setting where treatment policies differ from the setting where the model was developed, or when the prediction question is explicitly counterfactual. However, estimating and evaluating counterfactual prediction models is challenging because one does not observe the full set of potential outcomes for all individuals. Here, we discuss how to tailor a model to a counterfactual estimand, how to assess the model's performance, and how to perform model and tuning parameter selection. We also provide identifiability results for measures of performance for a potentially misspecified counterfactual prediction model based on training and test data from the same (factual) source population. Last, we illustrate the methods using simulation and apply them to the task of developing a statin-naïve risk prediction model for cardiovascular disease.
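
    As one way to make the performance-assessment idea concrete, the sketch below uses inverse probability weighting in a factual test set to estimate the mean squared error a prediction model would have if, counter to fact, no one had received treatment. The simulated data, variable names, and the particular weighting estimator are illustrative assumptions rather than the paper's procedure, and they rely on no unmeasured confounding and positivity.

```python
# Illustrative sketch: IPW estimate of a model's MSE under the counterfactual
# "no treatment" regime, using only factual test data. Simulated data; not the
# paper's estimators.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)                       # baseline covariate
p_a = 1 / (1 + np.exp(-0.5 * x))             # treatment probability depends on x
a = rng.binomial(1, p_a)                     # factual treatment (e.g., statin use)
y = 0.8 * x - 0.6 * a + rng.normal(size=n)   # outcome; treatment lowers it

pred = 0.7 * x                               # candidate "untreated-risk" model to evaluate

# Model the probability of remaining untreated given covariates
ps = LogisticRegression().fit(x.reshape(-1, 1), a)
p_untreated = ps.predict_proba(x.reshape(-1, 1))[:, 0]

# Weighted MSE among the factually untreated estimates the MSE the model would
# have if, counterfactually, no one had been treated (given exchangeability
# and positivity).
untreated = a == 0
weights = 1 / p_untreated[untreated]
mse_counterfactual = np.average((y[untreated] - pred[untreated]) ** 2, weights=weights)
print(f"IPW estimate of counterfactual MSE: {mse_counterfactual:.2f}")
```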

    On the causal interpretation of rate-change methods: the prior event rate ratio and rate difference

    A growing number of studies use data before and after treatment initiation in groups exposed to different treatment strategies to estimate "causal effects" using a ratio measure called the prior event rate ratio (PERR). Here, we offer a causal interpretation for PERR and its additive scale analog, the prior event rate difference (PERD). We show that causal interpretation of these measures requires untestable rate-change assumptions about the relationship between (1) the change of the counterfactual rate before and after treatment initiation in the treated group under hypothetical intervention to implement the control treatment; and (2) the change of the factual rate before and after treatment initiation in the control group. The rate-change assumption is on the multiplicative scale for PERR, but on the additive scale for PERD; the two assumptions hold simultaneously under testable, but unlikely, conditions. Even if investigators can pick the most appropriate scale, the relevant rate-change assumption may not hold exactly, so we describe sensitivity analysis methods to examine how assumption violations of different magnitudes would affect study results. We illustrate the methods using data from a published study of proton pump inhibitors and pneumonia.
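
    For concreteness, the two measures are commonly written as below, with rates indexed by group (treated or control) and period (before or after treatment initiation); the notation is an assumption rather than the paper's.

```latex
% Common definitions of PERR and PERD (a sketch; notation is an assumption).
% \lambda_{g,t}: event rate in group g (1 = treated, 0 = control) during
% period t (pre or post treatment initiation).
\[
  \mathrm{PERR}
    = \frac{\lambda_{1,\mathrm{post}} / \lambda_{0,\mathrm{post}}}
           {\lambda_{1,\mathrm{pre}}  / \lambda_{0,\mathrm{pre}}},
  \qquad
  \mathrm{PERD}
    = \bigl(\lambda_{1,\mathrm{post}} - \lambda_{0,\mathrm{post}}\bigr)
    - \bigl(\lambda_{1,\mathrm{pre}}  - \lambda_{0,\mathrm{pre}}\bigr).
\]
```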