Improving Methods for Propensity Score Analysis with Mis-Measured Variables by Incorporating Background Variables with Moderated Nonlinear Factor Analysis
There has been some research on the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mis-measured covariate using a latent variable model and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study designed to examine the performance of this method in the context of differential measurement error and propose a method based on moderated nonlinear factor analysis (MNLFA) to address known problems with standard methods. Although MNLFA somewhat improves effect estimation in the presence of differential measurement error relative to standard factor analysis methods, the greatest gains come from the nonstandard practice of including the treatment variable as an indicator in the scoring models. More research is required on the effects of model misspecification on the performance of these methods for causal inference applications.
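As a hedged illustration of the standard factor-score approach this thesis builds on, the R sketch below scores a latent covariate with an ordinary confirmatory factor model (using lavaan, not MNLFA) and uses the scores in a propensity model; the data and all variable names are simulated for illustration, not taken from the study.

    # Minimal sketch: factor scores in place of a mis-measured covariate
    # (ordinary CFA via lavaan, not MNLFA; simulated data)
    library(lavaan)
    set.seed(3)
    n   <- 500
    eta <- rnorm(n)                          # true latent covariate
    dat <- data.frame(item1 = eta + rnorm(n, sd = 0.5),
                      item2 = eta + rnorm(n, sd = 0.5),
                      item3 = eta + rnorm(n, sd = 0.5),
                      treat = rbinom(n, 1, plogis(eta)))

    cfa_fit <- cfa("eta =~ item1 + item2 + item3", data = dat)
    dat$eta_hat <- as.numeric(lavPredict(cfa_fit))  # factor scores

    # Propensity model using the factor scores instead of any single indicator
    ps_fit <- glm(treat ~ eta_hat, data = dat, family = binomial)
    dat$ps <- fitted(ps_fit)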
Choosing the Estimand When Matching or Weighting in Observational Studies
Matching and weighting methods for observational studies require the choice
of an estimand, the causal effect with reference to a specific target
population. Commonly used estimands include the average treatment effect in the
treated (ATT), the average treatment effect in the untreated (ATU), the average
treatment effect in the population (ATE), and the average treatment effect in
the overlap (i.e., equipoise population; ATO). Each estimand has its own
assumptions, interpretation, and statistical methods that can be used to
estimate it. This article provides guidance on selecting and interpreting an
estimand to help medical researchers correctly implement statistical methods
used to estimate causal effects in observational studies and to help audiences
correctly interpret the results and limitations of these studies. The
interpretations of the estimands resulting from regression and instrumental
variable analyses are also discussed. Choosing an estimand carefully is
essential for making valid inferences from the analysis of observational data
and ensuring results are replicable and useful for practitioners.
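To make the estimands concrete, the following R sketch (illustrative only and not from the article; the simulated data and variable names are my own) shows the textbook propensity score weight corresponding to each estimand.

    # Illustrative weights for the four estimands, as functions of the
    # propensity score (simulated data; not code from the article)
    set.seed(1)
    n  <- 500
    x  <- rnorm(n)                              # a measured confounder
    z  <- rbinom(n, 1, plogis(0.5 * x))         # binary treatment
    ps <- fitted(glm(z ~ x, family = binomial)) # estimated propensity score

    w_ate <- ifelse(z == 1, 1 / ps, 1 / (1 - ps)) # ATE: whole population
    w_att <- ifelse(z == 1, 1, ps / (1 - ps))     # ATT: treated population
    w_atu <- ifelse(z == 1, (1 - ps) / ps, 1)     # ATU: untreated population
    w_ato <- ifelse(z == 1, 1 - ps, ps)           # ATO: overlap population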
Estimating Balancing Weights for Continuous Treatments Using Constrained Optimization
In the absence of randomization, common causes of a treatment and an outcome create an association between them that does not correspond to the causal effect of the treatment. When a sufficient set of these confounding variables has been measured, statistical methods such as regression and propensity score weighting can be used to adjust for the common causes and arrive at an unbiased estimate of the causal effect. For continuous treatments, current weighting methods suffer from imprecision, bias, and reliance on correct model specification. Here, I derived the bias of the unadjusted estimate of a linear average dose-response function and developed optweights, a convex optimization-based weight estimation method that targets each component of the bias with constraints. In two simulation studies, I evaluated the performance of optweights, comparing it to regression and other weighting methods. In a common data setting, with many more units than covariates, optweights performed better than the other weighting methods in most scenarios and performed comparably to regression. In scenarios where the number of covariates approached the number of units, optweights could outperform regression in terms of mean squared error when relaxing its constraints to manage the bias-variance tradeoff. The results indicate that optweights should be considered a strong alternative to regression and other weighting methods for estimating the effects of continuous treatments, though further research is required on how to optimize its performance.
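For context, a minimal sketch of density-based weighting for a continuous treatment is shown below, using the WeightIt package as a stand-in; this is not the optweights method developed in the dissertation, and the data and variable names are simulated assumptions.

    # Minimal sketch: weighting for a continuous treatment (WeightIt used
    # for illustration; NOT the optweights method from this dissertation)
    library(WeightIt)
    set.seed(2)
    n  <- 1000
    x1 <- rnorm(n); x2 <- rnorm(n)           # measured confounders
    a  <- 0.4 * x1 + 0.4 * x2 + rnorm(n)     # continuous treatment
    y  <- 0.3 * a + 0.5 * x1 + 0.5 * x2 + rnorm(n)
    d  <- data.frame(y, a, x1, x2)

    # Stabilized generalized-propensity-score (density-ratio) weights
    w <- weightit(a ~ x1 + x2, data = d, method = "glm", stabilize = TRUE)

    # Weighted linear average dose-response function
    fit <- lm(y ~ a, data = d, weights = w$weights)
    coef(fit)["a"]  # should be close to the true slope of 0.3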
MatchThem: Matching and Weighting after Multiple Imputation
Balancing the distributions of the confounders across the exposure levels in
an observational study through matching or weighting is an accepted method to
control for confounding due to these variables when estimating the association
between an exposure and outcome and to reduce the degree of dependence on
certain modeling assumptions. Despite the increasing popularity in practice,
these procedures cannot be immediately applied to datasets with missing values.
Multiple imputation of the missing data is a popular approach to account for
missing values while preserving the number of units in the dataset and
accounting for the uncertainty in the missing values. However, to the best of
our knowledge, there is no comprehensive matching and weighting software that
can be easily implemented with multiply imputed datasets. In this paper, we
review this problem and suggest a framework that maps out the matching and
weighting of multiply imputed datasets as five actions, as well as the best practices
to assess balance in these datasets after matching and weighting. We also
illustrate these approaches using a companion package for R, MatchThem.
MatchThem: Matching and Weighting Multiply Imputed Datasets
Provides the necessary tools for the pre-processing techniques of matching and weighting multiply imputed datasets to control for effects of confounders and to reduce the degree of dependence on certain modeling assumptions in studying the causal associations between an exposure and an outcome. This package includes functions to perform matching within and across the multiply imputed datasets using several matching methods, to estimate weights of units in the imputed datasets using several weighting methods, to calculate the causal effect estimate in each matched or weighted dataset using parametric or non-parametric statistical models, and to pool the obtained estimates from these models according to Rubin's rules.
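A minimal sketch of the workflow the package supports is given below; it follows the pattern described above (impute, match or weight, analyze, pool), though the example dataset and variable names are assumptions drawn from the package documentation and may differ across versions.

    # Minimal sketch of the MatchThem workflow (dataset and variable names
    # assumed from the package documentation; details may vary by version)
    library(mice)
    library(MatchThem)
    library(survey)

    data("osteoarthritis", package = "MatchThem")
    imputed <- mice(osteoarthritis, m = 5, printFlag = FALSE)

    # Match within each imputed dataset on the measured confounders
    matched <- matchthem(OSP ~ AGE + SEX + BMI + SMK, datasets = imputed,
                         approach = "within", method = "nearest")

    # Fit the outcome model in each matched dataset, then pool (Rubin's rules)
    fits <- with(matched, svyglm(KOA ~ OSP, family = quasibinomial()))
    summary(pool(fits))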
Platelet versus fresh frozen plasma transfusion for coagulopathy in cardiac surgery patients.
Background: Platelets (PLTS) and fresh frozen plasma (FFP) are often transfused in cardiac surgery patients for perioperative bleeding. Their relative effectiveness is unknown. Methods: We conducted an entropy-weighted retrospective cohort study using the Australian and New Zealand Society of Cardiac and Thoracic Surgeons National Cardiac Surgery Database. All adults undergoing cardiac surgery between 2005 and 2021 across 58 sites were included. The primary outcome was operative mortality. Results: Of 174,796 eligible patients, 15,360 (8.79%) received PLTS in the absence of FFP and 6,189 (3.54%) received FFP in the absence of PLTS. The median cumulative dose was 1 unit of pooled platelets (IQR 1 to 3) and 2 units of FFP (IQR 0 to 4), respectively. After entropy weighting to achieve balanced cohorts, FFP was associated with increased perioperative mortality (Risk Ratio [RR], 1.63; 95% Confidence Interval [CI], 1.40 to 1.91; P …). Conclusion: In perioperative bleeding in cardiac surgery patients, platelets are associated with a relative mortality benefit over FFP. This information can be used by clinicians in their choice of procoagulant therapy in this setting.
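For readers unfamiliar with entropy weighting, a minimal R sketch follows: entropy balancing solves for weights that exactly balance specified covariate moments across exposure groups. The data are simulated and the WeightIt implementation is my assumption, not the study's actual code.

    # Minimal sketch of entropy balancing (simulated data; WeightIt assumed,
    # not the study's actual code)
    library(WeightIt)
    library(cobalt)
    set.seed(4)
    n    <- 2000
    age  <- rnorm(n, 65, 10)
    risk <- rnorm(n)                          # illustrative risk score
    ffp  <- rbinom(n, 1, plogis(-2 + 0.03 * (age - 65) + 0.4 * risk))
    d    <- data.frame(ffp, age, risk)

    w <- weightit(ffp ~ age + risk, data = d, method = "ebal", estimand = "ATE")
    bal.tab(w)  # covariate means should be balanced (near-)exactly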
A Comparison of Hypofractionated and Twice-Daily Thoracic Irradiation in Limited-Stage Small-Cell Lung Cancer: An Overlap-Weighted Analysis
Despite evidence for the superiority of twice-daily (BID) radiotherapy schedules, their utilization in practice remains logistically challenging. Hypofractionation (HFRT) is a commonly implemented alternative. We aim to compare the outcomes and toxicities in limited-stage small-cell lung cancer (LS-SCLC) patients treated with hypofractionated versus BID schedules. A bi-institutional retrospective cohort review was conducted of LS-SCLC patients treated with BID (45 Gy/30 fractions) or HFRT (40 Gy/15 fractions) schedules from 2007 to 2019. Overlap weighting using propensity scores was performed to balance observed covariates between the two radiotherapy schedule groups. Effect estimates of radiotherapy schedule on overall survival (OS), locoregional recurrence (LRR) risk, thoracic response, and any grade ≥3 toxicity (including lung and esophageal toxicity) were determined using multivariable regression modelling. A total of 173 patients were included in the overlap-weighted analysis, with 110 patients receiving BID treatment and 63 receiving HFRT. The median follow-up was 20.4 months. Multivariable regression modelling did not reveal any significant differences in OS (hazard ratio [HR] 1.67, p = 0.38), LRR risk (HR 1.48, p = 0.38), thoracic response (odds ratio [OR] 0.23, p = 0.21), any grade ≥3 toxicity (OR 1.67, p = 0.33), grade ≥3 pneumonitis (OR 1.14, p = 0.84), or grade ≥3 esophagitis (OR 1.41, p = 0.62). HFRT, in comparison to BID radiotherapy schedules, does not appear to result in significantly different survival, locoregional control, or toxicity outcomes.
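As a hedged sketch of the general approach (overlap weights derived from a propensity model, followed by a weighted outcome model), the R code below uses simulated data and the survival package; it is not the study's analysis code, and every variable name is illustrative.

    # Minimal sketch: overlap weighting followed by a weighted Cox model
    # (simulated data; not the study's analysis code)
    library(survival)
    set.seed(5)
    n     <- 200
    age   <- rnorm(n, 68, 8)
    bid   <- rbinom(n, 1, plogis(-0.05 * (age - 68)))  # 1 = BID, 0 = HFRT
    time  <- rexp(n, rate = 0.05 * exp(0.02 * (age - 68)))
    event <- rbinom(n, 1, 0.8)

    ps    <- fitted(glm(bid ~ age, family = binomial))
    w_ato <- ifelse(bid == 1, 1 - ps, ps)              # overlap weights

    # Weighted Cox model for overall survival with a robust variance
    coxph(Surv(time, event) ~ bid, weights = w_ato, robust = TRUE)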