12 research outputs found

    Comparison of the ability of double-robust estimators to correct bias in propensity score matching analysis. A Monte Carlo simulation study.

    No full text
    Objective As covariates are not always adequately balanced after propensity score matching and double-adjustment can be used to remove residual confounding, we compared the performance of several double-robust estimators in different scenarios. Methods We conducted a series of Monte Carlo simulations on virtual observational studies. After estimating the propensity scores by logistic regression, we performed 1:1 optimal, nearest-neighbor, and caliper matching. We used four estimators on each matched sample: i) a crude estimator without double-adjustment, ii) double-adjustment for the propensity scores, iii) double-adjustment for the unweighted unbalanced covariates, and iv) double-adjustment for the unbalanced covariates, weighted by their strength of association with the outcome. Results The crude estimator led to the highest bias in all tested scenarios. Double-adjustment for the propensity scores effectively removed confounding only when the propensity score models were correctly specified. Double-adjustment for the unbalanced covariates was more robust to misspecification. Double-adjustment for the weighted unbalanced covariates outperformed the other approaches in every scenario and with any matching algorithm, as measured by the mean squared error. Conclusion Double-adjustment can be used to remove residual confounding after propensity score matching. The unbalanced covariates with the strongest confounding effects should be adjusted.
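    The workflow described in this abstract can be sketched on simulated data. This is a minimal illustration, not the authors' code: greedy nearest-neighbor matching stands in for the optimal/caliper variants, scikit-learn is assumed available, and the data-generating model is invented for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Simulated data: two confounders that drive both treatment and outcome.
x = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(-0.5 + 0.5 * x[:, 0] + 0.5 * x[:, 1])))
t = rng.binomial(1, p_treat)
y = 1.0 * t + x[:, 0] + x[:, 1] + rng.normal(size=n)  # true effect = 1.0

# Step 1: estimate propensity scores by logistic regression.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the propensity score
# (a simple stand-in for the matching algorithms named in the abstract).
controls = list(np.flatnonzero(t == 0))
pairs = []
for i in np.flatnonzero(t == 1):
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)
matched = np.array([k for pair in pairs for k in pair])

# Step 3: double-adjustment -- regress the outcome on treatment plus the
# propensity score within the matched sample (estimator ii above).
design = np.column_stack([t[matched], ps[matched]])
est = LinearRegression().fit(design, y[matched]).coef_[0]
print(round(est, 2))
```

    Estimators iii and iv would replace the propensity score column in the final regression with the unbalanced covariates themselves, unweighted or weighted by their association with the outcome.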

    Double-adjustment in propensity score matching analysis: choosing a threshold for considering residual imbalance

    No full text
    Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
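    The covariate-selection rule studied here can be sketched as follows. This is an illustrative sketch on simulated data, not the study's code; the 0.10 cut-off is the threshold the abstract recommends, while the covariate names and distributions are invented.

```python
import numpy as np

def standardized_mean_difference(a, b):
    """Absolute standardized mean difference between two groups for one covariate."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(1)
# Simulated matched sample: one covariate well balanced across arms,
# one left imbalanced after matching (hypothetical names).
covariates = {
    "balanced": (rng.normal(0.0, 1, 500), rng.normal(0.0, 1, 500)),
    "imbalanced": (rng.normal(0.3, 1, 500), rng.normal(0.0, 1, 500)),
}
smds = {name: standardized_mean_difference(a, b)
        for name, (a, b) in covariates.items()}

# Enter into the outcome regression only covariates whose SMD exceeds 0.10.
to_adjust = [name for name, d in smds.items() if d > 0.10]
print(smds, to_adjust)
```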

    Simple randomization did not protect against bias in smaller trials

    No full text
    OBJECTIVES: By removing systematic differences across treatment groups, simple randomization is assumed to protect against bias. However, random differences may remain if the sample size is insufficiently large. We sought to determine the minimal sample size required to eliminate random differences, thereby allowing an unbiased estimation of the treatment effect. STUDY DESIGN AND SETTING: We reanalyzed two published multicenter, large, and simple trials: the International Stroke Trial (IST) and the Coronary Artery Bypass Grafting (CABG) Off- or On-Pump Revascularization Study (CORONARY). We reiterated 1,000 times the analysis originally reported by the investigators in random samples of varying size. We measured the covariate balance across the treatment arms. We estimated the effect of aspirin and heparin on death or dependency at 30 days after stroke (IST), and the effect of off-pump CABG on a composite primary outcome of death, nonfatal stroke, nonfatal myocardial infarction, or new renal failure requiring dialysis at 30 days (CORONARY). In addition, we conducted a series of Monte Carlo simulations of randomized trials to supplement these analyses. RESULTS: Randomization removes random differences between treatment groups when including at least 1,000 participants, thereby resulting in minimal bias in effect estimation. With fewer participants, substantial bias is observed. In a short review, we show that such an enrollment is achieved in 41.5% of phase 3 trials published in the highest impact medical journals. CONCLUSIONS: Conclusions drawn from completely randomized trials enrolling a few participants may not be reliable. In these circumstances, alternatives such as minimization or blocking should be considered for allocating the treatment.
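    The balance-versus-sample-size phenomenon this abstract describes can be reproduced in a small Monte Carlo sketch. This is my own illustration, not the trial reanalysis: the sample sizes, repetition count, and the single standard-normal covariate are arbitrary assumptions.

```python
import numpy as np

def mean_abs_smd(n, reps, rng):
    """Average absolute standardized mean difference of one covariate
    across `reps` simply randomized two-arm trials of total size n."""
    out = []
    for _ in range(reps):
        x = rng.normal(size=n)          # a baseline covariate
        t = rng.binomial(1, 0.5, n)     # simple (coin-flip) randomization
        if t.sum() in (0, n):           # skip degenerate allocations
            continue
        a, b = x[t == 1], x[t == 0]
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        out.append(abs(a.mean() - b.mean()) / pooled_sd)
    return float(np.mean(out))

rng = np.random.default_rng(2)
small = mean_abs_smd(50, 300, rng)    # small trial: sizable chance imbalance
large = mean_abs_smd(1000, 300, rng)  # large trial: random differences shrink
print(round(small, 3), round(large, 3))
```

    Because chance imbalance shrinks roughly as one over the square root of the sample size, the residual covariate differences in the 50-participant trials are several times larger than in the 1,000-participant trials.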

    Co-infection with Zika and Dengue Viruses in 2 Patients, New Caledonia, 2014.

    No full text
    International audience