
    A comparison of two approaches to implementing propensity score methods following multiple imputation

    Background. In observational research on causal effects, missing data and confounding are common problems. Multiple imputation and propensity score methods have gained increasing interest as methods to deal with them, but despite their popularity, methodologists have mainly studied how each performs in isolation.
    Methods. We studied two approaches to implementing propensity score methods following multiple imputation, both of which have been used in applied research, and compared their performance by Monte Carlo simulation for a continuous outcome and partially unobserved covariate, treatment, or outcome data. In the first, so-called Within, approach, propensity score analysis is performed within each of the m imputed datasets, and the resulting m effect estimates are averaged. In the Across approach, the m estimated propensity scores are first averaged for each subject, after which the propensity score method is applied based on each subject's average propensity score. Because of its common use, complete case analysis was also implemented. Five propensity score estimators were studied, including regression, matching, and inverse probability weighting.
    Results. The Within approach was superior to the Across approach in terms of both bias and variance in settings with missing covariate data, whether data were missing at random or missing completely at random. In settings with incomplete treatment or outcome values only, the Within and Across approaches yielded similar results. Complete case analysis was generally the least efficient and was unbiased only in scenarios where data were missing completely at random.
    Conclusion. We advise researchers not to use the Across approach as the default method, because it may yield biased effect estimates even when data are missing completely at random. Instead, the Within approach is preferred when implementing propensity score methods following multiple imputation.
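
    To make the two approaches concrete, here is a minimal sketch in R, assuming a data frame dat with a binary (0/1) treatment A, a continuous outcome Y, and covariates X1 and X2 (all names illustrative), the mice package for imputation, and inverse probability weighting as the propensity score method. It illustrates the idea only and is not the authors' simulation code.

        library(mice)

        m   <- 5
        imp <- mice(dat, m = m, printFlag = FALSE)

        # Within approach: run the PS analysis in each imputed dataset,
        # then average the m effect estimates.
        within_est <- mean(sapply(seq_len(m), function(i) {
          d  <- complete(imp, i)
          ps <- predict(glm(A ~ X1 + X2, family = binomial, data = d),
                        type = "response")
          w  <- ifelse(d$A == 1, 1 / ps, 1 / (1 - ps))  # IPW weights
          coef(lm(Y ~ A, data = d, weights = w))[["A"]]
        }))

        # Across approach: average each subject's m propensity scores first,
        # then run the PS analysis once using the averaged scores (here on
        # the first imputed dataset, a simplification for illustration).
        ps_avg <- rowMeans(sapply(seq_len(m), function(i) {
          d <- complete(imp, i)
          predict(glm(A ~ X1 + X2, family = binomial, data = d),
                  type = "response")
        }))
        d1 <- complete(imp, 1)
        w1 <- ifelse(d1$A == 1, 1 / ps_avg, 1 / (1 - ps_avg))
        across_est <- coef(lm(Y ~ A, data = d1, weights = w1))[["A"]]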

    It's time! Ten reasons to start replicating simulation studies

    The quantitative analysis of research data is a core element of empirical research. The performance of the statistical methods used to analyze empirical data can be evaluated and compared using computer simulations. A single simulation study can influence the analyses of thousands of empirical studies to follow. With great power comes great responsibility. Here, we argue that this responsibility includes replicating simulation studies to ensure a sound foundation for data-analytic decisions. Furthermore, because they are designed, run, and reported by humans, simulation studies face challenges similar to other experimental empirical research and hence should not be exempt from replication attempts. We highlight that the potential replicability of simulation studies is an opportunity that quantitative methodology, as a field, should pay more attention to.
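
    As a hypothetical illustration of what there is to replicate, the snippet below runs a tiny Monte Carlo experiment in R; a replication would rerun it, ideally with independently written code, and compare the estimated bias against the original report. All numbers here are made up for illustration.

        set.seed(2024)   # the seed is part of a simulation study's specification
        n_sim <- 1000
        estimates <- replicate(n_sim, mean(rnorm(50, mean = 1, sd = 2)))
        bias  <- mean(estimates) - 1          # true mean is 1, so near 0 expected
        mc_se <- sd(estimates) / sqrt(n_sim)  # Monte Carlo standard error
        c(bias = bias, mc_se = mc_se)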

    mecor: An R package for measurement error correction in linear regression models with a continuous outcome

    Measurement error in a covariate or the outcome of regression models is common but often ignored, even though it can lead to substantial bias in the estimated covariate-outcome association. While several texts on measurement error correction methods are available, these methods remain seldom applied. To improve the uptake of measurement error correction methodology, we developed mecor, an R package that implements measurement error correction methods for regression models with a continuous outcome. Measurement error correction requires information about the measurement error model and its parameters. This information can be obtained from four types of studies used to estimate the parameters of the measurement error model: an internal validation study, a replicates study, a calibration study, and an external validation study. In mecor, regression calibration methods and a maximum likelihood method are implemented to correct for measurement error in a continuous covariate in regression analyses. Additionally, method-of-moments methods are implemented to correct for measurement error in a continuous outcome. Variance estimation of the corrected estimators is provided both in closed form and via the bootstrap.
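
    The snippet below hand-codes regression calibration with an internal validation study to show the idea the package implements. It deliberately does not mimic mecor's actual interface, and the variable names (Y, Z, X, and the error-prone X_star) are illustrative.

        # d: data frame where X_star (error-prone) is observed for everyone
        # and the reference measure X only in the internal validation subset.
        calfit <- lm(X ~ X_star + Z, data = d[!is.na(d$X), ])  # calibration model
        d$X_hat <- predict(calfit, newdata = d)        # expected true covariate

        naive     <- lm(Y ~ X_star + Z, data = d)  # ignores measurement error
        corrected <- lm(Y ~ X_hat  + Z, data = d)  # regression calibration
        coef(naive)[["X_star"]]
        coef(corrected)[["X_hat"]]
        # Standard errors of `corrected` should account for the estimation of
        # calfit, e.g., via the bootstrap (which mecor provides, alongside
        # closed-form variance estimators).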

    Application of quantitative bias analysis for unmeasured confounding in cost-effectiveness modelling

    Due to uncertainty regarding the potential impact of unmeasured confounding, health technology assessment (HTA) agencies often disregard evidence from nonrandomised studies when considering new technologies. Quantitative bias analysis (QBA) methods provide a means to quantify this uncertainty but have not been widely used in the HTA setting, particularly in the context of cost-effectiveness modelling (CEM). This study demonstrates the application of aggregate-level and patient-level QBA approaches to quantify and adjust for unmeasured confounding in a simulated nonrandomised comparison of survival outcomes. Applying the QBA output within a CEM through deterministic and probabilistic sensitivity analyses, and under different scenarios of knowledge about an unmeasured confounder, demonstrates the potential value of QBA in HTA.
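
    For intuition, here is a minimal aggregate-level sketch in R using the classical external-adjustment bias factor for a single unmeasured binary confounder; the paper's actual QBA methods and inputs may differ, and all parameter values below are hypothetical.

        # rr_obs: observed treatment-outcome risk ratio
        # rr_ud:  confounder-outcome risk ratio
        # p1, p0: confounder prevalence among treated and untreated
        qba_adjust <- function(rr_obs, rr_ud, p1, p0) {
          bias_factor <- (rr_ud * p1 + (1 - p1)) / (rr_ud * p0 + (1 - p0))
          rr_obs / bias_factor
        }
        qba_adjust(rr_obs = 0.70, rr_ud = 2.0, p1 = 0.40, p0 = 0.20)  # approx. 0.60
        # A probabilistic sensitivity analysis would draw (rr_ud, p1, p0) from
        # prior distributions and feed the adjusted estimates into the CEM.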

    Measurement error is often neglected in medical literature: a systematic review

    OBJECTIVES: In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are considered in current research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of the general medicine and epidemiology literature. STUDY DESIGN AND SETTING: Original research published in 2016 in 12 high-impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and of methods to investigate or correct for it was quantified and characterized. RESULTS: Two hundred forty-seven (44%) of the 565 original research publications reported on the presence of measurement error; 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of the 247) used methods to investigate or correct for measurement error. CONCLUSIONS: Consequently, in the majority of publications in high-impact journals, it is difficult for readers to judge the robustness of the presented results to measurement error. Our systematic review highlights the need for increased awareness of the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is needed.

    Progestogens to prevent preterm birth in twin pregnancies: an individual participant data meta-analysis of randomized trials

    Get PDF
    Background. Preterm birth is the principal factor contributing to adverse outcomes in multiple pregnancies. Randomized controlled trials of progestogens to prevent preterm birth in twin pregnancies have shown no clear benefits. However, individual studies have not had sufficient power to evaluate potential benefits in women at particularly high risk of early delivery (for example, women with a previous preterm birth or a short cervix) or to determine adverse effects for rare outcomes such as intrauterine death.
    Methods/design. We propose an individual participant data meta-analysis of high-quality randomized, double-blind, placebo-controlled trials of progestogen treatment in women with a twin pregnancy. The primary outcome will be adverse perinatal outcome (a composite measure of perinatal mortality and significant neonatal morbidity). Missing data will be imputed within each original study before the data of the individual studies are pooled. The effects of 17-hydroxyprogesterone caproate or vaginal progesterone treatment in women with twin pregnancies will be estimated by means of a random effects log-binomial model. Analyses will be adjusted for variables used in stratified randomization as appropriate. Pre-specified subgroup analyses will be performed to explore the effect of progestogen treatment in high-risk groups.
    Discussion. Combining individual patient data from different randomized trials has the potential to provide valuable, clinically useful information regarding the benefits and potential harms of progestogens in women with twin pregnancy, both overall and in relevant subgroups.
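
    As a rough illustration of the planned pooled analysis, the two-stage sketch below fits a log-binomial model per trial and combines the study-specific estimates in a random effects meta-analysis with the metafor package. The protocol describes a random effects log-binomial model, for which a one-stage mixed model is an alternative; the data frame ipd with columns study, progestogen (0/1), and adverse_outcome (0/1) is hypothetical.

        library(metafor)

        # Stage 1: study-specific log risk ratios (note that log-binomial
        # models can fail to converge; robust alternatives are omitted here).
        stage1 <- do.call(rbind, lapply(split(ipd, ipd$study), function(d) {
          f <- glm(adverse_outcome ~ progestogen,
                   family = binomial(link = "log"), data = d)
          data.frame(yi = coef(f)[["progestogen"]],
                     vi = vcov(f)["progestogen", "progestogen"])
        }))

        # Stage 2: random effects pooling of the study-specific estimates.
        pooled <- rma(yi = yi, vi = vi, data = stage1, method = "REML")
        exp(coef(pooled))  # pooled relative risk of adverse perinatal outcome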

    Best (but oft-forgotten) practices: propensity score methods in clinical nutrition research

    In observational studies, treatment assignment is a nonrandom process, and treatment groups may not be comparable in their baseline characteristics, a phenomenon known as confounding. Propensity score (PS) methods can be used to achieve comparability of treated and nontreated groups in terms of their observed covariates and, as such, to control for confounding when estimating treatment effects. In this article, we provide step-by-step guidance on how to use PS methods. For illustrative purposes, we used simulated data based on an observational study of the relation between oral nutritional supplementation and hospital length of stay. We focused on the key aspects of a PS analysis, including covariate selection, PS estimation, covariate balance assessment, treatment effect estimation, and reporting. PS matching, stratification, covariate adjustment, and weighting are discussed. R code and example data are provided to show the different steps of a PS analysis.
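
    The article ships its own R code and example data; the condensed sketch below merely illustrates the workflow's key steps with the MatchIt package, using made-up variable names (ons for oral nutritional supplementation, los for length of stay, plus assumed covariates), not the authors' released code.

        library(MatchIt)

        # PS estimation (logistic model) and nearest-neighbor matching in one call
        m_out <- matchit(ons ~ age + sex + comorbidity, data = dat,
                         method = "nearest", distance = "glm")
        summary(m_out)              # covariate balance before/after matching

        # Treatment effect in the matched sample
        md  <- match.data(m_out)    # matched data, including matching weights
        fit <- lm(los ~ ons, data = md, weights = weights)
        coef(fit)[["ons"]]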

    Investigating Risk Adjustment Methods for Health Care Provider Profiling When Observations Are Scarce or Events Rare

    Background: When profiling health care providers, adjustment for case-mix is essential. However, conventional risk adjustment methods may perform poorly, especially when provider volumes are small or events rare. Propensity score (PS) methods, commonly used in observational studies of binary treatments, have been shown to perform well when the number of observations and/or events is low, and they can be extended to a multiple-provider setting. The objective of this study was to evaluate the performance of different risk adjustment methods when profiling multiple health care providers that perform highly protocolized procedures, such as coronary artery bypass grafting. Methods: In a simulation study, provider effects estimated using PS adjustment, PS weighting, PS matching, and multivariable logistic regression were compared in terms of bias, coverage, and mean squared error (MSE) while varying the event rate, sample size, provider volumes, and number of providers. An empirical example from the field of cardiac surgery was used to demonstrate the different methods. Results: Overall, PS adjustment, PS weighting, and logistic regression resulted in provider effects with little bias and good coverage. PS matching and PS weighting with trimming led to biased effects and high MSE in several scenarios. Moreover, PS matching is impractical to implement when the number of providers exceeds three. Conclusions: None of the PS methods clearly outperformed logistic regression, except when sample sizes were relatively small. PS matching performed worse than the other PS methods considered.
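
    A minimal sketch of two of the compared approaches on hypothetical data (the columns provider, a factor; mortality, 0/1; and the case-mix variables age and ejection_fraction are illustrative, not from the study):

        # Conventional multivariable logistic regression: provider effects
        # adjusted directly for case-mix.
        direct <- glm(mortality ~ provider + age + ejection_fraction,
                      family = binomial, data = dat)

        # PS adjustment for multiple providers: generalized propensity scores
        # from a multinomial model, then providers compared conditional on
        # each subject's probability of being treated by their own provider.
        library(nnet)
        ps_fit <- multinom(provider ~ age + ejection_fraction,
                           data = dat, trace = FALSE)
        probs  <- predict(ps_fit, type = "probs")   # one column per provider
        dat$gps <- probs[cbind(seq_len(nrow(dat)), as.integer(dat$provider))]
        ps_adj <- glm(mortality ~ provider + gps, family = binomial, data = dat)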