
    A comparison of two approaches to implementing propensity score methods following multiple imputation

    Background. In observational research on causal effects, missing data and confounding are very common problems. Multiple imputation and propensity score methods have gained increasing interest as methods to deal with these, but despite their popularity, methodologists have mainly focused on how they perform in isolation.
    Methods. We studied two approaches to implementing propensity score methods following multiple imputation, both of which have been used in applied research, and compared their performance by way of Monte Carlo simulation for a continuous outcome and partially unobserved covariate, treatment, or outcome data. In the first, so-called Within, approach, propensity score analysis is performed within each of the m imputed datasets, and the resulting m effect estimates are averaged. In the Across approach, for each subject the m estimated propensity scores are averaged first, after which the propensity score method is implemented based on each subject's average propensity score. Because of its common use, complete case analysis was also implemented. Five propensity score estimators were studied, including regression, matching, and inverse probability weighting.
    Results. The Within approach was found to be superior to the Across approach in terms of both bias and variance in settings with missing covariate data, whether missing data were missing at random or missing completely at random. In settings with incomplete treatment or outcome values only, the Within and Across approaches yielded similar results. Complete case analysis was generally the least efficient and was unbiased only in scenarios where missing data were missing completely at random.
    Conclusion. We advise researchers not to use the Across approach as the default method, because even when data are missing completely at random, it may yield biased effect estimates. Instead, the Within approach is preferred when implementing propensity score methods following multiple imputation.
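    As a concrete illustration of the two pooling strategies, the following R sketch contrasts them on simulated data, using the mice package for imputation and inverse probability weighting as the PS method. The data-generating setup, variable names, and number of imputations are illustrative assumptions, not values from the paper.

```r
## Minimal sketch: Within vs. Across pooling of propensity scores after
## multiple imputation (simulated data; IPW as the PS method).
library(mice)

set.seed(42)
n <- 500
z <- rnorm(n)                                   # confounder
a <- rbinom(n, 1, plogis(0.5 * z))              # treatment
y <- 1 * a + z + rnorm(n)                       # outcome; true effect = 1
dat <- data.frame(y, a, z)
dat$z[rbinom(n, 1, plogis(-1 + 0.5 * dat$y)) == 1] <- NA  # z missing at random given y

m   <- 10
imp <- mice(dat, m = m, printFlag = FALSE)

## Within: run the full PS analysis in each imputed dataset, then average.
within_est <- sapply(seq_len(m), function(i) {
  d  <- complete(imp, i)
  ps <- fitted(glm(a ~ z, family = binomial, data = d))
  w  <- ifelse(d$a == 1, 1 / ps, 1 / (1 - ps))  # IPW (ATE) weights
  coef(lm(y ~ a, data = d, weights = w))["a"]
})
mean(within_est)

## Across: average each subject's m propensity scores first, then run a
## single weighted analysis on the (complete) treatment and outcome data.
ps_bar <- rowMeans(sapply(seq_len(m), function(i) {
  d <- complete(imp, i)
  fitted(glm(a ~ z, family = binomial, data = d))
}))
w_acr <- ifelse(dat$a == 1, 1 / ps_bar, 1 / (1 - ps_bar))
coef(lm(y ~ a, data = dat, weights = w_acr))["a"]
```

    The sketch reports point estimates only; pooled standard errors for the Within approach would additionally require Rubin's rules.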

    It's time! Ten reasons to start replicating simulation studies

    The quantitative analysis of research data is a core element of empirical research. The performance of statistical methods that are used for analyzing empirical data can be evaluated and compared using computer simulations. A single simulation study can influence the analyses of thousands of empirical studies to follow. With great power comes great responsibility. Here, we argue that this responsibility includes replication of simulation studies to ensure a sound foundation for data analytical decisions. Furthermore, being designed, run, and reported by humans, simulation studies face challenges similar to other experimental empirical research and hence should not be exempt from replication attempts. We highlight that the potential replicability of simulation studies is an opportunity that quantitative methodology, as a field, should pay more attention to.

    Mecor: An R package for measurement error correction in linear regression models with a continuous outcome

    Measurement error in a covariate or the outcome of regression models is common but often ignored, even though it can lead to substantial bias in the estimated covariate-outcome association. While several texts on measurement error correction methods are available, these methods are still seldom applied. To improve the uptake of measurement error correction methodology, we developed mecor, an R package that implements measurement error correction methods for regression models with a continuous outcome. Measurement error correction requires information about the measurement error model and its parameters. This information can be obtained from four types of studies, used to estimate the parameters of the measurement error model: an internal validation study, a replicates study, a calibration study, and an external validation study. In the package mecor, regression calibration methods and a maximum likelihood method are implemented to correct for measurement error in a continuous covariate in regression analyses. Additionally, method-of-moments approaches are implemented to correct for measurement error in the continuous outcome. Variance estimation of the corrected estimators is provided in closed form and via the bootstrap.
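    To make the core idea behind regression calibration concrete, here is a minimal base-R sketch of the method with an internal validation study; it deliberately avoids the mecor interface, and all data and names are simulated assumptions.

```r
## Minimal sketch: regression calibration for a continuous covariate measured
## with classical error, using an internal validation study (simulated data).
set.seed(1)
n <- 1000
x      <- rnorm(n)                            # true covariate
x_star <- x + rnorm(n, sd = 0.8)              # error-prone measurement
y      <- 0.5 * x + rnorm(n)                  # outcome; true coefficient = 0.5
x_val  <- ifelse(seq_len(n) <= 200, x, NA)    # true x known in validation subset only
dat <- data.frame(y, x_star, x_val)

## Naive analysis: the estimate is attenuated toward zero.
coef(lm(y ~ x_star, data = dat))["x_star"]

## Step 1: calibration model, fitted in the validation subset.
cal <- lm(x_val ~ x_star, data = dat, subset = !is.na(x_val))

## Step 2: replace x_star by the calibrated expectation E[X | X*] for everyone.
dat$x_hat <- predict(cal, newdata = dat)
coef(lm(y ~ x_hat, data = dat))["x_hat"]      # corrected estimate, ~0.5
## Note: the default standard error for x_hat ignores the uncertainty of the
## calibration step; in practice, bootstrap both steps jointly.
```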

    Application of quantitative bias analysis for unmeasured confounding in cost-effectiveness modelling

    Due to uncertainty regarding the potential impact of unmeasured confounding, health technology assessment (HTA) agencies often disregard evidence from nonrandomised studies when considering new technologies. Quantitative bias analysis (QBA) methods provide a means to quantify this uncertainty, but they have not been widely used in the HTA setting, particularly in the context of cost-effectiveness modelling (CEM). This study demonstrates the application of an aggregate-level and a patient-level QBA approach to quantify and adjust for unmeasured confounding in a simulated nonrandomised comparison of survival outcomes. Applying the QBA output within a CEM through deterministic and probabilistic sensitivity analyses, under different scenarios of knowledge of an unmeasured confounder, demonstrates the potential value of QBA in HTA.
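    For readers unfamiliar with QBA, the following R sketch shows one common aggregate-level variant: adjusting an observed relative effect by a bias factor for an unmeasured binary confounder, with the bias parameters drawn from distributions as in a probabilistic sensitivity analysis. All numbers and distributions are illustrative assumptions, not values from the study.

```r
## Minimal sketch: aggregate-level QBA for an unmeasured binary confounder,
## using the classic bias-factor formula for a relative risk.
bias_factor <- function(rr_cd, p1, p0) {
  ## rr_cd: confounder-outcome risk ratio
  ## p1, p0: confounder prevalence among treated / untreated
  (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
}

rr_obs <- 0.70   # observed (confounded) treatment effect

## Probabilistic sensitivity analysis: draw bias parameters from distributions
## encoding assumptions about the unmeasured confounder.
set.seed(7)
draws <- data.frame(
  rr_cd = exp(rnorm(1e4, log(1.8), 0.2)),
  p1    = rbeta(1e4, 6, 4),
  p0    = rbeta(1e4, 3, 7)
)
rr_adj <- rr_obs / with(draws, bias_factor(rr_cd, p1, p0))
quantile(rr_adj, c(0.025, 0.5, 0.975))   # simulation interval, usable as CEM input
```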

    Introduction to statistical simulations in health research

    In health research, statistical methods are frequently used to address a wide variety of research questions. For almost every analytical challenge, different methods are available. But how do we choose between different methods, and how do we judge whether the chosen method is appropriate for our specific study? As in any science, experiments can be run in statistics to find out which methods should be used under which circumstances. The main objective of this paper is to demonstrate that simulation studies, that is, experiments investigating synthetic data with known properties, are an invaluable tool for addressing these questions. We aim to provide a first introduction to simulation studies for data analysts or, more generally, for researchers involved at different levels in the analysis of health data, who (1) may rely on simulation studies published in the statistical literature to choose their statistical methods and who thus need to understand the criteria for assessing the validity and relevance of simulation results and their interpretation; and/or (2) need to understand the basic principles of designing statistical simulations in order to collaborate efficiently with more experienced colleagues or start learning to conduct their own simulations. We illustrate the implementation of a simulation study and the interpretation of its results through a simple example inspired by recent literature, which is completely reproducible using the R script available from online supplemental file 1.
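    A minimal, self-contained example of the kind of simulation study the paper introduces: comparing the coverage of the t-based 95% confidence interval under a normal and a skewed data-generating mechanism. The setup below is a generic assumption for illustration and is not the example used in the paper.

```r
## Minimal sketch: a small simulation study of confidence interval coverage.
set.seed(2024)
one_rep <- function(n, dgm) {
  x <- switch(dgm,
              normal = rnorm(n, mean = 1),
              skewed = rexp(n, rate = 1))  # true mean = 1 under both mechanisms
  ci <- t.test(x)$conf.int
  ci[1] <= 1 && 1 <= ci[2]                 # does the 95% CI cover the truth?
}
nsim <- 5000
sapply(c("normal", "skewed"), function(dgm)
  mean(replicate(nsim, one_rep(n = 20, dgm = dgm))))  # estimated coverage
```

    With n = 20, coverage stays near the nominal 95% for normal data but drops noticeably for skewed data, which is exactly the kind of performance difference a simulation study is designed to expose.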

    Measurement error is often neglected in medical literature: a systematic review.

    OBJECTIVES: In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of the general medicine and epidemiology literature.
    STUDY DESIGN AND SETTING: Original research published in 2016 in 12 high-impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and of methods to investigate or correct for it was quantified and characterized.
    RESULTS: Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error; 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error.
    CONCLUSIONS: It is consequently difficult for readers to judge the robustness of presented results to measurement error in the majority of publications in high-impact journals. Our systematic review highlights the need for increased awareness of the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary.
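    The bias that motivates this concern is easy to demonstrate; a minimal R sketch (simulated data, illustrative numbers) shows how classical measurement error in a covariate shrinks the estimated association by the reliability ratio lambda = var(X) / (var(X) + var(U)).

```r
## Minimal sketch: attenuation of a regression slope by classical
## covariate measurement error (simulated data).
set.seed(3)
n <- 1e5
x      <- rnorm(n, sd = 1)
x_star <- x + rnorm(n, sd = 1)      # reliability lambda = 1 / (1 + 1) = 0.5
y      <- 2 * x + rnorm(n)
coef(lm(y ~ x))["x"]                # ~2.0, the true slope
coef(lm(y ~ x_star))["x_star"]      # ~1.0, attenuated by lambda
```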

    Subgroup effects despite homogeneous heterogeneity test results

    Background. Statistical tests of heterogeneity are very popular in meta-analyses, as heterogeneity might indicate subgroup effects. Lack of demonstrable statistical heterogeneity, however, might obscure clinical heterogeneity, that is, clinically relevant subgroup effects.
    Methods. A qualitative, visual method to explore the potential for subgroup effects was provided by a modification of the forest plot, namely adding a vertical axis indicating the proportion of a subgroup variable in the individual trials. Such a plot was used to assess the potential for clinically relevant subgroup effects and was illustrated by a clinical example on the effects of antibiotics in children with acute otitis media.
    Results. Statistical tests did not indicate heterogeneity in the meta-analysis of the effects of amoxicillin on acute otitis media (Q = 3.29, p = 0.51; I² = 0%; τ² = 0). Nevertheless, in a modified forest plot in which the individual trials were ordered by the proportion of children with bilateral otitis, a clear relation between bilaterality and treatment effect was observed (a relation also found in an individual patient data meta-analysis of the included trials: p-value for interaction 0.021).
    Conclusions. A modification of the forest plot, by including an additional (vertical) axis indicating the proportion of a certain subgroup variable, is a qualitative, visual, and easy-to-interpret method to explore potential subgroup effects in studies included in meta-analyses.
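    A base-R sketch of such a modified forest plot, using made-up trial summaries (the numbers below are not those of the otitis media meta-analysis): trials are placed on the vertical axis at their subgroup proportion, so a drift of effect estimates along that axis suggests a subgroup effect.

```r
## Minimal sketch: forest plot with a vertical axis giving the proportion of
## a subgroup variable per trial (made-up summary data).
trials <- data.frame(
  name    = paste("Trial", 1:5),
  rd      = c(-0.02, -0.05, -0.08, -0.12, -0.16),  # risk differences
  se      = c(0.04, 0.05, 0.04, 0.06, 0.05),
  p_bilat = c(0.20, 0.32, 0.45, 0.58, 0.71)        # proportion bilateral otitis
)
plot(trials$rd, trials$p_bilat, pch = 15,
     xlim = range(trials$rd - 2 * trials$se, trials$rd + 2 * trials$se),
     xlab = "Risk difference", ylab = "Proportion with bilateral otitis")
segments(trials$rd - 1.96 * trials$se, trials$p_bilat,
         trials$rd + 1.96 * trials$se, trials$p_bilat)  # 95% CIs
abline(v = 0, lty = 2)                                  # line of no effect
text(trials$rd, trials$p_bilat, trials$name, pos = 3, cex = 0.8)
```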

    Progestogens to prevent preterm birth in twin pregnancies: an individual participant data meta-analysis of randomized trials

    Background. Preterm birth is the principal factor contributing to adverse outcomes in multiple pregnancies. Randomized controlled trials of progestogens to prevent preterm birth in twin pregnancies have shown no clear benefits. However, individual studies have not had sufficient power to evaluate potential benefits in women at particularly high risk of early delivery (for example, women with a previous preterm birth or a short cervix) or to determine adverse effects for rare outcomes such as intrauterine death.
    Methods/design. We propose an individual participant data meta-analysis of high-quality randomized, double-blind, placebo-controlled trials of progestogen treatment in women with a twin pregnancy. The primary outcome will be adverse perinatal outcome (a composite measure of perinatal mortality and significant neonatal morbidity). Missing data will be imputed within each original study before the data of the individual studies are pooled. The effects of 17-hydroxyprogesterone caproate or vaginal progesterone treatment in women with twin pregnancies will be estimated by means of a random-effects log-binomial model. Analyses will be adjusted for variables used in stratified randomization as appropriate. Pre-specified subgroup analyses will be performed to explore the effect of progestogen treatment in high-risk groups.
    Discussion. Combining individual patient data from different randomized trials has the potential to provide valuable, clinically useful information regarding the benefits and potential harms of progestogens in women with twin pregnancy overall and in relevant subgroups.
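    For concreteness, here is a sketch of how a random-effects log-binomial model of this kind could be fit in R with lme4, on simulated stand-in IPD (the data layout, effect sizes, and variable names are assumptions, not trial data):

```r
## Minimal sketch: random-effects log-binomial model on simulated IPD.
library(lme4)

set.seed(11)
ipd <- do.call(rbind, lapply(1:6, function(s) {
  n   <- 300
  trt <- rbinom(n, 1, 0.5)
  u_s <- rnorm(1, 0, 0.15)                        # study-level intercept shift
  p   <- exp(log(0.20) + log(0.80) * trt + u_s)   # true relative risk = 0.80
  data.frame(study = factor(s), progestogen = trt,
             adverse_outcome = rbinom(n, 1, p))
}))

fit <- glmer(adverse_outcome ~ progestogen + (1 | study),
             family = binomial(link = "log"), data = ipd)
exp(fixef(fit)["progestogen"])   # pooled relative risk, ~0.80
## Caveat: log-link binomial mixed models can be numerically fragile; a
## modified Poisson model with robust variance is a common fallback.
```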

    Best (but oft-forgotten) practices: propensity score methods in clinical nutrition research

    In observational studies, treatment assignment is a nonrandom process, and treatment groups may not be comparable in their baseline characteristics, a phenomenon known as confounding. Propensity score (PS) methods can be used to achieve comparability of treated and nontreated groups in terms of their observed covariates and, as such, to control for confounding when estimating treatment effects. In this article, we provide step-by-step guidance on how to use PS methods. For illustrative purposes, we use simulated data based on an observational study of the relation between oral nutritional supplementation and hospital length of stay. We focus on the key aspects of a PS analysis, including covariate selection, PS estimation, covariate balance assessment, treatment effect estimation, and reporting. PS matching, stratification, covariate adjustment, and weighting are discussed. R code and example data are provided to show the different steps of a PS analysis.
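    The article ships its own R code and data; the compressed base-R sketch below (a simulated stand-in for the nutrition example, with illustrative variable names) only traces the main steps for one of the discussed methods, inverse probability weighting.

```r
## Minimal sketch: PS estimation, balance assessment, and IPW effect
## estimation (simulated stand-in data).
set.seed(5)
n   <- 1000
age <- rnorm(n, 65, 10)
sev <- rnorm(n)                                                    # illness severity
ons <- rbinom(n, 1, plogis(-0.5 + 0.03 * (age - 65) + 0.5 * sev))  # supplementation
los <- 8 - 1 * ons + 0.05 * (age - 65) + 1.5 * sev + rnorm(n)      # length of stay
d <- data.frame(los, ons, age, sev)

## 1. Estimate the PS from pre-treatment covariates.
d$ps <- fitted(glm(ons ~ age + sev, family = binomial, data = d))

## 2. Form ATE weights and check balance via standardized mean differences.
d$w <- ifelse(d$ons == 1, 1 / d$ps, 1 / (1 - d$ps))
smd <- function(x, z, w) {
  m1 <- weighted.mean(x[z == 1], w[z == 1])
  m0 <- weighted.mean(x[z == 0], w[z == 0])
  (m1 - m0) / sqrt((var(x[z == 1]) + var(x[z == 0])) / 2)
}
sapply(d[c("age", "sev")], smd, z = d$ons, w = d$w)   # aim for |SMD| < 0.1

## 3. Estimate the treatment effect in the weighted sample.
coef(lm(los ~ ons, data = d, weights = w))["ons"]
## Caveat: lm's default SE ignores the estimated weights; use a sandwich
## estimator or the bootstrap in practice.
```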