
    On selection of models for continuous meta-analysis data with incomplete variability measures

    The choice between the fixed and random effects models for providing overall meta-analysis estimates may affect the accuracy of those estimates. When the study-level standard deviations (SDs) are not completely reported or are “missing”, selection of a meta-analysis model should be made with more caution. In this article, we examine through a simulation study the effects of the choice of meta-analysis model and of the techniques for imputing the missing SDs on the overall meta-analysis estimates. The results suggest that imputation should be adopted to estimate the overall effect size, irrespective of the model used. However, the accuracy of the estimates of the corresponding standard error (SE) is influenced by the imputation technique. For estimates based on the fixed effect model, mean imputation provides better estimates than multiple imputation, while those based on the random effects model are more robust to the imputation technique used.
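    As a rough, hypothetical illustration of the mean-imputation strategy discussed above (not the simulation code used in the article), the sketch below fills in missing study-level SDs with the mean of the observed SDs and then computes fixed-effect and DerSimonian-Laird random-effects pooled estimates; all study values are invented.

```python
import numpy as np

# Hypothetical study-level data: mean differences, sample sizes, and SDs
# (NaN marks a study whose SD was not reported).
effects = np.array([0.30, 0.10, 0.45, 0.25, 0.05])
n = np.array([40, 55, 30, 80, 60])
sds = np.array([1.1, np.nan, 0.9, np.nan, 1.3])

# Mean imputation: replace missing SDs with the mean of the observed SDs.
sds_imp = np.where(np.isnan(sds), np.nanmean(sds), sds)
se = sds_imp / np.sqrt(n)               # approximate standard errors

# Fixed-effect (inverse-variance) pooled estimate.
w_fe = 1.0 / se**2
theta_fe = np.sum(w_fe * effects) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# DerSimonian-Laird random-effects pooled estimate.
q = np.sum(w_fe * (effects - theta_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)
w_re = 1.0 / (se**2 + tau2)
theta_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"fixed effect:   {theta_fe:.3f} (SE {se_fe:.3f})")
print(f"random effects: {theta_re:.3f} (SE {se_re:.3f}, tau^2 {tau2:.3f})")
```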

    Validation of methods for converting the original Disease Activity Score (DAS) to the DAS28

    The Disease Activity Score (DAS) is integral in tailoring the clinical management of rheumatoid arthritis (RA) patients and is an important measure in clinical research. Different versions have been developed over the years to improve reliability and ease of use. Combining the original DAS and the newer DAS28 data in both contemporary and historical studies is important for both primary and secondary data analyses. As such, a methodologically robust means of converting the old DAS to the new DAS28 measure would be invaluable. Using data from The Early RA Study (ERAS), a sub-sample of patients with both DAS and DAS28 data was used to develop new regression imputation formulas using the total DAS score (univariate) and using the separate components of the DAS score (multivariate). DAS scores were transformed to DAS28 using an existing formula quoted in the literature and using the newly developed formulas. Bland and Altman plots were used to compare the transformed DAS with the recorded DAS28 to ascertain levels of agreement. The current transformation formula tended to overestimate the true DAS28 score, particularly at the higher end of the scale. A formula which uses all separate components of the DAS was found to estimate the scores with a higher level of precision. A new formula is proposed that can be used by other early RA cohorts to convert the original DAS to the DAS28.
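    The abstract does not reproduce the conversion formulas themselves, so the sketch below only illustrates the general workflow on synthetic data: fit a univariate regression conversion from DAS to DAS28 on a sub-sample with both scores, then assess agreement with Bland-Altman bias and limits of agreement. The data and fitted coefficients are placeholders, not the published ERAS formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sub-sample with both scores recorded (placeholder data,
# not the ERAS cohort).
das = rng.uniform(1.0, 7.0, size=200)
das28 = 1.1 * das + 0.9 + rng.normal(0.0, 0.4, size=200)   # synthetic "truth"

# Univariate regression conversion: DAS28 ~ a + b * DAS.
b, a = np.polyfit(das, das28, 1)       # polyfit returns [slope, intercept]
das28_hat = a + b * das

# Bland-Altman agreement between converted and recorded DAS28.
diff = das28_hat - das28
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"conversion: DAS28 = {a:.2f} + {b:.2f} * DAS")
print(f"Bland-Altman bias {bias:.3f}, 95% limits of agreement +/-{loa:.3f}")
```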

    Handling protest responses in contingent valuation surveys

    OBJECTIVES: Protest responses, whereby respondents refuse to state the value they place on the health gain, are commonly encountered in contingent valuation (CV) studies, and they tend to be excluded from analyses. Such an approach will be biased if protesters differ from non-protesters on characteristics that predict their responses. The Heckman selection model has been commonly used to adjust for protesters, but its underlying assumptions may be implausible in this context. We present a multiple imputation (MI) approach to appropriately address protest responses in CV studies, and compare it with the Heckman selection model. METHODS: This study exploits data from the multinational EuroVaQ study, which surveyed respondents' willingness-to-pay (WTP) for a Quality Adjusted Life Year (QALY). Here, our simulation study assesses the relative performance of MI and Heckman selection models across different realistic settings grounded in the EuroVaQ study, including scenarios with different proportions of missing data and non-response mechanisms. We then illustrate the methods in the EuroVaQ study for estimating mean WTP for a QALY gain. RESULTS: We find that MI provides lower bias and mean squared error compared with the Heckman approach across all considered scenarios. The simulations suggest that the Heckman approach can lead to considerable underestimation or overestimation of mean WTP due to violations in the normality assumption, even after log-transforming the WTP responses. The case study illustrates that protesters are associated with a lower mean WTP for a QALY gain compared with non-protesters, but that the results differ according to method for handling protesters. CONCLUSIONS: MI is an appropriate method for addressing protest responses in CV studies
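    A minimal sketch of the MI idea, assuming protest WTP responses are treated as missing and imputed from respondent covariates; the data, covariates and missingness mechanism are hypothetical stand-ins, not the EuroVaQ analysis, and statsmodels' MICEData is used as a convenient chained-equations imputer with a simplified averaging step rather than full Rubin's-rules inference.

```python
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
n = 500

# Hypothetical survey data (placeholder, not the EuroVaQ dataset): WTP for a
# QALY gain plus covariates assumed to predict both WTP and protesting.
income = rng.normal(30.0, 10.0, n)
age = rng.uniform(20.0, 80.0, n)
wtp = 500 + 40 * income - 5 * age + rng.normal(0, 300, n)

# Protest respondents refuse to state a value: treat their WTP as missing.
p_protest = 1 / (1 + np.exp(-(0.05 * age - 0.04 * income - 1)))
wtp[rng.random(n) < p_protest] = np.nan

df = pd.DataFrame({"wtp": wtp, "income": income, "age": age})

# Multiple imputation by chained equations; average the mean WTP over
# completed datasets (a simplified stand-in for Rubin's-rules pooling).
imp = MICEData(df)
means = []
for _ in range(20):
    imp.update_all()                  # one MICE cycle, refreshes imputations
    means.append(imp.data["wtp"].mean())
print(f"MI estimate of mean WTP: {np.mean(means):.1f}")
```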

    Meta-analysis of continuous outcomes: using pseudo IPD created from aggregate data to adjust for baseline imbalance and assess treatment-by-baseline modification.

    Meta-analysis of individual participant data (IPD) is considered the “gold standard” for synthesizing clinical study evidence. However, gaining access to IPD can be a laborious task (if possible at all), and in practice only summary (aggregate) data are commonly available. In this work we focus on meta-analytic approaches to comparative studies where aggregate data are available for continuous outcomes measured at baseline (pre-treatment) and follow-up (post-treatment). We propose a method for constructing pseudo individual baselines and outcomes based on the aggregate data. These pseudo IPD can subsequently be analysed using standard analysis of covariance (ANCOVA) methods. Pseudo IPD for continuous outcomes reported at two timepoints can be generated using the sufficient statistics of an ANCOVA model, i.e., the mean and standard deviation at baseline and follow-up per group, together with the correlation of the baseline and follow-up measurements. Applying the ANCOVA approach, which crucially adjusts for baseline imbalances and accounts for the correlation between baseline and change scores, to the pseudo IPD yields estimates identical to those obtained by an ANCOVA on the true IPD. In addition, an interaction term between baseline and treatment effect can be added. There are several modelling options available under this approach, which makes it very flexible. Methods are exemplified using reported data from a previously published IPD meta-analysis of 10 trials investigating the effect of antihypertensive treatments on systolic blood pressure, leading to results identical to the true IPD analysis, and from a meta-analysis of fewer trials in which baseline imbalance occurred.
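    The sketch below illustrates the pseudo-IPD idea for a single hypothetical two-arm trial: baseline and follow-up values are generated so that their sample means, SDs and correlation exactly match the reported aggregate statistics, and an ANCOVA is then fitted to the pseudo IPD. The aggregate numbers are invented; adding a baseline-by-treatment interaction to the formula would assess treatment-by-baseline modification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def pseudo_arm(n, mean_b, sd_b, mean_f, sd_f, rho, label, rng):
    """Create pseudo baseline/follow-up values whose sample means, SDs and
    correlation exactly equal the reported aggregate statistics of one arm."""
    x = rng.normal(size=n)
    x = (x - x.mean()) / x.std(ddof=1)            # zero mean, unit SD
    z = rng.normal(size=n)
    z = z - z.mean()
    z = z - x * (x @ z) / (x @ x)                 # make z orthogonal to x
    z = z / z.std(ddof=1)
    y = rho * x + np.sqrt(1 - rho**2) * z         # unit SD, corr(x, y) = rho
    return pd.DataFrame({
        "baseline": mean_b + sd_b * x,
        "followup": mean_f + sd_f * y,
        "treat": label,
    })

rng = np.random.default_rng(2)
# Hypothetical aggregate data for one two-arm trial (systolic blood pressure).
ipd = pd.concat([
    pseudo_arm(60, 152.0, 14.0, 140.0, 15.0, 0.6, 1, rng),   # treatment arm
    pseudo_arm(58, 151.0, 13.5, 147.0, 14.5, 0.6, 0, rng),   # control arm
])

# ANCOVA on the pseudo IPD: follow-up adjusted for baseline and treatment
# (add "+ baseline:treat" to test treatment-by-baseline interaction).
fit = smf.ols("followup ~ baseline + treat", data=ipd).fit()
print(fit.params)
```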

    The Effects of Virtual Reality on Procedural Pain and Anxiety in Pediatrics

    Distraction and procedural preparation techniques are frequently used to manage pain and anxiety in children undergoing medical procedures. An increasing number of studies have indicated that Virtual Reality (VR) can be used to deliver these interventions, but treatment effects vary greatly. The present study is a systematic review and meta-analysis of studies that have used VR to reduce procedural pain and anxiety in children. It is the first meta-analytic assessment of the potential influence of technical specifications (immersion) and degree of user-system interactivity on treatment effects. 65 studies were identified, of which 42 reported pain outcomes and 35 reported anxiety outcomes. Results indicate large effect sizes in favor of VR for both outcomes. Larger effects were observed in dental studies and studies that used non-interactive VR. No relationship was found between the degree of immersion or participant age and treatment effects. Most studies were found to have a high risk of bias and there are strong indications of publication bias. The results and their implications are discussed in context of these limitations, and modified effect sizes are suggested. Finally, recommendations for future investigations are provided

    Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data
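    As a loose analogue of this comparison (the study used the MICE framework in R, whereas scikit-learn's IterativeImputer is a chained-equations-style substitute, and this sketch performs single rather than multiple imputation), the code below imputes a nonlinearly related variable with default parametric models and with random forests, then compares imputation error on simulated data.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000

# Simulated data with a nonlinear dependence (loosely mirroring the second
# simulation study; not the CALIBER data).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1**2 + 0.5 * x2 + rng.normal(scale=0.5, size=n)      # nonlinear in x1
X = np.column_stack([y, x1, x2])

# Make y "missing at random" with probability depending on x1 and x2.
miss = rng.random(n) < 1 / (1 + np.exp(-(0.8 * x1 + 0.4 * x2)))
X_miss = X.copy()
X_miss[miss, 0] = np.nan

# Parametric chained-equations imputation (default BayesianRidge models).
param_imp = IterativeImputer(random_state=0, max_iter=10).fit_transform(X_miss)

# Random-forest-based chained-equations imputation.
rf_imp = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    random_state=0, max_iter=10,
).fit_transform(X_miss)

for name, filled in [("parametric", param_imp), ("random forest", rf_imp)]:
    rmse = np.sqrt(np.mean((filled[miss, 0] - y[miss]) ** 2))
    print(f"{name:>13} imputation RMSE: {rmse:.3f}")
```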

    Multiple Imputation for Multilevel Data with Continuous and Binary Variables

    We present and compare multiple imputation methods for multilevel continuous and binary data where variables are systematically and sporadically missing. The methods are compared from a theoretical point of view and through an extensive simulation study motivated by a real dataset comprising multiple studies. The comparisons show that these multiple imputation methods are the most appropriate to handle missing values in a multilevel setting and why their relative performances can vary according to the missing data pattern, the multilevel structure and the type of missing variables. This study shows that valid inferences can only be obtained if the dataset includes a large number of clusters. In addition, it highlights that heteroscedastic multiple imputation methods provide more accurate inferences than homoscedastic methods, which should be reserved for data with few individuals per cluster. Finally, guidelines are given to choose the most suitable multiple imputation method according to the structure of the data
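    The multilevel MI methods compared in the paper are not named in the abstract, so the sketch below only illustrates the two missingness patterns it distinguishes, on simulated two-level data: a binary variable that is systematically missing (never collected in some clusters) and a continuous variable that is sporadically missing (missing for some individuals within clusters).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_clusters, n_per = 20, 50

# Two-level data: cluster (study) random intercepts plus individual noise.
rows = []
for c in range(n_clusters):
    u = rng.normal(scale=1.0)                            # cluster random effect
    x = rng.normal(loc=u, scale=1.0, size=n_per)         # continuous covariate
    z = rng.binomial(1, 0.4, size=n_per).astype(float)   # binary covariate
    y = 1.0 + 0.5 * x + 0.8 * z + u + rng.normal(size=n_per)
    rows.append(pd.DataFrame({"cluster": c, "x": x, "z": z, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Systematically missing: z was never collected in some clusters.
unmeasured = rng.choice(n_clusters, size=5, replace=False)
df.loc[df["cluster"].isin(unmeasured), "z"] = np.nan

# Sporadically missing: x is missing for some individuals within clusters.
df.loc[rng.random(len(df)) < 0.15, "x"] = np.nan

# Proportion missing per cluster shows the two patterns side by side.
print(df.isna().groupby(df["cluster"]).mean().head())
```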

    Handling missing continuous outcome data in a Bayesian network meta-analysis

    Background: A Bayesian network meta-analysis (NMA) model is a statistical method aimed at estimating the relative effects of multiple interventions against the same disease. The method has recently gained prominence and allows the evidence to be synthesised into rank probabilities for each treatment. In several cases, an NMA is performed excluding studies with incomplete data retrieved through a systematic review, resulting in a loss of precision and power. Methods: There are several methods for handling missing or incomplete data in an NMA framework, especially for continuous outcomes. In certain cases, only baseline and follow-up measurements are available; in this framework, to obtain data on mean changes, it is necessary to consider the pre-post study correlation. In this context, in a Bayesian setting, several authors suggest imputation strategies for the pre-post correlation. In other cases, a variability measure associated with a mean change score might be unavailable, and different imputation methods have been suggested, such as those based on maximum standard deviation imputation. The purpose of this study is to verify, through simulations, the robustness of Bayesian NMA models to different imputation strategies. Results: Simulation results show that the bias is notably small for every scenario, confirming that the rankings provided by the models are robust to the different imputation methods across several heterogeneity-correlation settings. Conclusions: The NMA method appears more robust to missing data imputation when the data reported in different studies are generated in a low-heterogeneity scenario, and when the expectation of the prior distribution on the heterogeneity parameter approaches the true between-study variability.
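    One concrete piece of the machinery referred to above is the standard identity for the SD of a change score given baseline and follow-up SDs and a pre-post correlation, which is what makes the correlation imputation matter. The small sketch below (with made-up SD values, not the paper's simulation code) shows how sensitive the imputed change-score SD is to the assumed correlation.

```python
import numpy as np

def change_score_sd(sd_baseline, sd_followup, rho):
    """SD of the within-group change score given baseline and follow-up SDs
    and an (often imputed) pre-post correlation rho."""
    return np.sqrt(sd_baseline**2 + sd_followup**2
                   - 2.0 * rho * sd_baseline * sd_followup)

# Sensitivity of the imputed change-score SD to the assumed correlation.
sd_b, sd_f = 12.0, 13.0                 # hypothetical reported SDs
for rho in (0.3, 0.5, 0.7, 0.9):
    print(f"rho = {rho:.1f}: SD(change) = {change_score_sd(sd_b, sd_f, rho):.2f}")
```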

    Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review

    Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses
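    For illustration, two widely cited rule-of-thumb approximations of this kind (not necessarily the exact formulas evaluated in the review) are easy to implement: estimating a missing SD from the reported range, and estimating a missing mean from the median and quartiles. The sample values below are invented.

```python
def sd_from_range(minimum, maximum, n):
    """Approximate a missing SD from the reported range: a common rule of
    thumb uses range/4 for small-to-moderate samples and range/6 for large ones."""
    return (maximum - minimum) / (6.0 if n > 70 else 4.0)

def mean_from_quartiles(q1, median, q3):
    """Approximate a missing mean from the median and quartiles."""
    return (q1 + median + q3) / 3.0

# Hypothetical trial reporting only median [IQR] and range for a skewed outcome.
print(f"estimated mean: {mean_from_quartiles(4.0, 6.0, 10.0):.2f}")
print(f"estimated SD:   {sd_from_range(1.0, 18.0, n=45):.2f}")
```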