
    Automated, receptive, interactive: a classroom-based data generation exercise

    It is easier to engage with statistics training when presented with examples from familiar subject areas. However, when teaching students of varying professional backgrounds, finding relatable examples can be especially challenging. Classroom-based data generation exercises offer a solution, with students involved in the process from data collection through to the choice and use of appropriate analyses. One such exercise, which forms an integral part of an introductory statistics course, is based on beermat (coaster) flipping, a popular pub game in the UK. We recently moved the data collection process online, allowing students to enter data via smartphones. Furthermore, a web application has been developed using the shiny package in R. This application automates data analysis and allows students to explore the results interactively and independently. The application comes to life with visual demonstrations of core concepts such as the central limit theorem and bootstrapping. This technology further engages students, and the ensuing discussion comparing outputs and interpretations is a welcome addition to classroom interactivity. We present details of this exercise, focusing on use of the web application, example outputs, student feedback and guidance for best practice to maximise learning outcomes.
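As a rough illustration of the kind of bootstrap demonstration the application provides (the actual shiny implementation is not shown here), the following Python sketch resamples hypothetical flip counts with replacement to approximate the sampling distribution of the mean; the data are made up for illustration:

```python
import random
import statistics

def bootstrap_means(sample, n_boot=2000, seed=1):
    """Resample the observed flip counts with replacement and
    return the distribution of bootstrap sample means."""
    rng = random.Random(seed)
    n = len(sample)
    return [statistics.mean(rng.choices(sample, k=n)) for _ in range(n_boot)]

# Hypothetical classroom data: number of successful beermat
# flips per student in a fixed number of attempts.
flips = [3, 5, 4, 6, 2, 5, 7, 4, 3, 5, 6, 4]

means = bootstrap_means(flips)
means.sort()
# 95% percentile bootstrap interval for the mean flip count
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
```

Plotting a histogram of `means` gives the bell shape students recognise from the central limit theorem, even though the raw counts are not normally distributed.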

    Estimating the heterogeneity variance in a random-effects meta-analysis [Vol. 1]

    In a meta-analysis, differences in the design and conduct of studies may cause variation in effects beyond what is expected from chance alone. This additional variation is commonly known as heterogeneity, which is incorporated into a random-effects model. The heterogeneity variance parameter in this model is commonly estimated by the DerSimonian-Laird method, despite being shown to produce negatively biased estimates in simulated data. Many other methods have been proposed, but there has been less research into their properties. This thesis compares all methods to estimate the heterogeneity variance in both empirical and simulated meta-analysis data. First, methods are compared in 12,894 empirical meta-analyses from the Cochrane Database of Systematic Reviews (CDSR). These results showed high discordance in estimates of the heterogeneity variance between methods, so investigating their properties in simulated meta-analysis data is worthwhile. A systematic review of relevant simulation studies was then conducted and identified 12 studies, but there was little consensus between them and conclusions could only be considered tentative. A new simulation study was conducted in collaboration with other statisticians. Results confirmed that the DerSimonian-Laird method is negatively biased in scenarios where within-study variances are imprecise and/or biased. On the basis of these results, the REML approach to heterogeneity variance estimation is recommended. A secondary analysis combines simulated and empirical meta-analysis data and shows all methods usually have poor properties in practice; only marginal improvements are possible using REML. In conclusion, caution is advised when interpreting estimates of the heterogeneity variance, and confidence intervals should always be presented to express its uncertainty. More promisingly, the Hartung-Knapp confidence interval method is robust to poor heterogeneity variance estimates, so sensitivity analysis is not usually required for inference on the mean effect.
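The DerSimonian-Laird moment estimator at the centre of this comparison can be sketched in a few lines of Python; the effect sizes and within-study variances below are illustrative only, not data from the thesis:

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study
    (heterogeneity) variance tau^2, truncated at zero."""
    w = [1.0 / v for v in variances]          # inverse-variance weights
    sw = sum(w)
    mu_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # fixed-effect mean
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - df) / c)             # truncate negative estimates at 0

# Illustrative data: study effect estimates (e.g. log odds ratios)
# and their within-study variances.
y = [0.1, 0.8, -0.3, 0.6, 0.9]
v = [0.04, 0.09, 0.05, 0.08, 0.06]
tau2 = dersimonian_laird_tau2(y, v)
```

The truncation at zero is one source of the negative bias the thesis documents: the moment equation can yield negative values, which are set to zero by convention.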


    Missing data in randomized controlled trials testing palliative interventions pose a significant risk of bias and loss of power: a systematic review and meta-analyses

    Objectives: To assess the risk posed by missing data (MD) to the power and validity of trials evaluating palliative interventions. Study Design and Setting: A systematic review of MD in published randomized controlled trials (RCTs) of palliative interventions in participants with life-limiting illnesses was conducted, and random-effects meta-analyses and meta-regression were performed. CENTRAL, MEDLINE, and EMBASE (2009-2014) were searched with no language restrictions. Results: One hundred and eight RCTs representing 15,560 patients were included. The weighted estimate for MD at the primary endpoint was 23.1% (95% confidence interval [CI] 19.3, 27.4). Larger MD proportions were associated with increasing numbers of questions/tests requested (odds ratio [OR] 1.19; 95% CI 1.05, 1.35) and with longer study duration (OR 1.09; 95% CI 1.02, 1.17). Meta-analysis found evidence of differential rates of MD between trial arms, which varied in direction (OR 1.04; 95% CI 0.90, 1.20; I² = 35.9%, P = 0.001). Despite randomization, MD in the intervention arms (vs. control) was more likely to be attributed to disease progression unrelated to the intervention (OR 1.31; 95% CI 1.02, 1.69). This was not the case for MD due to death (OR 0.92; 95% CI 0.78, 1.08). Conclusion: The overall proportion of MD, together with the differential rates and reasons for it, reduces the power of palliative care trials and potentially introduces bias.

    Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis

    Objective: We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. Study Design and Setting: A number of augmentations are proposed to one of the most widely used graphical displays, the funnel plot. Namely: 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. Results: The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. Conclusion: The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the updating prioritization of a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration.
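The significance contours can be understood through a one-point calculation: for a hypothetical new study with a given variance, solve for the effect values at which the updated fixed-effect z-statistic sits exactly at the critical value. The Python sketch below is a simplified illustration under assumed data, not the paper's implementation:

```python
import math

def new_study_boundary(effects, variances, v_new, z_crit=1.96):
    """For a fixed-effect meta-analysis, return the two effect values
    at which a hypothetical new study with variance v_new would put
    the updated pooled z-statistic exactly at -z_crit and +z_crit
    (a one-point slice of a significance contour)."""
    w = [1.0 / v for v in variances]
    w_new = 1.0 / v_new
    sw = sum(w) + w_new
    swy = sum(wi * yi for wi, yi in zip(w, effects))
    # Pooled z after adding the study: (swy + w_new*y) / sqrt(sw) = +/- z_crit
    # Solving for y gives the boundary effect values.
    return tuple((s * z_crit * math.sqrt(sw) - swy) / w_new for s in (-1.0, 1.0))

# Illustrative meta-analysis: effects and within-study variances.
y = [0.3, 0.5, 0.2, 0.4]
v = [0.05, 0.10, 0.08, 0.06]
lo_bound, hi_bound = new_study_boundary(y, v, v_new=0.05)
```

Repeating this calculation over a grid of `v_new` values traces out the full contour on the funnel plot. For these assumed data both boundaries are negative, echoing the paper's point that one new study often has limited impact on significance.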

    Methods to estimate the between-study variance and its uncertainty in meta-analysis

    Meta-analyses are typically used to estimate the overall (mean) effect of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel, and for continuous data the restricted maximum likelihood estimator, are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd
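The Paule-Mandel estimator recommended above solves for the value of τ² at which the generalised Q statistic equals its degrees of freedom, k − 1; since Q is decreasing in τ², a simple bisection search suffices. A Python sketch with illustrative (made-up) data:

```python
def gen_q(tau2, effects, variances):
    """Generalised Cochran Q statistic evaluated at a given tau^2."""
    w = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects))

def paule_mandel_tau2(effects, variances, hi=10.0, tol=1e-10):
    """Paule-Mandel estimator: find tau^2 such that gen_q(tau^2) = k - 1.
    Q decreases monotonically in tau^2, so bisection converges."""
    k = len(effects)
    if gen_q(0.0, effects, variances) <= k - 1:
        return 0.0  # truncate at zero, as with DerSimonian-Laird
    lo_t, hi_t = 0.0, hi
    while hi_t - lo_t > tol:
        mid = 0.5 * (lo_t + hi_t)
        if gen_q(mid, effects, variances) > k - 1:
            lo_t = mid
        else:
            hi_t = mid
    return 0.5 * (lo_t + hi_t)

# Illustrative data: effect estimates and within-study variances.
y = [0.1, 0.8, -0.3, 0.6, 0.9]
v = [0.04, 0.09, 0.05, 0.08, 0.06]
tau2 = paule_mandel_tau2(y, v)
```

The same `gen_q` function underlies the Q-profile confidence interval: profiling τ² until Q crosses chi-squared quantiles rather than k − 1.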

    Quality of missing data reporting and handling in palliative care trials demonstrates that further development of the CONSORT statement is required: a systematic review.

    OBJECTIVES: Assess (i) the quality of reporting and handling of missing data (MD) in palliative care trials, (ii) whether there are differences in the reporting of criteria specified by the Consolidated Standards of Reporting Trials (CONSORT) 2010 statement compared with those not specified, and (iii) the association of the reporting of MD with journal impact factor and CONSORT endorsement status. STUDY DESIGN AND SETTING: Systematic review of palliative care randomized controlled trials. CENTRAL, MEDLINE, and EMBASE (2009-2014) were searched. RESULTS: One hundred and eight trials (15,560 participants) were included. MD was incompletely reported and not handled in accordance with current guidance. Reporting criteria specified by the CONSORT statement were better reported than those not specified (participant flow, 69%; number of participants not included in the primary outcome analysis, 94%; and the reason for MD, 71%). However, MD in items contributing to scale summaries (10%) and secondary outcomes (9%) was poorly reported, so the proportion of MD stated is likely to be an underestimate. The reason for MD provided was unclear for 54% of participants, and only 16% of trials with MD reported an MD sensitivity analysis. The odds of reporting most of the MD and other risk-of-bias reporting criteria increased as the journal impact factor increased and in journals that endorsed the CONSORT statement. CONCLUSION: Further development of the CONSORT MD reporting guidance is likely to improve the quality of reporting. Reporting recommendations are provided.

    Methods to calculate uncertainty in the estimated overall effect size from a random-effects meta-analysis

    Meta-analyses are an important tool within systematic reviews to estimate the overall effect size and its confidence interval for an outcome of interest. If heterogeneity between the results of the relevant studies is anticipated, then a random-effects model is often preferred for analysis. In this model, a prediction interval for the true effect in a new study also provides additional useful information. However, the DerSimonian and Laird method - frequently used as the default method for meta-analyses with random effects - has long been challenged due to its unfavourable statistical properties. Several alternative methods have been proposed that may have better statistical properties in specific scenarios. In this paper, we aim to provide a comprehensive overview of available methods for calculating point estimates, confidence intervals and prediction intervals for the overall effect size under the random-effects model. We indicate whether some methods are preferable to others by considering the results of comparative simulation and real-life data studies.
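One of the confidence interval methods covered, the Hartung-Knapp adjustment, replaces the usual variance of the pooled mean, 1/Σw, with a weighted residual variance and uses a t rather than a normal quantile. A Python sketch under assumed data; τ² and the t quantile are supplied by hand here rather than estimated with a library:

```python
import math

def hartung_knapp_ci(effects, variances, tau2, t_crit):
    """Random-effects pooled mean with a Hartung-Knapp confidence
    interval: variance of the mean is a weighted residual variance,
    and a t quantile on k-1 degrees of freedom replaces 1.96."""
    w = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    sw = sum(w)
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    k = len(effects)
    # Hartung-Knapp variance of the pooled mean
    var_hk = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects)) / ((k - 1) * sw)
    half = t_crit * math.sqrt(var_hk)
    return mu, mu - half, mu + half

# Illustrative data; tau2 assumed pre-estimated (e.g. by REML or DL).
y = [0.1, 0.8, -0.3, 0.6, 0.9]
v = [0.04, 0.09, 0.05, 0.08, 0.06]
# t quantile for k - 1 = 4 df at the 97.5% level, hard-coded to avoid scipy
mu, ci_lo, ci_hi = hartung_knapp_ci(y, v, tau2=0.21, t_crit=2.776)
```

Because `var_hk` scales with the observed residual spread, the interval widens automatically when τ² is underestimated, which is the robustness property noted in the thesis abstracts above.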

    An Assessment of the UK’s Trade with Developing Countries under the Generalised System of Preferences

    The European Union (EU) Generalised System of Preferences (GSP Scheme) grants preferential treatment to 88 eligible countries. There are, however, concerns that the restrictive features (such as Rules of Origin, low preference margins and low coverage) of the existing scheme indicate gravitation towards a commercial trade agenda to which efficiency imperatives appear subordinated. Whether these concerns are genuine is an empirical question whose answer largely determines whether, after Brexit, the UK continues with the existing specifics of the EU scheme or develops a more inclusive UK-specific GSP framework. This study quantitatively examines the efficiency of the EU GSP as it relates to UK beneficiaries from 2014 to 2017. We draw on descriptive efficiency estimation (the Utilisation Rate, Potential Coverage Rate, and Utility Rate) using import data across 88 beneficiary countries and agricultural products of Harmonised System Code Chapters 1 to 24. Aside from the Rules of Origin that, generally, harm the uptake of GSP, low preference margins are found to cause low utilisation rates in a non-linear manner. Essentially, a more robust option (such as one that allows "global cumulation" or broader product coverage) could substantially lower the existing barriers to trade and increase the efficiency of the GSP scheme.
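The three descriptive efficiency measures named in the abstract reduce to simple ratios of import values. The definitions below are the standard ones from the preference-utilisation literature and are assumed rather than quoted from the paper; the trade values are made up:

```python
def gsp_rates(eligible_imports, preferential_imports, dutiable_imports):
    """Descriptive efficiency measures for a trade preference scheme
    (assumed standard definitions):
      utilisation rate       = preferential imports / GSP-eligible imports
      potential coverage rate = GSP-eligible imports / dutiable imports
      utility rate           = preferential imports / dutiable imports"""
    return {
        "utilisation": preferential_imports / eligible_imports,
        "potential_coverage": eligible_imports / dutiable_imports,
        "utility": preferential_imports / dutiable_imports,
    }

# Illustrative (made-up) trade values, e.g. in GBP millions
rates = gsp_rates(eligible_imports=400.0,
                  preferential_imports=280.0,
                  dutiable_imports=500.0)
```

Note that the utility rate is the product of the other two, which is why low preference margins depressing utilisation feed directly through to overall scheme efficiency.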
