
    Bayesian methods for modelling non-random missing data mechanisms in longitudinal studies

    In longitudinal studies, data are collected on a group of individuals over a period of time, and these data will inevitably contain missing values. Assuming that this missingness follows convenient 'random-like' patterns may not be realistic, so there is much interest in methods for analysing incomplete longitudinal data that allow the incorporation of more realistic assumptions about the missing data mechanism. We explore the use of Bayesian full probability modelling in this context, which involves the specification of a joint model comprising a model for the question of interest and a model for the missing data mechanism. Using simulated data with missing outcomes generated by an informative missingness mechanism, we start by investigating the circumstances in which, and the extent to which, Bayesian methods can improve parameter estimates and model fit compared with complete-case analysis. This includes examining the impact of misspecifying different parts of the model. With real datasets, when the form of the missingness is unknown, a diagnostic that indicates the amount of information in the missing data, given our model assumptions, would be useful. pD is a measure of the dimensionality of a Bayesian model, and we explore its use and limitations for this purpose. Bayesian full probability modelling is then used in more complex settings, using real examples of longitudinal data taken from the British birth cohort studies and a clinical trial, some of which have missing covariates. We look at ways of incorporating information from additional sources into our models to help parameter estimation, including data from other studies and knowledge elicited from an expert. Additionally, we assess the sensitivity of the conclusions regarding the question of interest to varying the assumptions in different parts of the joint model, explore ways of presenting this information, and outline a strategy for Bayesian modelling of non-ignorable missing data.
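    As background for the joint model described above, the selection-model factorisation used throughout this line of work splits the joint distribution of the responses y and the missingness indicators m into the two components the abstract names (this is the standard identity, not notation specific to the thesis):

```latex
f(y, m \mid \theta, \psi) \;=\;
\underbrace{f(y \mid \theta)}_{\text{model of interest}} \;
\underbrace{f(m \mid y, \psi)}_{\text{model of missingness}}
```

    When f(m | y, ψ) does not depend on the unobserved components of y and (θ, ψ) are distinct, the missingness is ignorable and the second factor can be dropped from the likelihood; informative (non-ignorable) missingness, as simulated here, is precisely the case where it cannot.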

    Handling Missing Data in Within-Trial Cost-Effectiveness Analysis: A Review with Future Recommendations.

    Cost-effectiveness analyses (CEAs) alongside randomised controlled trials (RCTs) are increasingly designed to collect resource use and preference-based health status data for the purpose of healthcare technology assessment. However, because of the way these measures are collected, they are prone to missing data, which can ultimately affect the decision of whether an intervention is good value for money. We examine how missing cost and effect outcome data are handled in RCT-based CEAs, complementing a previous review (covering 2003-2009, 88 articles) with a new systematic review (2009-2015, 81 articles) focussing on two different perspectives. First, we provide guidelines on how the information about missingness and related methods should be presented to improve the reporting and handling of missing data. We propose to address this issue by means of a quality evaluation scheme, providing a structured approach that can be used to guide the collection of information, elicitation of the assumptions, choice of methods and consideration of the possible limitations of the given missingness problem. Second, we review the description of the missing data, the statistical methods used to deal with them and the quality of the judgement underpinning the choice of these methods. Our review shows that missing data in within-RCT CEAs are still often inadequately handled and that the overall level of information provided to support the chosen methods is rarely satisfactory.

    Erratum to: Handling Missing Data in Within-Trial Cost-Effectiveness Analysis: A Review with Future Recommendations.

    Reference 5, which reads: 5. Manca P, Palmer S. Handling missing values in cost effectiveness analyses that use data from cluster randomized trials. Appl Health Econ Health Policy. 2006;4:65–75. Should read: 5. Manca A, Palmer S. Handling missing data in patient-level cost-effectiveness analysis alongside randomised clinical trials. Appl Health Econ Health Policy. 2005;4:65–75.

    A full Bayesian model to handle structural ones and missingness in economic evaluations from individual-level data

    Economic evaluations from individual-level data are an important component of the process of technology appraisal, with a view to informing resource allocation decisions. A critical problem in these analyses is that both effectiveness and cost data typically present some complexity (e.g. non-normality, spikes, and missingness) that should be addressed using appropriate methods. However, in routine analyses, standardised approaches are typically used, possibly leading to biased inferences. We present a general Bayesian framework that can handle this complexity. We show the benefits of using our approach with a motivating example, the MenSS trial, for which there are spikes at one in the effectiveness measure and missingness in both outcomes. We contrast a set of increasingly complex models and perform sensitivity analysis to assess the robustness of the conclusions to a range of plausible missingness assumptions. We demonstrate the flexibility of our approach with a second example, the PBS trial, and extend the framework to accommodate the characteristics of the data in this study. This paper highlights the importance of adopting a comprehensive modelling approach to economic evaluations and the strategic advantages of building these complex models within a Bayesian framework.
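    The "spikes at one" above refers to utility scores piling up exactly at full health. As a rough, self-contained illustration (not the authors' actual model, which additionally links effectiveness to costs and to a missingness mechanism), the sketch below fits a simple hurdle model to simulated utilities in PyMC: a Bernoulli part for the probability of a structural one and a Beta part for the values strictly below one.

```python
# Hurdle-model sketch for utilities with a spike at one; the data, priors
# and structure are illustrative assumptions, not the MenSS analysis.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 200
at_one = rng.random(n) < 0.3                    # ~30% report full health
u = np.where(at_one, 1.0, rng.beta(8, 3, size=n))

with pm.Model():
    p_one = pm.Beta("p_one", 1.0, 1.0)          # P(utility == 1)
    pm.Bernoulli("ones", p=p_one, observed=(u == 1.0).astype(int))
    a = pm.Gamma("a", 2.0, 0.5)                 # shape priors for u < 1
    b = pm.Gamma("b", 2.0, 0.5)
    pm.Beta("u_cont", alpha=a, beta=b, observed=u[u < 1.0])
    # Overall mean utility mixes the spike and the continuous part.
    pm.Deterministic("mean_utility", p_one * 1.0 + (1 - p_one) * a / (a + b))
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)
```

    The posterior for mean_utility is the kind of quantity a CEA would feed into QALY calculations; forcing a single standard distribution onto such data misrepresents exactly the complexity the abstract warns about.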

    Are large randomised controlled trials in severe sepsis and septic shock statistically disadvantaged by repeated inadvertent underestimates of required sample size?

    OBJECTIVES: We sought to understand why randomised controlled trials in septic shock have failed to demonstrate effectiveness in the face of improving overall outcomes for patients and seemingly promising results of early phase trials of interventions. DESIGN: We performed a retrospective analysis of large critical care trials of severe sepsis and septic shock. Data were collected from the primary trial manuscripts, prepublished statistical plans or by direct communication with corresponding authors. SETTING: Critical care randomised controlled trials in severe sepsis and septic shock. PARTICIPANTS: 14 619 patients randomised in 13 trials published between 2005 and 2015, each enrolling more than 500 patients and powered to a primary outcome of mortality. INTERVENTION: Multiple interventions, including the evaluation of treatment strategies and novel therapeutics. PRIMARY AND SECONDARY OUTCOME MEASURES: Our primary outcome measure was the difference between the anticipated and actual control arm mortality. Secondary analysis examined the actual effect size and the anticipated effect size employed in the sample size calculation. RESULTS: In this post hoc analysis of 13 trials with 14 619 patients randomised, we highlight a global tendency to overestimate control arm mortality when estimating sample size (absolute difference 9.8%, 95% CI -14.7% to -5.0%, p<0.001). When we compared the anticipated and actual effect size of a treatment, there was also a substantial overestimation in the proposed values (absolute difference 7.4%, 95% CI -9.0% to -5.8%, p<0.0001). CONCLUSIONS: An interpretation of our results is that trials are consistently underpowered in the planning phase by employing erroneous variables to calculate a satisfactory sample size. Our analysis cannot establish whether, given a larger sample size, a trial would have had a positive result. It is disappointing that so many promising phase II results have not translated into durable phase III outcomes. It is possible that our current framework has biased us towards discounting potentially life-saving treatments.
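    To see how overestimates like those reported above translate into underpowering, the sketch below applies the standard two-proportion sample-size formula (normal approximation); the mortality figures are hypothetical round numbers, not values taken from the reviewed trials.

```python
# Two-proportion sample-size calculation (normal approximation), showing
# how optimistic planning assumptions shrink the computed trial size.
from scipy.stats import norm

def n_per_arm(p_ctrl: float, p_trt: float,
              alpha: float = 0.05, power: float = 0.9) -> float:
    """Patients per arm to detect p_ctrl - p_trt at the given alpha/power."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    variance = p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)
    return (z_a + z_b) ** 2 * variance / (p_ctrl - p_trt) ** 2

# Planned: 40% control-arm mortality, 10-point absolute reduction.
print(round(n_per_arm(0.40, 0.30)))   # ~473 per arm
# Observed: 30% control-arm mortality, 3-point reduction.
print(round(n_per_arm(0.30, 0.27)))   # ~4753 per arm, an order of magnitude more
```

    A trial sized for the first scenario but run under the second has far less than its nominal 90% power, which is the mechanism the authors describe.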

    Using DIC to compare selection models with non-ignorable missing responses

    Data with missing responses generated by a non-ignorable missingness mechanism can be analysed by jointly modelling the response and a binary variable indicating whether the response is observed or missing. Using a selection model factorisation, the resulting joint model consists of a model of interest and a model of missingness. In the case of non-ignorable missingness, model choice is difficult because the assumptions about the missingness model are never verifiable from the data at hand. For complete data, the Deviance Information Criterion (DIC) is routinely used for Bayesian model comparison. However, when an analysis includes missing data, DIC can be constructed in different ways and its use and interpretation are not straightforward. In this paper, we present a strategy for comparing selection models by combining information from two measures taken from different constructions of the DIC. A DIC based on the observed data likelihood is used to compare joint models with different models of interest but the same model of missingness, and a comparison of models with the same model of interest but different models of missingness is carried out using the model of missingness part of a conditional DIC. This strategy is intended for use within a sensitivity analysis that explores the impact of different assumptions about the two parts of the model, and is illustrated by examples with simulated missingness and an application which compares three treatments for depression using data from a clinical trial. We also examine issues relating to the calculation of the DIC based on the observed data likelihood.
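    For orientation, both DIC constructions compared in the paper build on the standard definition, given here in generic form (the notation is ours, not the paper's):

```latex
\mathrm{DIC} = D(\bar{\theta}) + 2\,p_D,
\qquad
p_D = \overline{D(\theta)} - D(\bar{\theta}),
\qquad
D(\theta) = -2 \log f(y \mid \theta),
```

    where \bar{\theta} is the posterior mean and \overline{D(\theta)} the posterior mean deviance. In the observed-data version, f(y | θ) is replaced by the observed-data likelihood of the selection model, which integrates the missing responses out of the joint model:

```latex
f(y_{\mathrm{obs}}, m \mid \theta, \psi)
= \int f(y_{\mathrm{obs}}, y_{\mathrm{mis}} \mid \theta)\,
       f(m \mid y_{\mathrm{obs}}, y_{\mathrm{mis}}, \psi)\, dy_{\mathrm{mis}},
```

    whereas the conditional construction keeps a separable model-of-missingness term, which is the part used to compare missingness models as described above.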