28 research outputs found

    A Guide to Handling Missing Data in Cost-Effectiveness Analysis Conducted Within Randomised Controlled Trials

    The authors would like to thank Professor Adrian Grant and the team at the University of Aberdeen (Professor Craig Ramsay, Janice Cruden, Charles Boachie, Professor Marion Campbell and Seonaidh Cotton), who kindly allowed the REFLUX dataset to be used for this work, and Eldon Spackman for kindly sharing the Stata code for calculating the probability that an intervention is cost effective following multiple imputation (MI). The authors are grateful to the reviewers for their comments, which greatly improved this paper. M. G. is the recipient of a Medical Research Council Early Career Fellowship in Economics of Health (grant number MR/K02177X/1). I. R. W. was supported by the Medical Research Council [Unit Programme U105260558]. No specific funding was obtained to produce this paper. The authors declare no conflicts of interest.

    Missing data are a frequent problem in cost-effectiveness analysis (CEA) conducted within randomised controlled trials. Inappropriate methods for handling missing data can lead to misleading results and can ultimately affect the decision of whether an intervention is good value for money. This article provides practical guidance on how to handle missing data in within-trial CEAs following a principled approach: (i) the analysis should be based on a plausible assumption about the missing data mechanism, i.e. whether the probability that data are missing is independent of, or depends on, the observed and/or unobserved values; (ii) the method chosen for the base case should fit the assumed mechanism; and (iii) sensitivity analysis should be conducted to explore how far the results change with the assumption made. This approach is implemented in three stages, which are described in detail: (1) descriptive analysis to inform the assumption about the missing data mechanism; (2) choosing between alternative methods given their underlying assumptions; and (3) methods for sensitivity analysis.
    The case study illustrates how to apply this approach in practice, including software code. The article concludes with recommendations for practice and suggestions for future research.
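The probability that an intervention is cost effective after MI, mentioned in the acknowledgements, is typically obtained by pooling per-imputation incremental net benefit (INB) estimates with Rubin's rules. The sketch below is an illustrative Python translation of that general idea, not the Stata code shared by Spackman; the INB figures are invented toy numbers.

```python
import numpy as np
from scipy.stats import norm

def pool_rubin(estimates, variances):
    """Combine per-imputation estimates and variances with Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    w_bar = variances.mean()            # average within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = w_bar + (1 + 1 / m) * b         # total variance
    return q_bar, t

def prob_cost_effective(inb_estimates, inb_variances):
    """P(intervention is cost effective) = P(INB > 0), assuming normality."""
    inb, var = pool_rubin(inb_estimates, inb_variances)
    return norm.cdf(inb / np.sqrt(var))

# Toy INB estimates and variances from M = 5 imputed datasets
inb_hat = [120.0, 95.0, 140.0, 110.0, 101.0]
inb_var = [2500.0, 2600.0, 2400.0, 2550.0, 2450.0]
print(prob_cost_effective(inb_hat, inb_var))
```

In practice the per-imputation INB and its variance would come from fitting the CEA model (e.g. seemingly unrelated regressions of costs and effects) to each imputed dataset at a given willingness-to-pay threshold.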

    Sensitivity Analysis for Not-at-Random Missing Data in Trial-Based Cost-Effectiveness Analysis: A Tutorial

    Cost-effectiveness analyses (CEAs) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply imputed and the imputed values are then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
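The impute-then-modify approach described above is often called delta adjustment: impute under MAR, then shift the imputed values by an offset δ that encodes how much worse (or better) non-responders are assumed to be. The following is a simplified illustration with simulated quality-of-life data, where resampling observed values stands in for proper model-based MI; it is not the tutorial's actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def delta_adjusted_mean(y, missing, m=20, delta=0.0):
    """Impute under MAR (here simplified to draws from the observed values),
    then shift only the imputed values by delta to mimic an MNAR scenario."""
    observed = y[~missing]
    means = []
    for _ in range(m):
        y_imp = y.copy()
        # delta < 0 encodes "non-responders are worse off than MAR predicts"
        y_imp[missing] = rng.choice(observed, size=missing.sum()) + delta
        means.append(y_imp.mean())
    return float(np.mean(means))

# Toy quality-of-life scores with roughly 30% missingness
y_full = rng.normal(0.7, 0.1, size=200)
missing = rng.random(200) < 0.3
for d in (0.0, -0.05, -0.1):   # increasingly pessimistic MNAR scenarios
    print(d, round(delta_adjusted_mean(y_full, missing, delta=d), 3))
```

Reporting how the cost-effectiveness conclusion changes across a plausible range of δ values is the core of the sensitivity analysis: if the decision is stable over that range, it is robust to the MNAR departures considered.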

    Evaluation of a weighting approach for performing sensitivity analysis after multiple imputation

    Abstract Background Multiple imputation (MI) is a well-recognised statistical technique for handling missing data. As usually implemented in standard statistical software, MI assumes that data are ‘Missing at random’ (MAR); an assumption that in many settings is implausible. It is not possible to distinguish whether data are MAR or ‘Missing not at random’ (MNAR) using the observed data, so it is desirable to discover the impact of departures from the MAR assumption on the MI results by conducting sensitivity analyses. A weighting approach based on a selection model has been proposed for performing MNAR analyses to assess the robustness of results obtained under standard MI to departures from MAR. Methods In this article, we use simulation to evaluate the weighting approach as a method for exploring possible departures from MAR, with missingness in a single variable, where the parameters of interest are the marginal mean (and probability) of a partially observed outcome variable and a measure of association between the outcome and a fully observed exposure. The simulation studies compare the weighting-based MNAR estimates for various numbers of imputations in small and large samples, for moderate to large magnitudes of departure from MAR, where the degree of departure from MAR was assumed known. Further, we evaluated a proposed graphical method, which uses the dataset with missing data, for obtaining a plausible range of values for the parameter that quantifies the magnitude of departure from MAR. Results Our simulation studies confirm that the weighting approach outperformed the MAR approach, but it still suffered from bias. In particular, our findings demonstrate that the weighting approach provides biased parameter estimates, even when a large number of imputations is performed. 
In the examples presented, the graphical approach for selecting a range of values for the possible departures from MAR did not capture the true value of the departure parameter used to generate the data. Conclusions Overall, the weighting approach is not recommended for sensitivity analyses following MI, and further research is required to develop more appropriate methods for performing such sensitivity analyses.
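The selection-model weighting approach evaluated in this paper can be sketched roughly as follows: each MAR imputation m receives a weight proportional to exp(δ × sum of its imputed values), so that δ quantifies the assumed departure from MAR, and the MNAR estimate is the weighted average of the per-imputation estimates. This is an illustrative simplification with simulated data, not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_mnar_mean(imputations, missing, delta):
    """Reweight MAR imputations toward an MNAR selection model:
    w_m is proportional to exp(delta * sum of imputed values in dataset m)."""
    estimates = np.array([imp.mean() for imp in imputations])
    log_w = delta * np.array([imp[missing].sum() for imp in imputations])
    log_w -= log_w.max()          # stabilise before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    return float(w @ estimates)

# Build MAR imputations for a toy outcome with ~25% missingness
y = rng.normal(0.0, 1.0, size=100)
missing = rng.random(100) < 0.25
imputations = []
for _ in range(10):
    y_imp = y.copy()
    y_imp[missing] = rng.choice(y[~missing], size=missing.sum())
    imputations.append(y_imp)

print(weighted_mnar_mean(imputations, missing, delta=0.0))   # simple average over imputations
print(weighted_mnar_mean(imputations, missing, delta=-0.5))  # pulled toward lower values
```

With δ = 0 the weights are uniform and the standard MI estimate is recovered; a negative δ up-weights imputations with lower imputed values. The paper's finding is that this reweighting remains biased even with many imputations, because the weights become dominated by a few imputed datasets.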

    The Smart City Active Mobile Phone Intervention (SCAMPI) study to promote physical activity through active transportation in healthy adults: a study protocol for a randomised controlled trial

    Abstract Background The global pandemic of physical inactivity represents a considerable public health challenge. Active transportation (i.e., walking or cycling for transport) can contribute to greater total physical activity levels. Mobile phone-based programs can promote behaviour change, but no study has evaluated whether such a program can promote active transportation in adults. This study protocol presents the design and methodology of The Smart City Active Mobile Phone Intervention (SCAMPI), a randomised controlled trial to promote active transportation via a smartphone application (app) with the aim of increasing physical activity. Methods/design A two-arm parallel randomised controlled trial will be conducted in Stockholm County, Sweden. Two hundred fifty adults aged 20–65 years will be randomised to either monitoring of active transport via the TRavelVU app (control), or to a 3-month evidence-based behaviour change program to promote active transport plus monitoring of active travel via the TRavelVU Plus app (intervention). The primary outcome is moderate-to-vigorous intensity physical activity (MVPA in minutes/day) (ActiGraph wGT3x-BT) measured post-intervention. Secondary outcomes include: time spent in active transportation measured via the TRavelVU app, perceptions about active transportation (the Transport and Physical Activity Questionnaire (TPAQ)) and health-related quality of life (RAND-36). Assessments are conducted at baseline, after the completed intervention (after 3 months) and 6 months post-randomisation. Discussion SCAMPI will determine the effectiveness of a smartphone app to promote active transportation and physical activity in an adult population. If effective, the app has potential to be a low-cost intervention that can be delivered at scale. Trial registration ClinicalTrials.gov NCT03086837; 22 March 2017