30 research outputs found

    Use of the Metropolis-Hastings Algorithm in the Calibration of a Patient Level Simulation of Prostate Cancer Screening

    Designing cancer screening programmes requires an understanding of epidemiology, disease natural history and screening test characteristics. Many of these aspects of the decision problem are unobservable, and data can only tell us about their joint uncertainty. A Metropolis-Hastings algorithm was used to calibrate a patient-level simulation model of the natural history of prostate cancer to national cancer registry and international trial data. This method correctly represents the joint uncertainty amongst the model parameters by drawing efficiently from a high-dimensional correlated parameter space. The calibration approach estimates the probability of developing prostate cancer, the rate of disease progression and the sensitivity of the screening test. This is then used to estimate the impact of prostate cancer screening in the UK. This case study demonstrates that the Metropolis-Hastings approach to calibration can be used to appropriately characterise the uncertainty alongside computationally expensive simulation models.
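    The abstract does not give implementation details, but the core of a Metropolis-Hastings calibration loop can be sketched as below. The prior, the likelihood comparing simulated outputs with registry/trial targets, the parameter names and the Gaussian random-walk proposal are all assumptions for illustration, not the authors' specification.

        import numpy as np

        def metropolis_hastings(log_prior, log_likelihood, theta0, n_iter=10000, step=0.05):
            """Generic random-walk Metropolis-Hastings sampler.

            log_prior and log_likelihood are functions of the parameter vector;
            in a calibration setting the likelihood compares simulated outputs
            (e.g. modelled incidence) with observed registry/trial targets.
            """
            theta = np.asarray(theta0, dtype=float)
            log_post = log_prior(theta) + log_likelihood(theta)
            samples = []
            for _ in range(n_iter):
                proposal = theta + np.random.normal(scale=step, size=theta.shape)
                log_post_prop = log_prior(proposal) + log_likelihood(proposal)
                # Accept with probability min(1, posterior ratio)
                if np.log(np.random.uniform()) < log_post_prop - log_post:
                    theta, log_post = proposal, log_post_prop
                samples.append(theta.copy())
            return np.array(samples)  # correlated draws from the joint posterior

    Because each likelihood evaluation requires a full simulation run, the efficiency of the proposal matters; the single step-size parameter above is the simplest possible choice and would normally be tuned.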

    A cost-effectiveness model of prostate cancer screening

    Prostate cancer is the second most common cause of male cancer death in the UK; however, due to the uncertainty around the health benefits and cost-effectiveness of a national screening programme, organised screening has not been adopted. A cost-effectiveness analysis was therefore conducted to examine the impact of a national prostate cancer screening programme on behalf of the UK National Screening Committee. A discrete event simulation model was developed to evaluate the use of the prostate specific antigen (PSA) blood test as a screening tool in the UK. The model comprises four parts: a disease natural history model which models the underlying disease itself; a calibration module which enables unobservable model parameters to be calibrated to observable data using a Metropolis-Hastings algorithm; a screening component which allows different screening options to be imposed on the population; and a resource impact model which calculates the resource implications of the alternative screening options. The model estimates incidence, lead time, over-detection, quality-adjusted life years (QALYs), mortality and resource implications of single and repeat screening strategies.
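    As a rough illustration of how the four components described above might be composed, the sketch below chains them in sequence; the class names, interfaces and arguments are illustrative assumptions rather than the structure of the actual model.

        class ProstateScreeningModel:
            """Illustrative composition of the four model components."""

            def __init__(self, natural_history, calibrator, screening, resource_model):
                self.natural_history = natural_history  # underlying disease model
                self.calibrator = calibrator            # e.g. Metropolis-Hastings calibration
                self.screening = screening              # screening strategy to impose
                self.resource_model = resource_model    # resource / cost implications

            def run(self, population, targets):
                # Calibrate unobservable natural-history parameters to observed data
                params = self.calibrator.fit(self.natural_history, targets)
                # Simulate disease histories, then overlay the screening strategy
                histories = self.natural_history.simulate(population, params)
                outcomes = self.screening.apply(histories)  # incidence, lead time, QALYs, ...
                costs = self.resource_model.evaluate(outcomes)
                return outcomes, costs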

    Simulation sample sizes for Monte Carlo partial EVPI calculations

    Partial expected value of perfect information (EVPI) quantifies the value of removing uncertainty about unknown parameters in a decision model. EVPIs can be computed via Monte Carlo methods. An outer loop samples values of the parameters of interest, and an inner loop samples the remaining parameters from their conditional distribution. This nested Monte Carlo approach can result in biased estimates if small numbers of inner samples are used and can require a large number of model runs for accurate partial EVPI estimates. We present a simple algorithm to estimate the EVPI bias and confidence interval width for a specified number of inner and outer samples. The algorithm uses a relatively small number of model runs (we suggest approximately 600), is quick to compute, and can help determine how many outer and inner iterations are needed for a desired level of accuracy. We test our algorithm using three case studies.
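    A minimal sketch of the nested Monte Carlo scheme the abstract describes is given below; the net-benefit function, the parameter-sampling routines, the two-strategy decision set and the sample sizes are placeholders, and the conditional sampler in particular is an assumption (in practice it must respect any correlation between the parameters of interest and the rest).

        import numpy as np

        def partial_evpi(sample_phi, sample_psi_given_phi, net_benefit,
                         n_outer=1000, n_inner=1000):
            """Nested Monte Carlo estimate of partial EVPI for parameters phi.

            net_benefit(d, phi, psi) returns the net benefit of decision d.
            Small n_inner gives an upwardly biased estimate, as discussed above.
            """
            decisions = (0, 1)  # two alternative strategies, for illustration

            # Baseline: expected net benefit of the best decision under current uncertainty
            phis = [sample_phi() for _ in range(n_outer)]
            psis = [sample_psi_given_phi(phi) for phi in phis]
            baseline = max(
                np.mean([net_benefit(d, phi, psi) for phi, psi in zip(phis, psis)])
                for d in decisions
            )

            # Outer loop over phi; inner loop over psi | phi
            outer_vals = []
            for _ in range(n_outer):
                phi = sample_phi()
                inner = {d: np.mean([net_benefit(d, phi, sample_psi_given_phi(phi))
                                     for _ in range(n_inner)])
                         for d in decisions}
                outer_vals.append(max(inner.values()))
            return np.mean(outer_vals) - baseline

    Each outer iteration costs n_inner model runs per strategy, which is why the bias/accuracy trade-off for the choice of inner and outer sample sizes is the focus of the paper.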

    Cost-Effectiveness of Disease-Modifying Therapies in the Management of Multiple Sclerosis for the Medicare Population

    Objective: To evaluate the cost-effectiveness of disease-modifying therapies (DMTs) for the management of multiple sclerosis (MS) compared to best supportive care in the United States. Methods: Cost-effectiveness analysis was undertaken using a state transition model of disease natural history and the impact of DMTs for the representative Medicare beneficiary with MS. Costs and outcomes were evaluated from the health-care payer perspective using a 50-year time horizon. Natural history data were drawn from a longitudinal cohort study. The effectiveness of the DMTs was evaluated through a systematic review. Utility data were taken from a study of patients with clinically definite MS in Nova Scotia. Resource use and cost data were derived from the Sonya Slifka database and associated literature. Results: When based on placebo-controlled evidence, the marginal cost-effectiveness of interferon beta (IFNβ) and glatiramer acetate compared to best supportive care is expected to be in excess of $100,000 per quality-adjusted life-year gained. When evidence from head-to-head trials is incorporated into the model, the cost-effectiveness of 6 MIU IFNβ-1a is expected to be considerably less favorable. Treatment discontinuation upon progression to Expanded Disability Status Scale 7.0 is expected to improve the cost-effectiveness of all DMTs. Conclusions: Further research is required to examine the long-term clinical effectiveness and cost-effectiveness of these therapies. There is no definitive guidance in the United States concerning discontinuation of DMTs; this study suggests that the prudent use of a treatment discontinuation rule may considerably improve the cost-effectiveness of DMTs.
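    The abstract references a state transition (Markov) cohort model run over a 50-year horizon; a minimal sketch of such a cohort trace is shown below. The three-state structure, transition matrix, costs, utilities and discount rate are purely illustrative assumptions, not values from the study.

        import numpy as np

        def markov_cohort(trans_matrix, state_costs, state_utilities,
                          horizon_years=50, cycle_length=1.0, discount_rate=0.03):
            """Discounted total costs and QALYs for a cohort starting in state 0."""
            n_states = trans_matrix.shape[0]
            dist = np.zeros(n_states)
            dist[0] = 1.0                      # whole cohort starts in the first state
            total_cost = total_qalys = 0.0
            n_cycles = int(horizon_years / cycle_length)
            for cycle in range(n_cycles):
                disc = 1.0 / (1.0 + discount_rate) ** (cycle * cycle_length)
                total_cost += disc * (dist @ state_costs)
                total_qalys += disc * (dist @ state_utilities) * cycle_length
                dist = dist @ trans_matrix     # advance the cohort one cycle
            return total_cost, total_qalys

        # Illustrative three-state example: stable MS, progressed MS, dead (assumed values)
        P = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.93, 0.07],
                      [0.00, 0.00, 1.00]])
        costs = np.array([12000.0, 30000.0, 0.0])   # annual cost per state (assumed)
        utilities = np.array([0.75, 0.45, 0.0])     # utility weight per state (assumed)
        print(markov_cohort(P, costs, utilities))

    Running the model with and without a treatment effect on the progression probabilities, and with a discontinuation rule applied at a given disability state, is what drives the incremental cost-effectiveness comparisons reported in the abstract.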

    Designing and Undertaking a Health Economics Study of Digital Health Interventions.

    This paper introduces and discusses key issues in the economic evaluation of digital health interventions. The purpose is to stimulate debate so that existing economic techniques may be refined or new methods developed. The paper does not seek to provide definitive guidance on appropriate methods of economic analysis for digital health interventions. This paper describes existing guides and analytic frameworks that have been suggested for the economic evaluation of healthcare interventions. Using selected examples of digital health interventions, it assesses how well existing guides and frameworks align to digital health interventions. It shows that digital health interventions may be best characterized as complex interventions in complex systems. Key features of complexity relate to intervention complexity, outcome complexity, and causal pathway complexity, with much of this driven by iterative intervention development over time and uncertainty regarding the likely reach of the interventions among the relevant population. These characteristics imply that more complex methods of economic evaluation are likely to be better able to capture fully the impact of the intervention on costs and benefits over the appropriate time horizon. This complexity includes wider measurement of costs and benefits, and a modeling framework that is able to capture dynamic interactions among the intervention, the population of interest, and the environment. The authors recommend that future research should develop and apply more flexible modeling techniques to allow better prediction of the interdependency between interventions and important environmental influences. This paper is one of the outputs of two workshops, one supported by the Medical Research Council (MRC)/National Institute for Health Research (NIHR) Methodology Research Programme (PI Susan Michie) and the Robert Wood Johnson Foundation (PI Kevin Patrick), and the other by the National Science Foundation (PI Donna Spruijt-Metz, proposal # 1539846). The Health Economics Research Unit is funded in part by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates.

    Cost-effectiveness of screening for ovarian cancer amongst postmenopausal women: a model-based economic evaluation

    Background: The United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) was the biggest ovarian cancer screening trial to date. A non-significant effect of screening on ovarian cancer was reported, but the authors noted a potential delayed effect of screening, and suggested the need for four years further follow-up. There are no UK-based cost-effectiveness analyses of ovarian cancer screening. Hence we assessed the lifetime outcomes associated with, and the cost-effectiveness of, screening for ovarian cancer in the UK, along with the value of further research. Methods: We performed a model-based economic evaluation. Effectiveness data were taken from UKCTOCS, which considered strategies of multimodal screening (MMS), ultrasound screening (USS) and no screening. We conducted systematic reviews to identify the remaining model inputs, and performed a rigorous and transparent prospective evaluation of different methods for extrapolating the effect of screening on ovarian cancer mortality. We considered costs to the UK healthcare system and measured effectiveness using quality-adjusted life years (QALYs). We used value of information methods to estimate the value of further research. Results: Over a lifetime, MMS and USS were estimated to be both more expensive and more effective than no screening. USS was dominated by MMS, being both more expensive and less effective. Compared with no screening, MMS cost on average £419 more (95% confidence interval £255 to £578), and generated 0.047 more QALYs (0.002 to 0.088). The incremental cost-effectiveness ratio (ICER) comparing MMS with no screening was £8864 per QALY (£2600 to £51,576). Alternative extrapolation methods increased the ICER, with the highest value being £36,769 (£13,888 to dominated by no screening). Using the UKCTOCS trial horizon, both MMS and USS were dominated by no screening, as they produced fewer QALYs at a greater cost. The value of research into eliminating all uncertainty in long-term effectiveness was estimated to be worth up to £20 million, or approximately £5 million for four years follow-up. Conclusions: Screening for ovarian cancer with MMS is both more effective and more expensive than not screening. Compared to national willingness to pay thresholds, lifetime cost-effectiveness is promising, but there remains considerable uncertainty regarding extrapolated long-term effectiveness.
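    As a quick check on the base-case ICER reported above, dividing the incremental cost by the incremental QALYs approximately reproduces the published figure (the small discrepancy is due to rounding of the published point estimates). The function below is a minimal sketch of that calculation together with a simple dominance check; it is not code from the study.

        def icer(delta_cost, delta_qalys):
            """Incremental cost-effectiveness ratio with simple dominance handling."""
            if delta_qalys > 0 and delta_cost <= 0:
                return "dominant (cheaper and more effective)"
            if delta_qalys <= 0 and delta_cost >= 0:
                return "dominated (costlier and no more effective)"
            return delta_cost / delta_qalys

        # Reported point estimates for MMS vs no screening; rounding explains the
        # small difference from the published ICER of about £8864 per QALY
        print(icer(419.0, 0.047))   # ≈ £8915 per QALY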

    Calculating partial expected value of perfect information via Monte Carlo sampling algorithms

    Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition shows 2 nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate 1) the bias due to maximization and also the inaccuracy of shortcut algorithms, 2) when correlated variables are present, and 3) when there is nonlinearity in net benefit functions. If relatively small correlation or nonlinearity is present, then the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended both for greater understanding of decision uncertainty and for analyzing research priorities.
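    The "2 nested expectations" referred to above are those in the standard definition of partial EVPI. Written out in notation introduced here for illustration (with \theta_I the parameters of interest, \theta_C the remaining uncertain parameters, d the decision options and \mathrm{NB} the net benefit function), the quantity is:

        \mathrm{EVPI}_{\theta_I}
          = \mathbb{E}_{\theta_I}\!\left[\max_{d}\,
              \mathbb{E}_{\theta_C \mid \theta_I}\big[\mathrm{NB}(d,\theta_I,\theta_C)\big]\right]
          \;-\;
          \max_{d}\,\mathbb{E}_{\theta_I,\theta_C}\big[\mathrm{NB}(d,\theta_I,\theta_C)\big]

    The inner conditional expectation sits inside a maximisation, so the two expectations cannot be collapsed into one and must be evaluated in separate (nested) Monte Carlo loops; taking the maximum of noisy inner-loop estimates is what biases the first term, and hence the EVPI estimate, upward when the inner sample size is small.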

    Outcomes of aortic aneurysm surgery in England: a nationwide cohort study using hospital admissions data from 2002 to 2015

    Background: United Kingdom aortic aneurysm (AA) services have undergone reconfiguration to improve outcomes. The National Health Service collects data on all hospital admissions in England. The complex administrative datasets generated have the potential to be used to monitor activity and outcomes; however, there are challenges in using these data as they are primarily collected for administrative purposes. The aim of this study was to develop standardised algorithms, with the support of a clinical consensus group, to identify all AA activity, classify AA management into clinically meaningful case-mix groups and define outcome measures that could be used to compare outcomes among AA service providers. Methods: In-patient data on AA admissions from 2002/03 to 2014/15 were acquired. A stepwise approach, with input from a clinical consensus group, was used to identify relevant cases. The data are primarily coded into episodes; these were amalgamated to identify admissions, and admissions were linked to understand patient pathways and index admissions. Cases were then divided into case-mix groups based upon examination of individually sampled and aggregate data. Consistent measures of outcome were developed, including length of stay, complications within the index admission, post-operative mortality and re-admission. Results: Several issues were identified in the dataset, including potential conflict in identifying emergency and elective cases and potential confusion if an inappropriate admission definition is used. In total, 96,735 patients were identified using the algorithms developed in this study to extract AA cases from Hospital Episode Statistics. From 2002 to 2015, 83,968 patients (87% of all cases identified) underwent repair for AA and 12,767 patients (13% of all cases identified) died in hospital without any AA repair. A total of 6329 patients (7.5%) had repair for complex AA and 77,639 (92.5%) had repair for infra-renal AA. Conclusion: The proposed methods define homogeneous clinical groups and outcomes by combining administrative codes in the data. These robust methods can help examine outcomes associated with previous and current service provisions and aid future reconfiguration of aortic aneurysm surgery services.
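    The episode-to-admission amalgamation step described in the Methods can be sketched roughly as follows. The column names (patient identifier, episode start/end dates) are hypothetical stand-ins for the actual Hospital Episode Statistics fields, and the rule used here (episodes for a patient that touch or overlap in time belong to one admission) is an assumption for illustration, not the consensus group's exact algorithm.

        import pandas as pd

        def amalgamate_episodes(episodes: pd.DataFrame) -> pd.DataFrame:
            """Collapse consecutive episodes for a patient into admissions.

            Expects hypothetical columns: 'patient_id', 'epi_start', 'epi_end'.
            A new admission starts whenever an episode begins after the previous
            episode for that patient has ended.
            """
            df = episodes.sort_values(["patient_id", "epi_start"]).copy()
            prev_end = df.groupby("patient_id")["epi_end"].shift()
            new_admission = (df["epi_start"] > prev_end) | prev_end.isna()
            df["admission_id"] = new_admission.cumsum()
            return (df.groupby(["patient_id", "admission_id"])
                      .agg(adm_start=("epi_start", "min"),
                           adm_end=("epi_end", "max"),
                           n_episodes=("epi_start", "size"))
                      .reset_index())

    Linking the resulting admissions into patient pathways and flagging the index admission would follow a similar grouping step on the admission-level table.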