
    Programme evaluation of population and system level policies : evidence for decision-making

    Introduction: Policy evaluations often focus on ex-post estimation of causal effects on short-term surrogate outcomes. The value of such information for decision-making is limited, as the failure to reflect policy-relevant outcomes and the disregard for opportunity costs prevent any assessment of value for money. Further, these evaluations do not always consider all relevant evidence, other courses of action, and the decision uncertainty.
    Methods: In this paper, we explore how policy evaluation could better meet the needs of decision-making. We begin by defining the evidence required to inform decision-making. We then conduct a literature review of challenges in evaluating policies. Finally, we highlight potential methods available to help address these challenges.
    Results: The evidence required to inform decision-making includes the impacts on policy-relevant outcomes, the costs and associated opportunity costs, and the consequences of uncertainty. Challenges in evaluating health policies are described using eight categories: i) valuation space; ii) comparators; iii) time of evaluation; iv) mechanisms of action; v) effects; vi) resources, constraints and opportunity costs; vii) fidelity, adaptation and level of implementation; and viii) generalisability and external validity. Methods from a broad set of disciplines are available to improve policy evaluation, relating to causal inference, decision-analytic modelling, theory of change, realist evaluation and structured expert elicitation.
    Limitations: The targeted review may not identify all possible challenges, and the methods covered are not exhaustive.
    Conclusions: Evaluations should provide appropriate evidence to inform decision-making. There are challenges in evaluating policies, but methods from multiple disciplines are available to address them.
    Implications: Evaluators need to carefully consider the decision being informed, the necessary evidence to inform it, and the appropriate methods.

    Estimating the Economic Value of Automated Virtual Reality Cognitive Therapy for Treating Agoraphobic Avoidance in Patients With Psychosis: Findings From the gameChange Randomized Controlled Clinical Trial

    Background: An automated virtual reality cognitive therapy (gameChange) has demonstrated effectiveness in treating agoraphobia in patients with psychosis, especially those with high or severe anxious avoidance. Its economic value to the health care system is not yet established.
    Objective: In this study, we aimed to estimate the potential economic value of gameChange for the UK National Health Service (NHS) and establish the maximum cost-effective price per patient.
    Methods: Using data from a randomized controlled trial with 346 patients with psychosis (ISRCTN17308399), we estimated differences in health-related quality of life, health and social care costs, and wider societal costs for patients receiving virtual reality therapy in addition to treatment as usual compared with treatment as usual alone. The maximum cost-effective prices of gameChange were calculated based on UK cost-effectiveness thresholds. The sensitivity of the results to analytical assumptions was tested.
    Results: Patients allocated to gameChange reported higher quality-adjusted life years (0.008 QALYs, 95% CI –0.010 to 0.026) and lower NHS and social care costs (–£105, 95% CI –£1135 to £924) compared with treatment as usual (£1=US $1.28); however, these differences were not statistically significant. gameChange was estimated to be worth up to £341 per patient from an NHS and social care (NHS and personal social services) perspective, or £1967 per patient from a wider societal perspective. In patients with high or severe anxious avoidance, the maximum cost-effective prices rose to £877 and £3073 per patient from an NHS and personal social services perspective and a societal perspective, respectively.
    Conclusions: gameChange is a promising, cost-effective intervention for the UK NHS and is particularly valuable for patients with high or severe anxious avoidance. This presents an opportunity to expand cost-effective psychological treatment coverage for a population with significant health needs.
    Trial Registration: ISRCTN Registry ISRCTN17308399; https://www.isrctn.com/ISRCTN1730839
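The maximum cost-effective price described above is an instance of standard "headroom" analysis: at a willingness-to-pay threshold, the highest price is the one at which net monetary benefit is still non-negative. A minimal sketch of that arithmetic, with illustrative inputs loosely based on the abstract (the paper's own figures come from its full economic model, so they will differ):

```python
# Headroom pricing sketch: NMB = threshold * dQALY - (dCost + price) >= 0.
# All inputs are illustrative, not the trial's exact model inputs.

def max_cost_effective_price(delta_qalys: float,
                             delta_other_costs: float,
                             threshold: float) -> float:
    """Highest per-patient price with non-negative net monetary benefit."""
    return threshold * delta_qalys - delta_other_costs

# +0.008 QALYs and a £105 saving (negative incremental cost), as in the abstract;
# threshold of £20,000/QALY is an assumption for illustration.
price_20k = max_cost_effective_price(0.008, -105.0, 20_000)
print(round(price_20k))  # 265 with these inputs
```

Note that a QALY gain and a cost saving both add to the headroom, which is why the intervention can command a positive price even when neither difference is statistically significant on its own.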

    Automated virtual reality therapy to treat agoraphobic avoidance and distress in patients with psychosis (gameChange): a multicentre, parallel-group, single-blind, randomised, controlled trial in England with mediation and moderation analyses

    Background: Automated delivery of psychological therapy using immersive technologies such as virtual reality (VR) might greatly increase the availability of effective help for patients. We aimed to evaluate the efficacy of an automated VR cognitive therapy (gameChange) to treat avoidance and distress in patients with psychosis, and to analyse how and in whom it might work.
    Methods: We did a parallel-group, single-blind, randomised, controlled trial across nine National Health Service trusts in England. Eligible patients were aged 16 years or older, with a clinical diagnosis of a schizophrenia spectrum disorder or an affective diagnosis with psychotic symptoms, and had self-reported difficulties going outside due to anxiety. Patients were randomly assigned (1:1) to either gameChange VR therapy plus usual care or usual care alone, using a permuted blocks algorithm with randomly varying block size, stratified by study site and service type. gameChange VR therapy was provided in approximately six sessions over 6 weeks. Trial assessors were masked to group allocation. Outcomes were assessed at 0, 6 (primary endpoint), and 26 weeks after randomisation. The primary outcome was avoidance of, and distress in, everyday situations, assessed using the self-reported Oxford Agoraphobic Avoidance Scale (O-AS). Outcome analyses were done in the intention-to-treat population (ie, all participants who were assigned to a study group for whom data were available). We performed planned mediation and moderation analyses to test the effects of gameChange VR therapy when added to usual care. This trial is registered with the ISRCTN registry, 17308399.
    Findings: Between July 25, 2019, and May 7, 2021 (with a pause in recruitment from March 16, 2020, to Sept 14, 2020, due to COVID-19 pandemic restrictions), 551 patients were assessed for eligibility and 346 were enrolled. 231 (67%) patients were men and 111 (32%) were women, 294 (85%) were White, and the mean age was 37·2 years (SD 12·5). 174 patients were randomly assigned to the gameChange VR therapy group and 172 to the usual care alone group. Compared with the usual care alone group, the gameChange VR therapy group had significant reductions in agoraphobic avoidance (O-AS adjusted mean difference –0·47, 95% CI –0·88 to –0·06; n=320; Cohen's d –0·18; p=0·026) and distress (–4·33, –7·78 to –0·87; n=322; –0·26; p=0·014) at 6 weeks. Reductions in threat cognitions and within-situation defence behaviours mediated treatment outcomes. The greater the severity of anxious fears and avoidance, the greater the treatment benefits. There was no significant difference in the occurrence of serious adverse events between the gameChange VR therapy group (12 events in nine patients) and the usual care alone group (eight events in seven patients; p=0·37).
    Interpretation: Automated VR therapy led to significant reductions in anxious avoidance of, and distress in, everyday situations compared with usual care alone. The mediation analysis indicated that the VR therapy worked in accordance with the cognitive model by reducing anxious thoughts and associated protective behaviours. The moderation analysis indicated that the VR therapy particularly benefited patients with severe agoraphobic avoidance, such as not being able to leave the home unaccompanied. gameChange VR therapy has the potential to increase the provision of effective psychological therapy for psychosis, particularly for patients who find it difficult to leave their home, visit local amenities, or use public transport.
    Funding: National Institute for Health Research Invention for Innovation programme; National Institute for Health Research Oxford Health Biomedical Research Centre.
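The allocation scheme named in the methods, permuted blocks with randomly varying block size, can be sketched in a few lines. This is the general technique only, not the trial's actual algorithm, and the block sizes and labels here are assumptions for illustration:

```python
# Sketch of 1:1 permuted-block randomisation with randomly varying block
# sizes. In a stratified trial, a separate sequence like this would be
# generated per stratum (e.g. per site and service type).
import random

def permuted_block_sequence(n: int, block_sizes=(2, 4, 6), seed: int = 0):
    """Return n allocations, balanced 1:1 within each randomly sized block."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n:
        size = rng.choice(block_sizes)                     # block size varies at random
        block = ['VR'] * (size // 2) + ['usual care'] * (size // 2)
        rng.shuffle(block)                                 # permute within the block
        allocations.extend(block)
    return allocations[:n]

seq = permuted_block_sequence(346)
print(seq.count('VR'), seq.count('usual care'))
```

Blocking guarantees the two arms never drift far apart in size (any imbalance comes only from a truncated final block), while the randomly varying block size makes the next allocation hard for masked staff to predict.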

    Cost-effectiveness of point-of-care creatinine testing to assess kidney function prior to contrast-enhanced computed tomography imaging

    BACKGROUND: Patients undergoing contrast-enhanced computed tomography (CECT) imaging in a non-emergency outpatient setting often lack a recent estimated glomerular filtration rate measurement. This may lead to inefficiencies in the CECT pathway. The use of point-of-care (POC) creatinine tests to evaluate kidney function in these patients may provide a safe and cost-effective alternative to current practice, as these can provide results within the same CECT appointment.
    METHODS: A decision tree model was developed to characterise the diagnostic pathway and patient management (e.g., intravenous hydration) and link these to adverse renal events associated with intravenous contrast media. Twelve diagnostic strategies including three POC devices (i-STAT, ABL800 Flex and StatSensor), risk factor screening and laboratory testing were compared with current practice. The diagnostic accuracy of POC devices was derived from a systematic review and meta-analysis; relevant literature sources and databases informed other parameters. The cost-effective strategy from a health care perspective was identified as the one with the highest net health benefit (NHB), expressed in quality-adjusted life years (QALYs) at a threshold of £20,000/QALY.
    RESULTS: The cost-effective strategy, with an NHB of 9.98 QALYs and a 79.3% probability of being cost-effective, was a testing sequence involving screening all individuals for risk factors, POC testing (with i-STAT) of those screening positive, and performing a confirmatory laboratory test for individuals with a positive POC result. The incremental NHB of this strategy compared with current practice (a confirmatory laboratory test) is 0.004 QALYs. Results were generally robust to scenario analysis.
    CONCLUSIONS: A testing sequence combining a risk factor questionnaire, POC test and confirmatory laboratory testing appears to be cost-effective compared with current practice. The cost-effectiveness of POC testing appears to be driven by reduced delays within the CECT pathway. The contribution of intravenous contrast media to acute kidney injury, and the benefits and harms of intravenous hydration, remain uncertain.
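The decision rule used above, picking the strategy with the highest net health benefit at £20,000/QALY, is simple to state in code. The strategy names and the QALY/cost figures below are illustrative placeholders, not the model's actual outputs:

```python
# Sketch: strategy selection by highest net health benefit,
# NHB = QALYs - cost / threshold. Numbers are illustrative only.

THRESHOLD = 20_000  # £ per QALY, as in the analysis

def net_health_benefit(qalys: float, cost: float, threshold: float = THRESHOLD) -> float:
    """Convert a (QALYs, cost) pair onto a single health scale."""
    return qalys - cost / threshold

strategies = {
    "current practice (lab test)":     {"qalys": 9.976, "cost": 120.0},
    "risk screen + POC + lab confirm": {"qalys": 9.980, "cost": 80.0},
}
best = max(strategies, key=lambda s: net_health_benefit(**strategies[s]))
print(best)  # risk screen + POC + lab confirm
```

Dividing cost by the threshold expresses spending as forgone health elsewhere in the system, so a strategy can win either by producing more QALYs or by freeing resources, which is exactly how a cheaper, faster testing sequence can dominate.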

    The Potential Value of Clustering-Based Type 2 Diabetes Subgroups for Guiding Intensive Treatment: A Maximum Cost-Based Comparison with Threshold-Based Classifications

    Aim: To quantify and compare the performance of priority setting for intensive treatment by novel clustering-based subgroups versus threshold-based classifications using systematic coronary risk evaluation (SCORE) and HbA1c levels.
    Methods: We divided 2,935 patients from the Hoorn Diabetes Care System cohort into five clustering-based subgroups: severe insulin-deficient diabetes (SIDD), severe insulin-resistant diabetes (SIRD), mild obesity-related diabetes (MOD), mild diabetes (MD), and mild diabetes with high HDL-cholesterol (MDH), and four risk threshold-based subgroups. The United Kingdom Prospective Diabetes Study Outcomes Model was used to simulate lifetime health outcomes in the U.S. and U.K. settings. Gains from hypothetical treatment scenarios based on clinical guidelines were compared to “care-as-usual” and expressed in incremental quality-adjusted life expectancy (QALE) and complication costs.
    Results: The SIRD and MOD subgroups had the lowest absolute and age-sex-standardized QALE, respectively (7.90; 9.07). Threshold-based classifications better discriminated between patients with higher and lower absolute and standardized QALE than clustering-based subgroups. For MOD, hypothetical interventions costing up to $1,973 (95% CI $1,444 to $2,603) and £463 (95% CI £345 to £603) per year would be cost-effective at the $100,000 and £20,000 per QALY thresholds in the U.S. and U.K., respectively. The MOD, SIDD and SIRD subgroups had the best potential cost-effectiveness, alongside the subgroup with high HbA1c and high SCORE.
    Conclusions: Intensified treatment could be cost-effective at higher-than-average costs for three out of five of the clustering-based subgroups and two out of four of the threshold-based classifications. Both classification methods support priority setting for intensive treatment, but the threshold-based method may better identify those with the most potential to benefit from intensified treatment.
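The "maximum cost per year" figures above can be read as spreading the lifetime headroom (threshold times incremental QALE) across the years of treatment. A rough sketch under that simplifying assumption; the gain, duration, and resulting figures here are illustrative, not the paper's (which come from its simulation model):

```python
# Sketch: maximum cost-effective ANNUAL treatment cost from a lifetime
# QALE gain, assuming the cost is spread evenly over the treatment years.
# All numbers are illustrative placeholders.

def max_annual_cost(delta_qale: float, threshold: float,
                    years_of_treatment: float) -> float:
    """Total headroom (threshold * QALE gain) divided across treatment years."""
    return threshold * delta_qale / years_of_treatment

# e.g. a hypothetical 0.25-QALE gain from 10 years of intensified treatment,
# at the two thresholds used in the abstract:
print(round(max_annual_cost(0.25, 100_000, 10)))  # 2500  (US, $100,000/QALY)
print(round(max_annual_cost(0.25, 20_000, 10)))   # 500   (UK, £20,000/QALY)
```

This also shows why the U.S. and U.K. maxima differ by roughly the ratio of the thresholds: the health gain is the same, only the willingness to pay for it changes.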

    Potential Value of Identifying Type 2 Diabetes Subgroups for Guiding Intensive Treatment: A Comparison of Novel Data-Driven Clustering With Risk-Driven Subgroups

    Objective: To estimate the impact on lifetime health and economic outcomes of different methods of stratifying individuals with type 2 diabetes, followed by guideline-based treatment intensification targeting BMI and LDL in addition to HbA1c.
    Research Design and Methods: We divided 2,935 newly diagnosed individuals from the Hoorn Diabetes Care System (DCS) cohort into five Risk Assessment and Progression of Diabetes (RHAPSODY) data-driven clustering subgroups (based on age, BMI, HbA1c, C-peptide, and HDL) and four risk-driven subgroups using fixed cutoffs for HbA1c and risk of cardiovascular disease based on guidelines. The UK Prospective Diabetes Study Outcomes Model 2 estimated discounted expected lifetime complication costs and quality-adjusted life years (QALYs) for each subgroup and across all individuals. Gains from treatment intensification were compared with care as usual as observed in DCS. A sensitivity analysis was conducted based on the Ahlqvist subgroups.
    Results: Under care as usual, prognosis in the RHAPSODY data-driven subgroups ranged from 7.9 to 12.6 QALYs. Prognosis in the risk-driven subgroups ranged from 6.8 to 12.0 QALYs. Compared with homogeneous type 2 diabetes, treatment for individuals in the high-risk subgroups could cost 22.0% and 25.3% more and still be cost-effective for data-driven and risk-driven subgroups, respectively. Targeting BMI and LDL in addition to HbA1c might deliver up to 10-fold increases in QALYs gained.
    Conclusions: Risk-driven subgroups better discriminated prognosis. Both stratification methods supported stratified treatment intensification, with the risk-driven subgroups being somewhat better at identifying individuals with the most potential to benefit from intensive treatment. Irrespective of stratification approach, better cholesterol and weight control showed substantial potential for health gains.
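Data-driven subgrouping of the kind described above is typically done with a centroid-based clustering of standardized clinical features. A plain k-means sketch over synthetic data with the five listed features (age, BMI, HbA1c, C-peptide, HDL), purely illustrative and not the RHAPSODY pipeline:

```python
# Sketch: k-means clustering into 5 subgroups on synthetic, pre-standardized
# feature vectors (age, BMI, HbA1c, C-peptide, HDL). Illustrative only;
# real pipelines standardize units and validate cluster stability.
import math
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                  # initialize from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                             # assign each point to nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]    # recompute centers (keep old if empty)
    return centers, groups

rng = random.Random(1)
patients = [tuple(rng.gauss(0, 1) for _ in range(5)) for _ in range(200)]
centers, groups = kmeans(patients, k=5)
print(len(centers), sum(len(g) for g in groups))  # 5 200
```

Unlike the risk-driven approach, which uses fixed guideline cutoffs, the clusters here depend on the cohort's own feature distribution, which is one reason data-driven subgroups can be harder to transfer between populations.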