Evaluating the Validation Process: Embracing Complexity and Transparency in Health Economic Modelling
Reimbursement decisions and price negotiations for healthcare interventions often rely on health economic model results. Such decisions affect resource allocation, patient outcomes and future healthcare choices. To ensure optimal decisions, assessing the validity of health economic models may be crucial. Validation involves much more than identifying (and hopefully correcting) errors in the model implementation. It also includes assessing the conceptual validity of the model, validating the model input data, and checking whether the model’s predictions align sufficiently well with real-world data. In the context of health economics, validation can be defined as “the act of evaluating whether a model is a proper and sufficient representation of the system it is intended to represent in view of an application”, meaning that the model complies with what is known about the system and that its outcomes provide a robust basis for decision making. [...] Validation of health economic models should be seen as a critical component of evidence-based decision making in healthcare. However, as of today, it still faces several important challenges, including the lack of consensus guidance and standardised procedures, the need for greater rigour, and the question of who should oversee the validation process. To address these challenges, we encourage model developers, agencies requiring models for their decision making, and editors of journals that publish models to recommend the use of state-of-the-art tools for reporting (and conducting) validations of health economic models, such as those mentioned in this editorial.
Trusting the results of model-based economic analyses: is there a pragmatic validation solution?
Models have become a nearly essential component of health technology assessment. This is because the efficacy and safety data available from clinical trials are insufficient to provide the required estimates of the impact of new interventions over long periods of time and for other populations and subgroups. Despite more than five decades of use of these decision-analytic models, decision makers are still often presented with poorly validated models, and thus trust in their results is impaired. Among the reasons for this vexing situation are the artificial nature of the models, which impairs their validation against observable data; the complexity of their formulation and implementation; the lack of data against which to validate the model results; and the challenges of short timelines and insufficient resources. This article addresses this crucial problem of achieving models that produce results that can be trusted, and the resulting requirements for validation and transparency, areas where our field is currently deficient. Based on their differing perspectives and experiences, the authors characterize the situation and outline the requirements for improvement and pragmatic solutions to the problem.
Reproducibility and sensitivity to change of various methods to measure joint space width in osteoarthritis of the hip: a double reading of three different radiographic views taken with a three-year interval
Joint space width (JSW) and joint space narrowing (JSN) measurements on radiographs are currently the best way to assess disease severity or progression in hip osteoarthritis, yet we lack data regarding the most accurate and sensitive measurement technique. This study was conducted to determine the optimal radiograph and number of readers for measuring JSW and JSN. Fifty pairs of radiographs taken three years apart were obtained from patients included in a structure-modification trial in hip osteoarthritis. Three radiographs were taken with the patient standing: pelvis, target hip anteroposterior (AP) and oblique views. Two trained readers, blinded to each other's findings, time sequence and treatment, each read the six radiographs gathered for each patient twice (time interval ≥15 days), using a 0.1 mm graduated magnifying glass. Radiographs were randomly coded for each reading. The interobserver and intraobserver cross-sectional (M0 and M36) and longitudinal (M0–M36) reproducibilities were assessed using the intraclass correlation coefficient (ICC) and the Bland–Altman method for readers 1 and 2 and their mean. Sensitivity to change was estimated using the standardized response mean (SRM = change/standard deviation of change) for M0–M36 changes. For interobserver reliability on M0–M36 changes, the ICCs (95% confidence interval [CI]) were 0.79 (0.65–0.88) for the pelvic view, 0.87 (0.78–0.93) for the hip AP view and 0.86 (0.76–0.92) for the oblique view. Intraobserver reliability ICCs were 0.81 (0.69–0.89) for observer 1 and 0.97 (0.95–0.98) for observer 2 for the pelvic view; 0.87 (0.78–0.92) and 0.97 (0.96–0.99) for the hip AP view; and 0.73 (0.57–0.84) and 0.93 (0.88–0.96) for the oblique view. SRMs were 0.61 (observer 1) and 0.82 (observer 2) for the pelvic view; 0.64 and 0.75 for the hip AP view; and 0.77 and 0.70 for the oblique view. All three views yielded accurate measurements of JSW and JSN. According to the best reader, the pelvic view performed slightly better. Both readers exhibited high precision, with SRMs of 0.6 or greater for assessing JSN over three years. Selecting a single reader was the most accurate method, with 0.3 mm precision. Using this cutoff, 50% of patients were classified as 'progressors'.
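As a minimal illustration of the two summary statistics named in this abstract, the sketch below computes a standardized response mean (SRM = mean change / standard deviation of change) and Bland–Altman limits of agreement with NumPy; all readings, sample sizes and noise levels are simulated and hypothetical, not taken from the study.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint space width (mm) for 50 hips at baseline (M0) and at
# three years (M36); the real study used two trained readers and three views.
jsw_m0 = rng.normal(3.5, 0.8, size=50)
jsw_m36 = jsw_m0 - rng.normal(0.4, 0.5, size=50)   # simulated narrowing

change = jsw_m0 - jsw_m36                          # joint space narrowing (JSN)

# Standardized response mean: mean change divided by the SD of change.
srm = change.mean() / change.std(ddof=1)

# Bland-Altman agreement between two hypothetical readers of the same change.
reader1 = change + rng.normal(0, 0.15, size=50)
reader2 = change + rng.normal(0, 0.15, size=50)
diff = reader1 - reader2
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"SRM = {srm:.2f}, Bland-Altman bias = {bias:.3f} mm, "
      f"limits of agreement = [{loa[0]:.3f}, {loa[1]:.3f}] mm")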
Three Drug Combinations for Late-Stage Trypanosoma brucei gambiense Sleeping Sickness: A Randomized Clinical Trial in Uganda
OBJECTIVES: Our objective was to compare the efficacy and safety of three drug combinations for the treatment of late-stage human African trypanosomiasis caused by Trypanosoma brucei gambiense. DESIGN: This trial was a randomized, open-label, active control, parallel clinical trial comparing three arms. SETTING: The study took place at the Sleeping Sickness Treatment Center run by Médecins Sans Frontières at Omugo, Arua District, Uganda. PARTICIPANTS: Stage 2 patients diagnosed in Northern Uganda were screened for inclusion, and a total of 54 were selected. INTERVENTIONS: Three drug combinations were given to randomly assigned patients: melarsoprol-nifurtimox (M+N), melarsoprol-eflornithine (M+E), and nifurtimox-eflornithine (N+E). Dosages were uniform: intravenous (IV) melarsoprol 1.8 mg/kg/d, daily for 10 d; IV eflornithine 400 mg/kg/d, every 6 h for 7 d; oral nifurtimox 15 (adults) or 20 (children <15 y) mg/kg/d, every 8 h for 10 d. Patients were followed up for 24 mo. OUTCOME MEASURES: Outcomes were cure rates and adverse events attributable to treatment. RESULTS: Randomization was performed on 54 patients before enrollment was suspended due to unacceptable toxicity in one of the three arms. Cure rates obtained with the intention-to-treat analysis were M+N 44.4%, M+E 78.9%, and N+E 94.1%, and were significantly higher with N+E (p = 0.003) and M+E (p = 0.045) than with M+N. Adverse events were less frequent and less severe with N+E, resulting in fewer treatment interruptions and no fatalities. Four patients died who were taking melarsoprol-nifurtimox and one who was taking melarsoprol-eflornithine. CONCLUSIONS: The N+E combination appears to be a promising first-line therapy that may improve treatment of sleeping sickness, although the results from this interrupted study do not permit conclusive interpretations. Larger studies are needed to continue the evaluation of this drug combination in the treatment of T. b. gambiense sleeping sickness.
In-hospital safety in field conditions of Nifurtimox Eflornithine Combination Therapy (NECT) for T. b. gambiense Sleeping Sickness
Trypanosoma brucei (T. b.) gambiense human African trypanosomiasis (HAT; sleeping sickness) is a fatal disease. Until 2009, available treatments for 2nd-stage HAT were complicated to use, expensive (eflornithine monotherapy), or toxic and insufficiently effective in certain areas (melarsoprol). Recently, nifurtimox-eflornithine combination therapy (NECT) demonstrated good safety and efficacy in a randomised controlled trial (RCT) and was added to the World Health Organisation (WHO) essential medicines list (EML). Documentation of its safety profile in field conditions will support its wider use.
Immunogenicity of Fractional Doses of Tetravalent A/C/Y/W135 Meningococcal Polysaccharide Vaccine: Results from a Randomized Non-Inferiority Controlled Trial in Uganda
Meningitis is an infection of the lining of the brain and spinal cord that can cause high fever, blood poisoning, and brain damage, and results in death in up to 10% of cases. Epidemics of meningitis occur almost every year in parts of sub-Saharan Africa, throughout a high-burden area spanning Senegal to Ethiopia dubbed the “Meningitis Belt.” Most epidemics in Africa are caused by Neisseria meningitidis (mostly serogroups A and W135). Mass vaccination campaigns attempt to control epidemics by administering meningococcal vaccines targeted against these serogroups, among others. However, there are currently global shortages of these vaccines. We studied the use of fractional (1/5 and 1/10) doses of a licensed vaccine to assess their non-inferiority compared with the normal full dose. In a randomized trial in Uganda, we found that the immune response and safety of a 1/5 dose were comparable to those of the full dose for three serogroups (A, Y, W135), though not for the fourth (C). In light of current shortages of meningococcal vaccines and their importance in fighting meningitis epidemics around the world, we suggest that fractional doses be considered in mass vaccination campaigns.
Value of information analytical methods: Report 2 of the ISPOR value of information analysis emerging good practices task force
The allocation of health care resources among competing priorities requires an assessment of the expected costs and health effects of investing resources in those activities, and of the opportunity cost of the expenditure. To date, much effort has been devoted to assessing the expected costs and health effects, but there remains an important need to also reflect the consequences of uncertainty in resource allocation decisions and the value of further research to reduce that uncertainty. Decisions made under uncertainty may turn out to be suboptimal, resulting in health loss. Consequently, there may be value in reducing uncertainty, through the collection of new evidence, to better inform resource decisions. This value can be quantified using Value of Information (VOI) analysis. This report, from the ISPOR VOI Task Force, describes methods for computing four VOI measures: the Expected Value of Perfect Information (EVPI), Expected Value of Partial Perfect Information (EVPPI), Expected Value of Sample Information (EVSI), and Expected Net Benefit of Sampling (ENBS). Several methods exist for computing EVPPI and EVSI, and this report provides guidance on selecting the most appropriate method based on the features of the decision problem. The report provides a number of recommendations for good practice when planning, undertaking or reviewing VOI analyses. The software needed to compute VOI is discussed, and areas for future research are highlighted.
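As a rough sketch of the simplest of these measures, per-person EVPI can be computed directly from probabilistic sensitivity analysis (PSA) output as the expectation of the best net benefit under each parameter draw minus the best expected net benefit. The NumPy example below uses entirely hypothetical costs, QALYs and willingness-to-pay threshold; it illustrates the general formula and is not the task force's reference implementation.

import numpy as np

rng = np.random.default_rng(1)

n_sims, wtp = 10_000, 20_000          # PSA draws; willingness to pay per QALY

# Hypothetical PSA samples for two strategies (costs and QALYs per patient).
cost = np.column_stack([rng.normal(1_000, 100, n_sims),
                        rng.normal(1_600, 200, n_sims)])
qaly = np.column_stack([rng.normal(0.70, 0.05, n_sims),
                        rng.normal(0.75, 0.05, n_sims)])

nb = wtp * qaly - cost                # net monetary benefit, shape (n_sims, 2)

# EVPI = E[max_d NB(d)] - max_d E[NB(d)]: the expected gain from resolving all
# parameter uncertainty before choosing, versus choosing on expected net benefit.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"Per-person EVPI at a willingness to pay of {wtp}: {evpi:.0f}")

EVPPI and EVSI generally call for nested simulation or regression-based approximations rather than this direct calculation, which is where the method-selection guidance in the report applies.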
Economic Evaluations of Anticancer Drugs Based on Medico-Administrative Databases: A Systematic Literature Review
Background: Oncology is among the most active therapeutic fields in terms of new drug development projects, with increasingly expensive drugs. The expected clinical benefit and cost effectiveness of these treatments in clinical practice have yet to be fully confirmed. Health medico-administrative databases may be useful for assessing the value of anticancer drugs with real-world data. Objective: The objectives of our systematic literature review (SLR) were to analyse economic evaluations of anticancer drugs based on health medico-administrative databases, to assess the quality of these evaluations, and to identify the inputs from such databases that can be used in economic evaluations of anticancer drugs. Methods: We performed an SLR by using PubMed and Web of Science articles published from January 2008 to January 2019. The search strategy focused on anticancer drug cost-effectiveness analyses (CEAs)/cost-utility analyses (CUAs) that were entirely based on medico-administrative databases. The review reported the main choices of economic evaluation methods in the analyses. The quality of the articles was assessed using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) and risk of bias assessment checklists. Results: Of the 306 records identified in PubMed, 12 articles were selected, and one additional article was identified through Web of Science. Ten of the 13 articles were CEAs and three were CUAs. Most of the analyses were carried out in North America (n = 11). The economic metric used was the cost per life-year gained (n = 10) or cost per quality-adjusted life-year (n = 3). Reporting of the target analysis population and strategies in the articles was in agreement with the CHEERS guidelines. The structural assumptions underpinning the economic models displayed the poorest reporting quality among the items analysed. Representativeness bias (n = 11) and the issue of censored medical costs (n = 8) were the most frequently analysed risks. Conclusion: A comparison of the economic results was not relevant due to the high heterogeneity of the selected studies. Our SLR highlighted the benefits and pitfalls related to the use of medico-administrative databases in the economic evaluations of anticancer drugs.
Towards a New Framework for Addressing Structural Uncertainty in Health Technology Assessment Guidelines
Providing scientific advice and recommendations for public decision making entails identifying, selecting and weighing evidence derived from multiple sources of information through a systematic approach, while taking into account ethical, cultural and societal factors. Integrated in the evaluation process are exchanges between regulatory agencies, private firms, scientific experts and government representatives. In the case of drugs and medical devices, health technology assessment (HTA) agencies are increasingly commissioned to evaluate innovations in order to provide government with recommendations and advice on reimbursement and/or pricing. To undertake this task, HTA agencies [1–6] in Europe and elsewhere have developed methodological guidelines on the economic evaluation of health technologies [7]. One component of these guidelines deals with ways for both manufacturers (pharmaceutical and medical device firms) and HTA agency evaluators (modelers, economists and public health experts) to address uncertainty. Several types of uncertainty have indeed been identified in HTA: methodological, parameter and structural uncertainty. Most guidelines describe quite well how to deal with the first two categories, although there is still room for improvement. However, recommendations about how to tackle structural uncertainty remain largely elusive. HTA agencies and decision makers may thus be exposed to oversimplified assessments and recommendations that put aside complex forms of uncertainty such as structural ‘deep’ uncertainty [8]. This editorial is not intended to promote new approaches to exploring structural uncertainty, but rather to emphasize concerns related to the topic, such as its definition and analysis. Our aim is therefore to highlight the need to renew the analytical framework guidance for HTA.