Computer modeling of diabetes and its transparency: a report on the Eighth Mount Hood Challenge
Objectives
The Eighth Mount Hood Challenge (held in St. Gallen, Switzerland, in September 2016) evaluated the transparency of model input documentation from two published health economics studies and developed guidelines for improving transparency in the reporting of input data underlying model-based economic analyses in diabetes.
Methods
Participating modeling groups were asked to reproduce the results of two published studies using the input data described in those articles. Gaps in input data were filled with assumptions reported by the modeling groups. Goodness of fit between the results reported in the target studies and the groups' replicated outputs was evaluated using the slope of the linear regression line and the coefficient of determination (R²). After a general discussion of the results, a diabetes-specific checklist for the transparency of model inputs was developed.
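The goodness-of-fit metric described above can be sketched as follows: regress replicated outputs on the target study's reported values, then read off the slope (ideally near 1) and R² (ideally near 1). This is a minimal illustration with made-up numbers, not the challenge's actual data or code.

```python
import numpy as np

def goodness_of_fit(reported, replicated):
    """Fit replicated = slope * reported + intercept and return (slope, R^2).

    A slope near 1 with R^2 near 1 indicates the replication closely
    tracks the target study's reported results.
    """
    reported = np.asarray(reported, dtype=float)
    replicated = np.asarray(replicated, dtype=float)
    slope, intercept = np.polyfit(reported, replicated, 1)
    predicted = slope * reported + intercept
    ss_res = np.sum((replicated - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((replicated - replicated.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Illustrative values only (e.g. reported vs. replicated outcome values):
slope, r2 = goodness_of_fit([10.0, 20.0, 30.0, 40.0],
                            [11.0, 19.5, 31.0, 39.0])
```

A replication that systematically over- or under-shoots shows up as a slope away from 1 even when R² is high, which is why the challenge reports both statistics.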
Results
Seven groups participated in the transparency challenge. In both studies, the reporting of key model input parameters, including the baseline characteristics of simulated patients, treatment effects and treatment intensification threshold assumptions, the evolution of treatment effects, the prediction of complications, and cost data, was inadequately transparent, and some inputs were missing altogether. Not surprisingly, goodness of fit was better for the study that reported its input data more transparently. To improve transparency in diabetes modeling, the Diabetes Modeling Input Checklist was developed, listing the minimal input data required for reproducibility in most diabetes modeling applications.
Conclusions
Transparency of diabetes model inputs is important to the reproducibility and credibility of simulation results. In the Eighth Mount Hood Challenge, the Diabetes Modeling Input Checklist was developed with the goal of improving the transparency of input data reporting and the reproducibility of diabetes simulation model results.
Evaluating the ability of economic models of diabetes to simulate new cardiovascular outcomes trials: a report on the Ninth Mount Hood Diabetes Challenge
Objectives
The cardiovascular outcomes challenge examined the predictive accuracy of 10 diabetes models in estimating hard outcomes in 2 recent cardiovascular outcomes trials (CVOTs) and whether recalibration can be used to improve replication.
Methods
Participating groups were asked to reproduce the results of the Empagliflozin Cardiovascular Outcome Event Trial in Type 2 Diabetes Mellitus Patients (EMPA-REG OUTCOME) and the Canagliflozin Cardiovascular Assessment Study (CANVAS) Program. Calibration was performed, and additional analyses assessed the models' ability to replicate absolute event rates and hazard ratios (HRs), as well as the generalizability of calibration across CVOTs within a drug class.
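In its simplest form, the arm-specific calibration assessed here can be viewed as a multiplicative adjustment that scales a model's predicted event rate to match the rate observed in a trial arm. The sketch below uses purely illustrative numbers and function names; it is not any participating group's actual procedure.

```python
def calibration_factor(observed_rate, predicted_rate):
    """Multiplicative factor that scales a model's predicted event rate
    so the calibrated prediction reproduces the observed trial rate."""
    return observed_rate / predicted_rate

def calibrated_rate(predicted_rate, factor):
    """Apply an arm-specific calibration factor to a model prediction."""
    return predicted_rate * factor

# Illustrative (made-up) event rates per 100 patient-years:
placebo_observed, placebo_predicted = 4.0, 5.0
active_observed, active_predicted = 3.0, 4.5

# Calibrating each arm individually reproduces the observed rates and
# hence the observed rate ratio (a crude stand-in for the trial HR):
f_placebo = calibration_factor(placebo_observed, placebo_predicted)
f_active = calibration_factor(active_observed, active_predicted)
hr_calibrated = (calibrated_rate(active_predicted, f_active)
                 / calibrated_rate(placebo_predicted, f_placebo))

# Calibrating only to the placebo arm and applying that factor to both
# arms preserves the model's own (uncalibrated) treatment effect:
hr_placebo_only = (calibrated_rate(active_predicted, f_placebo)
                   / calibrated_rate(placebo_predicted, f_placebo))
```

This distinction is why calibrating to the placebo arm alone can fix absolute event rates without correcting the hazard ratio, whereas calibrating both arms individually can match both.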
Results
Ten groups submitted results. Uncalibrated models underestimated treatment effects (ie, HRs) for both trials. Calibration to the placebo arm of EMPA-REG OUTCOME greatly improved the prediction of event rates in the placebo arm, but less so in the active comparator arm. Calibrating to both arms of EMPA-REG OUTCOME individually enabled replication of the observed outcomes. Using EMPA-REG OUTCOME–calibrated models to predict CANVAS Program outcomes was an improvement over uncalibrated models but failed to capture treatment effects adequately. Applying canagliflozin HRs directly provided the best fit.
Conclusions
The Ninth Mount Hood Diabetes Challenge demonstrated that commonly used risk equations were generally unable to capture recent CVOT treatment effects, but that calibration of the risk equations can improve predictive accuracy. Although calibration is a practical approach to improving predictive accuracy for CVOT outcomes, it does not generalize readily to other settings, time horizons, and comparators. New methods and/or new risk equations for capturing these CV benefits are needed.