
    Estimating and Extrapolating Survival Using a State-Transition Modeling Approach: A Practical Application in Multiple Myeloma

    Objectives: State-transition models (STMs) applied in oncology have given limited consideration to modeling postprogression survival data. This study presents an application of an STM focusing on methods to evaluate the postprogression transition and its impact on survival predictions. Methods: Data from the lenalidomide plus dexamethasone arm of the ASPIRE trial were used to estimate transition rates for an STM. The model accounted for the competing risk between the progression and preprogression death events and included an explicit structural link between the time to progression and subsequent death. The modeled transition rates were used to simulate individual disease trajectories in a discrete event simulation framework, from which progression-free survival and overall survival over a 30-year time horizon were estimated. Survival predictions were compared with the observed trial data, matched external data, and estimates obtained from a more conventional partitioned survival analysis approach. Results: The rates of progression and preprogression death were modeled using piecewise exponential functions. The rate of postprogression mortality was modeled using an exponential function accounting for the nonlinear effect of the time to progression. The STM provided survival estimates that closely fitted the trial data and gave more plausible long-term survival predictions than the best-fitting Weibull model applied in a partitioned survival analysis. Conclusions: The fit of the STM suggested that the modeled transition rates accurately captured the underlying disease process over the modeled time horizon. The considerations of this study may apply to other settings and could facilitate wider use of STMs in oncology.
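    The structure described above translates naturally into simulation code. The following is a minimal Python sketch of that logic: competing piecewise-exponential draws for progression and preprogression death, then a postprogression survival draw whose rate depends nonlinearly on the time to progression. All rates and the log-based form of the dependence are hypothetical placeholders, not the ASPIRE-derived estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical monthly rates; the ASPIRE-derived estimates are not reproduced here.
    PROGRESSION_RATES = [(0.0, 0.020), (18.0, 0.030)]  # (interval start, rate)
    PRE_DEATH_RATES = [(0.0, 0.004), (18.0, 0.006)]

    def sample_piecewise_exponential(pieces, rng):
        """Draw an event time from a piecewise-exponential hazard.

        pieces: list of (interval_start, rate); the last interval is unbounded.
        """
        for i, (start, rate) in enumerate(pieces):
            end = pieces[i + 1][0] if i + 1 < len(pieces) else float("inf")
            t = start + rng.exponential(1.0 / rate)
            if t < end:
                return t

    def simulate_patient(rng):
        """Return (PFS time, OS time) in months for one simulated patient."""
        t_progression = sample_piecewise_exponential(PROGRESSION_RATES, rng)
        t_pre_death = sample_piecewise_exponential(PRE_DEATH_RATES, rng)
        if t_pre_death <= t_progression:  # competing risk: death before progression
            return t_pre_death, t_pre_death
        # Postprogression mortality: exponential rate with an illustrative
        # nonlinear (log) dependence on the time to progression.
        post_rate = 0.035 * np.exp(-0.4 * np.log1p(t_progression / 12.0))
        return t_progression, t_progression + rng.exponential(1.0 / post_rate)

    HORIZON = 30 * 12  # 30-year horizon in months
    sims = np.array([simulate_patient(rng) for _ in range(10_000)])
    pfs, overall = np.minimum(sims, HORIZON).T
    print(f"median PFS: {np.median(pfs):.1f} mo, median OS: {np.median(overall):.1f} mo")
    ```

    Sampling interval by interval is valid because the exponential distribution is memoryless: surviving past an interval boundary simply restarts the draw at the next interval's rate.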

    Identifying key factors for the effectiveness of pancreatic cancer screening: A model-based analysis

    Pancreatic cancer (PC) survival is poor, as detection usually occurs late, when treatment options are limited. Screening of high-risk individuals may enable early detection and a more favorable prognosis. Knowledge gaps prohibit establishing the effectiveness of screening. We developed a Microsimulation Screening Analysis model to analyze the impact of relevant uncertainties on the effect of PC screening in high-risk individuals. The model simulates two base cases: one in which lesions always progress to PC and one in which indolent and faster-progressing lesions coexist. For each base case, the effect of annual and 5-yearly screening with endoscopic ultrasonography/magnetic resonance imaging was evaluated. The impact of variation in PC risk, screening test characteristics, and surgery-related mortality was evaluated using sensitivity analyses. Screening resulted in a reduction of PC mortality by at least 16% in all simulated scenarios. This reduction depended strongly on the natural disease course (annual screening: −57% for “Progressive-only” vs −41% for “Indolent Included”). The number of screening and surveillance tests needed to prevent one cancer death was affected most by PC risk. A 10% increase in test sensitivity reduced mortality by 1.9% at most. Test specificity is important for the number of surveillance tests. In conclusion, screening reduces PC mortality in all modeled scenarios. The natural disease course and PC risk strongly determine the effectiveness of screening. Test sensitivity appears to be less influential than specificity. Future research should gain more insight into PC pathobiology to establish the true value of PC screening in high-risk individuals.
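    To make the role of these uncertainties concrete, the toy Python sketch below compares no screening against annual screening under an "Indolent Included" style assumption. Every parameter is an invented placeholder and late-detected cancer is crudely treated as fatal; the published microsimulation model is far more detailed.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000  # simulated high-risk individuals

    # Invented placeholder parameters (not the published model inputs)
    P_LESION = 0.05        # lifetime probability of developing a precursor lesion
    P_PROGRESSIVE = 0.7    # "Indolent Included": fraction of lesions that progress to PC
    SOJOURN_MEAN = 4.0     # mean years a progressive lesion is screen-detectable
    SENSITIVITY = 0.8      # per-test sensitivity of EUS/MRI for a detectable lesion

    def pc_deaths(screen_interval, rng):
        """Count PC deaths; screen_interval is in years, None means no screening."""
        deaths = 0
        for _ in range(N):
            if rng.random() > P_LESION or rng.random() > P_PROGRESSIVE:
                continue  # no lesion, or an indolent lesion: no PC death
            if screen_interval is None:
                deaths += 1  # toy assumption: late detection is fatal
                continue
            sojourn = rng.exponential(SOJOURN_MEAN)
            n_screens = int(sojourn // screen_interval)  # tests in the detectable window
            if rng.random() < (1 - SENSITIVITY) ** n_screens:
                deaths += 1  # lesion missed at every screen
        return deaths

    no_screening = pc_deaths(None, rng)
    annual = pc_deaths(1.0, rng)
    print(f"PC mortality reduction with annual screening: {1 - annual / no_screening:.0%}")
    ```

    Varying SENSITIVITY, P_LESION, or P_PROGRESSIVE in this sketch mimics, very loosely, the sensitivity analyses described above.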

    Optimizing Management of Patients With Barrett's Esophagus and Low-Grade or No Dysplasia Based on Comparative Modeling

    Background & Aims: Endoscopic treatment is recommended for patients with Barrett's esophagus (BE) with high-grade dysplasia, yet clinical management recommendations are inconsistent for patients with BE without dysplasia (NDBE) or with low-grade dysplasia (LGD). We used a comparative modeling analysis to identify optimal management strategies for these patients. Methods: We used 3 independent population-based models to simulate cohorts of 60-year-old individuals with BE in the United States. We followed up each cohort until death without surveillance and treatment (natural disease progression), compared with 78 different strategies of management for patients with NDBE or LGD. We determined the optimal strategy using cost-effectiveness analyses, at a willingness-to-pay threshold of 100,000perqualityadjustedlifeyear(QALY).Results:Inthe3models,theaveragecumulativeincidenceofesophagealadenocarcinomawas111cases,withcoststotaling100,000 per quality-adjusted life-year (QALY). Results: In the 3 models, the average cumulative incidence of esophageal adenocarcinoma was 111 cases, with costs totaling 5.7 million per 1000 men with BE. Surveillance and treatment of men with BE prevented 23% to 75% of cases of esophageal adenocarcinoma, but increased costs to 6.2to6.2 to 17.3 million per 1000 men with BE. The optimal strategy was surveillance every 3 years for men with NDBE and treatment of LGD after confirmation by repeat endoscopy (incremental cost-effectiveness ratio, 53,044/QALY).TheaverageresultsforwomenwereconsistentwiththeresultsformenforLGDmanagement,buttheoptimalsurveillanceintervalforwomenwithNDBEwas5years(incrementalcosteffectivenessratio,53,044/QALY). The average results for women were consistent with the results for men for LGD management, but the optimal surveillance interval for women with NDBE was 5 years (incremental cost-effectiveness ratio, 36,045/QALY). Conclusions: Based on analyses from 3 population-based models, the optimal management strategy for patient with BE and LGD is endoscopic eradication, but only after LGD is confirmed by a repeat endoscopy. The optimal strategy for patients with NDBE is endoscopic surveillance, using a 3-year interval for men and a 5-year interval for women

    High-dose alkylating chemotherapy in BRCA-altered triple-negative breast cancer: the randomized phase III NeoTN trial

    Exploratory analyses of high-dose alkylating chemotherapy trials have suggested that BRCA1- or BRCA2-pathway-altered (BRCA-altered) breast cancer might be particularly sensitive to this type of treatment. In this study, patients with BRCA-altered tumors who had received three initial courses of dose-dense doxorubicin and cyclophosphamide (ddAC) were randomized between a fourth ddAC course followed by high-dose carboplatin-thiotepa-cyclophosphamide or conventional chemotherapy (initially ddAC only or ddAC-capecitabine/docetaxel [CD] depending on MRI response; after amendment, ddAC-carboplatin/paclitaxel [CP] for everyone). The primary endpoint was the neoadjuvant response index (NRI). Secondary endpoints included recurrence-free survival (RFS) and overall survival (OS). In total, 122 patients were randomized. No difference in NRI-score distribution (p = 0.41) was found. A statistically nonsignificant RFS difference was found (HR 0.54; 95% CI 0.23–1.25; p = 0.15). Exploratory RFS analyses showed benefit in stage III (n = 35; HR 0.16; 95% CI 0.03–0.75), but not stage II (n = 86; HR 1.00; 95% CI 0.30–3.30), patients. For stage III, 4-year RFS was 46% (95% CI 24–87%), 71% (95% CI 48–100%), and 88% (95% CI 74–100%) for ddAC/ddAC-CD, ddAC-CP, and high-dose chemotherapy, respectively. No significant differences were found between high-dose and conventional chemotherapy in stage II-III, triple-negative, BRCA-altered breast cancer patients. Further research is needed to establish whether there are patients with stage III, triple-negative, BRCA-altered breast cancer for whom outcomes can be improved with high-dose alkylating chemotherapy, or whether the current standard neoadjuvant therapy including carboplatin and an immune checkpoint inhibitor is sufficient. Trial Registration: NCT01057069.

    Calibrating Parameters for Microsimulation Disease Models: A Review and Comparison of Different Goodness-of-Fit Criteria

    Background. Calibration (estimation of model parameters) compares model outcomes with observed outcomes and explores possible model parameter values to identify the set of values that provides the best fit to the data. The goodness-of-fit (GOF) criterion quantifies the difference between model and observed outcomes. There is no consensus on the most appropriate GOF criterion, because a direct performance comparison of GOF criteria in model calibration is lacking. Methods. We systematically compared the performance of commonly used GOF criteria (sum of squared errors [SSE], Pearson chi-square, and a likelihood-based approach [Poisson and/or binomial deviance functions]) in the calibration of selected parameters of the MISCAN-Colon microsimulation model for colorectal cancer. The performance of each GOF criterion was assessed by comparing 1) the root mean squared prediction error (RMSPE) of the selected parameters, 2) the computation time of the calibration procedure across various calibration scenarios, and 3) the impact on estimated cost-effectiveness ratios. Results. The likelihood-based deviance resulted in the lowest RMSPE in 4 of 6 calibration scenarios and was close to best in the other 2. The SSE had a 25 times higher RMSPE in a scenario with considerable differences in the values of observed outcomes, whereas the Pearson chi-square had a 60 times higher RMSPE in a scenario with multiple studies measuring the same outcome. In all scenarios, the SSE required the most computation time. The likelihood-based approach estimated the cost-effectiveness ratio most accurately (up to −0.15% relative difference, versus 0.44% [SSE] and 13% [Pearson chi-square]). Conclusions. The likelihood-based deviance criteria lead to accurate parameter estimation under various circumstances and are recommended over other commonly used criteria for calibrating microsimulation disease models.
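    The compared GOF criteria have simple closed forms, sketched below in Python with made-up calibration targets. The Poisson deviance is the standard 2 Σ[y log(y/μ) − (y − μ)]; the binomial variant used in the paper is analogous.

    ```python
    import numpy as np
    from scipy.special import xlogy  # xlogy(0, 0) == 0, so zero counts are handled

    def sse(observed, modeled):
        """Sum of squared errors."""
        return np.sum((observed - modeled) ** 2)

    def pearson_chi2(observed, modeled):
        """Pearson chi-square: squared errors scaled by the expected (modeled) value."""
        return np.sum((observed - modeled) ** 2 / modeled)

    def poisson_deviance(observed, modeled):
        """Likelihood-based deviance for Poisson-distributed count outcomes."""
        return 2.0 * np.sum(xlogy(observed, observed / modeled) - (observed - modeled))

    # Made-up calibration targets, e.g. cancer counts in four age groups.
    observed = np.array([120, 85, 40, 12])
    modeled = np.array([110.0, 90.0, 38.0, 15.0])
    for criterion in (sse, pearson_chi2, poisson_deviance):
        print(f"{criterion.__name__}: {criterion(observed, modeled):.2f}")
    ```

    In a calibration loop, the chosen criterion is evaluated for every candidate parameter set and the set with the smallest value is retained.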