58 research outputs found

    Optimizing Outcomes of Colorectal Cancer Screening

    Colorectal cancer is a leading cause of cancer deaths. Screening for colorectal cancer is implemented in a growing number of settings, but the performance of programs is often suboptimal. In this thesis, advanced modeling, informed by empirical data, was used to identify areas for improvement in screening programs. The thesis includes studies on the effect of the test used for screening, long-term adherence to screening, the quality of colorectal examinations, time to diagnostic examination, and risk-stratified screening.

    Development and validation of colorectal cancer risk prediction tools: A comparison of models

    Background: Identification of individuals at elevated risk can improve cancer screening programmes by permitting risk-adjusted screening intensities. Previous work introduced a prognostic model using sex, age and two preceding faecal haemoglobin concentrations to predict the risk of colorectal cancer (CRC) in the next screening round. Using data of 3 screening rounds, this model attained an area under the receiver-operating-characteristic curve (AUC) of 0.78 for predicting advanced neoplasia (AN). We validated this existing logistic regression (LR) model and attempted to improve it by applying a more flexible machine-learning approach. Methods: We trained an existing LR and a newly developed random forest (RF) model using updated data from 219,257 third-round participants of the Dutch CRC screening programme until 2018. For both models, we performed two separate out-of-sample validations using 1,137,599 third-round participants after 2018 and 192,793 fourth-round participants from 2020 onwards. We evaluated the AUC and relative risks of the predicted high-risk groups for the outcomes AN and CRC. Results: For third-round participants after 2018, the AUC for predicting AN was 0.77 (95% CI: 0.76–0.77) using LR and 0.77 (95% CI: 0.77–0.77) using RF. For fourth-round participants, the AUCs were 0.73 (95% CI: 0.72–0.74) and 0.73 (95% CI: 0.72–0.74) for the LR and RF models, respectively. For both models, the 5% with the highest predicted risk had a 7-fold risk of AN compared to average, whereas the lowest 80% had a risk below the population average for third-round participants. Conclusion: The LR is a valid risk prediction method in stool-based screening programmes. Although predictive performance declined marginally, the LR model still effectively predicted risk in subsequent screening rounds. An RF did not improve CRC risk prediction compared to an LR, probably due to the limited number of available explanatory variables. 
The LR remains the preferred prediction tool because of its interpretability.
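The comparison described above, fitting a logistic regression and a random forest on the same predictors (sex, age, and two preceding faecal haemoglobin concentrations) and comparing out-of-sample AUC, can be sketched as follows. The data here are synthetic and the effect sizes invented; only the feature set and the LR-vs-RF/AUC design come from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),    # sex (0/1)
    rng.uniform(55, 75, n),   # age at invitation
    rng.exponential(10, n),   # faecal Hb, previous round 1 (ug Hb/g faeces)
    rng.exponential(10, n),   # faecal Hb, previous round 2
])
# Synthetic outcome: advanced-neoplasia risk rises with age and prior f-Hb
logit = -6 + 0.02 * X[:, 1] + 0.05 * X[:, 2] + 0.05 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Out-of-sample discrimination, as in the validation above
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"AUC LR={auc_lr:.2f}, RF={auc_rf:.2f}")
```

With only a handful of explanatory variables, the two models tend to discriminate similarly, which is consistent with the abstract's finding that the RF did not improve on the LR.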

    Prevalence and Clinical Features of Sessile Serrated Polyps: A Systematic Review

    Background & Aims: Sessile serrated polyps (SSPs) could account for a substantial proportion of colorectal cancers. We aimed to increase clarity on SSP prevalence and clinical features. Methods: We performed a systematic review of MEDLINE, Web of Science, Embase, and Cochrane databases for original studies published in English since 2000. We included studies of different populations (United States general or similar), interventions (colonoscopy, autopsy), comparisons (world regions, alternative polyp definitions, adenoma), outcomes (prevalence, clinical features), and study designs (cross-sectional). Random-effects regression was used for meta-analysis where possible. Results: We identified 74 relevant colonoscopy studies. SSP prevalence varied by world region, from 2.6% in Asia (95% confidence interval [CI], 0–5.9) to 10.5% in Australia (95% CI, 2.8–18.2). Prevalence values did not differ significantly between the United States and Europe (P = .51); the pooled prevalence was 4.6% (95% CI, 3.4–5.8), and SSPs accounted for 9.4% of polyps with malignant potential (95% CI, 6.6–12.3). The mean prevalence was higher when assessed through high-performance examinations (9.1%; 95% CI, 4.0–14.2; P = .04) and with an alternative definition of clinically relevant serrated polyps (12.3%; 95% CI, 9.3–15.4; P < .001). Increases in prevalence with age were not statistically significant, and prevalence did not differ significantly by sex. Compared with adenomas, a higher proportion of SSPs were solitary (69.0%; 95% CI, 45.9–92.1; P = .08), with diameters of 10 mm or more (19.3%; 95% CI, 12.4–26.2; P = .13) and were proximal (71.5%; 95% CI, 63.5–79.5; P = .008). The mean ages for detection of SSP without dysplasia, with any or low-grade dysplasia, and with high-grade dysplasia were 60.8 years, 65.6 years, and 70.2 years, respectively. The range for proportions of SSPs with dysplasia was 3.7%–42.9% across studies, possibly reflecting different study populations. 
Conclusions: In a systematic review, we found that SSPs are relatively uncommon compared with adenomas. More research is needed on appropriate diagnostic criteria, variations in detection, and long-term risk.
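The random-effects pooling used in the review above can be illustrated with a DerSimonian-Laird moment estimator. The study prevalences and standard errors below are made up for illustration; only the technique (random-effects pooling of prevalence with a 95% CI) is taken from the abstract.

```python
import math

# (prevalence, standard error) for a few hypothetical colonoscopy studies
studies = [(0.046, 0.008), (0.091, 0.020), (0.026, 0.012), (0.105, 0.035)]

est = [p for p, _ in studies]
w = [1 / se**2 for _, se in studies]  # inverse-variance (fixed-effect) weights
fixed = sum(wi * pi for wi, pi in zip(w, est)) / sum(w)

# Between-study variance tau^2 via the DerSimonian-Laird moment estimator
q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, est))  # Cochran's Q
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights incorporate tau^2, widening the pooled CI
w_re = [1 / (se**2 + tau2) for _, se in studies]
pooled = sum(wi * pi for wi, pi in zip(w_re, est)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled prevalence {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```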

    Impact of colorectal cancer screening on cancer-specific mortality in Europe: A systematic review

    Background: Populations differ with respect to their cancer risk and screening preferences, which may influence the performance of colorectal cancer (CRC) screening programs. This review aims to

    Risk-Stratified Screening for Colorectal Cancer Using Genetic and Environmental Risk Factors: A Cost-Effectiveness Analysis Based on Real-World Data

    Background & Aims: Previous studies on the cost-effectiveness of personalized colorectal cancer (CRC) screening were based on hypothetical performance of CRC risk prediction and did not consider the association with competing causes of death. In this study, we estimated the cost-effectiveness of risk-stratified screening using real-world data for CRC risk and competing causes of death. Methods: Risk predictions for CRC and competing causes of death from a large community-based cohort were used to stratify individuals into risk groups. A microsimulation model was used to optimize colonoscopy screening for each risk group by varying the start age (40–60 years), end age (70–85 years), and screening interval (5–15 years). The outcomes included personalized screening ages and intervals and cost-effectiveness compared with uniform colonoscopy screening (ages 45–75, every 10 years). Key assumptions were varied in sensitivity analyses. Results: Risk-stratified screening resulted in substantially different screening recommendations, ranging from a one-time colonoscopy at age 60 for low-risk individuals to a colonoscopy every 5 years from ages 40 to 85 for high-risk individuals. Nevertheless, on a population level, risk-stratified screening would increase net quality-adjusted life years gained (QALYG) by only 0.7% at equal costs to uniform screening or reduce average costs by 1.2% for equal QALYG. The benefit of risk-stratified screening improved when it was assumed to increase participation or cost less per genetic test. Conclusions: Personalized screening for CRC, accounting for competing causes of death risk, could result in highly tailored individual screening programs. However, average improvements across the population in QALYG and cost-effectiveness compared with uniform screening are small.
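The population-level comparison in the abstract, a small QALY gain at equal cost, can be expressed as incremental net health benefit at a willingness-to-pay threshold. All numbers below (QALYs, costs, and the threshold) are illustrative assumptions, not the study's results; only the 0.7%-gain-at-equal-cost scenario is taken from the abstract.

```python
def net_health_benefit(qalys: float, cost: float, wtp: float) -> float:
    """Net health benefit in QALYs: QALYs minus cost converted at the WTP threshold."""
    return qalys - cost / wtp

WTP = 100_000  # assumed willingness to pay per QALY gained (USD)

# Hypothetical outcomes per 1,000 screened individuals
uniform = {"qalys": 230.0, "cost": 2_000_000.0}
stratified = {"qalys": 231.6, "cost": 2_000_000.0}  # +0.7% QALYs at equal cost

nhb_uniform = net_health_benefit(**uniform, wtp=WTP)
nhb_strat = net_health_benefit(**stratified, wtp=WTP)
print(f"incremental NHB: {nhb_strat - nhb_uniform:.2f} QALYs per 1,000 screened")
```

Because the costs are equal in this scenario, the incremental net health benefit reduces to the raw QALY difference, which is small at the population level even though individual recommendations differ substantially.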

    The impact of the rising colorectal cancer incidence in young adults on the optimal age to start screening

    BACKGROUND: In 2016, the Microsimulation Screening Analysis-Colon (MISCAN-Colon) model was used to inform the US Preventive Services Task Force colorectal cancer (CRC) screening guidelines. In this study, 1 of 2 microsimulation analyses to inform the update of the American Cancer Society CRC screening guideline, the authors re-evaluated the optimal screening strategies in light of the increase in CRC diagnosed in young adults. METHODS: The authors adjusted the MISCAN-Colon model to reflect the higher CRC incidence in young adults, who were assumed to carry forward escalated disease risk as they age. Life-years gained (LYG; benefit), the number of colonoscopies (COL; burden) and the ratios of incremental burden to benefit (efficiency ratio [ER] = ΔCOL/ΔLYG) were projected for different screening strategies. Strategies differed with respect to test modality, ages to start (40 years, 45 years, and 50 years) and ages to stop (75 years, 80 years, and 85 years) screening, and screening intervals (depending on screening modality). The authors then determined the model-recommended strategies in a similar way as was done for the US Preventive Services Task Force, using ER thresholds in accordance with the previously accepted ER of 39. RESULTS: Because of the higher CRC incidence, model-predicted LYG from screening increased compared with the previous analyses. Consequently, the balance of burden to benefit of screening improved and now 10-yearly colonoscopy screening starting at age 4
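The efficiency-ratio selection described above can be sketched as a loop over strategies ordered by burden, accepting a more intensive strategy only while ER = ΔCOL/ΔLYG stays at or below the accepted threshold. The strategy names and burden/benefit numbers below are invented for illustration; only the ER definition and the threshold of 39 come from the abstract.

```python
# Strategies ordered by burden: (name, colonoscopies per 1,000, LYG per 1,000)
strategies = [
    ("COL 50-75, q10y", 4000, 250),
    ("COL 45-75, q10y", 4400, 263),
    ("COL 45-85, q10y", 4600, 267),
]

THRESHOLD = 39  # accepted incremental colonoscopies per life-year gained

chosen = strategies[0]
for nxt in strategies[1:]:
    d_col = nxt[1] - chosen[1]          # incremental burden (ΔCOL)
    d_lyg = nxt[2] - chosen[2]          # incremental benefit (ΔLYG)
    er = d_col / d_lyg                  # efficiency ratio
    if er <= THRESHOLD:                 # efficient enough: step up intensity
        chosen = nxt
print("recommended:", chosen[0])
```

In this toy example the step from ages 50 to 45 costs about 31 colonoscopies per life-year gained and is accepted, while extending the stop age to 85 costs 50 per life-year gained and is rejected.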

    Effect of time to diagnostic testing for breast, cervical, and colorectal cancer screening abnormalities on screening efficacy: A modeling study

    Background: Patients who receive an abnormal cancer screening result require follow-up for diagnostic testing, but the time to follow-up varies across patients and practices. Methods: We used a simulation study to estimate the change in lifetime screening benefits when time to follow-up for breast, cervical, and colorectal cancers was increased. Estimates were based on four independently developed microsimulation models that each simulated the life course of adults eligible for breast (women ages 50–74 years), cervical (women ages 21–65 years), or colorectal (adults ages 50–75 years) cancer screening. We assumed screening based on biennial mammography for breast cancer, triennial Papanicolaou testing for cervical cancer, and annual fecal immunochemical testing for colorectal cancer. For each cancer type, we simulated diagnostic testing immediately and at 3, 6, and 12 months after an abnormal screening exam. Results: We found declines in screening benefit with longer times to diagnostic testing, particularly for breast cancer screening. Compared to immediate diagnostic testing, testing at 3 months resulted in reduced screening benefit, with fewer undiscounted life years gained per 1,000 screened (breast: 17.3%, cervical: 0.8%, colorectal: 2.0% and 2.7%, from two colorectal cancer models), fewer cancers prevented (cervical: 1.4% fewer, colorectal: 0.5% and 1.7% fewer, respectively), and, for breast and colorectal cancer, a less favorable stage distribution. Conclusions: Longer times to diagnostic testing after an abnormal screening test can decrease screening effectiveness, but the impact varies substantially by cancer type. Impact: Understanding the impact of time to diagnostic testing on screening effectiveness can help inform quality improvement efforts. Cancer Epidemiol Biomarkers Prev; 27(2); 158–64. 2017 AACR

    Optimizing colorectal cancer screening by race and sex

    BACKGROUND: Colorectal cancer (CRC) risk varies by race and sex. This study, 1 of 2 microsimulation analyses to inform the 2018 American Cancer Society CRC screening guideline, explored the influence of race and sex on optimal CRC screening strategies. METHODS: Two Cancer Intervention and Surveillance Modeling Network microsimulation models, informed by US incidence data, were used to evaluate a variety of screening methods, ages to start and stop, and intervals for 4 demographic subgroups (black and white males and females) under 2 scenarios for the projected lifetime CRC risk for 40-year-olds: 1) assuming that risk had remained stable since the early screening era and 2) assuming that risk had increased proportionally to observed incidence trends under the age of 40 years. Model-based screening recommendations were based on the predicted level of benefit (life-years gained) and burden (required number of colonoscopies), the incremental burden-to-benefit ratio, and the relative efficiency in comparison with strategies with similar burdens. RESULTS: When lifetime CRC risk was assumed to be stable over time, the models differed in the recommended age to start screening for whites (45 vs 50 years) but consistently recommended screening from the age of 45 years for blacks. When CRC risk was assumed to be increased, the models recommended starting at the age of 45 years, regardless of race and sex. Strategies recommended under both scenarios included colonoscopy every 10 or 15 years, annual fecal immunochemical testing, and computed tomographic colonography every 5 years through the age of 75 years. CONCLUSIONS: Microsimulation modeling suggests that CRC screening should be considered from the age of 45 years for blacks and for whites if the lifetime risk has increased proportionally to the incidence for younger adults

    Combining Asian and European genome-wide association studies of colorectal cancer improves risk prediction across racial and ethnic populations

    Polygenic risk scores (PRS) have great potential to guide precision colorectal cancer (CRC) prevention by identifying those at higher risk to undertake targeted screening. However, current PRS using European ancestry data have sub-optimal performance in non-European ancestry populations, limiting their utility among these populations. Towards addressing this deficiency, we expand PRS development for CRC by incorporating Asian ancestry data (21,731 cases; 47,444 controls) into European ancestry training datasets (78,473 cases; 107,143 controls). The AUC estimates (95% CI) of the PRS are 0.63 (0.62–0.64), 0.59 (0.57–0.61), 0.62 (0.60–0.63), and 0.65 (0.63–0.66) in independent datasets including 1,681–3,651 cases and 8,696–115,105 controls of Asian, Black/African American, Latinx/Hispanic, and non-Hispanic White ancestry, respectively. These are significantly better than the European-centric PRS in all four major US racial and ethnic groups (p-values < 0.05). Further inclusion of non-European ancestry populations, especially Black/African American and Latinx/Hispanic, is needed to improve risk prediction and enhance equity in applying PRS in clinical practice.
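At its core, a PRS like the one developed above is a weighted sum of risk-allele dosages, with per-variant weights (log odds ratios) estimated from GWAS. The minimal sketch below uses invented variants, weights, and genotypes; only the general PRS construction is assumed.

```python
import numpy as np

# Hypothetical effect sizes (log OR) for five CRC-associated variants
weights = np.array([0.08, 0.12, -0.05, 0.10, 0.07])

# Genotype dosages (0, 1, or 2 risk alleles) for three individuals
dosages = np.array([
    [0, 1, 2, 1, 0],
    [2, 2, 0, 1, 1],
    [1, 0, 1, 0, 2],
])

prs = dosages @ weights  # one score per individual
# Standardize against the sample so scores are on a comparable scale
prs_z = (prs - prs.mean()) / prs.std()
print("raw PRS:", prs.round(3))
```

In practice the score is computed over thousands to millions of variants and standardized against an ancestry-matched reference distribution before being used to stratify screening intensity.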