
    Decrease in Incidence of Colorectal Cancer Among Individuals 50 Years or Older After Recommendations for Population-based Screening

    BACKGROUND & AIMS: The incidence of colorectal cancer (CRC) in the United States is increasing among adults younger than 50 years, but incidence has decreased among older populations after population-based screening was recommended in the late 1980s. Blacks have higher incidence than whites. These patterns have prompted suggestions to lower the screening age for average-risk populations or in blacks. At the same time, there has been controversy over whether reductions in CRC incidence can be attributed to screening. We examined age-related and race-related differences in CRC incidence during a 40-year time period. METHODS: We determined the age-standardized incidence of CRC from 1975 through 2013 by using the population-based Surveillance, Epidemiology, and End Results (SEER) program of cancer registries. We calculated incidence for 5-year age categories (20-24 years through 80-84 years and 85 years or older) for different time periods (1975-1979, 1980-1984, 1985-1989, 1990-1994, 1995-1999, 2000-2004, 2005-2009, and 2010-2013), tumor subsite (proximal colon, distal colon, and rectum), and stages at diagnosis (localized, regional, and distant). Analyses were stratified by race (white vs black). RESULTS: There were 450,682 incident cases of CRC reported to the SEER registries during the entire period (1975-2013). Overall incidence was 75.5/100,000 white persons and 83.6/100,000 black persons. CRC incidence peaked during 1980 through 1989 and began to decrease in 1990. In whites and blacks, the decreases in incidence between the time periods of 1980-1984 and 2010-2013 were limited to the screening-age population (ages 50 years or older). Between these time periods, there was a 40% decrease in incidence among whites compared with a 26% decrease among blacks. Decreases in incidence were greater for cancers of the distal colon and rectum, and reductions in these cancers were greater among whites than blacks.
CRC incidence among persons younger than 50 years decreased slightly between 1975-1979 and 1990. However, among persons 20-49 years old, CRC incidence increased from 8.3/100,000 persons in 1990-1994 to 11.4/100,000 persons in 2010-2013; incidence rates in younger adults were similar for whites and blacks. CONCLUSIONS: On the basis of an analysis of the SEER cancer registries from 1975 through 2013, CRC incidence decreased only among individuals 50 years or older between the time periods of 1980-1984 and 2010-2013. Incidence increased modestly among individuals 20-49 years old between the time periods of 1990-1994 and 2010-2013. The decision of whether to recommend screening for younger populations requires a formal analysis of risks and benefits. Our observed trends provide compelling evidence that screening has had an important role in reducing CRC incidence.
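The rates above are age-standardized, which weights each age stratum's rate by a fixed standard population so that rates from different periods are comparable. A minimal sketch of direct age standardization follows; the strata, counts, and weights are illustrative inventions, not SEER data.

```python
def age_standardized_rate(cases, person_years, std_pop_weights):
    """Directly age-standardized rate per 100,000 person-years:
    a weighted sum of age-specific rates, with weights taken from
    a fixed standard population."""
    assert len(cases) == len(person_years) == len(std_pop_weights)
    assert abs(sum(std_pop_weights) - 1.0) < 1e-9  # weights must sum to 1
    rate = 0.0
    for c, py, w in zip(cases, person_years, std_pop_weights):
        rate += (c / py) * w  # age-specific rate times standard weight
    return rate * 100_000

# Two hypothetical age strata (e.g., 50-64 and 65+ years)
cases = [120, 300]
person_years = [400_000, 250_000]
weights = [0.6, 0.4]  # assumed share of the standard population per stratum
print(age_standardized_rate(cases, person_years, weights))
```

Because the weights are fixed, a change in the standardized rate between periods reflects changes in age-specific rates rather than shifts in the population's age structure.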

    Clinical utility of genetic testing in 201 preschool children with inherited eye disorders

    From Springer Nature via Jisc Publications Router. History: received 2019-09-23, registration 2019-11-25, accepted 2019-11-25, pub-electronic 2019-12-18, online 2019-12-18, pub-print 2020-04-01. Publication status: Published.
    Abstract: Purpose: A key property to consider in all genetic tests is clinical utility, the ability of the test to influence patient management and health outcomes. Here we assess the current clinical utility of genetic testing in diverse pediatric inherited eye disorders (IEDs). Methods: Two hundred one unrelated children (0–5 years old) with IEDs were ascertained through the database of the North West Genomic Laboratory Hub, Manchester, UK. The cohort was collected over a 7-year period (2011–2018) and included 74 children with bilateral cataracts, 8 with bilateral ectopia lentis, 28 with bilateral anterior segment dysgenesis, 32 with albinism, and 59 with inherited retinal disorders. All participants underwent panel-based genetic testing. Results: The diagnostic yield of genetic testing for the cohort was 64% (ranging from 39% to 91% depending on the condition). The test result led to altered management (including preventing additional investigations or resulting in the introduction of personalized surveillance measures) in 33% of probands (75% for ectopia lentis, 50% for cataracts, 33% for inherited retinal disorders, 7% for anterior segment dysgenesis, 3% for albinism). Conclusion: Genetic testing helped identify an etiological diagnosis in the majority of preschool children with IEDs. This prevented additional unnecessary testing and provided the opportunity for anticipatory guidance in significant subsets of patients.

    Comparison of in silico strategies to prioritize rare genomic variants impacting RNA splicing for the diagnosis of genomic disorders

    From Springer Nature via Jisc Publications Router. History: received 2021-03-09, accepted 2021-09-13, registration 2021-10-01, online 2021-10-18, pub-electronic 2021-10-18, collection 2021-12. Publication status: Published. Funder: Wellcome Trust; Grant(s): RP-2016-07-011, 200990/Z/16/Z. Funder: Health Education England.
    Abstract: The development of computational methods to assess pathogenicity of pre-messenger RNA splicing variants is critical for diagnosis of human disease. We assessed the capability of eight algorithms, and a consensus approach, to prioritize 249 variants of uncertain significance (VUSs) that underwent splicing functional analyses. The capability of algorithms to classify VUSs lying outside the immediate splice site as ‘pathogenic’ or ‘benign’ is likely to have a substantial impact on diagnostic testing. We show that SpliceAI is the best single strategy in this regard, but that combined usage of tools using a weighted approach can increase accuracy further. We incorporated prioritization strategies alongside diagnostic testing for rare disorders. We show that 15% of 2783 referred individuals carry rare variants expected to impact splicing that were not initially identified as ‘pathogenic’ or ‘likely pathogenic’; one in five of these cases could lead to new or refined diagnoses.
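The "weighted approach" combining several tools can be pictured as a weighted mean of per-tool scores compared against a decision cutoff. The sketch below is only illustrative: the tool names are real, but the weights, scores, and threshold are assumptions, not the values fitted in the study.

```python
def consensus_score(scores, weights):
    """Weighted mean of per-tool pathogenicity scores,
    each assumed to be scaled to the 0-1 range."""
    total_w = sum(weights[tool] for tool in scores)
    return sum(scores[tool] * weights[tool] for tool in scores) / total_w

# Hypothetical weights favouring SpliceAI, per the abstract's finding
# that it is the strongest single tool (values are made up).
weights = {"SpliceAI": 0.5, "CADD": 0.2, "MaxEntScan": 0.3}

# Hypothetical scores for one variant of uncertain significance
variant = {"SpliceAI": 0.82, "CADD": 0.40, "MaxEntScan": 0.65}

THRESHOLD = 0.5  # assumed cutoff for prioritizing functional follow-up
print(consensus_score(variant, weights) >= THRESHOLD)
```

In practice the weights and threshold would be tuned against variants with known functional outcomes, which is what the 249 functionally assayed VUSs enable.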

    A new polygenic score for refractive error improves detection of children at risk of high myopia but not the prediction of those at risk of myopic macular degeneration

    Background High myopia (HM), defined as a spherical equivalent refractive error (SER) ≤ −6.00 diopters (D), is a leading cause of sight impairment, through myopic macular degeneration (MMD). We aimed to derive an improved polygenic score (PGS) for predicting children at risk of HM and to test if a PGS is predictive of MMD after accounting for SER. Methods The PGS was derived from genome-wide association studies in participants of UK Biobank, CREAM Consortium, and Genetic Epidemiology Research on Adult Health and Aging. MMD severity was quantified by a deep learning algorithm. Prediction of HM was quantified as the area under the receiver operating characteristic curve (AUROC). Prediction of severe MMD was assessed by logistic regression. Findings In independent samples of European, African, South Asian and East Asian ancestry, the PGS explained 19% (95% confidence interval 17–21%), 2% (1–3%), 8% (7–10%) and 6% (3–9%) of the variation in SER, respectively. The AUROC for HM in these samples was 0.78 (0.75–0.81), 0.58 (0.53–0.64), 0.71 (0.69–0.74) and 0.67 (0.62–0.72), respectively. The PGS was not associated with the risk of MMD after accounting for SER: OR = 1.07 (0.92–1.24). Interpretation Performance of the PGS approached the level required for clinical utility in Europeans but not in other ancestries. A PGS for refractive error was not predictive of MMD risk once SER was accounted for.
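The AUROC reported above has a simple probabilistic reading: the chance that a randomly chosen case (here, a child with HM) receives a higher polygenic score than a randomly chosen control. A minimal sketch of that computation follows; the labels and score values are invented for illustration, not study data.

```python
def auroc(labels, scores):
    """AUROC via the pairwise-comparison identity:
    P(score of a random positive > score of a random negative),
    with ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]               # 1 = high myopia (hypothetical)
scores = [2.1, 0.9, 0.3, 1.0, 1.7, -0.2]  # hypothetical PGS values
print(auroc(labels, scores))
```

An AUROC of 0.5 is chance level, which is why the 0.58 (0.53–0.64) figure in the African-ancestry sample signals poor transferability of the score.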

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Abstract Background Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. In phase 4, the top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion This is a step toward environmentally sustainable operating environments with actionable interventions applicable to both high-income and low–middle-income countries.