
    Brain Age from the Electroencephalogram of Sleep

    The human electroencephalogram (EEG) of sleep undergoes profound changes with age. These changes can be conceptualized as "brain age", which can be compared to an age norm to reflect deviation from the normal aging process. Here, we develop an interpretable machine learning model to predict brain age based on two large sleep EEG datasets: the Massachusetts General Hospital sleep lab dataset (MGH, N = 2,621) covering ages 18 to 80; and the Sleep Heart Health Study (SHHS, N = 3,520) covering ages 40 to 80. The model obtains a mean absolute deviation of 8.1 years between brain age and chronological age in the healthy participants in the MGH dataset. As validation, we analyze a subset of SHHS containing longitudinal EEGs 5 years apart, which shows a 5.5-year difference in brain age. Participants with neurological and psychiatric diseases, as well as those taking diabetes and hypertension medications, show an older brain age compared to chronological age. The findings raise the prospect of using sleep EEG as a biomarker for healthy brain aging.

    How AI should be used in radiology: assessing ambiguity and completeness of intended use statements of commercial AI products

    Background: Intended use statements (IUSs) are mandatory to obtain regulatory clearance for artificial intelligence (AI)-based medical devices in the European Union. To guide the safe use of AI-based medical devices, IUSs need to contain comprehensive and understandable information. This study analyzes the IUSs of CE-marked AI products listed on AIforRadiology.com for ambiguity and completeness. Methods: We retrieved 157 IUSs of CE-marked AI products listed on AIforRadiology.com in September 2022. Duplicate products (n = 1), discontinued products (n = 3), and duplicate statements (n = 14) were excluded. The resulting IUSs were assessed for the presence of six items: medical indication, part of the body, patient population, user profile, use environment, and operating principle. Disclaimers, defined as contraindications or warnings in the IUS, were identified and compared with the claims. Results: Of 139 AI products, the majority (n = 78) of IUSs mentioned three or fewer items; IUSs of only 7 products mentioned all six items. The intended body part (n = 115) and the operating principle (n = 116) were the most frequently mentioned components, while the intended use environment (n = 24) and intended patient population (n = 29) were mentioned less frequently. Fifty-six statements contained disclaimers, which conflicted with the claims in 13 cases. Conclusion: The majority of IUSs of CE-marked AI-based medical devices lack substantial information and, in a few cases, contradict the claims of the product. Critical relevance statement: To ensure correct usage and to avoid off-label use or foreseeable misuse of AI-based medical devices in radiology, manufacturers are encouraged to provide more comprehensive and less ambiguous intended use statements. Key points: • Radiologists must know AI products’ intended use to avoid off-label use or misuse. • Ninety-five percent (n = 132/139) of the intended use statements analyzed were incomplete. • Nine percent (n = 13) of the intended use statements held disclaimers contradicting the claim of the AI product. • Manufacturers and regulatory bodies must ensure that intended use statements are comprehensive.
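
    For intuition about the completeness tally reported above, here is a minimal sketch in Python, assuming each intended use statement has already been annotated (e.g., by manual review) with which of the six items it mentions; the product names and annotations are hypothetical.

```python
# Hypothetical annotations: which of the six items each intended use
# statement (IUS) mentions. Real data would come from manual review.
ITEMS = ["medical indication", "part of the body", "patient population",
         "user profile", "use environment", "operating principle"]

ius_annotations = {
    "product_a": {"part of the body", "operating principle"},
    "product_b": {"medical indication", "part of the body",
                  "patient population", "operating principle"},
    "product_c": set(ITEMS),  # a fully complete statement
}

# Count how many of the six items each IUS mentions.
counts = {name: len(items) for name, items in ius_annotations.items()}

incomplete = sum(1 for n in counts.values() if n < len(ITEMS))
three_or_fewer = sum(1 for n in counts.values() if n <= 3)

print(f"Incomplete IUSs: {incomplete}/{len(counts)}")
print(f"IUSs mentioning three or fewer items: {three_or_fewer}/{len(counts)}")
```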

    Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review

    Due to the upfront role of magnetic resonance imaging (MRI) in prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies published between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of 59 included studies showed that most research has been conducted on PCa lesion classification (66%), followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches for performance validation. Furthermore, 85% of the studies reported stand-alone diagnostic accuracy, whereas 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof for the clinical utility of PCa AI applications. To introduce AI into the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated through external validation and clinical workflow experiments.

    Grand-Challenge.org

    A platform for end-to-end development of machine learning solutions in biomedical imaging. If you use this software, please cite it using these metadata.

    Brain age from the electroencephalogram of sleep

    The human electroencephalogram (EEG) of sleep undergoes profound changes with age. These changes can be conceptualized as “brain age (BA),” which can be compared to chronological age to reflect the degree of deviation from normal aging. Here, we develop an interpretable machine learning model to predict BA based on 2 large sleep EEG data sets: the Massachusetts General Hospital (MGH) sleep lab data set (N = 2532; ages 18–80); and the Sleep Heart Health Study (SHHS, N = 1974; ages 40–80). The model obtains a mean absolute deviation of 7.6 years between BA and chronological age (CA) in healthy participants in the MGH data set. As validation, a subset of SHHS containing longitudinal EEGs 5.2 years apart shows an average of 5.4 years increase in BA. Participants with significant neurological or psychiatric disease exhibit a mean excess BA, or “brain age index” (BAI = BA − CA), of 4 years relative to healthy controls. Participants with hypertension and diabetes have a mean excess BA of 3.5 years. The findings raise the prospect of using the sleep EEG as a potential biomarker for healthy brain aging.
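
    As a worked illustration of the quantities described in this abstract, the sketch below computes the mean absolute deviation between predicted brain age and chronological age and the brain age index (BAI = BA − CA) for a small hypothetical cohort; the numbers are illustrative and not taken from the study.

```python
import numpy as np

# Hypothetical predictions from a brain-age model and the matching
# chronological ages (years); values are illustrative only.
brain_age = np.array([34.2, 58.9, 71.5, 45.0, 66.3])
chron_age = np.array([30.0, 55.0, 75.0, 48.0, 60.0])

# Mean absolute deviation between brain age and chronological age,
# the headline accuracy metric reported in the abstract.
mad = np.mean(np.abs(brain_age - chron_age))

# Brain age index (BAI = BA - CA): positive values indicate a brain
# that "looks" older on EEG than the participant's chronological age.
bai = brain_age - chron_age

print(f"MAD: {mad:.1f} years")
print(f"Mean BAI: {bai.mean():.1f} years")
```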

    Automated Assessment of COVID-19 Reporting and Data System and Chest CT Severity Scores in Patients Suspected of Having COVID-19 Using Artificial Intelligence

    Background: The coronavirus disease 2019 (COVID-19) pandemic has spread across the globe with alarming speed, morbidity, and mortality. Immediate triage of patients with chest infections suspected to be caused by COVID-19 using chest CT may be of assistance when results from definitive viral testing are delayed. Purpose: To develop and validate an artificial intelligence (AI) system to score the likelihood and extent of pulmonary COVID-19 on chest CT scans using the COVID-19 Reporting and Data System (CO-RADS) and CT severity scoring systems. Materials and Methods: The CO-RADS AI system consists of three deep-learning algorithms that automatically segment the five pulmonary lobes, assign a CO-RADS score for the suspicion of COVID-19, and assign a CT severity score for the degree of parenchymal involvement per lobe. This study retrospectively included patients who underwent a nonenhanced chest CT examination because of clinical suspicion of COVID-19 at two medical centers. The system was trained, validated, and tested with data from one of the centers. Data from the second center served as an external test set. Diagnostic performance and agreement with scores assigned by eight independent observers were measured using receiver operating characteristic analysis, linearly weighted kappa values, and classification accuracy. Results: A total of 105 patients (mean age, 62 years ± 16 [standard deviation]; 61 men) and 262 patients (mean age, 64 years ± 16; 154 men) were evaluated in the internal and external test sets, respectively. The system discriminated between patients with COVID-19 and those without COVID-19, with areas under the receiver operating characteristic curve of 0.95 (95% CI: 0.91, 0.98) and 0.88 (95% CI: 0.84, 0.93) for the internal and external test sets, respectively. Agreement with the eight human observers was moderate to substantial, with mean linearly weighted kappa values of 0.60 ± 0.01 for CO-RADS scores and 0.54 ± 0.01 for CT severity scores. Conclusion: With high diagnostic performance, the CO-RADS AI system correctly identified patients with COVID-19 using chest CT scans and assigned standardized CO-RADS and CT severity scores that demonstrated good agreement with findings from eight independent observers and generalized well to external data.
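
    As a pointer to how the reported evaluation metrics can be computed, the snippet below uses scikit-learn to obtain an area under the ROC curve for COVID-19 discrimination and a linearly weighted Cohen's kappa for agreement on CO-RADS categories; the patient-level values are made up for illustration and do not reproduce the study's data.

```python
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Hypothetical per-patient data, for illustration only.
rt_pcr_positive = [1, 0, 1, 1, 0, 0, 1, 0]           # reference standard
ai_suspicion_score = [0.92, 0.20, 0.75, 0.66, 0.70,  # AI probability of COVID-19
                      0.10, 0.88, 0.41]

# Area under the ROC curve: discrimination between patients with and
# without COVID-19.
auc = roc_auc_score(rt_pcr_positive, ai_suspicion_score)

# CO-RADS categories (1-5) assigned by the AI system and by a human
# observer; agreement on this ordinal scale is summarized with a
# linearly weighted Cohen's kappa.
corads_ai = [5, 2, 4, 4, 3, 1, 5, 2]
corads_reader = [5, 1, 4, 3, 3, 2, 5, 2]
kappa = cohen_kappa_score(corads_ai, corads_reader, weights="linear")

print(f"AUC: {auc:.2f}")
print(f"Linearly weighted kappa: {kappa:.2f}")
```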
