
    Clinically focused multi-cohort benchmarking as a tool for external validation of artificial intelligence algorithm performance in basic chest radiography analysis

    Artificial intelligence (AI) algorithms evaluating [supine] chest radiographs ([S]CXRs) have increased remarkably in number in recent years. Since training and validation are often performed on subsets of the same overall dataset, external validation is mandatory to reproduce results and reveal potential training errors. We applied multi-cohort benchmarking to the publicly accessible (S)CXR-analyzing AI algorithm CheXNet, comprising three clinically relevant study cohorts that differ in patient positioning ([S]CXRs), the applied reference standard (CT- or [S]CXR-based) and the possibility to compare the algorithm's classification with the reading performance of differently qualified medical experts. The study cohorts include [1] 563 CXRs acquired in the emergency unit and evaluated by 9 readers (radiologists and non-radiologists) for 4 common pathologies, [2] 6,248 SCXRs annotated by radiologists for pneumothorax presence, pneumothorax size and inserted thoracic tube material, which allowed for subgroup and confounding-bias analysis, and [3] 166 patients with SCXRs evaluated by radiologists for underlying causes of basal lung opacities, each correlated with a computed tomography scan acquired within 90 minutes of the SCXR. CheXNet non-significantly exceeded the radiology resident (RR) consensus in the detection of suspicious lung nodules (cohort [1], AUC AI/RR: 0.851/0.839, p = 0.793) and the radiological readers in the detection of basal pneumonia (cohort [3], AUC AI/reader consensus: 0.825/0.782, p = 0.390) and basal pleural effusion (cohort [3], AUC AI/reader consensus: 0.762/0.710, p = 0.336) in SCXRs, partly with AUC values higher than originally published (“Nodule”: 0.780, “Infiltration”: 0.735, “Effusion”: 0.864). The “Infiltration” classifier turned out to be highly dependent on patient positioning (best in CXR, worst in SCXR). The pneumothorax SCXR cohort [2] revealed poor algorithm performance in radiographs without inserted thoracic material and in the detection of small pneumothoraces, which can be explained by a known systematic confounding error in the algorithm's training process. The benefit of clinically relevant external validation is demonstrated by the differences in algorithm performance compared with the original publication. Our multi-cohort benchmarking finally enables the consideration of confounders, different reference standards and patient positioning, as well as comparison of AI performance with that of differently qualified medical readers.
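    The kind of AUC comparison reported above (AI score versus reader consensus against a common reference standard) can be illustrated with a short sketch. The variable names, the data and the bootstrap comparison are assumptions for illustration only; the study's evaluation code is not published here and may have used a different statistical test.

```python
# Minimal sketch of an external-validation AUC comparison, assuming:
# - y_true: binary reference-standard labels (e.g. CT-confirmed pneumonia),
# - ai_scores: CheXNet-style output probabilities for the same cases,
# - reader_scores: reader-consensus ratings mapped to an ordinal scale.
# The bootstrap test is an illustrative stand-in, not the authors' method.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_difference_pvalue(y_true, ai_scores, reader_scores, n_boot=2000):
    """Two-sided bootstrap p-value for the AI-vs-reader AUC difference."""
    y_true = np.asarray(y_true)
    ai_scores = np.asarray(ai_scores)
    reader_scores = np.asarray(reader_scores)
    observed = roc_auc_score(y_true, ai_scores) - roc_auc_score(y_true, reader_scores)
    diffs, n = [], len(y_true)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes for an AUC
            continue
        diffs.append(roc_auc_score(y_true[idx], ai_scores[idx])
                     - roc_auc_score(y_true[idx], reader_scores[idx]))
    diffs = np.asarray(diffs)
    # p-value: how often the bootstrap difference falls on either side of zero
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, min(p, 1.0)
```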

    COVID-19 Pandemic and Upcoming Influenza Season—Does an Expert’s Computed Tomography Assessment Differentially Identify COVID-19, Influenza and Pneumonias of Other Origin?

    (1) Background: Time-consuming SARS-CoV-2 RT-PCR suffers from limited sensitivity in early infection stages, whereas rapidly available chest CT can already raise suspicion of COVID-19. Nevertheless, radiologists' performance in differentiating COVID-19, especially from influenza pneumonia, is not sufficiently characterized. (2) Methods: A total of 201 pneumonia CTs were identified and divided into subgroups based on RT-PCR: 78 COVID-19 CTs, 65 influenza CTs and 62 non-COVID-19, non-influenza (NCNI) CTs. Three radiology experts (blinded to the RT-PCR results) rated pathogen-specific suspicion (separately for COVID-19, influenza, bacterial pneumonia and fungal pneumonia) on the following reading score: 0 = not typical, 1 = possible, 2 = highly suspected. Diagnostic performance was calculated with RT-PCR as the reference standard. Dependencies between the radiologists' pathogen suspicion scores were characterized with Pearson's chi-squared test of independence. (3) Results: Depending on whether the intermediate reading score 1 was counted as positive or negative, the radiologists correctly classified 83–85% (vs. NCNI) and 79–82% (vs. influenza) of COVID-19 cases (sensitivity up to 94%). In contrast, they correctly classified only 52–56% (vs. NCNI) and 50–60% (vs. COVID-19) of influenza cases. The COVID-19 scoring was more specific than the influenza scoring with respect to suspected bacterial or fungal infection. (4) Conclusions: High-accuracy COVID-19 detection by CT might expedite patient management even during the upcoming influenza season.
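    How the reported accuracy ranges arise from dichotomizing the intermediate score 1 can be made concrete with a small sketch. The arrays and numbers below are made-up placeholders that only illustrate the scoring scheme described above, not the study's data.

```python
# Illustrative sketch: dichotomizing a 0/1/2 reading score against an
# RT-PCR reference standard. 'scores' and 'rt_pcr' are toy placeholders.
import numpy as np

def diagnostic_performance(scores, rt_pcr_positive, score1_positive):
    """Sensitivity/specificity/accuracy with score 1 counted as positive or negative."""
    scores = np.asarray(scores)
    truth = np.asarray(rt_pcr_positive, dtype=bool)
    threshold = 1 if score1_positive else 2      # score >= threshold => suspected
    predicted = scores >= threshold
    tp = np.sum(predicted & truth)
    tn = np.sum(~predicted & ~truth)
    sensitivity = tp / truth.sum()
    specificity = tn / (~truth).sum()
    accuracy = (tp + tn) / truth.size
    return sensitivity, specificity, accuracy

# Toy example: one reader's COVID-19 suspicion scores vs. RT-PCR
scores = [2, 1, 0, 2, 1, 0, 2, 2, 0, 1]
rt_pcr = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
for score1_positive in (True, False):
    sens, spec, acc = diagnostic_performance(scores, rt_pcr, score1_positive)
    print(f"score 1 as {'positive' if score1_positive else 'negative'}: "
          f"sensitivity={sens:.2f}, specificity={spec:.2f}, accuracy={acc:.2f}")
```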