Image segmentations produced by the AIMI Annotations initiative

Abstract

The Imaging Data Commons (IDC) (https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations because of the expense and effort required to create them manually. The increased capability of AI analysis of radiology images provides an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnU-Net [2] based models for a variety of radiology segmentation tasks on public datasets and used them to generate segmentations for IDC collections.

To validate the models' performance, roughly 10% of the predictions were manually reviewed and corrected by both a board-certified radiologist and a medical student (non-expert). In addition, the non-expert reviewed all of the AI predictions and rated them on a 5-point Likert scale. This record provides the AI segmentations, the manually corrected segmentations, and the manual scores for the inspected IDC collection images.

List of all tasks and IDC collections analyzed:

breast-fdg-pet-ct.zip
    Segmentation task: FDG-avid lesions in the breast from FDG PET/CT scans
    IDC collections: QIN-Breast
    Links: model weights, GitHub

kidney-ct.zip
    Segmentation task: Kidney, tumor, and cysts from contrast-enhanced CT scans
    IDC collections: TCGA-KIRC
    Links: model weights, GitHub

liver-ct.zip
    Segmentation task: Liver from CT scans
    IDC collections: TCGA-LIHC
    Links: model weights, GitHub

liver-mr.zip
    Segmentation task: Liver from T1 MRI scans
    IDC collections: TCGA-LIHC
    Links: model weights, GitHub

lung-ct.zip
    Segmentation task: Lung and nodules (3 mm to 30 mm) from CT scans
    IDC collections: ACRIN-NSCLC-FDG-PET, Anti-PD-1-Lung, LUNG-PET-CT-Dx, NSCLC Radiogenomics, RIDER Lung PET-CT, TCGA-LUAD, TCGA-LUSC
    Links: model weights 1, model weights 2, GitHub

lung-fdg-pet-ct.zip
    Segmentation task: Lungs and FDG-avid lesions in the lung from FDG PET/CT scans
    IDC collections: ACRIN-NSCLC-FDG-PET, Anti-PD-1-Lung, LUNG-PET-CT-Dx, NSCLC Radiogenomics, RIDER Lung PET-CT, TCGA-LUAD, TCGA-LUSC
    Links: model weights, GitHub

prostate-mr.zip
    Segmentation task: Prostate from T2 MRI scans
    IDC collections: ProstateX
    Links: model weights, GitHub

Likert score definitions:

5 (Strongly agree): Use as-is (i.e., clinically acceptable, and could be used for treatment without change).
4 (Agree): Minor edits that are not necessary. Stylistic differences, but not clinically important; the current segmentation is acceptable.
3 (Neither agree nor disagree): Minor edits that are necessary. Minor edits are those that the reviewer judges can be made in less time than starting from scratch, or that are expected to have minimal effect on treatment outcome.
2 (Disagree): Major edits. The edits required to ensure correctness are significant enough that the user would prefer to start from scratch.
1 (Strongly disagree): Unusable. The quality of the automatic annotation is so poor that it is unusable.

Each zip file in the collection corresponds to a specific segmentation task. The common folder structure is:

ai-segmentations-dcm: Contains the AI model predictions in DICOM-SEG format for all analyzed IDC collection files (a minimal reading sketch follows below).
qa-segmentations-dcm: Contains the manually corrected segmentation files, based on the AI predictions, in DICOM-SEG format. Only a fraction (~10%) of the AI predictions were corrected. Corrections were performed by a radiologist (rad*) and non-experts (ne*).
qa-results.csv: A CSV file linking the study/series UIDs with the AI segmentation file, the radiologist-corrected segmentation file, and the radiologist ratings of AI performance (a loading sketch follows below).
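For orientation, here is a minimal sketch of how a segmentation from one of these directories could be inspected with pydicom. The file path is a hypothetical placeholder, and the attribute access assumes a standard DICOM-SEG object.

```python
# Minimal sketch: inspect one DICOM-SEG file with pydicom.
# The path below is a hypothetical placeholder; use any file from an
# ai-segmentations-dcm or qa-segmentations-dcm directory.
import pydicom

seg = pydicom.dcmread("ai-segmentations-dcm/example.dcm")

# Each segment (e.g., "Kidney", "Tumor") is described in SegmentSequence.
for segment in seg.SegmentSequence:
    print(segment.SegmentNumber, segment.SegmentLabel)

# The binary masks are stored as stacked 2D frames; when a file holds
# several segments, the frames of all segments share this one array.
frames = seg.pixel_array
print("frame array shape:", frames.shape)

# UID of the image series the masks refer to.
print("segmented series:", seg.ReferencedSeriesSequence[0].SeriesInstanceUID)
```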
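This record does not spell out the exact column headers of qa-results.csv, so the names used below (qa_segmentation, likert_score) are assumptions for illustration; substitute the actual header names after a first look at the file.

```python
# Hedged sketch of working with qa-results.csv; the column names are
# assumptions, not the documented schema -- check the real header first.
import pandas as pd

qa = pd.read_csv("qa-results.csv")
print(qa.columns.tolist())  # inspect the actual schema before filtering

# Series with a manual correction (only ~10% of predictions have one).
reviewed = qa.dropna(subset=["qa_segmentation"])   # hypothetical column

# Predictions rated 4 or 5, i.e., clinically acceptable or nearly so.
high_quality = qa[qa["likert_score"] >= 4]         # hypothetical column

print(f"{len(reviewed)} series have manual corrections")
print(f"{len(high_quality)} AI segmentations rated 4 or 5")
```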
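Because each corrected file was produced by editing the corresponding AI prediction, the two SEG objects can be expected to share frame geometry. Under that assumption, and assuming a single segment per file, a rough agreement score could be computed as below; this is an illustrative sketch with hypothetical file names, not the evaluation code used for this record.

```python
# Rough Dice overlap between an AI prediction and its manual correction.
# Assumes both files contain a single segment with identical frame count,
# ordering, and geometry (plausible when the correction was made by
# editing the AI prediction). File names are hypothetical placeholders.
import numpy as np
import pydicom

ai = pydicom.dcmread("ai-segmentations-dcm/example.dcm").pixel_array.astype(bool)
qa = pydicom.dcmread("qa-segmentations-dcm/rad_example.dcm").pixel_array.astype(bool)

intersection = np.logical_and(ai, qa).sum()
dice = 2.0 * intersection / (ai.sum() + qa.sum())
print(f"Dice between AI and corrected segmentation: {dice:.3f}")
```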
