
    Improving Lesion Segmentation in FDG-18 Whole-Body PET/CT scans using Multilabel approach: AutoPET II challenge

    Automatic segmentation of lesions in FDG-18 Whole Body (WB) PET/CT scans using deep learning models is instrumental for determining treatment response, optimizing dosimetry, and advancing theranostic applications in oncology. However, organs with elevated radiotracer uptake, such as the liver, spleen, brain, and bladder, pose a challenge, as these regions are often misidentified as lesions by deep learning models. To address this issue, we propose a novel approach of segmenting both organs and lesions, aiming to enhance the performance of automatic lesion segmentation methods. In this study, we assessed the effectiveness of the proposed method using the AutoPET II challenge dataset, which comprises 1014 subjects. We evaluated the impact of including additional labels and data on the segmentation performance of the model. In addition to the expert-annotated lesion labels, we introduced eight additional labels for organs: the liver, kidneys, urinary bladder, spleen, lung, brain, heart, and stomach. These labels were integrated into the dataset, and a 3D UNET model was trained within the nnUNet framework. Our results demonstrate that our method achieved the top ranking on the held-out test dataset, underscoring the potential of this approach to significantly improve lesion segmentation accuracy in FDG-18 Whole-Body PET/CT scans, ultimately benefiting cancer patients and advancing clinical practice. (Comment: AutoPET II challenge paper)
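    The multilabel idea is a data-preparation change rather than an architectural one: before training, each case's label map is extended so that organ voxels carry their own class indices alongside the lesion class, and at inference only the lesion class is kept. A minimal sketch of that merging step is below, assuming one binary mask per structure; the file layout, mask source, and label ordering are illustrative, not taken from the paper.

        # Merge an expert lesion mask with eight organ masks into one integer
        # label map for multilabel nnUNet-style training. Label 0 = background,
        # 1 = lesion, 2..9 = organs; paths and ordering are assumptions.
        import numpy as np
        import SimpleITK as sitk

        def build_multilabel_map(lesion_path, organ_paths):
            lesion = sitk.GetArrayFromImage(sitk.ReadImage(lesion_path)) > 0
            label_map = np.zeros(lesion.shape, dtype=np.uint8)
            for idx, path in enumerate(organ_paths, start=2):
                organ = sitk.GetArrayFromImage(sitk.ReadImage(path)) > 0
                label_map[organ] = idx
            label_map[lesion] = 1  # lesions overwrite organs where they overlap
            return label_map

    The plausible mechanism is that the organ classes absorb the physiologically avid regions during training, so the model has less incentive to label the liver or bladder as a lesion; discarding the organ classes at inference leaves the lesion evaluation target unchanged.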

    Laser indicated occlusal plane device: A novel technique for occlusal plane orientation

    Parallelism to the ala-tragus line is commonly used as a guide for orienting the occlusal plane with the help of a Fox plane. The accuracy of this parallelism is affected by improper judgment or patient movement. This report describes a method using a modified Fox plane that aids in occlusal plane determination. The device is placed in the patient's mouth with the maxillary occlusal rim to establish parallelism to the ala-tragus line and the interpupillary line. Adjustments are made until the laser light on the device runs parallel to the ala-tragus line and the spirit bubble is centered between the lines of the tube. This technique facilitates direct visualization of parallelism, thereby avoiding parallax errors.

    Image segmentations produced by the AIMI Annotations initiative

    The Imaging Data Commons (IDC) (https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations because of the expense and effort required to create them manually. The increased capability of AI analysis of radiology images provides an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnUNet [2] based models for a variety of radiology segmentation tasks on public datasets and used them to generate segmentations for IDC collections. To validate the models' performance, roughly 10% of the predictions were manually reviewed and corrected by both a board-certified radiologist and a medical student (non-expert). Additionally, the non-expert reviewed all of the AI predictions and rated them on a 5-point Likert scale. This record provides the AI segmentations, the manually corrected segmentations, and the manual scores for the inspected IDC collection images.
    List of all tasks and IDC collections analyzed (file; segmentation task; IDC collections; links):
    - breast-fdg-pet-ct.zip: FDG-avid lesions in the breast from FDG PET/CT scans; QIN-Breast; model weights, github
    - kidney-ct.zip: kidney, tumor, and cysts from contrast-enhanced CT scans; TCGA-KIRC; model weights, github
    - liver-ct.zip: liver from CT scans; TCGA-LIHC; model weights, github
    - liver-mr.zip: liver from T1 MRI scans; TCGA-LIHC; model weights, github
    - lung-ct.zip: lung and nodules (3 mm-30 mm) from CT scans; ACRIN-NSCLC-FDG-PET, Anti-PD-1-Lung, LUNG-PET-CT-Dx, NSCLC Radiogenomics, RIDER Lung PET-CT, TCGA-LUAD, TCGA-LUSC; model weights 1, model weights 2, github
    - lung-fdg-pet-ct.zip: lungs and FDG-avid lesions in the lung from FDG PET/CT scans; ACRIN-NSCLC-FDG-PET, Anti-PD-1-Lung, LUNG-PET-CT-Dx, NSCLC Radiogenomics, RIDER Lung PET-CT, TCGA-LUAD, TCGA-LUSC; model weights, github
    - prostate-mr.zip: prostate from T2 MRI scans; ProstateX; model weights, github
    Likert score definitions:
    - 5, Strongly agree: use as-is (i.e., clinically acceptable, and could be used for treatment without change).
    - 4, Agree: minor edits that are not necessary; stylistic differences, but not clinically important. The current segmentation is acceptable.
    - 3, Neither agree nor disagree: minor edits that are necessary. Minor edits are those the reviewer judges can be made in less time than starting from scratch, or that are expected to have minimal effect on treatment outcome.
    - 2, Disagree: major edits. The necessary edits are required to ensure correctness and are significant enough that the user would prefer to start from scratch.
    - 1, Strongly disagree: unusable. The quality of the automatic annotation is so poor that it is unusable.
    Each zip file in the collection corresponds to a specific segmentation task. The common folder structure is:
    - ai-segmentations-dcm: the AI model predictions in DICOM-SEG format for all analyzed IDC collection files.
    - qa-segmentations-dcm: manually corrected segmentation files, based on the AI predictions, in DICOM-SEG format. Only a fraction (~10%) of the AI predictions were corrected; corrections were performed by a radiologist (rad*) and non-experts (ne*).
    - qa-results.csv: a CSV file linking the study/series UIDs with the AI segmentation file, the corrected segmentation file, and the reviewer ratings of AI performance.
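    The qa-results.csv file is the natural entry point for working with this record programmatically. As a rough illustration (not part of the record itself), a short pandas sketch for summarizing ratings and locating corrected cases might look like the following; the column names are assumptions, so check the actual CSV header first.

        # Hypothetical walk-through of qa-results.csv: summarize the Likert
        # ratings and pull out series whose AI prediction was judged to need
        # major edits. Column names are assumed, not taken from the record.
        import pandas as pd

        qa = pd.read_csv("qa-results.csv")

        # Distribution of Likert ratings (5 = use as-is ... 1 = unusable).
        print(qa["likert_score"].value_counts().sort_index())

        # Series with a low-rated AI prediction and an available manual correction.
        needs_review = qa[(qa["likert_score"] <= 2) & qa["qa_segmentation_file"].notna()]
        print(needs_review[["series_uid", "ai_segmentation_file", "qa_segmentation_file"]])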

    BrainNET: Inference of Brain Network Topology Using Machine Learning

    Background: To develop a new functional magnetic resonance imaging (fMRI) network inference method, BrainNET, that uses an efficient machine learning algorithm to quantify the contributions of various regions of interest (ROIs) in the brain to a specific ROI. Methods: BrainNET is based on extremely randomized trees, adapted to estimate network topology from fMRI data and to generate an adjacency matrix representing brain network topology without reliance on arbitrary thresholds. Open-source simulated fMRI data of 50 subjects in 28 different simulations under various confounding conditions with known ground truth were used to validate the method. Performance was compared with correlation and partial correlation (PC). Real-world performance was then evaluated on a publicly available attention-deficit/hyperactivity disorder (ADHD) dataset, including 134 typically developing children (mean age: 12.03; males: 83), 75 ADHD inattentive (mean age: 11.46; males: 56), and 93 ADHD combined (mean age: 11.86; males: 77) subjects. Network topologies in ADHD were inferred using BrainNET, correlation, and PC, and graph metrics were extracted to determine differences between the ADHD groups. Results: BrainNET demonstrated excellent performance across all simulations and varying confounders in identifying the true presence of connections. In the ADHD dataset, BrainNET identified significant changes (p < 0.05) in graph metrics between groups, whereas no significant changes between ADHD groups were identified using correlation or PC. Conclusion: We describe BrainNET, a new network inference method for estimating fMRI connectivity that was adapted from gene regulatory network methods. BrainNET outperformed Pearson correlation and PC on fMRI simulation data and real-world ADHD data. BrainNET can be used independently or combined with other existing methods as a useful tool to understand network changes and to determine the true network topology of the brain under various conditions and disease states. Impact statement: We developed a new fMRI network inference method, BrainNET, using machine learning. BrainNET outperformed Pearson correlation and partial correlation on fMRI simulation data and real-world ADHD data. BrainNET does not need to be pretrained and can be applied to infer fMRI network topology independently on individual subjects and for a varying number of nodes.
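    The abstract's description, regression with extremely randomized trees adapted from gene regulatory network inference, maps onto a GENIE3-style procedure: regress each ROI's time series on all the others and read the tree feature importances as incoming edge weights. Below is a hedged sketch under those assumptions; hyperparameters are illustrative, not the authors' settings.

        # GENIE3-style network inference with extremely randomized trees:
        # for each target ROI, fit trees predicting it from all other ROIs and
        # use feature importances as directed edge weights. No correlation
        # threshold is required to obtain the adjacency matrix.
        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        def infer_network(ts, n_estimators=500, seed=0):
            """ts: array of shape (n_timepoints, n_rois)."""
            n_rois = ts.shape[1]
            adj = np.zeros((n_rois, n_rois))
            for j in range(n_rois):
                others = np.delete(np.arange(n_rois), j)
                model = ExtraTreesRegressor(n_estimators=n_estimators, random_state=seed)
                model.fit(ts[:, others], ts[:, j])
                adj[others, j] = model.feature_importances_  # weight of ROI i -> ROI j
            return adj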

    QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results

    Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist end-users in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS
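    The official evaluation code is linked above; as a rough illustration of the scoring mechanics for a single binary sub-region, filtering voxels by an uncertainty threshold and combining the three resulting curves might look like the numpy sketch below. The threshold grid and normalization are simplifications, not the challenge's exact implementation.

        # Sketch of the uncertainty score: sweep a threshold over voxel
        # uncertainties (scaled to [0, 100]), keep only voxels at or below it,
        # then combine the areas under the Dice curve and the filtered-out
        # true-positive/true-negative ratio curves.
        import numpy as np

        def qu_brats_style_score(pred, gt, unc, thresholds=np.linspace(0, 100, 11)):
            pred, gt = pred.astype(bool), gt.astype(bool)
            tp_all, tn_all = np.sum(pred & gt), np.sum(~pred & ~gt)
            dices, ftp, ftn = [], [], []
            for tau in thresholds:
                keep = unc <= tau  # retain only the confident voxels
                tp = np.sum(pred & gt & keep)
                fp = np.sum(pred & ~gt & keep)
                fn = np.sum(~pred & gt & keep)
                tn = np.sum(~pred & ~gt & keep)
                dices.append(2 * tp / max(2 * tp + fp + fn, 1))
                ftp.append((tp_all - tp) / max(tp_all, 1))  # filtered TP ratio
                ftn.append((tn_all - tn) / max(tn_all, 1))  # filtered TN ratio
            auc = lambda y: np.trapz(y, thresholds) / (thresholds[-1] - thresholds[0])
            # Reward retained Dice; penalize filtering out correct assertions.
            return (auc(dices) + (1 - auc(ftp)) + (1 - auc(ftn))) / 3.0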
