Analysis of Connectivity in EMG Signals to Examine Neural Correlations in Muscular Activation of Lower Leg Muscles for Postural Stability
In quiet standing, the central nervous system implements a pre-programmed ankle strategy of postural control to maintain upright balance and stability. This strategy comprises a synchronized common neural drive delivered to synergistically grouped muscles. In this study, connectivity between EMG signals of unilateral and bilateral homologous muscle pairs of the lower legs during various standing balance conditions was evaluated using magnitude squared coherence (MSC) and mutual information (MI). The leg muscles of interest were the tibialis anterior (TA), medial gastrocnemius (MG), and soleus (S) of both legs. MSC is a linear measure of the phase relation between two signals in the frequency domain. MI is an information-theoretic measure of the amount of information two signals have in common. Both MSC and MI were analyzed in the delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–100 Hz) neural frequency bands for feet-together and feet-tandem stances, under both eyes-open and eyes-closed conditions. Both MSC and MI showed that overall connectivity was highest in the delta band, followed by the theta band. Connectivity in the beta and lower gamma bands (30–60 Hz) was influenced by standing balance condition and was indicative of a neural drive originating from the motor cortex. Instability was evaluated by comparing less stable standing conditions with a baseline eyes-open, feet-together stance. Changes in connectivity in the beta and gamma bands were found to be most significant in the muscle pairs of the back leg of tandem stance, regardless of foot dominance. MI proved the better connectivity analysis method, identifying significant increases in connectivity in the agonistic muscle pair MG:S, the antagonistic muscle pair TA:S, and all the bilateral homologous muscle pairs, whereas MSC identified only the MG:S muscle pair as significant.
The results of this study provide insight into the neural mechanisms of postural control and present MI as an alternative connectivity analysis method.
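As an illustrative sketch (not the authors' analysis pipeline), the two connectivity measures can be computed on a pair of signals with NumPy/SciPy: MSC via Welch-based coherence averaged over a frequency band, and MI via a simple 2-D histogram estimator. The sampling rate, window length, and bin count below are arbitrary choices for the demo.

```python
import numpy as np
from scipy.signal import coherence

def band_msc(x, y, fs, band, nperseg=1024):
    """Magnitude squared coherence between x and y, averaged over a band (Hz)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())

def mutual_information(x, y, bins=32):
    """Histogram-based MI estimate (bits) between two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

# Demo: two "muscles" sharing a slow (delta-band) drive plus independent noise.
rng = np.random.default_rng(0)
fs, n = 1000, 30_000                      # 1 kHz sampling, 30 s
t = np.arange(n) / fs
drive = np.sin(2 * np.pi * 2.0 * t)       # common 2 Hz drive
emg_a = drive + 0.5 * rng.standard_normal(n)
emg_b = drive + 0.5 * rng.standard_normal(n)

delta = band_msc(emg_a, emg_b, fs, (0.5, 4.0))
gamma = band_msc(emg_a, emg_b, fs, (30.0, 60.0))
print(f"delta MSC {delta:.2f}, gamma MSC {gamma:.2f}")
print(f"MI {mutual_information(emg_a, emg_b):.2f} bits")
```

Because the shared drive lives in the delta band, delta-band MSC comes out well above gamma-band MSC, mirroring the band-wise comparison described in the abstract.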
Improving Lesion Segmentation in FDG-18 Whole-Body PET/CT scans using Multilabel approach: AutoPET II challenge
Automatic segmentation of lesions in FDG-18 Whole Body (WB) PET/CT scans
using deep learning models is instrumental for determining treatment response,
optimizing dosimetry, and advancing theranostic applications in oncology.
However, the presence of organs with elevated radiotracer uptake, such as the
liver, spleen, brain, and bladder, often leads to challenges, as these regions
are often misidentified as lesions by deep learning models. To address this
issue, we propose a novel approach of segmenting both organs and lesions,
aiming to enhance the performance of automatic lesion segmentation methods. In
this study, we assessed the effectiveness of our proposed method using the
AutoPET II challenge dataset, which comprises 1014 subjects. We evaluated the
impact of including additional labels and data on the segmentation
performance of the model. In addition to the expert-annotated lesion labels, we
introduced eight additional labels for organs, including the liver, kidneys,
urinary bladder, spleen, lung, brain, heart, and stomach. These labels were
integrated into the dataset, and a 3D UNet model was trained within the nnUNet
framework. Our results demonstrate that our method achieved the top ranking in
the held-out test dataset, underscoring the potential of this approach to
significantly improve lesion segmentation accuracy in FDG-18 Whole-Body PET/CT
scans, ultimately benefiting cancer patients and advancing clinical practice.
Comment: AutoPET II challenge paper
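The multilabel idea described above, training on organ labels alongside lesion labels so the network learns to distinguish high-uptake organs from disease, can be sketched as a label-map merge at pre-processing time. This is a hypothetical illustration, not the challenge entry's actual code; the label codes and array shapes are made up for the demo.

```python
import numpy as np

# Hypothetical integer codes for the merged label map.
LESION = 1
ORGANS = {"liver": 2, "kidneys": 3, "urinary_bladder": 4, "spleen": 5,
          "lung": 6, "brain": 7, "heart": 8, "stomach": 9}

def merge_labels(lesion_mask, organ_masks):
    """Combine binary masks into one integer label map. Lesions are written
    last, so a lesion inside an organ keeps the lesion label."""
    label_map = np.zeros(lesion_mask.shape, dtype=np.uint8)
    for name, code in ORGANS.items():
        label_map[organ_masks[name] > 0] = code
    label_map[lesion_mask > 0] = LESION   # applied last: highest priority
    return label_map

# Toy 3-D volume: a liver region containing a small lesion.
shape = (8, 8, 8)
liver = np.zeros(shape, dtype=np.uint8); liver[2:6, 2:6, 2:6] = 1
lesion = np.zeros(shape, dtype=np.uint8); lesion[3:5, 3:5, 3:5] = 1
organs = {name: np.zeros(shape, dtype=np.uint8) for name in ORGANS}
organs["liver"] = liver

merged = merge_labels(lesion, organs)
print(np.unique(merged))  # → [0 1 2]: background, lesion, liver
```

Giving lesions priority in overlaps is one plausible convention; the key point is that the network sees organ voxels as distinct classes rather than background, which is what discourages it from labeling the liver, brain, or bladder as disease.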
AI-derived and Manually corrected segmentations for various IDC Collections
The Imaging Data Commons (IDC) (https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations due to the expense and effort required to create these manually. The increased capability of AI analysis of radiology images provides an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnUNet-based [2] models for a variety of radiology segmentation tasks on public datasets and used them to generate segmentations for IDC collections.
To validate the models' performance, roughly 10% of the predictions were manually reviewed and corrected by both a board-certified radiologist and a medical student (non-expert). Additionally, the non-expert reviewed all of the AI predictions and rated them on a 5-point Likert scale.
This record provides the AI segmentations, manually corrected segmentations, and manual scores for the inspected IDC collection images.
List of all tasks and IDC collections analyzed.
File | Segmentation Task | IDC Collections | Links
breast-fdg-pet-ct.zip | FDG-avid lesions in breast from FDG PET/CT scans | QIN-Breast | model weights; github
kidney-ct.zip | Kidney, Tumor, and Cysts from contrast-enhanced CT scans | TCGA-KIRC | model weights; github
liver-ct.zip | Liver from CT scans | TCGA-LIHC | model weights; github
liver-mr.zip | Liver from T1 MRI scans | TCGA-LIHC | model weights; github
lung-ct.zip | Lung and Nodules (3 mm–30 mm) from CT scans | ACRIN-NSCLC-FDG-PET; Anti-PD-1-Lung; LUNG-PET-CT-Dx; NSCLC Radiogenomics; RIDER Lung PET-CT; TCGA-LUAD; TCGA-LUSC | model weights 1; model weights 2; github
lung-fdg-pet-ct.zip | Lungs and FDG-avid lesions in the lung from FDG PET/CT scans | ACRIN-NSCLC-FDG-PET; Anti-PD-1-Lung; LUNG-PET-CT-Dx; NSCLC Radiogenomics; RIDER Lung PET-CT; TCGA-LUAD; TCGA-LUSC | model weights; github
prostate-mr.zip | Prostate from T2 MRI scans | ProstateX | model weights; github
Likert Score | Definition
5 | Strongly agree - Use as-is (i.e., clinically acceptable, and could be used for treatment without change)
4 | Agree - Minor edits that are not necessary. Stylistic differences, but not clinically important; the current segmentation is acceptable
3 | Neither agree nor disagree - Minor edits that are necessary. Minor edits are those that the reviewer judges can be made in less time than starting from scratch, or that are expected to have minimal effect on treatment outcome
2 | Disagree - Major edits. The necessary edits are required to ensure correctness and are sufficiently significant that the user would prefer to start from scratch
1 | Strongly disagree - Unusable. The automatic annotations are of such poor quality that they are unusable
Each zip file in the collection corresponds to a specific segmentation task. The common folder structure is:
ai-segmentations-dcm
This directory contains the AI model predictions in DICOM-SEG format for all analyzed IDC collection files.
qa-segmentations-dcm
This directory contains manually corrected segmentation files, based on the AI predictions, in DICOM-SEG format. Only a fraction (~10%) of the AI predictions were corrected. Corrections were performed by a radiologist (rad*) and non-experts (ne*).
qa-results.csv
CSV file linking the study/series UIDs with the AI segmentation file, the radiologist-corrected segmentation file, and the radiologist ratings of AI performance.
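Given the folder layout above, pairing each corrected (QA) segmentation with its AI prediction amounts to matching filenames across the two directories. The sketch below assumes QA filenames reuse the AI file's name behind a rad*/ne* reviewer prefix; the real naming scheme may differ, so treat this as a template rather than the dataset's documented convention.

```python
import tempfile
from pathlib import Path

def pair_qa_with_ai(task_dir):
    """Map each QA (manually corrected) DICOM-SEG file to its AI prediction.
    ASSUMPTION: QA filenames look like '<reviewer-prefix>_<ai-filename>' with
    reviewer prefixes 'rad*' or 'ne*' -- a guess, not the documented scheme."""
    ai_dir = Path(task_dir) / "ai-segmentations-dcm"
    qa_dir = Path(task_dir) / "qa-segmentations-dcm"
    ai_files = {p.name: p for p in ai_dir.glob("*.dcm")}
    pairs = {}
    for qa in qa_dir.glob("*.dcm"):
        prefix, _, base = qa.name.partition("_")
        if (prefix.startswith("rad") or prefix.startswith("ne")) and base in ai_files:
            pairs[qa] = ai_files[base]
    return pairs

# Demo on a throwaway layout mimicking one task zip (e.g. liver-ct).
root = Path(tempfile.mkdtemp()) / "liver-ct"
(root / "ai-segmentations-dcm").mkdir(parents=True)
(root / "qa-segmentations-dcm").mkdir()
(root / "ai-segmentations-dcm" / "case001.dcm").touch()
(root / "qa-segmentations-dcm" / "rad1_case001.dcm").touch()

pairs = pair_qa_with_ai(root)
print({q.name: a.name for q, a in pairs.items()})
```

In practice the authoritative linkage is qa-results.csv, which ties study/series UIDs to both files; filename matching is only a fallback when working directly from the unzipped folders.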
Image segmentations produced by the AIMI Annotations initiative
The Imaging Data Commons (IDC) (https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations due to the expense and effort required to create these manually. The increased capability of AI analysis of radiology images provides an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnUNet-based [2] models for a variety of radiology segmentation tasks on public datasets and used them to generate segmentations for IDC collections.
To validate the models' performance, roughly 10% of the predictions were manually reviewed and corrected by both a board-certified radiologist and a medical student (non-expert). Additionally, the non-expert reviewed all of the AI predictions and rated them on a 5-point Likert scale.
This record provides the AI segmentations, manually corrected segmentations, and manual scores for the inspected IDC collection images.
List of all tasks and IDC collections analyzed.
File | Segmentation Task | IDC Collections | Links
breast-fdg-pet-ct.zip | FDG-avid lesions in breast from FDG PET/CT scans | QIN-Breast | model weights; github
kidney-ct.zip | Kidney, Tumor, and Cysts from contrast-enhanced CT scans | TCGA-KIRC | model weights; github
liver-ct.zip | Liver from CT scans | TCGA-LIHC | model weights; github
liver-mr.zip | Liver from T1 MRI scans | TCGA-LIHC | model weights; github
lung-ct.zip | Lung and Nodules (3 mm–30 mm) from CT scans | ACRIN-NSCLC-FDG-PET; Anti-PD-1-Lung; LUNG-PET-CT-Dx; NSCLC Radiogenomics; RIDER Lung PET-CT; TCGA-LUAD; TCGA-LUSC | model weights 1; model weights 2; github
lung-fdg-pet-ct.zip | Lungs and FDG-avid lesions in the lung from FDG PET/CT scans | ACRIN-NSCLC-FDG-PET; Anti-PD-1-Lung; LUNG-PET-CT-Dx; NSCLC Radiogenomics; RIDER Lung PET-CT; TCGA-LUAD; TCGA-LUSC | model weights; github
prostate-mr.zip | Prostate from T2 MRI scans | ProstateX | model weights; github
Likert Score | Definition
5 | Strongly agree - Use as-is (i.e., clinically acceptable, and could be used for treatment without change)
4 | Agree - Minor edits that are not necessary. Stylistic differences, but not clinically important; the current segmentation is acceptable
3 | Neither agree nor disagree - Minor edits that are necessary. Minor edits are those that the reviewer judges can be made in less time than starting from scratch, or that are expected to have minimal effect on treatment outcome
2 | Disagree - Major edits. The necessary edits are required to ensure correctness and are sufficiently significant that the user would prefer to start from scratch
1 | Strongly disagree - Unusable. The automatic annotations are of such poor quality that they are unusable
Each zip file in the collection corresponds to a specific segmentation task. The common folder structure is:
ai-segmentations-dcm
This directory contains the AI model predictions in DICOM-SEG format for all analyzed IDC collection files.
qa-segmentations-dcm
This directory contains manually corrected segmentation files, based on the AI predictions, in DICOM-SEG format. Only a fraction (~10%) of the AI predictions were corrected. Corrections were performed by a radiologist (rad*) and non-experts (ne*).
qa-results.csv
CSV file linking the study/series UIDs with the AI segmentation file, the radiologist-corrected segmentation file, and the radiologist ratings of AI performance.
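As a sketch of how the reviewer scores might be summarized, the snippet below tallies a Likert column from a qa-results-style CSV using only the standard library. The column names (SeriesUID, likert) are hypothetical; check the actual qa-results.csv header before use.

```python
import csv
import io
from collections import Counter

def likert_summary(csv_text, score_col="likert"):
    """Count ratings per Likert level and the share rated 4 or 5
    (i.e., acceptable per the scale above). 'likert' is an assumed
    column name, not one taken from the dataset documentation."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    counts = Counter(int(r[score_col]) for r in rows if r[score_col])
    acceptable = sum(counts[s] for s in (4, 5)) / max(sum(counts.values()), 1)
    return counts, acceptable

# Toy data in the assumed schema.
demo = "SeriesUID,likert\n1.2.3,5\n1.2.4,4\n1.2.5,2\n"
counts, frac = likert_summary(demo)
print(dict(counts), f"{frac:.0%} rated 4 or 5")  # {5: 1, 4: 1, 2: 1} 67% rated 4 or 5
```

Treating 4-5 as "acceptable" follows the scale's own wording ("minor edits that are not necessary"); a stricter analysis might count only 5s as clinically usable without review.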