CUTS: A Fully Unsupervised Framework for Medical Image Segmentation
In this work, we introduce CUTS (Contrastive and Unsupervised Training for
Segmentation), the first fully unsupervised deep learning framework for medical
image segmentation, facilitating the use of the vast majority of imaging data
that is not labeled or annotated. Segmenting medical images into regions of
interest is a critical task for facilitating both patient diagnoses and
quantitative research. A major limiting factor in this segmentation is the lack
of labeled data, as getting expert annotations for each new set of imaging data
or task can be expensive, labor intensive, and inconsistent across annotators:
thus, we utilize self-supervision based on pixel-centered patches from the
images themselves. Our unsupervised approach is based on a training objective
with both contrastive learning and autoencoding aspects. Previous contrastive
learning approaches for medical image segmentation have either focused on image-level
contrastive training, rather than our intra-image patch-level approach, or have
used contrastive learning only as a pre-training task after which the network still
required supervised training. By contrast, we build the first entirely unsupervised
framework that operates at the pixel-centered-patch level. Specifically, we add
novel augmentations and a patch reconstruction loss, and introduce a new pixel
clustering and identification framework. Our model achieves improved results on
several key medical imaging tasks, as verified by held-out expert annotations
on the task of segmenting geographic atrophy (GA) regions of images of the
retina.
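The training objective described above pairs a patch-level contrastive term with an autoencoding term. The following minimal sketch is an illustration under stated assumptions, not the authors' implementation: the encoder, decoder, augment function, temperature, and alpha weighting are all hypothetical placeholders.

```python
# Hedged sketch of a joint contrastive + reconstruction objective on
# pixel-centered patches, as the abstract describes. Not the authors' code;
# encoder, decoder, augment, temperature, and alpha are assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss between two augmented views of the same patches."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))           # positives on the diagonal
    return F.cross_entropy(logits, targets)

def joint_objective(encoder, decoder, patches, augment, alpha=1.0):
    """Sum of a patch-level contrastive term and an autoencoding term."""
    v1, v2 = augment(patches), augment(patches)  # two stochastic views
    z1, z2 = encoder(v1), encoder(v2)
    recon = decoder(z1)                          # reconstruct the patch
    return contrastive_loss(z1, z2) + alpha * F.mse_loss(recon, v1)
```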
Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. Across all models, the 3D, 2.5D, and 2D approaches gave the highest, intermediate, and lowest Dice scores, respectively. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory than 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, they require more computational memory than 2.5D or 2D models.
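For concreteness, here is a brief sketch of the Dice metric the comparison relies on, together with how a single (depth, height, width) MRI volume is typically cut into 2D, 2.5D, and 3D model inputs. The array layout and the edge-clamping behavior are assumptions, not details from the study.

```python
# Illustrative sketch (assumptions: NumPy boolean masks, axis 0 is the
# slice axis): Dice score plus 2D / 2.5D / 3D input extraction.
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def make_input(volume, i, mode="3d"):
    """Cut a (D, H, W) volume into the model input for each approach."""
    if mode == "2d":    # a single axial slice
        return volume[i]
    if mode == "2.5d":  # five consecutive slices centered on slice i
        return volume[max(i - 2, 0): i + 3]  # clamped at the volume edge
    return volume       # "3d": the whole volume
```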
Comparison of volumetric and 2D-based response methods in the PNOC-001 pediatric low-grade glioma clinical trial
Background: Although response assessment in pediatric low-grade glioma (pLGG) includes volumetric assessment, simpler 2D-based methods are often used in clinical trials. The study's purpose was to compare volumetric with 2D-based methods.
Methods: An expert neuroradiologist performed solid and whole tumor (including cyst and edema) volumetric measurements on MR images using a PACS-based manual segmentation tool in 43 pLGG participants (213 total follow-up images) from the Pacific Pediatric Neuro-Oncology Consortium (PNOC-001) trial. Classification based on changes in volumetric and 2D measurements of solid tumor was compared to neuroradiologist visual response assessment using the Brain Tumor Reporting and Data System (BT-RADS) criteria for a subset of 65 images using receiver operating characteristic (ROC) analysis. Longitudinal modeling of solid tumor volume was used to predict BT-RADS classification in 54 of the 65 images.
Results: There was a significant difference in ROC area under the curve between 3D solid tumor volume and 2D area (0.96 vs 0.78, P = .005) and between 3D solid and 3D whole tumor volume (0.96 vs 0.84, P = .006) when classifying BT-RADS progressive disease (PD). For thresholds of a 15–25% increase in 3D solid tumor volume, the 95% confidence intervals for sensitivity in classifying BT-RADS PD included 80%. The longitudinal model of solid volume response had a sensitivity of 82% and a positive predictive value of 67% for detecting BT-RADS PD.
Conclusions: Volumetric analysis of solid tumor was significantly better than 2D measurements in classifying tumor progression as determined by BT-RADS criteria and will enable more comprehensive clinical management.
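A hedged sketch of the two response calculations being compared: percent change in 3D solid-tumor volume versus percent change in a 2D area measurement, each thresholded to call progressive disease. The default cutoffs below (20% volumetric, echoing the study's 15–25% range, and 25% for 2D) are illustrative assumptions, not trial criteria.

```python
# Illustration only: thresholded percent-change response calls for
# volumetric vs 2D measurements; the cutoff values are assumptions.
def percent_change(baseline, follow_up):
    """Percent growth from baseline to follow-up measurement."""
    return 100.0 * (follow_up - baseline) / baseline

def is_pd_volumetric(vol_base_mm3, vol_follow_mm3, threshold=20.0):
    """PD if solid tumor volume grew past the threshold (study examined 15-25%)."""
    return percent_change(vol_base_mm3, vol_follow_mm3) >= threshold

def is_pd_2d(area_base_mm2, area_follow_mm2, threshold=25.0):
    """PD if the 2D area measurement grew past an assumed 25% threshold."""
    return percent_change(area_base_mm2, area_follow_mm2) >= threshold
```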
Application of novel PACS-based informatics platform to identify imaging based predictors of CDKN2A allelic status in glioblastomas
Gliomas with CDKN2A mutations are known to have worse prognosis, but the imaging features of these gliomas are unknown. Our goal is to identify CDKN2A-specific qualitative imaging biomarkers in glioblastomas using a new informatics workflow that enables rapid analysis of qualitative imaging features with Visually AcceSAble Rembrandt Images (VASARI) for large datasets in PACS. Sixty-nine patients undergoing GBM resection with CDKN2A status determined by whole-exome sequencing were included. GBMs on magnetic resonance images were automatically 3D segmented using deep learning algorithms incorporated within PACS. VASARI features were assessed using FHIR forms integrated within PACS. GBMs without CDKN2A alterations were significantly larger (64% vs. 30%, p = 0.007) compared to tumors with homozygous deletion (HOMDEL) or heterozygous loss (HETLOSS). Lesions larger than 8 cm were four times more likely to have no CDKN2A alteration (OR: 4.3; 95% CI 1.5–12.1; p < 0.001). We developed a novel integrated PACS informatics platform for the assessment of GBM molecular subtypes and show that tumors with HOMDEL are more likely to have radiographic evidence of pial invasion and less likely to have deep white matter invasion or subependymal invasion. These imaging features may allow noninvasive identification of CDKN2A allele status.
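The reported association (OR 4.3, 95% CI 1.5–12.1) is a standard odds ratio with a Wald confidence interval on the log-odds scale; the sketch below shows that computation from a generic 2x2 table. The cell layout and any counts passed in are placeholders, not the study's data.

```python
# Minimal sketch of an odds ratio with a 95% Wald CI from a 2x2 table.
# a, b, c, d are placeholder counts, not the study's data:
#   a = exposed with outcome,   b = exposed without outcome,
#   c = unexposed with outcome, d = unexposed without outcome.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, (lower, upper)) using the log-odds standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)
```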
The Brain Tumor Segmentation (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI
Clinical monitoring of metastatic disease to the brain can be a laborious and time-consuming process, especially in cases involving multiple metastases when the assessment is performed manually. The Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) guideline, which utilizes the unidimensional longest diameter, is commonly used in clinical and research settings to evaluate response to therapy in patients with brain metastases. However, accurate volumetric assessment of the lesion and surrounding peri-lesional edema holds significant importance in clinical decision-making and can greatly enhance outcome prediction. The unique challenge in performing segmentations of brain metastases lies in their common occurrence as small lesions. Detection and segmentation of lesions smaller than 10 mm have not demonstrated high accuracy in prior publications. The brain metastases challenge sets itself apart from previously conducted MICCAI challenges on glioma segmentation due to the significant variability in lesion size. Unlike gliomas, which tend to be larger on presentation scans, brain metastases exhibit a wide range of sizes and tend to include small lesions. We hope that the BraTS-METS dataset and challenge will advance the field of automated brain metastasis detection and segmentation.
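To make the contrast concrete, the sketch below computes the two quantities the abstract compares from a single binary lesion mask: a RANO-BM-style unidimensional longest in-plane diameter, and the lesion's volume. The (slices, rows, cols) mask layout, the in-plane spacing argument, and the brute-force distance search are assumptions for illustration.

```python
# Hedged sketch: longest axial diameter vs volume from one binary mask.
# Assumes mask shape (D, H, W) with axis 0 axial; spacing_mm = (row_mm, col_mm).
import numpy as np

def lesion_volume_mm3(mask, voxel_volume_mm3):
    """Volume = number of lesion voxels x single-voxel volume."""
    return mask.sum() * voxel_volume_mm3

def longest_axial_diameter_mm(mask, spacing_mm):
    """Max pairwise in-plane distance over all axial slices (brute force;
    O(n^2) per slice, acceptable for a small-lesion illustration)."""
    best = 0.0
    for sl in mask:                                     # iterate axial slices
        pts = np.argwhere(sl) * np.asarray(spacing_mm)  # (row, col) in mm
        if len(pts) > 1:
            d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
            best = max(best, d.max())
    return best
```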