Semi-Supervised Deep Learning for Multi-Tissue Segmentation from Multi-Contrast MRI
Segmentation of thigh tissues (muscle, fat, inter-muscular adipose tissue
(IMAT), bone, and bone marrow) from magnetic resonance imaging (MRI) scans is
useful for clinical and research investigations in various conditions such as
aging, diabetes mellitus, obesity, metabolic syndrome, and their associated
comorbidities. Towards a fully automated, robust, and precise quantification of
thigh tissues, herein we designed a novel semi-supervised segmentation
algorithm based on deep network architectures. Built upon the Tiramisu
segmentation engine, our proposed deep networks use variational and specially
designed targeted dropouts for faster and more robust convergence, and utilize
multi-contrast MRI scans as input data. In our experiments, we used 150 scans from 50
distinct subjects from the Baltimore Longitudinal Study of Aging (BLSA). The
proposed system made use of both labeled and unlabeled data with high efficacy
for training, and outperformed the current state-of-the-art methods with Dice
scores of 97.52%, 94.61%, 80.14%, 95.93%, and 96.83% for muscle, fat, IMAT,
bone, and bone marrow tissues, respectively. Our results indicate that the
proposed system can be useful for clinical research studies where volumetric
and distributional tissue quantification is pivotal and labeling is a
significant issue. To the best of our knowledge, the proposed system is the
first attempt at multi-tissue segmentation using a single end-to-end
semi-supervised deep learning framework for multi-contrast thigh MRI scans.
Comment: 20 pages, 9 figures, Journal of Signal Processing Systems
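The per-tissue Dice scores reported above can be reproduced from predicted and ground-truth binary masks. A minimal NumPy sketch of the Dice similarity coefficient (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy example: two overlapping 4x4 square "tissue" masks
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels
d = dice_score(a, b)  # overlap is 9 pixels -> 2*9/32 = 0.5625
```

The same formula applies per tissue class in a multi-tissue segmentation by binarizing each class label in turn.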
The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset
Purpose: To organize a knee MRI segmentation challenge for characterizing the
semantic and clinical efficacy of automatic segmentation methods relevant for
monitoring osteoarthritis progression.
Methods: A dataset partition consisting of 3D knee MRI from 88 subjects at
two timepoints with ground-truth articular (femoral, tibial, patellar)
cartilage and meniscus segmentations was standardized. Challenge submissions
and a majority-vote ensemble were evaluated using Dice score, average symmetric
surface distance, volumetric overlap error, and coefficient of variation on a
hold-out test set. Similarities in network segmentations were evaluated using
pairwise Dice correlations. Articular cartilage thickness was computed per-scan
and longitudinally. Correlation between thickness error and segmentation
metrics was measured using Pearson's coefficient. Two empirical upper bounds
for ensemble performance were computed using combinations of model outputs that
consolidated true positives and true negatives.
Results: Six teams (T1-T6) submitted entries for the challenge. No
significant differences were observed across all segmentation metrics for all
tissues (p=1.0) among the four top-performing networks (T2, T3, T4, T6). Dice
correlations between network pairs were high (>0.85). Per-scan thickness errors
were negligible among T1-T4 (p=0.99) and longitudinal changes showed minimal
bias (<0.03 mm). Low correlations (<0.41) were observed between segmentation
metrics and thickness error. The majority-vote ensemble was comparable to top
performing networks (p=1.0). Empirical upper bound performances were similar
for both combinations (p=1.0).
Conclusion: Diverse networks learned to segment the knee similarly, but high
segmentation accuracy did not correlate with cartilage thickness accuracy.
Voting ensembles did not outperform individual networks but may help regularize
individual models.
Comment: Submitted to Radiology: Artificial Intelligence; Fixed typo
Deep Learning Approaches with Digital Mammography for Evaluating Breast Cancer Risk, a Narrative Review
Breast cancer remains the leading cause of cancer-related deaths in women worldwide. Current screening regimens and clinical breast cancer risk assessment models use risk factors such as demographics and patient history to guide policy and assess risk. Applications of artificial intelligence (AI) methods such as deep learning (DL) and convolutional neural networks (CNNs) to evaluate individual patient information and imaging have shown promise as personalized risk models. We reviewed the current literature for studies related to deep learning and convolutional neural networks with digital mammography for assessing breast cancer risk. We discuss the literature and examine the ongoing and future applications of deep learning techniques in breast cancer risk modeling.
A novel CNN algorithm for pathological complete response prediction using an I-SPY TRIAL breast MRI database
Purpose: To apply our convolutional neural network (CNN) algorithm to predict neoadjuvant chemotherapy (NAC) response using the I-SPY TRIAL breast MRI dataset.
Methods: From the I-SPY TRIAL breast MRI database, 131 patients from 9 institutions were successfully downloaded for analysis. First post-contrast MRI images were used for 3D segmentation using 3D Slicer. Our CNN was implemented entirely with 3 × 3 convolutional kernels and linear layers. The convolutional kernels consisted of 6 residual layers, totaling 12 convolutional layers. Dropout with a 0.5 keep probability and L2 normalization were utilized. Training used the Adam optimizer. A 5-fold cross-validation was used for performance evaluation. Software code was written in Python using the TensorFlow module on a Linux workstation with one NVIDIA Titan X GPU.
Results: Of 131 patients, 40 achieved pCR following NAC (group 1) and 91 did not (group 2). Diagnostic accuracy of our two-class CNN model distinguishing patients with pCR vs. non-pCR was 72.5% (SD ± 8.4), with sensitivity of 65.5% (SD ± 28.1) and specificity of 78.9% (SD ± 15.2). The area under the ROC curve (AUC) was 0.72 (SD ± 0.08).
Conclusion: It is feasible to use our CNN algorithm to predict NAC response in patients using a multi-institution dataset.
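The accuracy, sensitivity, and specificity figures reported above follow from a standard confusion-matrix calculation on the two classes (pCR vs. non-pCR). A minimal sketch with toy labels (illustrative only, not the study data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) for a two-class prediction."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)    # true positives
    tn = np.sum(~y_pred & ~y_true)  # true negatives
    fp = np.sum(y_pred & ~y_true)   # false positives
    fn = np.sum(~y_pred & y_true)   # false negatives
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# toy labels: 1 = pCR, 0 = non-pCR
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
metrics = binary_metrics(truth, pred)
```

In a 5-fold cross-validation, these metrics would be computed per fold and then averaged, which is how the SD values above arise.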
Convolutional Neural Networks for the Detection and Measurement of Cerebral Aneurysms on Magnetic Resonance Angiography.
Aneurysm size correlates with rupture risk and is important for treatment planning. User annotation of aneurysm size is slow and tedious, particularly for large data sets. Geometric shortcuts to compute size have been shown to be inaccurate, particularly for nonstandard aneurysm geometries. We aimed to develop and train a convolutional neural network (CNN) to detect and measure cerebral aneurysms from magnetic resonance angiography (MRA) automatically and without geometric shortcuts. In step 1, a CNN based on the U-net architecture was trained on 250 MRA maximum intensity projection (MIP) images, then applied to a testing set. In step 2, the trained CNN was applied to a separate set of 14 basilar tip aneurysms for size prediction. Step 1: the CNN successfully identified aneurysms in 85/86 (98.8% of) testing set cases, with a receiver operating characteristic (ROC) area under the curve of 0.87. Step 2: automated basilar tip aneurysm linear size differed from radiologist-traced aneurysm size on average by 2.01 mm, or 30%. The CNN aneurysm area differed from radiologist-derived area on average by 8.1 mm², or 27%. The CNN correctly predicted the area trend for the set of aneurysms. This approach is, to our knowledge, the first to use CNNs to derive aneurysm size. In particular, we demonstrate the clinically pertinent application of computing maximal aneurysm one-dimensional size and two-dimensional area. We propose that future work can apply this to facilitate pre-treatment planning and possibly identify previously missed aneurysms in retrospective assessment.
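The maximal one-dimensional size and two-dimensional area described above can be derived directly from a binary segmentation mask of the aneurysm. A minimal sketch (brute-force pairwise distances, acceptable for small masks; function names and the isotropic pixel-spacing assumption are illustrative, not from the paper):

```python
import numpy as np
from itertools import combinations

def mask_area(mask, pixel_area_mm2=1.0):
    """2D area of a binary mask: pixel count times per-pixel area."""
    return np.asarray(mask, dtype=bool).sum() * pixel_area_mm2

def max_linear_size(mask, pixel_size_mm=1.0):
    """Maximal 1D extent of a mask: the largest pairwise Euclidean
    distance between any two foreground pixels (O(n^2) brute force)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([ys, xs])
    best = 0.0
    for p, q in combinations(pts, 2):
        dy, dx = p - q
        best = max(best, np.hypot(dy, dx))
    return best * pixel_size_mm

# toy mask: a 1 x 4 pixel strip
mask = np.zeros((5, 5), dtype=bool)
mask[2, 1:5] = True
```

For large masks, the same maximal extent could be computed on boundary pixels only, which shrinks the pairwise search considerably.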
Convolutional Neural Network Based Breast Cancer Risk Stratification Using a Mammographic Dataset.
RATIONALE AND OBJECTIVES: We propose a novel convolutional neural network derived pixel-wise breast cancer risk model using a mammographic dataset. MATERIALS AND METHODS: An institutional review board-approved retrospective case-control study of 1474 mammographic images was performed in average-risk women. First, 210 patients with new incidence of breast cancer were identified. Mammograms from these patients prior to developing breast cancer were identified and made up the case group [420 bilateral craniocaudal mammograms]. The control group consisted of 527 patients without breast cancer from the same time period. Prior mammograms from these patients made up the control group [1054 bilateral craniocaudal mammograms]. A convolutional neural network (CNN) architecture was designed for pixel-wise breast cancer risk prediction. Briefly, each mammogram was normalized as a map of z-scores and resized to an input image size of 256 × 256. Then a contracting and expanding fully convolutional CNN architecture was composed entirely of 3 × 3 convolutions, a total of four strided convolutions instead of pooling layers, and symmetric residual connections. L2 regularization and augmentation methods were implemented to prevent overfitting. Cases were separated into training (80%) and test (20%) sets. A 5-fold cross-validation was performed. Software code was written in Python using the TensorFlow module on a Linux workstation with an NVIDIA GTX 1070 Pascal GPU. RESULTS: The average age of patients between the case and control groups was not statistically different [case: 57.4 years (SD, 10.4) and control: 58.2 years (SD, 10.9), p = 0.33]. Breast density (BD) was significantly higher in the case group [2.39 (SD, 0.7)] than the control group [1.98 (SD, 0.75), p < 0.0001]. On multivariate logistic regression analysis, both the CNN pixel-wise mammographic risk model and BD were significant independent predictors of breast cancer risk (p < 0.0001).
The CNN risk model showed greater predictive potential [OR = 4.42 (95% CI, 3.4-5.7)] compared to BD [OR = 1.67 (95% CI, 1.4-1.9)]. The CNN risk model achieved an overall accuracy of 72% (95% CI, 69.8-74.4) in predicting patients in the case group. CONCLUSION: Novel pixel-wise mammographic breast evaluation using a CNN architecture can stratify breast cancer risk, independent of BD. A larger dataset will likely improve our model.
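The z-score normalization step described in the methods above (mapping each mammogram to zero mean and unit standard deviation before resizing) can be sketched as follows; this is a minimal illustration, not the authors' code:

```python
import numpy as np

def zscore_normalize(image):
    """Map an image to a z-score map: subtract the mean intensity
    and divide by the standard deviation, pixel-wise."""
    image = np.asarray(image, dtype=np.float64)
    std = image.std()
    if std == 0:
        return np.zeros_like(image)  # constant image: all z-scores are 0
    return (image - image.mean()) / std

# toy 2x2 "mammogram" with arbitrary intensities
img = np.array([[0.0, 50.0], [100.0, 150.0]])
z = zscore_normalize(img)  # result has mean 0 and std 1
```

Normalizing per image this way makes the network insensitive to per-scan brightness and contrast differences between mammography units.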