28 research outputs found

    Semi-Supervised Deep Learning for Multi-Tissue Segmentation from Multi-Contrast MRI

    Segmentation of thigh tissues (muscle, fat, inter-muscular adipose tissue (IMAT), bone, and bone marrow) from magnetic resonance imaging (MRI) scans is useful for clinical and research investigations of conditions such as aging, diabetes mellitus, obesity, metabolic syndrome, and their associated comorbidities. Towards fully automated, robust, and precise quantification of thigh tissues, we designed a novel semi-supervised segmentation algorithm based on deep network architectures. Built upon the Tiramisu segmentation engine, our proposed deep networks use variational and specially designed targeted dropouts for faster and more robust convergence, and utilize multi-contrast MRI scans as input data. In our experiments, we used 150 scans from 50 distinct subjects from the Baltimore Longitudinal Study of Aging (BLSA). The proposed system made use of both labeled and unlabeled data with high efficacy for training, and outperformed the current state-of-the-art methods with Dice scores of 97.52%, 94.61%, 80.14%, 95.93%, and 96.83% for muscle, fat, IMAT, bone, and bone marrow tissues, respectively. Our results indicate that the proposed system can be useful for clinical research studies where volumetric and distributional tissue quantification is pivotal and labeling is a significant issue. To the best of our knowledge, the proposed system is the first attempt at multi-tissue segmentation using a single end-to-end semi-supervised deep learning framework for multi-contrast thigh MRI scans.
    Comment: 20 pages, 9 figures, Journal of Signal Processing Systems
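
    As a concrete illustration of how per-tissue Dice scores like those reported above can be computed, the following minimal Python sketch compares a predicted label volume against a ground-truth volume. The label-to-tissue mapping and function names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: per-tissue Dice computation from integer label maps.
# The label-to-tissue mapping below is an assumption for illustration,
# not the labeling convention used by the paper.
import numpy as np

TISSUE_LABELS = {"muscle": 1, "fat": 2, "imat": 3, "bone": 4, "bone_marrow": 5}

def dice_per_tissue(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Return the Dice score for each tissue class in two 3D label volumes."""
    scores = {}
    for name, label in TISSUE_LABELS.items():
        p = pred == label
        t = truth == label
        denom = p.sum() + t.sum()
        # Convention: both masks empty -> perfect agreement (Dice = 1.0).
        scores[name] = 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom
    return scores

# Example usage with random volumes of the same shape:
# pred = np.random.randint(0, 6, size=(64, 256, 256))
# truth = np.random.randint(0, 6, size=(64, 256, 256))
# print(dice_per_tissue(pred, truth))
```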

    The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset

    Purpose: To organize a knee MRI segmentation challenge for characterizing the semantic and clinical efficacy of automatic segmentation methods relevant for monitoring osteoarthritis progression. Methods: A dataset partition consisting of 3D knee MRI from 88 subjects at two timepoints with ground-truth articular (femoral, tibial, patellar) cartilage and meniscus segmentations was standardized. Challenge submissions and a majority-vote ensemble were evaluated using Dice score, average symmetric surface distance, volumetric overlap error, and coefficient of variation on a hold-out test set. Similarities in network segmentations were evaluated using pairwise Dice correlations. Articular cartilage thickness was computed per scan and longitudinally. Correlation between thickness error and segmentation metrics was measured using Pearson's coefficient. Two empirical upper bounds for ensemble performance were computed using combinations of model outputs that consolidated true positives and true negatives. Results: Six teams (T1-T6) submitted entries for the challenge. No significant differences were observed across all segmentation metrics for all tissues (p=1.0) among the four top-performing networks (T2, T3, T4, T6). Dice correlations between network pairs were high (>0.85). Per-scan thickness errors were negligible among T1-T4 (p=0.99), and longitudinal changes showed minimal bias (<0.03 mm). Low correlations (<0.41) were observed between segmentation metrics and thickness error. The majority-vote ensemble was comparable to the top-performing networks (p=1.0). Empirical upper-bound performances were similar for both combinations (p=1.0). Conclusion: Diverse networks learned to segment the knee similarly, and high segmentation accuracy did not correlate with cartilage thickness accuracy. Voting ensembles did not outperform individual networks but may help regularize individual models.
    Comment: Submitted to Radiology: Artificial Intelligence; Fixed typo
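
    To make the ensembling and overlap metrics mentioned above concrete, the sketch below shows majority-vote fusion of binary segmentation masks together with Dice and volumetric overlap error. The function names and the simple >50% voting threshold are assumptions for illustration, not the challenge's evaluation code.

```python
# Minimal sketch: majority-vote ensembling of binary segmentation masks and
# two of the overlap metrics (Dice, volumetric overlap error) used to compare
# submissions. Assumes masks contain at least one foreground voxel.
import numpy as np

def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """A voxel is foreground if more than half of the models predict foreground."""
    stacked = np.stack(masks).astype(np.float32)
    return stacked.mean(axis=0) > 0.5

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def volumetric_overlap_error(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 - inter / union

# Example: three simulated model outputs for one cartilage mask.
# truth = np.zeros((32, 128, 128), dtype=bool); truth[10:20, 40:80, 40:80] = True
# masks = [truth ^ (np.random.rand(*truth.shape) < 0.02) for _ in range(3)]
# ensemble = majority_vote(masks)
# print(dice(ensemble, truth), volumetric_overlap_error(ensemble, truth))
```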

    Deep Learning Approaches with Digital Mammography for Evaluating Breast Cancer Risk, a Narrative Review

    Breast cancer remains the leading cause of cancer-related deaths in women worldwide. Current screening regimens and clinical breast cancer risk assessment models use risk factors such as demographics and patient history to guide policy and assess risk. Applications of artificial intelligence (AI) methods such as deep learning (DL) and convolutional neural networks (CNNs) that evaluate individual patient information and imaging have shown promise as personalized risk models. We reviewed the current literature for studies applying deep learning and convolutional neural networks to digital mammography for assessing breast cancer risk. We discussed the literature and examined ongoing and future applications of deep learning techniques in breast cancer risk modeling.
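
    As a rough illustration of the kind of personalized risk model discussed above, the sketch below combines a digital mammogram with tabular risk factors in a small CNN. This is an assumption-laden toy architecture, not any specific published model; layer sizes, the number of risk factors, and all names are illustrative.

```python
# Minimal sketch (illustrative only, not a published model): a CNN that fuses a
# single-channel mammogram with tabular risk factors (e.g. age, family history)
# to output a breast cancer risk score between 0 and 1.
import torch
import torch.nn as nn

class HybridRiskModel(nn.Module):
    def __init__(self, num_risk_factors: int = 4):
        super().__init__()
        # Image branch: small CNN over the mammogram.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tabular branch: clinical risk factors.
        self.tabular_branch = nn.Sequential(nn.Linear(num_risk_factors, 16), nn.ReLU())
        # Fusion head: combined features -> risk probability.
        self.head = nn.Sequential(nn.Linear(32 + 16, 1), nn.Sigmoid())

    def forward(self, image: torch.Tensor, risk_factors: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.image_branch(image), self.tabular_branch(risk_factors)], dim=1)
        return self.head(features)

# Example usage:
# model = HybridRiskModel()
# risk = model(torch.randn(2, 1, 256, 256), torch.randn(2, 4))
```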