19 research outputs found

    Fibroglandular Tissue Segmentation in Breast MRI using Vision Transformers -- A multi-institutional evaluation

    Full text link
    Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for quantifying breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) on multi-institutional MRI data and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909±0.069 versus 0.916±0.067, P<0.001) and on the external test set (0.824±0.144 versus 0.864±0.081, P=0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal (0.657±2.856 versus 0.548±2.195, P=0.001) and on the external test set (0.727±0.620 versus 0.584±0.413, P=0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolution-based models like nnUNet. These findings may help enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
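    The two evaluation metrics used above, the Dice score and the average symmetric surface distance (ASSD), can be computed directly from binary segmentation masks. A minimal NumPy/SciPy sketch (our illustration, not the TraBS/nnUNet evaluation code):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def surface(mask):
    # boundary voxels: the mask minus its erosion
    return mask & ~binary_erosion(mask)

def assd(pred, truth, spacing=1.0):
    # average symmetric surface distance between two binary masks;
    # `spacing` would be set to the voxel size so distances come out in mm
    sp, st = surface(pred.astype(bool)), surface(truth.astype(bool))
    # distance from each surface voxel of one mask to the nearest
    # surface voxel of the other, averaged over both directions
    d_to_truth = distance_transform_edt(~st, sampling=spacing)
    d_to_pred = distance_transform_edt(~sp, sampling=spacing)
    return np.concatenate([d_to_truth[sp], d_to_pred[st]]).mean()
```

    A lower ASSD is better, which is why the abstract glosses "higher" as "worse".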

    Breast Cancer Risk Assessment by Applying Deep Learning to Mammography Image Data: Based on an Automated Mammographic Density Assessment Method

    Get PDF
    Master's thesis (M.S.), Department of Public Health, Graduate School of Public Health, Seoul National University, August 2019. Advisor: Joohon Sung. Introduction: Mammographic density adjusted for age and body mass index (BMI) is the most predictive marker of breast cancer after familial causes and genetic markers. The aim of this study was to develop a deep learning (DL) algorithm to assess mammographic density. Methods: A total of 2464 participants (834 cases and 1630 controls) were collected from Asan Medical Center and Samsung Medical Center, Korea. Cranio-caudal view mammographic images were obtained using a full-field digital mammography system. Mammographic densities were measured using CUMULUS software. The resulting DL algorithm was tested on a held-out test set of 493 women. Agreement between the DL model and the expert was assessed with correlation coefficients and weighted κ statistics. Risk associations of DL measures were evaluated with the area under the curve (AUC) and the odds per adjusted standard deviation (OPERA). Results: The DL model showed very good agreement with the expert for both percent density and dense area (r = 0.94-0.96 and κ = 0.89-0.91). Risk associations of DL measures were comparable to the expert's manual measures. DL measures adjusted for age and BMI showed strong risk associations with breast cancer (OPERA = 1.51-1.63 and AUC = 0.61-0.64). Conclusions: The DL model can be used to measure mammographic density, a strong risk factor for breast cancer. This study showed the potential of a DL algorithm as a mammogram-based risk prediction model in breast cancer screening. Mammographic density, which reflects the amount of fibroglandular tissue within the breast, is defined as the bright area on a mammogram and is widely known to be a strong risk factor for breast cancer. However, because it is time-consuming and costly to measure, it has seen only limited use in breast cancer screening.
The aim of this study was to develop a deep-learning-based mammographic density measure that could be incorporated into breast cancer prediction models used in screening. The study was conducted on a total of 2464 participants (834 cases, 1630 controls) collected from the breast cancer screening data of Asan Medical Center and Samsung Medical Center. An expert with more than five years of experience in mammographic density measurement used the CUMULUS program to measure density (dense area, cm², and percent density, %) on the breast contralateral to the lesion for cases and on a randomly selected breast for controls. Using these expert measurements as training data, a deep learning model based on a fully convolutional network was built, applied to the test data, and evaluated for agreement with the expert and for breast cancer predictive power. The deep learning model showed high agreement with the expert (r = 0.94-0.96, weighted κ = 0.89-0.91). Moreover, when the predictive power of the age- and BMI-adjusted deep learning measures was evaluated, the model predicted breast cancer about as well as the expert (expert AUC = 0.62-0.63; deep learning model AUC = 0.61-0.64). This study showed that deep learning can complement the current labor-intensive method of measuring mammographic density, offering a cost-effective way to include density measures in breast cancer prediction models.
If such a mammogram-based breast cancer risk prediction model were applied in the screening process, high-risk groups could be identified effectively through more precise risk assessment, and if tailored prevention strategies were then applied to those high-risk groups, this could in the long run contribute to earlier detection of breast cancer and reduced mortality.
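    Two of the quantities this thesis reports are easy to state concretely: percent density (dense area as a share of the breast area) and the weighted κ agreement between the DL model and the expert. A small sketch, with hypothetical inputs (the category count and values below are not from the study):

```python
import numpy as np

def percent_density(dense_area_cm2, breast_area_cm2):
    # percent mammographic density = dense area / total breast area * 100
    return 100.0 * dense_area_cm2 / breast_area_cm2

def quadratic_weighted_kappa(rater_a, rater_b, n_cat):
    # agreement between two raters on ordinal categories 0..n_cat-1,
    # penalizing larger disagreements quadratically
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        observed[i, j] += 1
    weights = (np.arange(n_cat)[:, None] - np.arange(n_cat)[None, :]) ** 2
    # chance-expected counts from the two raters' marginal distributions
    expected = np.outer(observed.sum(1), observed.sum(0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

    A κ of 1.0 means perfect agreement; the 0.89-0.91 reported above indicates near-perfect agreement between the model and the expert on density categories.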

    3D Deep Learning on Medical Images: A Review

    Full text link
    The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by rapid advances in convolutional neural network (CNN) based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, and give a brief mathematical description of the 3D CNN and the preprocessing steps required for medical images before feeding them to 3D CNNs. We review significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field. Comment: 13 pages, 4 figures, 2 tables
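    The central operation the review builds on, a 3D convolution over a volume, extends the 2D case by letting each kernel see context across neighboring slices. A naive single-channel sketch for illustration (real frameworks use heavily optimized implementations):

```python
import numpy as np

def conv3d(volume, kernel):
    # valid-mode 3D cross-correlation (what CNN layers actually compute)
    # over a single-channel volume of shape (depth, height, width)
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # each output voxel aggregates a d*h*w neighborhood,
                # i.e. spatial context from several adjacent slices
                out[z, y, x] = (volume[z:z+d, y:y+h, x:x+w] * kernel).sum()
    return out
```

    Typical preprocessing before such a network, as the paper discusses, includes resampling volumes to a common voxel spacing and normalizing intensities.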

    Deep learning in medical imaging and radiation therapy

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Advanced Computational Methods for Oncological Image Analysis

    Get PDF
    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians' unique knowledge, can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations (such as segmentation, co-registration, classification, and dimensionality reduction) and multi-omics data integration.

    Developing Novel Computer Aided Diagnosis Schemes for Improved Classification of Mammography Detected Masses

    Get PDF
    Mammography imaging is a population-based breast cancer screening tool that has greatly aided the decrease in breast cancer mortality over time. Although mammography is the most frequently employed breast imaging modality, its performance is often unsatisfactory, with low sensitivity and high false-positive rates. This is because reading and interpreting mammography images remains difficult due to the heterogeneity of breast tumors and dense, overlapping fibroglandular tissue. To help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes to provide radiologists with decision-making support tools. In this dissertation, I investigate several novel methods for improving the performance of a CAD system in distinguishing between malignant and benign masses. In the first study, we test the hypothesis that handcrafted radiomics features and deep learning features contain complementary information, and therefore that fusing these two types of features will increase the feature representation of each mass and improve the performance of the CAD system in distinguishing malignant from benign masses. Regions of interest (ROIs) surrounding suspicious masses are extracted and two types of features are computed. The first set consists of 40 radiomic features, and the second set includes deep learning (DL) features computed from a pretrained VGG16 network. DL features are extracted from two pseudo-color image sets, producing a total of three feature vectors after feature extraction: handcrafted, DL-stacked, and DL-pseudo. Linear support vector machines (SVMs) are trained using each feature set alone and in combinations. Results show that the fusion CAD system significantly outperforms the systems using either feature type alone (AUC=0.756±0.042, p<0.05).
This study demonstrates that both handcrafted and DL features contain useful complementary information and that fusing these two types of features increases the CAD classification performance. In the second study, we expand upon the first study and develop a novel CAD framework that fuses information extracted from ipsilateral views of bilateral mammograms using both DL and radiomics feature extraction methods. Each case in this study is represented by four images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breast. First, we extract matching ROIs from each of the four views using an ipsilateral matching and bilateral registration scheme to ensure masses are appropriately matched. Next, handcrafted radiomics features and VGG16 model-generated features are extracted from each ROI, resulting in eight feature vectors. Then, after reducing feature dimensionality and quantifying bilateral asymmetry, we test four fusion methods. Results show that multi-view CAD systems significantly outperform single-view systems (AUC = 0.876±0.031 vs. AUC = 0.817±0.026 for the CC view and 0.792±0.026 for the MLO view, p<0.001). The study demonstrates that the shift from single-view to four-view CAD, and the inclusion of both deep transfer learning and radiomics features, increases the feature representation of the mass and thus improves CAD performance in distinguishing between malignant and benign breast lesions. In the third study, we build upon the first two studies and investigate the effects of pseudo-color image generation on classifying suspicious mammography-detected breast lesions as malignant or benign using deep transfer learning in a multi-view CAD scheme. Seven pseudo-color image sets are created through combinations of the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass image.
Using the multi-view CAD framework developed in the previous study, we observe that the two pseudo-color sets created using a segmented mass in one of the three image channels perform significantly better than all other pseudo-color sets (AUC=0.882, p<0.05 for all comparisons, and AUC=0.889, p<0.05 for all comparisons). The results of this study support our hypothesis that pseudo-color images generated with a segmented mass optimize the mammogram image feature representation by providing increased complementary information to the CADx scheme, which increases performance in classifying suspicious mammography-detected breast lesions as malignant or benign. In summary, each of the studies presented in this dissertation aims to increase the accuracy of a CAD system in classifying suspicious mammography-detected masses. Each study takes a novel approach to increasing the feature representation of the mass to be classified. The results of each study demonstrate the potential utility of these CAD schemes as an aid to radiologists in the clinical workflow.
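    The early-fusion step in the first study (concatenating handcrafted and DL feature vectors per mass before training a linear SVM) might be sketched as follows; the synthetic data, feature dimensions, and scikit-learn pipeline are illustrative assumptions, not the dissertation's code:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 200
handcrafted = rng.normal(size=(n, 40))   # e.g. 40 radiomic features per ROI
deep = rng.normal(size=(n, 512))         # e.g. pooled VGG16 features (dim assumed)
labels = rng.integers(0, 2, size=n)      # synthetic malignant (1) / benign (0)

# early fusion: one concatenated feature vector per mass
fused = np.hstack([handcrafted, deep])   # shape (n, 40 + 512)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(fused, labels)
```

    In the dissertation the fused classifier is then compared against the single-feature-set SVMs by AUC; that comparison is omitted here.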

    The Era of Radiogenomics in Precision Medicine: An Emerging Approach to Support Diagnosis, Treatment Decisions, and Prognostication in Oncology

    Get PDF
    With the rapid development of new technologies, including artificial intelligence and genome sequencing, radiogenomics has emerged as a state-of-the-art science in the field of individualized medicine. Radiogenomics combines a large volume of quantitative data extracted from medical images with individual genomic phenotypes and constructs a prediction model through deep learning to stratify patients, guide therapeutic strategies, and evaluate clinical outcomes. Recent studies of various types of tumors demonstrate the predictive value of radiogenomics, and some of the issues in radiogenomic analysis, along with solutions from prior work, are presented. Although workflow criteria and internationally agreed guidelines for statistical methods still need to be established, radiogenomics represents a repeatable and cost-effective approach for the detection of continuous changes and is a promising surrogate for invasive interventions. Therefore, radiogenomics could facilitate computer-aided diagnosis, treatment, and prognosis prediction for patients with tumors in the routine clinical setting. Here, we summarize the integrated process of radiogenomics and introduce the crucial strategies and statistical algorithms involved in current studies.