Fibroglandular Tissue Segmentation in Breast MRI using Vision Transformers -- A multi-institutional evaluation
Accurate and automatic segmentation of fibroglandular tissue in breast MRI
screening is essential for the quantification of breast density and background
parenchymal enhancement. In this retrospective study, we developed and
evaluated a transformer-based neural network for breast segmentation (TraBS) in
multi-institutional MRI data, and compared its performance to the well
established convolutional neural network nnUNet. TraBS and nnUNet were trained
and tested on 200 internal and 40 external breast MRI examinations using manual
segmentations generated by experienced human readers. Segmentation performance
was assessed in terms of the Dice score and the average symmetric surface
distance. The Dice score for nnUNet was lower than for TraBS on the internal
test set (0.909±0.069 versus 0.916±0.067, P<0.001) and on the external
test set (0.824±0.144 versus 0.864±0.081, P=0.004). Moreover, the
average symmetric surface distance was higher (i.e., worse) for nnUNet than for
TraBS on the internal (0.657±2.856 versus 0.548±2.195, P=0.001) and on
the external test set (0.727±0.620 versus 0.584±0.413, P=0.03). Our
study demonstrates that transformer-based networks improve the quality of
fibroglandular tissue segmentation in breast MRI compared to
convolutional-based models like nnUNet. These findings might help to enhance
the accuracy of breast density and parenchymal enhancement quantification in
breast MRI screening.
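The two metrics reported in this abstract, the Dice score and the average symmetric surface distance (ASSD), can both be computed directly from binary segmentation masks. A minimal sketch (not the authors' TraBS/nnUNet code; assumes `numpy` and `scipy`, with surfaces approximated as mask-minus-erosion):

```python
import numpy as np
from scipy import ndimage

def dice_score(a, b):
    """Dice overlap between two boolean masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface(mask):
    """Boundary voxels: the mask minus its binary erosion."""
    return np.logical_xor(mask, ndimage.binary_erosion(mask))

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two boolean masks."""
    sa, sb = surface(a), surface(b)
    # distance from each voxel to the nearest surface voxel of the other mask
    d_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    d_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return (d_to_b[sa].sum() + d_to_a[sb].sum()) / (sa.sum() + sb.sum())

# identical masks give Dice 1.0 and ASSD 0.0
a = np.zeros((8, 8, 8), dtype=bool)
a[2:6, 2:6, 2:6] = True
assert dice_score(a, a) == 1.0 and assd(a, a) == 0.0
```

The `spacing` argument matters in practice: ASSD is reported in millimetres, so the MRI voxel spacing should be passed rather than the isotropic default of 1.0.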
Breast cancer risk assessment through deep learning applied to mammography screening data: based on an automated breast density assessment method
Thesis (Master's) -- Graduate School of Public Health, Department of Public Health, Seoul National University, August 2019.
Introduction: Mammographic density adjusted for age and body mass index (BMI) is the most predictive marker of breast cancer after familial causes and genetic markers. The aim of this study was to develop a deep learning (DL) algorithm to assess mammographic density.
Methods: A total of 2464 participants (834 cases and 1630 controls) were collected from Asan Medical Center and Samsung Medical Center, Korea. Cranio-caudal view mammographic images were obtained using a full-field digital mammography system. Mammographic densities were measured using the CUMULUS software. The resulting DL algorithm was tested on a held-out test set of 493 women. Agreement between the DL algorithm and the expert was assessed with the correlation coefficient and weighted κ statistics. Risk associations of the DL measures were evaluated with the area under the curve (AUC) and odds per adjusted standard deviation (OPERA).
Results: The DL model showed very good agreement with the expert for both percent density and dense area (r = 0.94-0.96 and κ = 0.89-0.91). Risk associations of the DL measures were comparable to those of the expert's manual measures. DL measures adjusted for age and BMI showed strong risk associations with breast cancer (OPERA = 1.51-1.63 and AUC = 0.61-0.64).
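The agreement statistics used here (correlation and weighted κ) are standard and straightforward to reproduce. A sketch with hypothetical density readings (the values and category cut-points below are illustrative, not from the thesis), using `sklearn.metrics.cohen_kappa_score`:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# hypothetical percent-density readings from the expert and the DL model
expert = np.array([10.2, 25.1, 33.4, 8.7, 41.0, 19.5])
dl     = np.array([11.0, 24.0, 35.0, 9.5, 39.8, 21.0])

# Pearson correlation of the continuous measures
r = np.corrcoef(expert, dl)[0, 1]

# weighted kappa on quartile-like categories (cut-points are illustrative)
bins = [0, 12.5, 25, 37.5, 100]
kappa = cohen_kappa_score(np.digitize(expert, bins),
                          np.digitize(dl, bins),
                          weights='quadratic')
```

Quadratic weighting penalizes a two-category disagreement four times as heavily as a one-category disagreement, which suits ordered density categories.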
Conclusions: The DL model can be used to measure mammographic density, which is a strong risk factor for breast cancer. This study showed the potential of a DL algorithm as a mammogram-based risk prediction model in breast cancer screening.
Mammographic density, defined as the bright region on a mammogram corresponding to the fibroglandular tissue in the breast, is widely known as a strong risk factor for breast cancer. However, because its measurement is time-consuming and costly, mammographic density has been used only to a limited extent in breast cancer screening. The aim of this study was to develop a deep-learning-based mammographic density measure that could be incorporated into breast cancer risk prediction models for screening.
The study was conducted on a total of 2464 participants (cases: 834, controls: 1630) collected from the breast cancer screening data of Asan Medical Center and Samsung Medical Center. For cases, density was measured on the breast contralateral to the lesion; for controls, on a randomly selected breast. An expert with more than five years of experience measured mammographic density (dense area, cm2, and percent density, %) using the CUMULUS software. Using these expert measurements as training data, a fully convolutional network (FCN)-based deep learning model was built, then applied to the test data to evaluate its agreement with the expert measurements and its predictive power for breast cancer.
The deep learning model showed high agreement with the expert (r = 0.94-0.96, weighted κ = 0.89-0.91). Evaluating the predictive power of the deep-learning-based measures adjusted for age and BMI confirmed that the model predicts breast cancer at a level similar to the expert (expert, AUC = 0.62-0.63; deep learning model, AUC = 0.61-0.64).
This study showed that deep learning can replace the current labor-intensive method of measuring mammographic density, offering a cost-effective way to include mammographic density in breast cancer risk prediction models. If such mammogram-based risk prediction models are applied to breast cancer screening, more precise risk assessment could effectively identify women at high risk of breast cancer, and if tailored prevention strategies are then applied to these high-risk groups, this is expected to contribute in the long term to earlier detection of breast cancer and reduced mortality.
1 Introduction 1
2 Materials and Methods 3
2.1 Data collection 3
2.2 Measurement of mammographic density 4
2.3 Development of DL model 6
2.3.1 Establishing ground truth 6
2.3.2 Image preprocessing 6
2.3.3 Establishing DL model 6
2.3.4 Estimation of mammographic density 11
2.4 Statistical methods 14
2.4.1 Agreement statistics 14
2.4.2 Evaluation of risk association 15
3 Results 16
3.1 Characteristics of study participants 16
3.2 Agreement of DL model 17
3.3 Breast cancer risk profiles 21
4 Discussion 24
Bibliography 26
Abstract 29
3D Deep Learning on Medical Images: A Review
Rapid advancements in machine learning and graphics processing technologies,
together with the availability of medical imaging data, have led to a rapid
increase in the use of deep learning models in the medical domain. This growth
was amplified by advancements in convolutional neural network (CNN) based architectures, which
were adopted by the medical imaging community to assist clinicians in disease
diagnosis. Since the grand success of AlexNet in 2012, CNNs have been
increasingly used in medical image analysis to improve the efficiency of human
clinicians. In recent years, three-dimensional (3D) CNNs have been employed for
analysis of medical images. In this paper, we trace the history of how the 3D
CNN was developed from its machine learning roots, give a brief mathematical
description of the 3D CNN, and describe the preprocessing steps required for
medical images before they are fed to 3D CNNs. We review the significant
research in the field of 3D medical imaging analysis using 3D CNNs (and their
variants) for different tasks such as classification, segmentation, detection, and
localization. We conclude by discussing the challenges associated with the use
of 3D CNNs in the medical imaging domain (and the use of deep learning models,
in general) and possible future trends in the field.
Comment: 13 pages, 4 figures, 2 tables
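The core operation the review describes, extending convolution to volumetric data, can be illustrated with a naive NumPy implementation of a single valid-mode 3D convolution (a didactic sketch, not a practical substitute for a framework's `Conv3d`):

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation, as in CNNs)."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # each output voxel is a weighted sum over a d x h x w sub-volume
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

# a 3x3x3 averaging kernel applied to a uniform 8x8x8 "scan"
vol = np.ones((8, 8, 8))
k = np.full((3, 3, 3), 1 / 27)
feat = conv3d(vol, k)   # shape (6, 6, 6)
```

The triple loop makes the volumetric sliding-window structure explicit; real 3D CNN layers additionally learn the kernel weights and stack many channels.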
Deep learning in medical imaging and radiation therapy
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd
Advanced Computational Methods for Oncological Image Analysis
Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this goal, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians' unique knowledge, can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations, such as segmentation, co-registration, classification, and dimensionality reduction, and multi-omics data integration.
Developing Novel Computer Aided Diagnosis Schemes for Improved Classification of Mammography Detected Masses
Mammography imaging is a population-based breast cancer screening tool that has greatly aided the decrease in breast cancer mortality over time. Although mammography is the most frequently employed breast imaging modality, its performance is often unsatisfactory, with low sensitivity and high false-positive rates, because reading and interpreting mammography images remains difficult owing to the heterogeneity of breast tumors and dense, overlapping fibroglandular tissue. To help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes to provide radiologists with decision-making support tools. In this dissertation, I investigate several novel methods for improving the performance of a CAD system in distinguishing between malignant and benign masses.
In the first study, we test the hypothesis that handcrafted radiomics features and deep learning features contain complementary information, and that fusing the two types of features therefore increases the feature representation of each mass and improves the performance of a CAD system in distinguishing malignant from benign masses. Regions of interest (ROI) surrounding suspicious masses are extracted and two types of features are computed. The first set consists of 40 radiomic features and the second set includes deep learning (DL) features computed from a pretrained VGG16 network. DL features are extracted from two pseudo-color image sets, producing a total of three feature vectors after feature extraction, namely: handcrafted, DL-stacked, and DL-pseudo. Linear support vector machines (SVM) are trained using each feature set alone and in combinations. Results show that the fusion CAD system significantly outperforms the systems using either feature type alone (AUC = 0.756±0.042, p < 0.05). This study demonstrates that handcrafted and DL features contain useful complementary information and that fusing these two types of features increases the CAD classification performance.
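The early-fusion scheme described above, concatenating handcrafted radiomics features with deep features and training a linear SVM, can be sketched as follows. The data here are synthetic stand-ins, not the dissertation's mammography features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                        # 0 = benign, 1 = malignant (synthetic)
radiomic = rng.normal(y[:, None], 1.0, (n, 40))  # 40 handcrafted features per ROI
deep = rng.normal(y[:, None], 1.0, (n, 64))      # stand-in for pooled VGG16 features

fused = np.hstack([radiomic, deep])              # early fusion by concatenation
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(fused, y)
acc = clf.score(fused, y)                        # training accuracy on synthetic data
```

Standardizing before the SVM keeps the two feature families on a common scale, which matters when radiomics and network activations have very different ranges.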
In the second study, we expand upon the first study and develop a novel CAD framework that fuses information extracted from ipsilateral views of bilateral mammograms using both DL and radiomics feature extraction methods. Each case in this study is represented by four images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breast. First, we extract matching ROIs from each of the four views using an ipsilateral matching and bilateral registration scheme to ensure masses are appropriately matched. Next, the handcrafted radiomics features and VGG16 model-generated features are extracted from each ROI, resulting in eight feature vectors. Then, after reducing feature dimensionality and quantifying the bilateral asymmetry, we test four fusion methods. Results show that multi-view CAD systems significantly outperform single-view systems (AUC = 0.876±0.031 vs AUC = 0.817±0.026 for the CC view and 0.792±0.026 for the MLO view, p < 0.001). The study demonstrates that the shift from single-view CAD to four-view CAD and the inclusion of both deep transfer learning and radiomics features increase the feature representation of the mass and thus improve CAD performance in distinguishing between malignant and benign breast lesions.
In the third study, we build upon the first and second studies and investigate the effects of pseudo color image generation in classifying suspicious mammography detected breast lesions as malignant or benign using deep transfer learning in a multi-view CAD scheme. Seven pseudo color image sets are created through a combination of the original grayscale image, a histogram equalized image, a bilaterally filtered image, and a segmented mass image. Using the multi-view CAD framework developed in the previous study, we observe that the two pseudo-color sets created using a segmented mass in one of the three image channels performed significantly better than all other pseudo-color sets (AUC=0.882, p<0.05 for all comparisons and AUC=0.889, p<0.05 for all comparisons). The results of this study support our hypothesis that pseudo color images generated with a segmented mass optimize the mammogram image feature representation by providing increased complementary information to the CADx scheme which results in an increase in the performance in classifying suspicious mammography detected breast lesions as malignant or benign.
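Pseudo-color generation of the kind described, filling the three input channels with the grayscale image, an enhanced version, and a segmented-mass image, can be sketched as follows (one hypothetical channel combination for illustration; the dissertation tests seven such sets):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size          # cumulative distribution in [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def pseudo_color(gray, mass_mask):
    """Three-channel pseudo-color input: original, equalized, segmented mass."""
    segmented = np.where(mass_mask, gray, 0).astype(np.uint8)
    return np.dstack([gray, hist_equalize(gray), segmented])

# synthetic 8-bit "mammogram" and a square mass mask
gray = (np.arange(64 * 64) % 256).reshape(64, 64).astype(np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
rgb = pseudo_color(gray, mask)   # shape (64, 64, 3), ready for an RGB-input CNN
```

Packing the segmented mass into one channel is what lets a pretrained RGB network such as VGG16 see the lesion explicitly alongside the full-context image.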
In summary, each of the studies presented in this dissertation aims to increase the accuracy of a CAD system in classifying suspicious mammography-detected masses. Each study takes a novel approach to increasing the feature representation of the mass that needs to be classified. The results of each study demonstrate the potential utility of these CAD schemes as an aid to radiologists in the clinical workflow.
Functional Magnetic Resonance Imaging of Breast Cancer
This thesis examines the use of magnetic resonance imaging (MRI) techniques in the detection of breast cancer and the prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NACT).
This thesis compares the diagnostic performance of diffusion-weighted imaging (DWI) models in the breast using a systematic review and meta-analysis. Advanced diffusion models have been proposed that may improve the performance of standard DWI using the apparent diffusion coefficient (ADC) to discriminate between malignant and benign breast lesions. Pooling the results from 73 studies, comparable diagnostic accuracy is shown using the ADC and parameters from the intra-voxel incoherent motion (IVIM) and diffusion tensor imaging (DTI) models. This work highlights a lack of standardisation in DWI protocols and methodology. Conventional acquisition techniques used in DWI often suffer from image artefacts and low spatial resolution. A multi-shot DWI technique, multiplexed sensitivity encoding (MUSE), can improve the image quality of DWI. A MUSE protocol has been optimised through a series of phantom experiments and validated in 20 patients. Comparing MUSE to conventional DWI, statistically significant improvements are shown in distortion and blurring metrics and qualitative image quality metrics such as lesion conspicuity and diagnostic confidence, increasing the clinical utility of DWI.
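The standard DWI model referenced here assumes monoexponential signal decay, S_b = S_0 · exp(−b · ADC), so an ADC map can be recovered from two acquisitions at different b-values. A minimal sketch (synthetic data; clinical pipelines typically fit several b-values):

```python
import numpy as np

def adc_map(s0, sb, b=800.0):
    """ADC (mm^2/s) from two DWI volumes: S_b = S_0 * exp(-b * ADC)."""
    s0 = np.clip(np.asarray(s0, float), 1e-6, None)  # guard against log(0)
    sb = np.clip(np.asarray(sb, float), 1e-6, None)
    return np.log(s0 / sb) / b

# synthetic voxels with a known ADC of 1.0e-3 mm^2/s
s0 = np.full((4, 4), 1000.0)
sb = s0 * np.exp(-800.0 * 1.0e-3)
adc = adc_map(s0, sb, b=800.0)   # recovers 1.0e-3 everywhere
```

Malignant lesions typically show restricted diffusion (lower ADC) than benign tissue, which is the basis of the discrimination the meta-analysis pools.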
This thesis investigates the use of dynamic contrast-enhanced MRI (DCE-MRI) in the detection of breast cancer and the prediction of pCR. Abbreviated MRI (ABB-MRI) protocols have gained increasing attention for the detection of breast cancer, acquiring a shortened version of a full diagnostic protocol (FDP-MRI) in a fraction of the time, reducing the cost of the examination. The diagnostic performance of abbreviated and full diagnostic protocols is systematically compared using a meta-analysis. Pooling 13 studies, equivalent diagnostic accuracy is shown for ABB-MRI in cohorts enriched with cancers, and lower but not significantly different diagnostic performance is shown in screening cohorts.
Higher-order imaging features derived from pre-treatment DCE-MRI could be used to predict pCR and inform decisions regarding targeted treatment, avoiding unnecessary toxicity. Using data from 152 patients undergoing NACT, radiomics features are extracted from baseline DCE-MRI and machine learning models are trained to predict pCR with moderate accuracy. The stability of feature selection using logistic regression classification is demonstrated, and a comparison of models trained using features from different time points in the dynamic series demonstrates that a full dynamic series enables the most accurate prediction of pCR.
GE Healthcare funded PhD Studentship
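A radiomics-based pCR classifier of the kind described, logistic regression over baseline features evaluated by cross-validated AUC, can be sketched as follows. The features and labels are synthetic; only the cohort size (152) is taken from the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 152                                  # cohort size from the abstract
pcr = rng.integers(0, 2, n)              # synthetic pCR labels
features = rng.normal(0.0, 1.0, (n, 30))
features[:, :5] += pcr[:, None]          # five informative radiomics-like features

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, features, pcr, cv=5, scoring='roc_auc').mean()
```

Cross-validated AUC, rather than training accuracy, is the appropriate figure of merit at this sample size, where 30 features against 152 patients can easily overfit.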
The Era of Radiogenomics in Precision Medicine: An Emerging Approach to Support Diagnosis, Treatment Decisions, and Prognostication in Oncology
With the rapid development of new technologies, including artificial intelligence and genome sequencing, radiogenomics has emerged as a state-of-the-art science in the field of individualized medicine. Radiogenomics combines a large volume of quantitative data extracted from medical images with individual genomic phenotypes and constructs a prediction model through deep learning to stratify patients, guide therapeutic strategies, and evaluate clinical outcomes. Recent studies of various types of tumors demonstrate the predictive value of radiogenomics, and some of the issues in radiogenomic analysis, together with solutions proposed in prior work, are presented. Although the workflow criteria and internationally agreed guidelines for statistical methods still need to be confirmed, radiogenomics represents a repeatable and cost-effective approach for the detection of continuous changes and is a promising surrogate for invasive interventions. Therefore, radiogenomics could facilitate computer-aided diagnosis, treatment, and prediction of prognosis in patients with tumors in the routine clinical setting. Here, we summarize the integrated process of radiogenomics and introduce the crucial strategies and statistical algorithms involved in current studies.