
    Towards Automated Semantic Segmentation in Mammography Images

    Full text link
    Mammography images are widely used to detect non-palpable breast lesions or nodules, supporting early cancer detection and providing the opportunity to plan interventions when necessary. The identification of certain structures of interest is essential for making a diagnosis and evaluating image adequacy. Thus, computer-aided detection systems can assist medical interpretation by automatically segmenting these landmark structures. In this paper, we propose a deep learning-based framework for the segmentation of the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue on standard-view mammography images. We introduce a large private segmentation dataset and extensive experiments considering different deep-learning model architectures. Our experiments demonstrate accurate segmentation performance on varied and challenging cases, showing that this framework can be integrated into clinical practice.
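
    A minimal sketch of how such a multi-structure segmentation setup could look, assuming a PyTorch environment; the tiny encoder-decoder, the class count (four structures plus background), and the dummy tensors below are illustrative assumptions, not the framework described in the paper:

```python
# Hypothetical multi-class segmentation sketch (not the paper's model): a one-channel
# mammogram goes in, per-pixel logits for 5 classes come out
# (background, nipple, pectoral muscle, fibroglandular tissue, fatty tissue).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, n_classes, 1),            # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
images = torch.randn(2, 1, 256, 256)                # dummy mammogram crops
targets = torch.randint(0, 5, (2, 256, 256))        # dummy per-pixel labels
loss = nn.CrossEntropyLoss()(model(images), targets)
loss.backward()
```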

    PeMNet for Pectoral Muscle Segmentation

    Get PDF
    As an important imaging modality, mammography is considered the global gold standard for early detection of breast cancer. Computer-aided diagnosis (CAD) systems have played a crucial role in facilitating quicker diagnostic procedures, which could otherwise take weeks if only radiologists were involved. Some of these CAD systems require pectoral muscle segmentation to separate the breast region from the pectoral muscle for specific analysis tasks, so accurate and efficient pectoral muscle segmentation frameworks are in high demand. Here, we propose a novel deep learning framework, code-named PeMNet, for breast pectoral muscle segmentation in mammography images. PeMNet integrates a novel attention module, the Global Channel Attention Module (GCAM), which effectively improves the segmentation performance of Deeplabv3+ with minimal parameter overhead. In GCAM, channel attention maps (CAMs) are first extracted by concatenating feature maps after parallel global average pooling and global maximum pooling operations. The CAMs are then refined and scaled up by a multi-layer perceptron (MLP) for elementwise multiplication with the CAMs of the next feature level. By iterating this procedure, global CAMs (GCAMs) are formed and multiplied elementwise with the final feature maps to produce the final segmentation. In this way, CAMs from early stages of a deep convolutional network are effectively passed on to later stages, leading to better use of information. Experiments on a dataset merged from INbreast and OPTIMAM show that PeMNet greatly outperforms state-of-the-art methods, achieving an IoU of 97.46%, global pixel accuracy of 99.48%, Dice similarity coefficient of 96.30%, and Jaccard index of 93.33%.
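
    For readers unfamiliar with channel attention, the following is a hedged sketch in the spirit of the GCAM description above (parallel global average and max pooling, concatenation, an MLP, and elementwise multiplication with the next level's attention); the layer sizes, reduction ratio, and fusion by multiplication are assumptions, not PeMNet's exact design:

```python
# Sketch of a channel-attention block loosely following the GCAM description:
# parallel global average and max pooling are concatenated and refined by an MLP
# into per-channel weights that can be carried to the next feature level.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, reduction: int = 8):
        super().__init__()
        # MLP maps the concatenated (avg, max) descriptor to the next level's channel count
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_channels, in_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels // reduction, out_channels),
            nn.Sigmoid(),
        )

    def forward(self, feat, prev_cam=None):
        avg = feat.mean(dim=(2, 3))                  # global average pooling -> (B, C)
        mx = feat.amax(dim=(2, 3))                   # global max pooling     -> (B, C)
        cam = self.mlp(torch.cat([avg, mx], dim=1))  # refined channel attention map
        if prev_cam is not None:                     # fuse with the previous level's CAM
            cam = cam * prev_cam
        return cam                                   # (B, out_channels)

# Usage: weight the next level's feature maps elementwise with the propagated CAM.
feats_l1 = torch.randn(2, 64, 64, 64)
feats_l2 = torch.randn(2, 128, 32, 32)
att12 = ChannelAttention(in_channels=64, out_channels=128)
cam12 = att12(feats_l1)                              # CAM carried from level 1 to level 2
weighted_l2 = feats_l2 * cam12[:, :, None, None]     # elementwise channel weighting
```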

    A deep learning system to obtain the optimal parameters for a threshold-based breast and dense tissue segmentation

    Full text link
    Background and Objective: Breast cancer is the most frequent cancer in women. The Spanish healthcare network has established population-based screening programs in all Autonomous Communities, in which mammograms of asymptomatic women are taken for early-diagnosis purposes. Breast density assessed from digital mammograms is a biomarker known to be related to a higher risk of developing breast cancer, so it is crucial to provide a reliable method to measure breast density from mammograms. Furthermore, fully automating this segmentation process is becoming essential as the number of mammograms increases every day. Important challenges are the differences between images from different devices and the lack of an objective gold standard. This paper presents a fully automated framework based on deep learning to estimate breast density, covering breast detection, pectoral muscle exclusion, and fibroglandular tissue segmentation. Methods: A multi-center study of 1785 women whose "for presentation" mammograms were segmented by two experienced radiologists. A total of 4992 of the 6680 mammograms were used as the training corpus and the remaining 1688 formed the test corpus. The paper presents a histogram normalization step that smoothed the differences between acquisitions, a regression architecture that learned segmentation parameters as intrinsic image features, and a loss function based on the Dice score. Results: The level of concordance (Dice score) reached by the two radiologists (0.77) was also achieved by the automated framework when compared to the closest breast segmentation from the radiologists. For the images acquired with the highest-quality device, the per-device Dice score reached 0.84, while the concordance between radiologists was 0.76. Conclusions: An automatic breast density estimator based on deep learning exhibits performance similar to that of two experienced radiologists, suggesting that this system could be used to support radiologists and ease their workload.
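
    The following sketch illustrates the general idea of coupling a regressed threshold with a Dice-based loss, assuming PyTorch; the steep-sigmoid surrogate for hard thresholding and the tiny regressor are illustrative assumptions, not the paper's architecture or training recipe:

```python
# Hedged sketch: a small CNN regresses a per-image threshold t, and a steep sigmoid stands
# in for the hard threshold so that a Dice-based loss can be backpropagated.
import torch
import torch.nn as nn

def soft_dice_loss(pred_mask, true_mask, eps: float = 1e-6):
    # 1 - Dice score, computed on soft (0..1) masks flattened per batch element
    p = pred_mask.flatten(1)
    t = true_mask.flatten(1)
    inter = (p * t).sum(dim=1)
    dice = (2 * inter + eps) / (p.sum(dim=1) + t.sum(dim=1) + eps)
    return 1 - dice.mean()

class ThresholdRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1), nn.Sigmoid(),           # predicted threshold in (0, 1)
        )

    def forward(self, x):
        return self.net(x)

images = torch.rand(4, 1, 128, 128)                  # stand-ins for normalised mammograms
radiologist_masks = (torch.rand(4, 1, 128, 128) > 0.5).float()
t = ThresholdRegressor()(images)                      # (B, 1) regressed thresholds
soft_mask = torch.sigmoid(50 * (images - t[:, :, None, None]))  # differentiable threshold
loss = soft_dice_loss(soft_mask, radiologist_masks)
loss.backward()
```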

    Comparison between two packages for pectoral muscle removal on mammographic images

    Get PDF
    Background: Pectoral muscle removal is a fundamental preliminary step in computer-aided diagnosis systems for full-field digital mammography (FFDM). Currently, two open-source, publicly available packages (LIBRA and OpenBreast) provide algorithms for pectoral muscle removal within the Matlab environment. Purpose: To compare the performance of the two packages on a single database of FFDM images. Methods: Only mediolateral oblique (MLO) FFDM was considered because of the large presence of the pectoral muscle in this type of projection. To obtain the ground truth, the pectoral muscle was manually segmented by two radiologists in consensus. The removal performance of both LIBRA and OpenBreast with respect to the ground truth was compared using the Dice similarity coefficient and Cohen's kappa reliability coefficient; the Wilcoxon signed-rank test was used to assess differences in performance, and the Kruskal–Wallis test was used to check for possible dependence of the performance on breast density or image laterality. Results: FFDMs from 168 consecutive women at our institution were included in the study. Both LIBRA's Dice index and Cohen's kappa were significantly higher than OpenBreast's (Wilcoxon signed-rank test P < 0.05). No dependence on breast density or laterality was found (Kruskal–Wallis test P > 0.05). Conclusion: LIBRA performs better than OpenBreast in pectoral muscle delineation; although our study has no direct clinical application, these results are useful when choosing packages for the development of complex systems for computer-aided breast evaluation.
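
    A hedged sketch of this kind of package comparison, using NumPy, scikit-learn, and SciPy: per-image Dice and Cohen's kappa against a consensus ground truth, followed by a Wilcoxon signed-rank test on the paired Dice values. The simulated masks and error rates are placeholders, not the study's data:

```python
# Illustrative comparison of two segmentation outputs against a consensus ground truth.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

rng = np.random.default_rng(0)
ground_truth = [rng.integers(0, 2, (64, 64)) for _ in range(20)]
libra_masks = [m ^ (rng.random((64, 64)) < 0.05) for m in ground_truth]       # simulated output
openbreast_masks = [m ^ (rng.random((64, 64)) < 0.10) for m in ground_truth]  # simulated output

dice_libra = [dice(g, m) for g, m in zip(ground_truth, libra_masks)]
dice_ob = [dice(g, m) for g, m in zip(ground_truth, openbreast_masks)]
kappa_libra = [cohen_kappa_score(g.ravel(), m.ravel()) for g, m in zip(ground_truth, libra_masks)]

stat, p = wilcoxon(dice_libra, dice_ob)   # paired test on per-image Dice values
print(f"LIBRA: median Dice={np.median(dice_libra):.3f}, median kappa={np.median(kappa_libra):.3f}")
print(f"OpenBreast: median Dice={np.median(dice_ob):.3f}; Wilcoxon p={p:.4f}")
```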

    Automated pectoral muscle identification on MLO-view mammograms: Comparison of deep neural network to conventional computer vision

    Full text link
    Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/149204/1/mp13451_am.pdf https://deepblue.lib.umich.edu/bitstream/2027.42/149204/2/mp13451.pd

    Breast pectoral muscle segmentation in mammograms using a modified holistically-nested edge detection network

    Get PDF
    This paper presents a method for automatic breast pectoral muscle segmentation in mediolateral oblique mammograms using a Convolutional Neural Network (CNN) inspired by the Holistically-nested Edge Detection (HED) network. Most existing methods in the literature are based on hand-crafted models such as straight-line or curve-based techniques, or a combination of both. Unfortunately, such models are insufficient when dealing with complex shape variations of the pectoral muscle boundary and when the boundary is unclear due to overlapping breast tissue. To compensate for these issues, we propose a neural network framework that incorporates multi-scale and multi-level learning, capable of learning complex hierarchical features to resolve spatial ambiguity in estimating the pectoral muscle boundary. For this purpose, we modified the HED network architecture to specifically find ‘contour-like’ objects in mammograms. The proposed framework produces a probability map from which an initial pectoral muscle boundary is estimated; we then process these maps by extracting morphological properties and apply two different post-processing steps to obtain the final pectoral muscle boundary. Quantitative evaluation shows that the proposed method is comparable with alternative state-of-the-art methods, producing average values of 94.8 ± 8.5% and 97.5 ± 6.3% for the Jaccard and Dice similarity metrics, respectively, across four different databases.
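
    As an illustration of the post-processing idea (turning a boundary probability map into a single curve), the following generic recipe thresholds the map, keeps the largest connected component, and thins it with scikit-image; it is not the paper's exact two post-processing steps:

```python
# Generic post-processing of an edge-probability map into a one-pixel-wide boundary.
import numpy as np
from skimage.measure import label
from skimage.morphology import skeletonize

def boundary_from_probability(prob_map: np.ndarray, thr: float = 0.5) -> np.ndarray:
    binary = prob_map > thr                       # candidate boundary pixels
    labels = label(binary)
    if labels.max() == 0:
        return np.zeros_like(binary)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                  # ignore the background label
    largest = labels == sizes.argmax()            # keep the largest connected component
    return skeletonize(largest)                   # one-pixel-wide boundary estimate

prob_map = np.random.rand(256, 256)               # stand-in for the network's output
boundary = boundary_from_probability(prob_map, thr=0.8)
```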

    Wavelet-Based Automatic Breast Segmentation for Mammograms

    Get PDF
    As part of a first-of-its-kind analysis of longitudinal mammograms, there are thousands of mammograms that need to be analyzed computationally. As a pre-processing step, each mammogram needs to be converted into a binary (black or white) spatial representation, called a mammographic mask, in order to delineate breast tissue from the pectoral muscle and image background. The current methodology for completing this task is for a lab member to manually trace the outline of the breast, which takes approximately three minutes per mammogram. Thus, reducing the time cost and human subjectivity of this task across all mammograms in a large dataset is extremely valuable. In this thesis, an automated breast segmentation algorithm was adapted from a multi-scale gradient-based edge detection approach called the 2D Wavelet Transform Modulus Maxima (WTMM) segmentation method. This automated masking algorithm uses the first-derivative-of-Gaussian wavelet transform to identify candidate edge-detection contour lines called maxima chains. The candidate chains are then transformed into a binary mask, which is compared with the original manual delineation using the Sørensen-Dice coefficient (DSC). The analysis of 556 grayscale mammograms with this methodology produced median DSCs of 0.988 and 0.973 for craniocaudal (CC) and mediolateral oblique (MLO) mammograms, respectively. Based on these median DSCs, where a perfect overlap score is 1, it can be concluded that a wavelet-based automatic breast segmentation algorithm can quickly segment out the pectoral muscle and produce accurate binary spatial representations of breast tissue in grayscale mammograms.
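
    A simplified, hedged stand-in for the WTMM masking idea using SciPy's Gaussian-derivative filters: multi-scale gradient magnitudes plus a threshold on the smoothed image give a rough binary mask, which is scored against a manual mask with the Dice coefficient. The scales and thresholds are illustrative only, not the thesis's algorithm:

```python
# Rough multi-scale gradient-based masking, scored with the Dice coefficient.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, gaussian_filter

def rough_breast_mask(img: np.ndarray, scales=(2, 4, 8), thr: float = 0.1) -> np.ndarray:
    img = (img - img.min()) / (np.ptp(img) + 1e-9)            # normalise to [0, 1]
    edges = sum(gaussian_gradient_magnitude(img, sigma=s) for s in scales)
    smooth = gaussian_filter(img, sigma=scales[-1])
    return (smooth > thr) | (edges > edges.mean() + 2 * edges.std())

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

mammogram = np.random.rand(512, 408)               # stand-in for a grayscale mammogram
manual_mask = np.ones_like(mammogram, dtype=bool)  # stand-in for the hand-traced mask
print("DSC:", dice(rough_breast_mask(mammogram), manual_mask))
```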

    A Comparative Study on the Methods Used for the Detection of Breast Cancer

    Get PDF
    Breast cancer has become a leading cause of death among women worldwide. At an early stage, a tumor in the breast is hard to detect, and manual attempts have proven time-consuming and inefficient in many cases. Hence there is a need for efficient methods that diagnose cancerous cells with high accuracy and without human involvement. Mammography is a specialized X-ray imaging technique that uses high-resolution film, so that tumors in the breast can be detected well. This paper presents a comparative study of various data mining methods for the detection of breast cancer using image processing techniques.
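
    As a hedged illustration of the kind of comparison such a study performs, the snippet below evaluates several standard classifiers with cross-validation on scikit-learn's built-in Wisconsin breast cancer dataset, which stands in here for mammography-derived features; none of this reflects the paper's specific data or methods:

```python
# Illustrative comparison of standard classifiers on a breast cancer feature dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "SVM (RBF)": SVC(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-NN": KNeighborsClassifier(),
}
for name, clf in models.items():
    # Standardise features, then report 5-fold cross-validated accuracy.
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name:20s} accuracy = {scores.mean():.3f} ± {scores.std():.3f}")
```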