158 research outputs found

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of cancer diagnosis, covering the steps of the diagnostic process and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. Doctors use them regularly for cancer diagnosis, although they are not considered efficient enough to achieve high performance. To address all types of audience, the basic evaluation criteria are also discussed; they include the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and the Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. The combination of artificial intelligence and cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models successfully applied to different types of cancer; considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
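The evaluation criteria listed above are straightforward to reproduce. The following minimal NumPy sketch (function names and the toy masks are illustrative, not taken from the reviewed paper) computes sensitivity, specificity, precision, accuracy, F1, Dice coefficient, and Jaccard index from a binary prediction and its ground truth.
```python
import numpy as np

def confusion_counts(pred, truth):
    """True/false positives and negatives for binary masks or label vectors."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp, tn, fp, fn

def evaluation_metrics(pred, truth, eps=1e-8):
    """Sensitivity, specificity, precision, accuracy, F1, Dice, and Jaccard index."""
    tp, tn, fp, fn = confusion_counts(pred, truth)
    sensitivity = tp / (tp + fn + eps)          # recall / true positive rate
    specificity = tn / (tn + fp + eps)
    precision   = tp / (tp + fp + eps)
    accuracy    = (tp + tn) / (tp + tn + fp + fn + eps)
    f1   = 2 * precision * sensitivity / (precision + sensitivity + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)    # equals F1 for binary masks
    jaccard = tp / (tp + fp + fn + eps)         # Intersection over Union
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                f1=f1, dice=dice, jaccard=jaccard)

# Example: compare a toy predicted lesion mask against its ground truth
pred  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
truth = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
print(evaluation_metrics(pred, truth))
```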

    Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network.

    Mammogram inspection in search of breast tumors is a tough assignment that radiologists must carry out frequently. Therefore, image analysis methods are needed for the detection and delineation of breast tumors, which portray crucial morphological information that supports reliable diagnosis. In this paper, we propose a conditional Generative Adversarial Network (cGAN) devised to segment a breast tumor within a region of interest (ROI) in a mammogram. The generative network learns to recognize the tumor area and to create the binary mask that outlines it. In turn, the adversarial network learns to distinguish between real (ground truth) and synthetic segmentations, thus pushing the generative network to create binary masks that are as realistic as possible. The cGAN works well even when the number of training samples is limited. As a consequence, the proposed method outperforms several state-of-the-art approaches. Our working hypothesis is corroborated by diverse segmentation experiments performed on INbreast and a private in-house dataset. The proposed segmentation model, working on an image crop containing the tumor as well as a significant surrounding area of healthy tissue (loose frame ROI), achieves a high Dice coefficient and Intersection over Union (IoU) of 94% and 87%, respectively. In addition, a shape descriptor based on a Convolutional Neural Network (CNN) is proposed to classify the generated masks into four tumor shapes: irregular, lobular, oval, and round. The shape descriptor was trained on DDSM, since it provides shape ground truth (while the other two datasets do not), yielding an overall accuracy of 80%, which outperforms the current state of the art.
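As an illustration of the adversarial setup described above, the following PyTorch sketch pairs a toy generator and discriminator and computes the two losses. The tiny networks, the BCE segmentation term, and the loss weight are placeholder assumptions for demonstration only, not the architecture or objective actually used in the paper.
```python
import torch
import torch.nn as nn

# Placeholder networks: a real implementation would use an encoder-decoder
# generator and a patch-based discriminator conditioned on the input ROI.
generator     = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(8, 1))

bce = nn.BCEWithLogitsLoss()

def cgan_losses(roi, true_mask, lambda_seg=10.0):
    """Adversarial + segmentation losses for one batch of mammogram ROIs."""
    fake_mask = generator(roi)
    # The discriminator sees (image, mask) pairs: real vs. generated masks.
    d_real = discriminator(torch.cat([roi, true_mask], dim=1))
    d_fake = discriminator(torch.cat([roi, fake_mask.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    # The generator tries to fool the discriminator and match the ground-truth mask.
    g_adv = bce(discriminator(torch.cat([roi, fake_mask], dim=1)),
                torch.ones_like(d_real))
    g_seg = nn.functional.binary_cross_entropy(fake_mask, true_mask)
    return d_loss, g_adv + lambda_seg * g_seg

# Toy usage with random tensors standing in for ROI crops and binary masks
roi  = torch.rand(4, 1, 64, 64)
mask = (torch.rand(4, 1, 64, 64) > 0.5).float()
d_loss, g_loss = cgan_losses(roi, mask)
print(d_loss.item(), g_loss.item())
```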

    Computer aided diagnosis system for breast cancer using deep learning.

    The recent rise of big data technology, surrounding electronic systems and developed toolkits, has given birth to new promises for Artificial Intelligence (AI). With the continuous use of data-centric systems and machines in our lives, such as social media, surveys, emails, and reports, data has become the center of attention for scientists, motivating them to provide more decision-making and operational support systems across multiple domains. With the recent breakthroughs in artificial intelligence, machine learning and deep learning models have achieved remarkable advances in computer vision, e-commerce, cybersecurity, and healthcare. In particular, numerous applications provide efficient solutions to assist radiologists and doctors with medical imaging analysis, which remains the essential visual representation used to construct the final observation and diagnosis. Medical research in cancerology and oncology has recently been blended with knowledge gained from computer engineering and data science experts. In this context, automatic assistance, commonly known as a Computer-Aided Diagnosis (CAD) system, has become a popular area of research and development in recent decades. CAD systems have been developed using multidisciplinary knowledge and expertise and are used to analyze patient information to assist clinicians and practitioners in their decision-making process. Detecting and investigating abnormal tumors in order to treat and prevent cancer remains a crucial task that radiologists and oncologists face every day. Therefore, a CAD system can provide decision support for many applications in the cancer patient care process, such as lesion detection, characterization, cancer staging, tumor assessment, recurrence, and prognosis prediction. Breast cancer is considered one of the most common types of cancer in females across the world. It is also a leading cause of mortality among women, and its incidence has increased drastically every year. Early detection and diagnosis of abnormalities in screened breasts have been acknowledged as the optimal solution to examine the risk of developing breast cancer and thus reduce the increasing mortality rate. Accordingly, this dissertation proposes a new state-of-the-art CAD system for breast cancer diagnosis that is based on deep learning technology and cutting-edge computer vision techniques. Mammography screening has been recognized as the most effective tool for detecting breast lesions early and reducing the mortality rate. It helps reveal abnormalities in the breast such as mass lesions, architectural distortion, and microcalcifications. With the number of patients screened daily continuously increasing, a second-reading tool or assistance system could support the process of breast cancer diagnosis. Mammograms can be obtained using different modalities, such as X-ray scanners and Full-Field Digital Mammography (FFDM) systems. The quality of the mammograms, the characteristics of the breast (e.g., density, size), and/or those of the tumors (e.g., location, size, shape) can affect the final diagnosis; radiologists may therefore miss lesions and consequently produce false detections and diagnoses. This work is thus motivated to improve the reading of mammograms in order to increase the accuracy of these challenging tasks.
The efforts presented in this work consist of the design and implementation of new neural network models for a fully integrated CAD system dedicated to breast cancer diagnosis. In the first step, the approach automatically detects and identifies breast lesions in entire mammograms using a fusion-models methodology. The second step then focuses only on mass lesions: the proposed system segments the detected bounding boxes of the mass lesions to mask their background. A new neural network architecture for mass segmentation is suggested, integrated with a new data enhancement and augmentation technique. Finally, a third stage uses a stacked ensemble of neural networks to classify and diagnose the pathology (i.e., malignant or benign), the Breast Imaging Reporting and Data System (BI-RADS) assessment score (i.e., from 2 to 6), and/or the shape (i.e., round, oval, lobulated, irregular) of the segmented breast lesions. Another contribution applies the first stage of the CAD system to a retrospective analysis and comparison of the model on prior mammograms of a private dataset; this work joins the learning of the detection and classification model with an image-to-image mapping between prior and current screening views. Each step of the CAD system was evaluated and tested on public and private datasets, and the results were fairly compared against benchmark mammography datasets. The integrated framework for the CAD system was also tested for deployment and showcase. The performance of the CAD system for the detection and identification of breast masses reached an overall accuracy of 97%. The segmentation of breast masses was evaluated together with the previous stage, and the approach achieved an overall performance of 92%. Finally, the classification and diagnosis step that defines the outcome of the CAD system reached an overall pathology classification accuracy of 96%, a BI-RADS categorization accuracy of 93%, and a shape classification accuracy of 90%. The results given in this dissertation indicate that the suggested integrated framework may surpass current deep learning approaches by using all the proposed automated steps. A limitation of the proposed work is the long training time of the different methods, which is due to the heavy computation of the developed neural networks and their large number of trainable parameters. Future work can explore new directions by combining different mammography datasets and shortening the training of deep learning models. Moreover, the CAD system could be upgraded with annotated datasets to integrate more breast cancer lesions, such as calcification and architectural distortion. The proposed framework was first developed to help detect and identify suspicious breast lesions in X-ray mammograms. Next, the work focused only on mass lesions and segmented the detected ROIs to remove the tumor background and highlight the contours, texture, and shape of the lesions. Finally, the diagnostic decision was predicted to classify the pathology of the lesions and to investigate other characteristics, such as the tumor grading assessment and the shape type. The dissertation presents a CAD system to assist doctors and experts in identifying the risk of breast cancer.
Overall, the proposed CAD method combines advances in image processing, deep learning, and image-to-image translation for a biomedical application.
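To illustrate the stacked-ensemble idea used in the final classification stage, here is a small scikit-learn sketch. The base learners, the random stand-in features, and the logistic-regression meta-learner are assumptions for demonstration only; the dissertation itself stacks deep neural networks trained on segmented lesion images.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in lesion feature vectors and benign/malignant labels
X, y = rng.normal(size=(200, 32)), rng.integers(0, 2, size=200)

# Base learners (stand-ins for the CNN members of the ensemble)
base_models = [
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y),
]

def stack_predictions(models, features):
    """Concatenate per-class probabilities from every base model."""
    return np.hstack([m.predict_proba(features) for m in models])

# A meta-learner combines the stacked probabilities into the final diagnosis
meta = LogisticRegression(max_iter=1000).fit(stack_predictions(base_models, X), y)
print(meta.predict(stack_predictions(base_models, X[:5])))
```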

    Analyzing the breast tissue in mammograms using deep learning

    Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue area that appears white and bright on mammograms, commonly referred to as breast percent density (PD%). MBD is a risk factor for breast cancer and a risk factor for masking tumors. However, accurate MBD estimation by visual assessment is still a challenge due to faint contrast and significant variations in background fatty tissue in mammograms. In addition, correctly interpreting mammogram images requires highly trained medical experts: it is difficult, time-consuming, expensive, and error-prone. Moreover, dense breast tissue can make it harder to identify breast cancer and is associated with an increased risk of breast cancer. For example, it has been reported that women with high breast density have a four- to six-fold increased risk of developing the disease compared to women with low breast density. The key to breast density computation and breast density classification is to correctly detect the dense tissues in mammographic images. Many methods have been proposed for breast density estimation; however, most are not automated. Besides, they have been badly affected by the low signal-to-noise ratio and the variability of density in appearance and texture. It would therefore be helpful to have a computer-aided diagnosis (CAD) system to assist the doctor in analyzing and diagnosing mammograms automatically. Current developments in deep learning methods motivate us to improve current breast density analysis systems. The main focus of the present thesis is to develop a system that automates breast density analysis (namely breast density segmentation (BDS), breast density percentage (BDP), and breast density classification (BDC)) using deep learning techniques, and to apply it to temporal mammograms after treatment in order to analyze breast density changes and identify risky and suspicious patients.
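Breast density percentage itself is a simple ratio once the dense-tissue and breast-region masks are available. The NumPy sketch below assumes both masks come from a segmentation step such as BDS; the quantitative category cut-offs are illustrative assumptions, not the classification scheme used in the thesis.
```python
import numpy as np

def breast_density_percentage(dense_mask, breast_mask):
    """PD% = dense fibroglandular pixels / breast-region pixels * 100."""
    dense_mask  = dense_mask.astype(bool) & breast_mask.astype(bool)
    breast_area = np.count_nonzero(breast_mask)
    if breast_area == 0:
        raise ValueError("empty breast mask")
    return 100.0 * np.count_nonzero(dense_mask) / breast_area

def density_category(pd_percent):
    """Rough mapping of PD% to four qualitative density classes (assumed cut-offs)."""
    if pd_percent < 25:
        return "almost entirely fatty"
    if pd_percent < 50:
        return "scattered fibroglandular"
    if pd_percent < 75:
        return "heterogeneously dense"
    return "extremely dense"

# Toy example with random masks standing in for segmentation output
breast = np.ones((256, 256), dtype=bool)
dense  = np.random.rand(256, 256) > 0.6
pd = breast_density_percentage(dense, breast)
print(round(pd, 1), density_category(pd))
```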

    Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review

    This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which make use of new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. This review is based on literature published in the past decade (January 2010 to January 2020), from which we obtained around 250 research articles; after an eligibility process, 59 articles are presented in more detail. The main findings in the classification process reveal that new DL-CAD methods are useful and effective screening tools for breast cancer, thus reducing the need for manual feature extraction. The breast tumor research community can utilize this survey as a basis for their current and future studies. This project has been co-financed by the Spanish Government Grant PID2019-107790RB-C22, "Software development for a continuous PET crystal systems applied to breast cancer".
Jiménez-Gaona, Y.; Rodríguez Álvarez, M. J.; Lakshminarayanan, V. (2020). Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review. Applied Sciences, 10(22), 1-29. https://doi.org/10.3390/app10228298

    Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions

    Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammogram, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications to imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.

    Breast Cancer Classification using Deep Learned Features Boosted with Handcrafted Features

    Breast cancer is one of the leading causes of death among women across the globe. It is difficult to treat if detected at advanced stages; however, early detection can significantly increase the chances of survival and improve the lives of millions of women. Given the widespread prevalence of breast cancer, it is of utmost importance for the research community to come up with frameworks for early detection, classification, and diagnosis. The artificial intelligence research community, in coordination with medical practitioners, is developing such frameworks to automate the task of detection. With the surge in research activities, coupled with the availability of large datasets and enhanced computational power, it is expected that AI frameworks will help even more clinicians make correct predictions. In this article, a novel framework for the classification of breast cancer using mammograms is proposed. The proposed framework combines robust features extracted by a novel Convolutional Neural Network (CNN) with handcrafted features, including HOG (Histogram of Oriented Gradients) and LBP (Local Binary Pattern). The results obtained on the CBIS-DDSM dataset exceed the state of the art.
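The feature-fusion step can be pictured as concatenating a deep embedding with HOG and LBP descriptors. In the sketch below, the ResNet-18 backbone (left with random weights to keep the example self-contained), the patch size, and the histogram parameters are illustrative assumptions, not the paper's actual CNN or configuration; in practice pretrained or fine-tuned weights would be loaded.
```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from skimage.feature import hog, local_binary_pattern

# Deep feature extractor: a ResNet-18 backbone with the classifier head removed.
backbone = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1]).eval()

def deep_features(gray_patch):
    """512-D CNN embedding of a grayscale mammogram patch."""
    x = torch.from_numpy(gray_patch).float()[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        return backbone(x).flatten().numpy()

def handcrafted_features(gray_patch):
    """HOG descriptor plus a uniform-LBP histogram."""
    h = hog(gray_patch, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
    return np.concatenate([h, lbp_hist])

def fused_features(gray_patch):
    """Deep features boosted with handcrafted descriptors (concatenation)."""
    return np.concatenate([deep_features(gray_patch), handcrafted_features(gray_patch)])

patch = np.random.rand(224, 224)  # stand-in for a normalized mammogram ROI
print(fused_features(patch).shape)
```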

    Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions

    Medical Image Analysis is currently experiencing a paradigm shift due to Deep Learning. This technology has recently attracted so much interest from the Medical Imaging community that it led to a specialized conference, `Medical Imaging with Deep Learning', in 2018. This article surveys the recent developments in this direction and provides a critical review of the related major aspects. We organize the reviewed literature according to the underlying Pattern Recognition tasks, and further sub-categorize it following a taxonomy based on human anatomy. This article does not assume prior knowledge of Deep Learning and makes a significant contribution by explaining the core Deep Learning concepts to non-experts in the Medical community. Unique to this study is the Computer Vision/Machine Learning perspective taken on the advances of Deep Learning in Medical Imaging. This enables us to single out the `lack of appropriately annotated large-scale datasets' as the core challenge (among other challenges) in this research direction. We draw on insights from the sister research fields of Computer Vision, Pattern Recognition, and Machine Learning, where techniques for dealing with such challenges have already matured, to provide promising directions for the Medical Imaging community to fully harness Deep Learning in the future.

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Breast tumor segmentation in ultrasound images using contextual-information-aware deep adversarial learning framework.

    Automatic tumor segmentation in breast ultrasound (BUS) images is still a challenging task because of many sources of uncertainty, such as speckle noise, a very low signal-to-noise ratio, shadows that make the anatomical boundaries of tumors ambiguous, and highly variable tumor sizes and shapes. This article proposes an efficient automated method for tumor segmentation in BUS images based on a contextual-information-aware conditional generative adversarial learning framework. Specifically, we exploit several enhancements of a deep adversarial learning framework to capture both texture features and contextual dependencies in the BUS images, which helps address the challenges mentioned above. First, we adopt atrous convolution (AC) to capture spatial and scale context (i.e., position and size of tumors) in order to handle very different tumor sizes and shapes. Second, we propose the use of channel attention along with channel weighting (CAW) mechanisms to promote tumor-relevant features (without extra supervision) and mitigate the effects of artifacts. Third, we propose to integrate the structural similarity index metric (SSIM) and the L1-norm into the loss function of the adversarial learning framework to capture the local context information derived from the area surrounding the tumors. We used two BUS image datasets to assess the efficiency of the proposed model. The experimental results show that the proposed model achieves competitive results compared with state-of-the-art segmentation models in terms of Dice and IoU metrics. The source code of the proposed model is publicly available at https://github.com/vivek231/Breast-US-project
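The composite generator objective described above (an adversarial term plus L1 and SSIM terms) can be sketched as follows in PyTorch. The simplified global (non-windowed) SSIM and the loss weights are assumptions for illustration only, not the authors' exact implementation, which is available at the linked repository.
```python
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM between two mask batches with values in [0, 1]."""
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x, var_y = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
    return ssim.mean()

def generator_loss(pred_mask, true_mask, d_fake_logits,
                   lambda_adv=1.0, lambda_l1=10.0, lambda_ssim=5.0):
    """Adversarial + L1 + SSIM objective for the mask generator (weights are illustrative)."""
    adv  = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    l1   = F.l1_loss(pred_mask, true_mask)
    ssim = 1.0 - global_ssim(pred_mask, true_mask)   # 1 - SSIM so that lower is better
    return lambda_adv * adv + lambda_l1 * l1 + lambda_ssim * ssim

# Toy usage with random tensors standing in for predicted and true tumor masks
pred = torch.rand(2, 1, 128, 128)
true = (torch.rand(2, 1, 128, 128) > 0.5).float()
d_fake = torch.randn(2, 1)  # discriminator logits for the generated masks
print(generator_loss(pred, true, d_fake).item())
```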