11 research outputs found

    Differentiation of thyroid nodules on US using features learned and extracted from various convolutional neural networks

    Thyroid nodules are a common clinical problem. Ultrasonography (US) is the main tool used to sensitively diagnose thyroid cancer. Although US is non-invasive and can accurately differentiate benign and malignant thyroid nodules, it is subjective and its results inevitably lack reproducibility. Therefore, to provide objective and reliable information for US assessment, we developed a CADx system that utilizes convolutional neural networks and machine learning techniques. The diagnostic performances of 6 radiologists and 3 representative results obtained from the proposed CADx system were compared and analyzed.
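    The two-stage design this abstract describes, features extracted from pretrained CNNs followed by a conventional machine-learning classifier, can be sketched as below. The random 512-dimensional vectors stand in for real CNN features of US images, and the logistic-regression stage is an illustrative choice, not the paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for features pooled from pretrained CNNs: one 512-d vector per nodule.
n_nodules, n_features = 200, 512
features = rng.normal(size=(n_nodules, n_features))
labels = rng.integers(0, 2, size=n_nodules)      # 0 = benign, 1 = malignant
features[labels == 1, :10] += 2.0                # inject a weak class signal

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Conventional machine-learning stage trained on the deep features.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

    Swapping the random vectors for activations taken from any pretrained backbone leaves the classifier stage unchanged, which is what makes this pattern attractive when training data are scarce.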

    A comparison between deep learning convolutional neural networks and radiologists in the differentiation of benign and malignant thyroid nodules on CT images

    Introduction: We designed 5 convolutional neural network (CNN) models and ensemble models to differentiate malignant and benign thyroid nodules on CT, and compared the diagnostic performance of the CNN models with that of radiologists. Material and methods: We retrospectively included CT images of 880 patients with 986 thyroid nodules confirmed by surgical pathology between July 2017 and December 2019. Two radiologists retrospectively diagnosed benign and malignant thyroid nodules on CT images in a test set. Five CNNs (ResNet50, DenseNet121, DenseNet169, SE-ResNeXt50, and Xception) were trained and validated on 788 and tested on 198 thyroid nodule CT images, respectively. We then selected the 3 models with the best diagnostic performance on the test set for the model ensemble, and compared the diagnostic performance of the 2 radiologists with that of the 5 CNN models and the ensemble model. Results: Of the 986 thyroid nodules, 541 were malignant and 445 were benign. The areas under the curve (AUCs) for diagnosing thyroid malignancy were 0.587–0.754 for the 2 radiologists and 0.901–0.947 for the 5 CNN models and the ensemble model. The differences in AUC between the radiologists and the CNN models were significant (p < 0.05). The ensemble model had the highest AUC value. Conclusions: The five CNN models and the ensemble model performed better than radiologists in distinguishing malignant from benign thyroid nodules on CT, with the ensemble further improving diagnostic performance and showing good potential.
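    The ensemble step, picking the three best-performing networks and averaging their predicted probabilities before computing AUC, can be sketched with simulated outputs standing in for real CNN predictions; the noise levels below are arbitrary:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Ground truth for a hypothetical test set of 198 nodules (the paper's test split size).
y_true = rng.integers(0, 2, size=198)

# Stand-in predicted malignancy probabilities from 5 CNNs: truth plus model-specific noise.
model_names = ["ResNet50", "DenseNet121", "DenseNet169", "SE-ResNeXt50", "Xception"]
preds = {m: np.clip(y_true + rng.normal(0, 0.4 + 0.05 * i, size=y_true.size), 0, 1)
         for i, m in enumerate(model_names)}

aucs = {m: roc_auc_score(y_true, p) for m, p in preds.items()}

# Ensemble the 3 individually best models by averaging their probabilities.
top3 = sorted(aucs, key=aucs.get, reverse=True)[:3]
ensemble_pred = np.mean([preds[m] for m in top3], axis=0)
ens_auc = roc_auc_score(y_true, ensemble_pred)

print({m: round(a, 3) for m, a in aucs.items()})
print("ensemble AUC:", round(ens_auc, 3))
```

    Averaging tends to cancel the uncorrelated part of each model's error, which is why probability-level ensembles often edge out their best single member.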

    Artificial intelligence to predict the BRAFV600E mutation in patients with thyroid cancer

    Purpose: To investigate whether a computer-aided diagnosis (CAD) program developed using a deep learning convolutional neural network (CNN) on neck US images can predict the BRAFV600E mutation in thyroid cancer. Methods: 469 thyroid cancers in 469 patients were included in this retrospective study. A CAD program recently developed using the deep CNN provided risks of malignancy (0-100%) as well as binary results (cancer or not). Using the CAD program, we calculated the risk of malignancy based on a US image of each thyroid nodule (CAD value). Univariate and multivariate logistic regression analyses, including patient demographics, the American College of Radiology (ACR) Thyroid Imaging, Reporting and Data System (TIRADS) categories, and the risks of malignancy calculated through CAD, were performed to identify independent predictive factors for the BRAFV600E mutation in thyroid cancer. The predictive power of the CAD value and of the final multivariable model for the BRAFV600E mutation was measured using the area under the receiver operating characteristic (ROC) curve. Results: In this study, 380 (81%) patients were positive and 89 (19%) patients were negative for the BRAFV600E mutation. On multivariate analysis, older age (OR = 1.025, p = 0.018), smaller size (OR = 0.963, p = 0.006), and higher CAD value (OR = 1.016, p = 0.004) were significantly associated with the BRAFV600E mutation. The CAD value yielded an AUC of 0.646 (95% CI: 0.576, 0.716) for predicting the BRAFV600E mutation, while the multivariable model yielded an AUC of 0.706 (95% CI: 0.576, 0.716). The multivariable model showed significantly better performance than the CAD value alone (p = 0.004). Conclusion: Deep learning-based CAD for thyroid US can help predict the BRAFV600E mutation in thyroid cancer. More multi-center studies with more cases are needed to further validate our results.
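    The statistical pipeline, a multivariable logistic regression over age, nodule size, and the CAD value, evaluated by ROC AUC, can be sketched on simulated data. The coefficients below only encode the directions of association the abstract reports (older age, smaller size, higher CAD value raise the odds of mutation), not its fitted values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Simulated cohort: age (years), nodule size (mm), CAD malignancy risk (0-100).
n = 469
age  = rng.normal(50, 12, n)
size = rng.normal(15, 6, n)
cad  = rng.uniform(0, 100, n)

# Simulate BRAF status with the reported directions of association; the intercept
# roughly reproduces the ~80% mutation-positive rate of the study cohort.
logit = 0.03 * (age - 50) - 0.05 * (size - 15) + 0.02 * (cad - 50) + 1.4
braf = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, size, cad])
model = LogisticRegression(max_iter=1000).fit(X, braf)

auc_multi = roc_auc_score(braf, model.predict_proba(X)[:, 1])
auc_cad   = roc_auc_score(braf, cad)
print(f"CAD alone AUC: {auc_cad:.3f}   multivariable AUC: {auc_multi:.3f}")
```

    Combining the CAD value with clinical covariates typically lifts the AUC above the CAD value alone, mirroring the abstract's comparison of the two models.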

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation is focused on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation, and classification. Imaging modalities including digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET) and computed tomography (CT) are studied in the dissertation for various medical applications. The first phase of the research is to develop a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes different modalities for additional feature sources; both the original image and the synthetic image are used for feature generation. This proposed architecture is validated in the application of breast cancer diagnosis and proved to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical imaging synthesis performance with an advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and cross-modality feature transferring. The applicability of RIED-Net is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation and classification, the third phase of the research is to develop a multi-task deep learning model. Specifically, a feature transfer enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution, feature-based classification task. The application of FT-MTL-Net to breast cancer detection, segmentation and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase is to develop a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting mild cognitive impairment (MCI) to AD conversion. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images. Dissertation/Thesis: Doctoral Dissertation, Industrial Engineering, 201
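    The shared-encoder, multi-head idea behind a multi-task model like FT-MTL-Net can be sketched as a minimal NumPy forward pass. Layer sizes and random weights are illustrative only; a real model would be trained end-to-end on a weighted sum of the two task losses:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0)

# Toy input: a batch of 4 single-channel 8x8 "images", flattened to 64-d vectors.
x = rng.normal(size=(4, 64))

# Shared encoder: one hidden layer whose features feed both task heads.
W_enc = rng.normal(scale=0.1, size=(64, 32))
h = relu(x @ W_enc)

# Segmentation head: per-pixel foreground probability (64 outputs, sigmoid).
W_seg = rng.normal(scale=0.1, size=(32, 64))
seg_prob = 1 / (1 + np.exp(-(h @ W_seg)))

# Classification head: a single benign/malignant probability per image.
W_cls = rng.normal(scale=0.1, size=(32, 1))
cls_prob = 1 / (1 + np.exp(-(h @ W_cls)))

# Training would minimise e.g. alpha * seg_loss + (1 - alpha) * cls_loss,
# so gradients from both tasks shape the shared encoder weights.
print(seg_prob.shape, cls_prob.shape)
```

    Because both heads read the same encoder features, supervision from the dense segmentation task can sharpen the representation the classification head relies on, which is the transfer effect the dissertation exploits.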

    Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review

    This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which make use of new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. This review is based upon literature published in the past decade (January 2010-January 2020), from which we obtained around 250 research articles; after an eligibility process, 59 articles were presented in more detail. The main findings in the classification process revealed that new DL-CAD methods are useful and effective screening tools for breast cancer, thus reducing the need for manual feature extraction. The breast tumor research community can utilize this survey as a basis for their current and future studies. This project has been co-financed by the Spanish Government Grant PID2019-107790RB-C22, "Software development for a continuous PET crystal systems applied to breast cancer". Jiménez-Gaona, Y.; Rodríguez Álvarez, M.J.; Lakshminarayanan, V. (2020). Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review. Applied Sciences, 10(22), 1-29. https://doi.org/10.3390/app10228298

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Digital Image Processing Applications

    Digital image processing can refer to a wide variety of techniques, concepts, and applications of different types of processing for different purposes. This book provides examples of digital image processing applications and presents recent research on processing concepts and techniques. Chapters cover such topics as image processing in medical physics, binarization, video processing, and more.
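    As a concrete instance of one of the listed topics, binarization, here is a minimal Otsu-style threshold search in NumPy; the synthetic two-level image is for illustration only:

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the 8-bit threshold that maximises between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0     # background mean
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1  # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2             # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic image: dark background (~60) with a brighter central square (~180).
rng = np.random.default_rng(3)
img = rng.normal(60, 10, (64, 64))
img[16:48, 16:48] = rng.normal(180, 10, (32, 32))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img > t
frac = binary.mean()
print("threshold:", t, "foreground fraction:", round(frac, 2))
```

    On a clearly bimodal histogram like this one, the selected threshold falls between the two modes and the foreground fraction recovers the square's area (a quarter of the image).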

    Classification of the Microstructural Elements of the Vegetal Tissue of the Pumpkin (Cucurbita pepo L.) Using Convolutional Neural Networks

    [EN] Although knowledge of the microstructure of foods of vegetal origin helps us to understand the behavior of food materials, the variability of the microstructural elements complicates this analysis. In this regard, building learning models that represent the actual microstructures of the tissue is important for extracting relevant information and advancing the comprehension of such behavior. Consequently, the objective of this research is to compare two machine learning techniques, Convolutional Neural Networks (CNN) and Radial Basis Neural Networks (RBNN), when used to enhance microstructural analysis. Two main contributions can be highlighted from this research. First, a method is proposed to automatically analyze the microstructural elements of vegetal tissue; and second, a comparison was conducted to select a classifier to discriminate between tissue structures. For the comparison, a database of microstructural-element images was obtained from pumpkin (Cucurbita pepo L.) micrographs. Two classifiers were implemented using CNN and RBNN, and statistical performance metrics were computed using a 5-fold cross-validation scheme. This process was repeated one hundred times with a random selection of images in each repetition. The comparison showed that the CNN-based classifiers produced a better fit, obtaining an average F1-score of 89.42% versus 83.83% for RBNN. In this study, the performance of CNN-based classifiers was significantly higher than that of RBNN-based classifiers in discriminating microstructural elements of vegetable foods.Oblitas, J.; Mejía, J.; De-La-Torre, M.; Avila-George, H.; Seguí Gil, L.; Mayor López, L.; Ibarz, A.... (2021). Classification of the Microstructural Elements of the Vegetal Tissue of the Pumpkin (Cucurbita pepo L.) Using Convolutional Neural Networks. Applied Sciences. 11(4):1-13. https://doi.org/10.3390/app11041581
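
    The evaluation protocol described in the abstract — repeated k-fold cross-validation with the F1-score as the comparison metric — can be sketched as follows. This is only an illustration: it uses synthetic data and two stand-in scikit-learn classifiers rather than the paper's CNN and RBNN models, and the repeat count is reduced from one hundred to ten for brevity.

    ```python
    # Sketch of repeated 5-fold cross-validation comparing two classifiers
    # by mean F1-score. Synthetic features stand in for the pumpkin
    # micrograph images used in the actual study.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder dataset: 300 samples, 20 features, 2 classes.
    X, y = make_classification(n_samples=300, n_features=20, random_state=0)

    def repeated_cv_f1(model, X, y, repeats=10, folds=5):
        """Mean F1 over `repeats` runs of `folds`-fold CV, reshuffling splits each run."""
        scores = []
        for r in range(repeats):
            cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=r)
            scores.extend(cross_val_score(model, X, y, cv=cv, scoring="f1"))
        return float(np.mean(scores))

    # Stand-in classifiers; the study compared a CNN against an RBNN.
    f1_a = repeated_cv_f1(LogisticRegression(max_iter=1000), X, y)
    f1_b = repeated_cv_f1(KNeighborsClassifier(), X, y)
    print(f"model A mean F1: {f1_a:.3f}, model B mean F1: {f1_b:.3f}")
    ```

    Averaging across many reshuffled folds, as the authors did, reduces the variance of the estimate so that a difference in mean F1-score can be tested for statistical significance.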