3,133 research outputs found

    ICDAR2003 Page Segmentation Competition

    There is a significant need to objectively evaluate layout analysis (page segmentation and region classification) methods. This paper describes the Page Segmentation Competition (modus operandi, dataset and evaluation criteria) held in the context of ICDAR2003 and presents the results of the evaluation of the candidate methods. The main objective of the competition was to evaluate such methods using scanned documents from commonly occurring publications. The results indicate that although methods seem to be maturing, there is still a considerable need to develop robust methods that deal with everyday documents.

    Adaptive Algorithms for Automated Processing of Document Images

    Large scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles or letters, and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections. We first propose, implement and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance-transform-based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach lies in its determination of the best approximation to the clutter-content boundary in the presence of text-like structures. Second, we describe a page segmentation technique called Voronoi++ for complex layouts, which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multi-lingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that considers components' separation features combined with Docstrum [O'Gorman1993] based angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features. Finally, our research proposes a generic model to segment and recognize characters for any complex syllabic or non-syllabic script, using font models. This concept is based on the fact that font files contain all the information necessary to render text, and thus provide a model for how to decompose it. Instead of script-specific routines, this work is a step towards a generic character segmentation and recognition scheme for both Latin and non-Latin scripts.
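
    The clutter-removal idea can be illustrated with a minimal sketch (not the thesis' adaptive algorithm, which estimates the clutter-content boundary itself): the distance transform of a binary page gives a stroke-thickness proxy, and connected components far thicker than typical text strokes are treated as clutter. The fixed threshold max_text_thickness below is a hypothetical stand-in for the adaptive estimate.

```python
import numpy as np
from scipy import ndimage

def remove_thick_clutter(binary_page, max_text_thickness=6):
    """Keep only connected components whose maximum stroke half-width
    stays within the range expected for text (illustrative only)."""
    dist = ndimage.distance_transform_edt(binary_page)  # distance to background: thickness proxy
    labels, n = ndimage.label(binary_page)              # connected components of foreground
    cleaned = np.zeros_like(binary_page)
    for comp_id in range(1, n + 1):
        comp = labels == comp_id
        # peak interior distance ~ half the widest stroke in the component
        if dist[comp].max() <= max_text_thickness:
            cleaned[comp] = 1
    return cleaned
```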

    Mathematical Expression Recognition based on Probabilistic Grammars

    [EN] Mathematical notation is well known and used all over the world. Humankind has evolved from simple methods for representing counts to today's well-defined mathematical notation, able to account for complex problems. Furthermore, mathematical expressions constitute a universal language in scientific fields, and many information resources containing mathematics have been created during the last decades. However, in order to access all that information efficiently, scientific documents have to be digitized or produced directly in electronic formats. Although most people are able to understand and produce mathematical information, introducing math expressions into electronic devices requires learning specific notations or using editors. Automatic recognition of mathematical expressions aims at filling this gap between the knowledge of a person and the input accepted by computers. This way, printed documents containing math expressions could be digitized automatically, and handwriting could be used for direct input of math notation into electronic devices. This thesis is devoted to developing an approach to mathematical expression recognition. In this document we propose an approach for recognizing any type of mathematical expression (printed or handwritten) based on probabilistic grammars. To this end, we develop a formal statistical framework from which several probability distributions are derived. Throughout the document, we deal with the definition and estimation of all these probabilistic sources of information. Finally, we define the parsing algorithm that globally computes the most probable mathematical expression for a given input according to the statistical framework. An important point in this study is to provide an objective performance evaluation and to report results using public data and standard metrics. We inspected the problems of automatic evaluation in this field and looked for the best solutions. We also report several experiments using public databases, and we participated in several international competitions. Furthermore, we have released most of the software developed in this thesis as open source. We also explore some of the applications of mathematical expression recognition. In addition to the direct applications of transcription and digitization, we report two important proposals. First, we developed mucaptcha, a method to tell humans and computers apart by means of math handwriting input, which represents a novel application of math expression recognition. Second, we tackled the problem of layout analysis of structured documents using the statistical framework developed in this thesis, because both are two-dimensional problems that can be modeled with probabilistic grammars. The approach developed in this thesis for mathematical expression recognition has obtained good results at different levels. It has produced several scientific publications in international conferences and journals, and has been awarded in international competitions.
    Álvaro Muñoz, F. (2015). Mathematical Expression Recognition based on Probabilistic Grammars [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/51665
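
    As a hedged illustration of the grammar-based decoding idea (a one-dimensional toy, not the thesis' two-dimensional parser), the sketch below runs probabilistic CYK over a tiny hand-written CNF grammar and returns the probability of the most likely derivation of a symbol sequence; the grammar, symbols and probabilities are invented for the example.

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form: lexical rules A -> terminal and
# binary rules A -> B C, each with an (illustrative) probability.
LEXICAL = {('E', '2'): 0.3, ('E', 'x'): 0.3, ('O', '+'): 1.0}
BINARY = {('E', 'E', 'R'): 0.4, ('R', 'O', 'E'): 1.0}

def most_probable_parse(tokens, start='E'):
    n = len(tokens)
    best = defaultdict(dict)   # best[(i, j)][A] = max prob of deriving tokens[i:j] from A
    back = {}                  # backpointers to recover the best derivation
    for i, tok in enumerate(tokens):
        for (A, t), p in LEXICAL.items():
            if t == tok and p > best[(i, i + 1)].get(A, 0.0):
                best[(i, i + 1)][A] = p
                back[(i, i + 1, A)] = tok
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in BINARY.items():
                    score = p * best[(i, k)].get(B, 0.0) * best[(k, j)].get(C, 0.0)
                    if score > best[(i, j)].get(A, 0.0):
                        best[(i, j)][A] = score
                        back[(i, j, A)] = (k, B, C)
    return best[(0, n)].get(start, 0.0), back

prob, _ = most_probable_parse(['2', '+', 'x'])
print(prob)  # 0.036 with the toy probabilities above
```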

    Page layout analysis and classification in complex scanned documents

    Page layout analysis has been studied extensively since the 1980s, particularly after computers began to be used for document storage and retrieval. For efficient storage and retrieval from a database, a paper document is transformed into its electronic version, and document image analysis algorithms segment a scanned document into different regions such as text, image or line regions. As a novel contribution to page layout analysis and classification, the algorithm presented here is developed for both RGB and grey-scale scanned documents without requiring a specific document type or scanning technique. In this thesis, a page classification algorithm is proposed which applies the wavelet transform, Markov random fields (MRF) and the Hough transform to segment text, photo and strong-edge/line regions in both color and gray-scale scanned documents. The algorithm handles both simple and complex page layouts and contents (text-only pages as well as book covers that include text, lines and/or photos). The methodology consists of five modules. In the first module, pre-processing, image enhancement techniques such as scaling, filtering, color space conversion and gamma correction are applied to reduce computation time and enhance the scanned document; the subsequent classification steps operate on a one-fourth resolution image in the CIEL*a*b* color space. In the second module, text detection, wavelet analysis generates a text-region candidate map, which is refined with a run-length encoding (RLE) step for verification. The third module, photo detection, first performs block-wise segmentation based on a basis-vector projection technique; an MRF with a maximum a-posteriori (MAP) optimization framework is then used to generate the photo map. In the fourth module, the Hough transform locates lines, and edge detection, edge linking and line-segment fitting are used to detect strong edges. In the last module, the three classification maps are merged into a final page layout map: features are extracted from the intersection regions and the regions are classified with K-Means clustering. The proposed technique is tested on several hundred images and its performance is validated with a confusion matrix. It achieves an average classification accuracy of 85% for text, photo and background regions on a variety of scanned documents such as articles, magazines, business cards, dictionaries and newsletters. More importantly, it performs independently of the scanning process and of whether the input document is RGB or gray-scale, with comparable classification quality.
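
    The line-detection module can be sketched with standard tools (an assumed OpenCV-based illustration, not the thesis' implementation; the file name and thresholds are placeholders): Canny edges followed by a probabilistic Hough transform yield a strong-edge/line map.

```python
import cv2
import numpy as np

# Placeholder input and thresholds; illustrative only.
page = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(page, 50, 150)                       # strong-edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=120,
                        minLineLength=100, maxLineGap=5)
line_map = np.zeros_like(page)                         # binary map of detected line segments
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(line_map, (int(x1), int(y1)), (int(x2), int(y2)), 255, 2)
```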

    Liver segmentation in MRI: a fully automatic method based on stochastic partitions

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI), despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, we present a new method for liver segmentation in MRI based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marker-controlled algorithm. To improve the accuracy of the selected contours, the gradient of the original image is enhanced by applying a new variant of the stochastic watershed. Finally, a classification step is applied to obtain the final liver mask. Optimal parameters of the method are tuned on a training dataset and then applied to the rest of the studies (17 datasets). The obtained results (a Jaccard coefficient of 0.91 +/- 0.02), compared with other methods, demonstrate that the new variant of the stochastic watershed is a robust tool for automatic segmentation of the liver in MRI.
    This work has been supported by the MITYC under the project NaRALap (ref. TSI-020100-2009-189), partially by the CDTI under the project ONCOTIC (IDI-20101153), by the Ministerio de Educación y Ciencia, Spain, project Game Teen (TIN2010-20187), project Consolider-C (SEJ2006-14301/PSIC), the "CIBER of Physiopathology of Obesity and Nutrition, an initiative of ISCIII", and the Excellence Research Program PROMETEO (Generalitat Valenciana, Conselleria de Educación, 2008-157). We would like to express our gratitude to the Hospital Clínica Benidorm for providing the MR datasets and to the radiologist team of Inscanner for the manual segmentation of the MR images.
    López-Mir, F.; Naranjo Ornedo, V.; Angulo, J.; Alcañiz Raya, M. L.; Luna, L. (2014). Liver segmentation in MRI: a fully automatic method based on stochastic partitions. Computer Methods and Programs in Biomedicine, 114(1), 11-28. https://doi.org/10.1016/j.cmpb.2013.12.022
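
    A minimal sketch of the marker-controlled watershed step and the Jaccard score used for evaluation, assuming a 2D slice and precomputed seed masks (the paper's full pipeline additionally enhances the gradient with a stochastic-watershed variant and ends with a classifier):

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_liver_slice(slice_2d, liver_seeds, background_seeds):
    """Marker-controlled watershed on a single MR slice (illustrative)."""
    gradient = sobel(slice_2d)                 # edge-strength image to be flooded
    markers = np.zeros(slice_2d.shape, dtype=np.int32)
    markers[background_seeds] = 1              # label 1: background markers
    markers[liver_seeds] = 2                   # label 2: liver markers
    labels = watershed(gradient, markers)      # flooding starts only at the markers
    return labels == 2                         # binary liver mask

def jaccard(prediction, ground_truth):
    """Intersection over union between two binary masks."""
    prediction, ground_truth = prediction.astype(bool), ground_truth.astype(bool)
    return np.logical_and(prediction, ground_truth).sum() / np.logical_or(prediction, ground_truth).sum()
```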

    Basic research planning in mathematical pattern recognition and image analysis

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization, computer architectures and parallel processing, and the applicability of "expert systems" to interactive analysis.

    A novel NMF-based DWI CAD framework for prostate cancer.

    In this thesis, a computer-aided diagnostic (CAD) framework for detecting prostate cancer in DWI data is proposed. The proposed CAD method consists of two frameworks that use nonnegative matrix factorization (NMF) to learn meaningful features from sets of high-dimensional data. The first technique is a three-dimensional (3D) level-set DWI prostate segmentation algorithm guided by a novel probabilistic speed function. This speed function is driven by the features learned by NMF from 3D appearance, shape, and spatial data. The second technique is a probabilistic classifier that labels a prostate segmented from DWI data as either malignant, containing cancer, or benign, containing no cancer. This approach uses NMF-based feature fusion to create a feature space in which the data classes are clustered. In addition, the use of DWI data acquired at a wide range of b-values (i.e., magnetic field strengths) is investigated. Experimental analysis indicates that, for both frameworks, using NMF produces more accurate segmentation and classification results, respectively, and that combining the information from DWI data at several b-values can assist in detecting prostate cancer.
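
    The shared building block, NMF-based feature learning, can be sketched as follows (an assumed scikit-learn illustration with placeholder data, not the thesis' pipeline): each row of the matrix would hold a non-negative feature vector extracted from DWI, and the factorization yields a parts-based dictionary plus per-sample weights.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder data: one non-negative feature vector per voxel/patch, stacked as rows.
X = np.random.rand(500, 256)
model = NMF(n_components=12, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(X)        # per-sample weights in the learned feature space
H = model.components_             # parts-based basis vectors (the learned dictionary)
# W could then drive a speed function or feed a downstream classifier, as described above.
```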

    Coronary X-ray angiography segmentation using Artificial Intelligence: a multicentric validation study of a deep learning model

    Introduction: We previously developed an artificial intelligence (AI) model for automatic coronary angiography (CAG) segmentation using deep learning. To validate this approach, the model was applied to a new dataset and the results are reported. Methods: Retrospective selection of patients undergoing CAG and percutaneous coronary intervention or invasive physiology assessment over a one-month period at four centers. A single frame was selected from images containing a lesion with a 50-99% stenosis (visual estimation). Automatic quantitative coronary analysis (QCA) was performed with validated software, and the images were then segmented by the AI model. Lesion diameters, area overlap [based on true positive (TP) and true negative (TN) pixels] and a previously developed and published global segmentation score (GSS, 0-100 points) were measured. Results: 123 regions of interest from 117 images across 90 patients were included. There were no significant differences in lesion diameter, percentage diameter stenosis or distal border diameter between the original and segmented images. There was a statistically significant, albeit minor, difference [0.19 mm (0.09-0.28)] in proximal border diameter. Overlap accuracy ((TP + TN)/(TP + TN + FP + FN)), sensitivity (TP/(TP + FN)) and Dice score (2TP/(2TP + FN + FP)) between original and segmented images were 99.9%, 95.1% and 94.8%, respectively. The GSS was 92 (87-96), similar to the value previously obtained on the training dataset. Conclusion: The AI model achieved accurate CAG segmentation across multiple performance metrics when applied to a multicentric validation dataset. This paves the way for future research on its clinical uses.
    Open access funding provided by FCT|FCCN (b-on). Cardiovascular Center of the University of Lisbon, INESC-ID / Instituto Superior Técnico, University of Lisbon.
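
    The reported overlap metrics follow directly from the pixel counts defined in the abstract; a small sketch (assuming two binary masks of equal shape) makes the formulas concrete:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-level overlap metrics as defined in the abstract above."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # true positive pixels
    tn = np.sum(~pred & ~truth)    # true negative pixels
    fp = np.sum(pred & ~truth)     # false positive pixels
    fn = np.sum(~pred & truth)     # false negative pixels
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fn + fp)
    return accuracy, sensitivity, dice
```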

    Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review

    [EN] This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which make use of new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. The review covers literature published in the past decade (January 2010 to January 2020): around 250 research articles were retrieved, and after an eligibility screening, 59 articles are presented in more detail. The main finding from the classification studies is that new DL-CAD methods are useful and effective screening tools for breast cancer, reducing the need for manual feature extraction. The breast tumor research community can use this survey as a basis for current and future studies.
    This project has been co-financed by the Spanish Government Grant PID2019-107790RB-C22, "Software development for a continuous PET crystal systems applied to breast cancer".
    Jiménez-Gaona, Y.; Rodríguez Álvarez, M. J.; Lakshminarayanan, V. (2020). Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review. Applied Sciences, 10(22), 1-29. https://doi.org/10.3390/app10228298
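
    Many of the surveyed DL-CAD classifiers follow a transfer-learning pattern; a hedged PyTorch sketch of that pattern (not any specific system from the review; the dataset, loader and two-class setup are assumed) is:

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumes torchvision >= 0.13; the breast-image dataset and DataLoader are not shown.
backbone = models.resnet18(weights="IMAGENET1K_V1")    # ImageNet-pretrained feature extractor
for param in backbone.parameters():
    param.requires_grad = False                        # freeze the pretrained backbone
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # new benign/malignant head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of images (shape: N x 3 x H x W)."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```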