
    Comparison of Local Analysis Strategies for Exudate Detection in Fundus Images

    Diabetic Retinopathy (DR) is a severe and widespread eye disease. Exudates are one of the most prevalent signs during the early stage of DR, and early detection of these lesions is vital to prevent the patient's blindness. Hence, the detection of exudates is an important diagnostic task for DR, in which computer assistance may play a major role. In this paper, a system based on local feature extraction and Support Vector Machine (SVM) classification is used to develop and compare different strategies for the automated detection of exudates. The main novelty of this work is allowing the detection of exudates using non-regular regions to perform the local feature extraction. To accomplish this objective, different methods for generating superpixels are applied to the fundus images of the E-OPHTA database, and texture and morphological features are extracted for each of the resulting regions. An exhaustive comparison among the proposed methods is also carried out.

    This paper was supported by the European Union's Horizon 2020 research and innovation programme under the Project GALAHAD [H2020-ICT2016-2017, 732613]. The work of Adrián Colomer has been supported by the Spanish Government under an FPI Grant [BES-2014-067889]. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

    Pereira, J.; Colomer, A.; Naranjo Ornedo, V. (2018). Comparison of Local Analysis Strategies for Exudate Detection in Fundus Images. In: Intelligent Data Engineering and Automated Learning – IDEAL 2018. Springer, pp. 174-183. https://doi.org/10.1007/978-3-030-03493-1_19
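The texture descriptor behind this pipeline, local binary patterns (LBP) computed over superpixel regions, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names are mine, and only the basic 8-neighbour LBP code is shown, not the rotation-invariant multiresolution variant, and any superpixel algorithm (SLIC, waterpixels, etc.) is assumed to have already produced the region mask.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2-D grayscale image: each neighbour >= centre sets one
    bit, yielding a code in [0, 255] per pixel."""
    c = img[1:-1, 1:-1]  # centre pixels
    # 8 neighbours, ordered clockwise from the top-left corner
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, region_mask=None):
    """Normalised 256-bin LBP histogram, optionally restricted to one
    region (e.g. a single superpixel) via a boolean mask."""
    codes = lbp_codes(img)
    if region_mask is not None:
        codes = codes[region_mask[1:-1, 1:-1]]
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / max(hist.sum(), 1)
```

One such histogram per superpixel, concatenated with morphological features, would form the feature vector handed to the SVM.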

    Deep learning analysis of eye fundus images to support medical diagnosis

    Machine learning techniques have been successfully applied to support medical decision making for cancer, heart disease, and degenerative diseases of the brain. In particular, deep learning methods have been used for the early detection of abnormalities in the eye that could improve the diagnosis of different ocular diseases, especially in developing countries, where there are major limitations in access to specialized medical treatment. However, the early detection of clinical signs such as blood vessel and optic disc alterations, exudates, hemorrhages, drusen, and microaneurysms presents three main challenges: ocular images can be affected by noise artifacts, the features of the clinical signs depend specifically on the acquisition source, and combining local signs with disease grading labels is not an easy task. This research approaches the problem of combining local signs and global labels from different acquisition sources of medical information as a valuable tool to support medical decision making in ocular diseases. Different models were developed for different eye diseases. Four models were developed using eye fundus images. For DME, a two-stage model was designed that uses a shallow model to predict a binary exudate mask; the mask is then stacked with the raw fundus image into a 4-channel array that serves as the input of a deep convolutional neural network for diabetic macular edema diagnosis. For glaucoma, three deep learning models were developed. The first is a three-stage model with an initial stage that automatically segments two binary masks of the optic disc and physiological cup, followed by a stage that extracts morphometric features from these segmentations, and a final classification stage that supports the glaucoma diagnosis with intermediate medical information. The second and third are late-data-fusion methods that fuse morphometric features from Cartesian and polar segmentations of the optic disc and physiological cup with features extracted from the raw eye fundus images. On the other hand, two models were defined using optical coherence tomography. The first is a customized convolutional neural network, termed OCT-NET, that extracts features from OCT volumes to classify DME, DR-DME, and AMD conditions; in addition, this model generates images that highlight local information about the clinical signs and estimates the number of slices inside a volume with local abnormalities. The second is a 3D deep learning model that uses OCT volumes as input to estimate the retinal thickness map, which is useful for grading AMD. The methods were systematically evaluated using ten freely available public datasets. They were compared and validated against other state-of-the-art algorithms, and the results were also qualitatively evaluated by ophthalmology experts from Fundación Oftalmológica Nacional. In addition, the proposed methods were tested as a diagnosis support tool for diabetic macular edema, glaucoma, diabetic retinopathy, and age-related macular degeneration using two different ocular imaging representations. We therefore consider that this research could be a significant step towards building telemedicine tools that support medical personnel in detecting ocular diseases using eye fundus images and optical coherence tomography.
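The 4-channel input construction described for the DME model, stacking a predicted binary exudate mask onto the raw RGB fundus image, reduces to a simple array operation. A minimal sketch assuming NumPy-style channel-last arrays; the function name is mine:

```python
import numpy as np

def stack_mask_with_fundus(fundus_rgb, exudate_mask):
    """Append a binary exudate mask as a fourth channel to an RGB
    fundus image, producing the (H, W, 4) array fed to the deep
    convolutional network in the second stage."""
    assert fundus_rgb.ndim == 3 and fundus_rgb.shape[2] == 3
    assert exudate_mask.shape == fundus_rgb.shape[:2]
    mask = exudate_mask.astype(fundus_rgb.dtype)[..., None]  # (H, W, 1)
    return np.concatenate([fundus_rgb, mask], axis=2)        # (H, W, 4)
```

The mask channel lets the network attend directly to candidate exudate locations while still seeing the raw pixel evidence in the first three channels.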

    A GPU-based Evolution Strategy for Optic Disk Detection in Retinal Images

    Parallel processing using graphics processing units (GPUs) has attracted much research interest in recent years. Parallel computation can be applied to evolution strategies (ES) for processing the individuals in a population; however, evolution strategies consume significant computational resources when solving large problems or those modeled by complex fitness functions. This paper describes the implementation of an improved ES for optic disk detection in retinal images using the Compute Unified Device Architecture (CUDA) environment. The experimental results show that the execution time for the optic disk detection task achieves a speedup factor of 5x to 7x compared to a sequential execution on a mainstream CPU.
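As a rough illustration of the kind of fitness-driven search being accelerated, the following is a toy serial (1+λ) evolution strategy that locates the brightest disk-shaped region of a grayscale image. It is not the paper's improved ES or its CUDA implementation: the fitness function, mutation scheme, and parameters are my own simplifications, and the GPU version would evaluate the λ candidates in parallel rather than in a Python loop.

```python
import numpy as np

def detect_bright_disk(img, radius=4, generations=60, lam=30, seed=0):
    """Toy (1+lambda) evolution strategy locating the centre of the
    brightest disk-shaped region of a grayscale image.  Fitness is the
    mean intensity inside a circular window around the candidate."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]

    def fitness(cy, cx):
        window = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        return img[window].mean()

    parent = np.array([h / 2.0, w / 2.0])   # start at the image centre
    best = fitness(*parent)
    sigma = max(h, w) / 4.0                 # initial mutation step size
    for _ in range(generations):
        # lam mutated children, kept inside the image bounds
        children = parent + rng.normal(0.0, sigma, size=(lam, 2))
        children[:, 0] = np.clip(children[:, 0], radius, h - 1 - radius)
        children[:, 1] = np.clip(children[:, 1], radius, w - 1 - radius)
        scores = np.array([fitness(cy, cx) for cy, cx in children])
        if scores.max() > best:             # elitist selection
            best = scores.max()
            parent = children[scores.argmax()]
        sigma *= 0.95                       # simple step-size decay
    return int(round(parent[0])), int(round(parent[1]))
```

In a CUDA implementation each of the λ fitness evaluations per generation would map naturally onto one thread block, which is where the reported 5x to 7x speedup comes from.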

    Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images

    The number of blind people worldwide is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue as healthy or pathological at early stages. In this paper, we focus on one of the most common pathologies in today's society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are computed locally to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.

    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869 and by GVA through project PROMETEO/2019/109.

    Colomer, A.; Igual García, J.; Naranjo Ornedo, V. (2020). Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors, 20(4), 1-20. https://doi.org/10.3390/s20041005
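The granulometric profiles mentioned above summarise how much bright "mass" successive morphological openings of increasing size remove from an image, which characterises the size distribution of bright structures such as exudates. A minimal NumPy sketch, assuming square structuring elements and small images; the helper names and the plain shift-loop erosion/dilation are my own, chosen for clarity rather than speed:

```python
import numpy as np

def _min_filter(img, k):
    """Grayscale erosion with a (2k+1) x (2k+1) square structuring element."""
    pad = np.pad(img, k, mode='edge')
    out = img.copy()
    h, w = img.shape
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.minimum(out, pad[dy:dy + h, dx:dx + w])
    return out

def _max_filter(img, k):
    """Grayscale dilation with the same square structuring element."""
    pad = np.pad(img, k, mode='edge')
    out = img.copy()
    h, w = img.shape
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.maximum(out, pad[dy:dy + h, dx:dx + w])
    return out

def granulometric_profile(img, max_size=4):
    """Granulometry by openings (erosion then dilation) of increasing
    size: entry k-1 of the profile is the image mass removed when the
    structuring element grows to size k."""
    prev = img.sum()
    profile = []
    for k in range(1, max_size + 1):
        opened = _max_filter(_min_filter(img, k), k)
        s = opened.sum()
        profile.append(prev - s)
        prev = s
    return np.array(profile, dtype=float)
```

Structures narrower than the structuring element vanish at that opening scale, so their mass shows up in the corresponding profile entry; the profile vector is what feeds the classifier alongside the LBP features.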

    Automated Retinal Lesion Detection via Image Saliency Analysis

    Background and objective: The detection of abnormalities such as lesions or leakage in retinal images is an important health informatics task for the automated early diagnosis of diabetic and malarial retinopathy and other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting the concepts of saliency. Methods: Retinal images are first segmented into superpixels, and two new saliency feature representations, uniqueness and compactness, are then derived to represent the superpixels. Pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disc, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at the pixel level from different modalities of retinal images, without the need to tune parameters. Results: To evaluate its effectiveness, we applied our method to seven public datasets of diabetic and malarial retinopathy with four different types of lesions: exudates, hemorrhages, microaneurysms, and leakage. The evaluation was undertaken at the pixel, lesion, or image level according to ground truth availability in these datasets. Conclusions: The experimental results show that the proposed method outperforms existing state-of-the-art ones in applicability, effectiveness, and accuracy.
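The two superpixel saliency descriptors named above, uniqueness and compactness, can be sketched at the feature level. This toy version assumes each superpixel has already been reduced to a mean colour and a normalised centroid; the exact weighting used in the paper is not reproduced here, and the Gaussian weights and constants below are my own choices:

```python
import numpy as np

def saliency_features(colors, positions, sigma_p=0.25, sigma_c=20.0):
    """Per-superpixel 'uniqueness' and 'compactness' saliency features.

    colors    : (N, 3) mean colour of each superpixel
    positions : (N, 2) normalised centroid of each superpixel in [0, 1]
    """
    cd = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    pd = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)

    # Uniqueness: colour contrast, down-weighted for far-away superpixels
    wp = np.exp(-pd ** 2 / (2 * sigma_p ** 2))
    wp /= wp.sum(axis=1, keepdims=True)
    uniqueness = (wp * cd ** 2).sum(axis=1)

    # Compactness: spatial spread of similarly coloured superpixels
    # (a lesion's colour occupies a tight spatial cluster -> low spread)
    wc = np.exp(-cd ** 2 / (2 * sigma_c ** 2))
    wc /= wc.sum(axis=1, keepdims=True)
    mu = wc @ positions  # colour-weighted mean position per superpixel
    spread = (wc * ((positions[None, :, :] - mu[:, None, :]) ** 2)
              .sum(axis=2)).sum(axis=1)
    compactness = np.exp(-spread / 0.05)
    return uniqueness, compactness
```

In the full pipeline these per-superpixel scores would be propagated to pixel level with a bilateral filter and stacked into the matrix used for the low-rank analysis.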