3 research outputs found

    Preprocessing of fundus images for detection of diabetic retinopathy

    In recent years, lesion detection in fundus images has become a popular area of machine learning research. Symptom detection in fundus images is typically applied to eye-related diseases such as diabetic retinopathy, whose main symptom is exudates. Detection performance depends on many factors: the most common are varying contrast conditions and the large size of fundus images, both of which affect the training process for object detection. Another factor is the color similarity between anatomical features and symptoms, for example between the optic disc and exudates. In this paper, we discuss the different preprocessing stages that improve fundus image quality and mark the optic disc location, in preparation for optic disc detection in future work. Several datasets were used in this study, namely the Kaggle, DIARETDB1 and DRIMDB datasets. The SSIM values achieved clearly show that the preprocessing was able to increase image quality.
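    The quality gain reported above is measured with SSIM, a full-reference metric. Below is a minimal sketch of such an assessment, assuming scikit-image is available; the file names are hypothetical placeholders, not part of the study.

        # Compare an original fundus image with its preprocessed version via SSIM.
        from skimage import io
        from skimage.metrics import structural_similarity

        original = io.imread("fundus_original.png")          # hypothetical file
        preprocessed = io.imread("fundus_preprocessed.png")  # hypothetical file

        # channel_axis=-1 tells SSIM the last axis holds the RGB channels.
        score = structural_similarity(original, preprocessed, channel_axis=-1)
        print(f"SSIM: {score:.4f}")  # 1.0 would mean identical images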

    Procesado de retinografías basado en Deep Learning para la ayuda al diagnóstico de la Retinopatía Diabética [Deep Learning-based processing of fundus images to aid in the diagnosis of Diabetic Retinopathy]

    Diabetic Retinopathy (DR) is a complication of diabetes and the most frequent cause of blindness among the working-age population of developed countries. However, when treated early, more than 90% of vision loss can be prevented. Fundus photographs captured during regular eye examinations are the standard method for detecting DR, but the growing incidence of diabetes worldwide and the shortage of specialists make diagnosis difficult. Fundus images are generally acquired with fundus cameras under varied lighting conditions and angles, so they are prone to non-uniform illumination, poor contrast, low brightness and lack of sharpness, resulting in blurry images. Such blurry or poorly lit images can affect clinical diagnosis, so improving these poor-quality images is very helpful for avoiding misdiagnosis in automatic or manual screening systems. Recently, machine learning, and especially Deep Learning techniques, have revolutionized the field of image reconstruction. For this reason, this work proposes a retinal fundus image enhancement method based on Generative Adversarial Networks (GANs). The model consists of two convolutional neural networks: a generator of synthetic images that aims to fool a discriminator trained to distinguish the generated high-quality images from real ones. The model can operate on high-resolution images, which makes it widely beneficial for clinical images. Enhancement comprises a sharpness-correction phase and an illumination-correction phase.
    For the development and validation of the proposed method, a proprietary database of 1000 images was used, split into a training set of 800 images and a test set of 200 images, half of which were of insufficient quality for analysis. The method was applied in several stages: first, blurry images were enhanced with a GAN; second, poorly lit images were enhanced, also with a GAN. Qualitatively, the results are satisfactory. Quantitative evaluation was carried out from two perspectives: full-reference and no-reference assessment. For no-reference assessment, the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), the Natural Image Quality Evaluator (NIQE) and entropy were used; for full-reference assessment, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM). Full-reference evaluation serves as a guide for comparing good-quality images that have been intentionally degraded, whereas no-reference evaluation is needed to assess the improvement on genuinely poor-quality images, for which no good-quality version exists.
    In the sharpness-correction phase, the good-quality test images improved by 6.22%, 3.33% and 3.26% in terms of PSNR, SSIM and entropy, respectively, while BRISQUE and NIQE did not improve. On the poor-quality test images, the same phase yielded improvements of 31.80%, 4.27% and 3.89% in BRISQUE, NIQE and entropy with respect to the original image. In the illumination-correction phase, the good-quality images improved by 156.81%, 14.59%, 3.12% and 2.28% in PSNR, SSIM, BRISQUE and NIQE, while entropy did not improve; the poor-quality images improved by 50.62% and 8.33% in BRISQUE and entropy, while NIQE did not improve. Finally, a last experiment connected both networks in series: images first pass through the network that corrects illumination, and their sharpness is then corrected by the second network. On the good-quality test images this achieved improvements of 4.84%, 5.68%, 3.38% and 2.57% in PSNR, SSIM, NIQE and entropy, with no improvement in BRISQUE; on the poor-quality test images it achieved improvements of 88.95%, 21.17% and 2.46% in BRISQUE, NIQE and entropy. These results show that the proposed method could be used as a first stage in automatic retinography analysis systems to aid in the diagnosis of various eye diseases.
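    The final experiment above chains the two trained generators in series: illumination correction first, then sharpness correction. Below is a minimal sketch of that inference pipeline, assuming PyTorch; illum_generator and sharp_generator are hypothetical stand-ins for the two trained GAN generators, which the abstract does not name.

        # Minimal sketch of the serial two-stage enhancement described above
        # (assumes PyTorch; the generator modules are hypothetical stand-ins).
        import torch

        @torch.no_grad()
        def enhance_fundus(image: torch.Tensor,
                           illum_generator: torch.nn.Module,
                           sharp_generator: torch.nn.Module) -> torch.Tensor:
            """image: (N, 3, H, W) tensor scaled to [0, 1]."""
            lit = illum_generator(image)   # stage 1: correct non-uniform illumination
            return sharp_generator(lit)    # stage 2: correct sharpness (deblur)

    The full-reference scores reported above (PSNR, SSIM) would then be computed between the enhanced output and the good-quality reference, as in the SSIM sketch after the first abstract.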

    Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images

    [EN] The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue as healthy or pathological at an early stage. In this paper, we focus on one of the most common pathologies in today's society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are computed locally to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated on different public databases with a large degree of variability and without image exclusion.
    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869 and by GVA through project PROMETEO/2019/109.
    Colomer, A.; Igual García, J.; Naranjo Ornedo, V. (2020). Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors, 20(4), 1-20. https://doi.org/10.3390/s20041005
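    The two descriptor families this paper combines, local binary patterns (texture) and granulometric profiles (morphology), are standard image-processing building blocks. Below is a minimal sketch computing both on a grayscale fundus patch, assuming scikit-image; the parameter values are illustrative, not the paper's settings, and the file name is a hypothetical placeholder.

        import numpy as np
        from skimage import io, color
        from skimage.feature import local_binary_pattern
        from skimage.morphology import disk, opening

        patch = color.rgb2gray(io.imread("fundus_patch.png"))  # hypothetical file

        # Texture: rotation-invariant uniform LBP histogram (P neighbors, radius R).
        P, R = 8, 1
        lbp = local_binary_pattern(patch, P, R, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

        # Morphology: granulometric profile, i.e. the fraction of image intensity
        # that survives openings with structuring elements of increasing radius.
        total = patch.sum()
        granulometry = np.array([opening(patch, disk(r)).sum() / total
                                 for r in range(1, 6)])

        feature_vector = np.concatenate([lbp_hist, granulometry])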