    Fusion noise-removal technique with modified dark-contrast algorithm for robust segmentation of acute leukemia cell images

    Segmentation is a major area of interest in the image processing stage. In the automatic diagnosis of acute leukemia, the crucial step is accurate segmentation of the acute leukemia blood image. Image segmentation for medical purposes generally has three requirements, namely accuracy, robustness, and effectiveness, which have received considerable critical attention. We therefore propose a new (modified) dark-contrast enhancement technique to enhance and automatically segment acute leukemic cells. Subsequently, we use a fusion of a 7 × 7 median filter and the seeded region growing area extraction (SRGAE) algorithm to minimise salt-and-pepper noise while preserving the post-segmentation edges. With this method, the accuracy, sensitivity, and specificity were 91.02%, 83.68%, and 91.57%, respectively.
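    As a rough illustration of the noise-suppression step, the sketch below applies a 7 × 7 median filter with OpenCV, the standard remedy for salt-and-pepper noise; it does not reproduce the paper's modified dark-contrast enhancement or the SRGAE segmentation, and the file names are hypothetical.

```python
# Minimal sketch: 7x7 median filtering to suppress salt-and-pepper noise
# in a blood-smear image. Only the filtering step is shown; the paper's
# dark-contrast enhancement and SRGAE segmentation are not reproduced.
import cv2

img = cv2.imread("leukemia_cell.png")      # hypothetical input image
denoised = cv2.medianBlur(img, 7)          # 7x7 median filter
cv2.imwrite("leukemia_cell_denoised.png", denoised)
```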

    Retinal Fundus Image Quality Adjustment with Opponent Color Model

    Master of Science thesis (Computer Science), 2565 B.E. (2022). The WHO has reported 65 million Age-related Macular Degeneration (AMD) patients worldwide and expects the number to rise to 300 million by the year 2040. Currently, ophthalmologists rely on retinal fundus photographs to analyze AMD lesions. Nevertheless, the photographs sometimes have unsatisfactory quality, such as low contrast or under- or over-exposure, which makes it difficult for the experts to analyze lesions. It is therefore advisable to improve such photographs so that anatomical details appear more clearly before the experts use them. This thesis proposes an effective model for retinal fundus image enhancement that improves contrast and adjusts the color balance, aimed at assisting ophthalmologists in AMD lesion screening. The proposed method consists of a few steps. Firstly, the contrast of the input image is improved with the CLAHE technique in the CIE L*a*b* color space. Then, the histogram of the resulting image is stretched and rescaled with a histogram-scaling technique so that its overall brightness offset meets Hubbard's standard range for retinal fundus images. The experiments use images from two datasets, DiaretDB0 and STARE. The results indicate that the proposed method yields high-contrast, color-balanced output that fits Hubbard's standard and makes lesions easier to screen. (Supported by the Faculty of Science Research Fund, Prince of Songkla University, contract no. 2-2561-02-017.)
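    A minimal sketch of the first step, CLAHE applied to the L* channel of CIE L*a*b* followed by a simple linear brightness rescaling, might look as follows with OpenCV; the clip limit, tile size, and target brightness bounds are illustrative assumptions, not Hubbard's published values or the thesis' settings.

```python
# Sketch: CLAHE on the L* channel of CIE L*a*b*, then a linear histogram
# rescaling of that channel. The target range below is a placeholder,
# not Hubbard's specification.
import cv2
import numpy as np

img = cv2.imread("fundus.png")                        # hypothetical input
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

# Stretch the equalized L* channel into an assumed target range.
lo, hi = 40, 220                                      # placeholder bounds
l_scaled = cv2.normalize(l_eq, None, lo, hi, cv2.NORM_MINMAX)

enhanced = cv2.cvtColor(cv2.merge((l_scaled, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("fundus_enhanced.png", enhanced)
```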

    Enhancement Citra Fundus Retina Menggunakan CLAHE dan Wiener Filter

    The use of retinal fundus images for the early detection and diagnosis of retinal abnormalities and diseases, such as Diabetic Retinopathy (DR), cardiovascular disease, and other conditions, has become a field of great interest to researchers and clinicians. However, fundus images sometimes have poor quality, such as containing noise, uneven illumination, and low contrast. In this paper we propose a method to improve the contrast and quality of fundus images and to increase the PSNR relative to the original retinal fundus image, using a Fast Local Laplacian Filter, a morphological top-hat filter, CLAHE, and a Wiener filter.
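    Under the assumption that OpenCV and SciPy are acceptable stand-ins, a sketch of three of the four stages (morphological top-hat, CLAHE, and Wiener filtering) applied to the green channel could look like this; the fast local Laplacian filter is omitted because these libraries expose no single equivalent call, and the kernel sizes are illustrative.

```python
# Sketch of top-hat, CLAHE, and Wiener filtering on the green channel of
# a fundus image. The fast local Laplacian stage is not reproduced here.
import cv2
import numpy as np
from scipy.signal import wiener

img = cv2.imread("fundus.png")                     # hypothetical input
green = img[:, :, 1]                               # green channel (BGR order)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(green, cv2.MORPH_TOPHAT, kernel)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
contrast = clahe.apply(cv2.add(green, tophat))     # boost bright details, then equalize

smoothed = wiener(contrast.astype(np.float64), mysize=(5, 5))
cv2.imwrite("fundus_clahe_wiener.png", np.clip(smoothed, 0, 255).astype(np.uint8))
```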

    Modelling on-demand preprocessing framework towards practical approach in clinical analysis of diabetic retinopathy

    Diabetic retinopathy (DR) is a complication of diabetes and a prime cause of vision loss in middle-aged people. Timely screening and diagnosis can reduce the risk of blindness. Fundus imaging is the preferred modality in the clinical analysis of DR. However, raw fundus images are usually affected by artifacts, noise, and low and varied contrast, which are very hard for human visual systems and automated systems to process. The existing literature offers many solutions for enhancing the fundus image, but such approaches are tailored to a specific objective and cannot address multiple kinds of fundus images. This paper presents an on-demand preprocessing framework that integrates different techniques to address geometrical issues, random noise, and comprehensive contrast enhancement. The performance of each preprocessing step is evaluated with the peak signal-to-noise ratio (PSNR), and the brightness of the enhanced image is quantified. The motivation of this paper is to offer a flexible preprocessing mechanism that can meet image enhancement needs under different preprocessing requirements, improving the quality of fundus imaging for early-stage diabetic retinopathy identification.
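    Since each preprocessing step is scored with PSNR, a minimal sketch of that measure for 8-bit images is shown below; it is a generic implementation, not the paper's evaluation code.

```python
# Minimal sketch of PSNR for 8-bit images: PSNR = 10 * log10(MAX^2 / MSE).
import numpy as np

def psnr(original: np.ndarray, processed: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```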

    Low contrast detection factor based contrast enhancement and restoration for underwater images

    The marine ecosystem is the largest of Earth's aquatic ecosystems; it includes salt marshes, coral reefs, the deep sea, the sea floor, and more. Underwater imaging is a tool for studying the activities taking place within it, but such images lack contrast and brightness, which leads to a loss of information about ocean activities. To enhance such low-contrast underwater images, a Low Contrast Detection Factor (LCDF) is proposed in this study. It uses value, saturation, and hue to enhance the low-contrast regions and to restore the color. A quality assessment is performed to substantiate the proposed algorithm. The entropy averages 7.3, and the no-reference quality metrics Natural Image Quality Evaluator and Blind/Referenceless Image Spatial Quality Evaluator show average values of 3.6 and 22.5, respectively. The blur metric shows a value of 0.21. The quality metrics indicate that the naturalness of the underwater image is maintained while its contrast is increased.
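    The abstract does not give the LCDF formula, so the sketch below only illustrates the general idea of working on the HSV value and saturation channels, together with the Shannon entropy used as one of the quality measures; it is not the proposed algorithm.

```python
# Sketch: a simple HSV-space enhancement (value and saturation channels)
# plus the Shannon entropy measure. This is not LCDF itself.
import cv2
import numpy as np

def enhance_hsv(img_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_eq = cv2.equalizeHist(v)          # stretch the low-contrast value channel
    s_eq = cv2.equalizeHist(s)          # restore color saturation
    return cv2.cvtColor(cv2.merge((h, s_eq, v_eq)), cv2.COLOR_HSV2BGR)

def shannon_entropy(gray_u8: np.ndarray) -> float:
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```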

    Contrast and color balance enhancement for non-uniform illumination retinal images

    Color retinal images play an important role in supporting medical diagnosis. However, some retinal images are unsuitable for diagnosis due to non-uniform illumination. To solve this problem, we propose a method for correcting non-uniform illumination that enhances the image quality of a color fundus photograph so that it is suitable for reliable visual diagnosis. Firstly, hidden anatomical structure in dark regions of the retinal image is revealed by improving the image luminosity with gamma correction. Secondly, multi-scale tone manipulation is used to adjust the image contrast in the lightness channel of the L*a*b* color space. Finally, the color balance is adjusted by specifying the image brightness based on Hubbard's specification. The performance of the method has been evaluated on the DIARETDB1 dataset. The results show that the proposed algorithm performs well in correcting the non-uniform illumination of color retinal images.
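    The first step, revealing dark regions with gamma correction, can be sketched as a lookup-table operation in OpenCV; the gamma value below is an illustrative assumption, not the paper's setting.

```python
# Sketch: gamma correction via a lookup table. gamma < 1 brightens dark
# regions; 0.7 is an illustrative value, not taken from the paper.
import cv2
import numpy as np

def gamma_correct(img_bgr: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(img_bgr, lut)       # same table applied to all channels
```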

    Deep Generative Modeling Based Retinal Image Analysis

    In the recent past, deep learning algorithms have been widely used in retinal image analysis (fundus and OCT) to perform tasks such as segmentation and classification. But to build robust and highly efficient deep learning models, both the amount and the quality of the training images are critical. Image quality is also an extremely important factor for the clinical diagnosis of different diseases. The main aim of this thesis is to explore two relatively under-explored areas of retinal image analysis, namely retinal image quality enhancement and artificial image synthesis. In this thesis, we propose a series of deep generative modeling based algorithms to perform these tasks. From a mathematical perspective, a generative model is a statistical model of the joint probability distribution between an observable variable and a target variable; the generative adversarial network (GAN) and the variational auto-encoder (VAE) are popular generative models, and they can be used to generate new samples from a given distribution. OCT images have inherent speckle noise; fundus images do not generally suffer from noise, but the newly developed tele-ophthalmoscope devices produce images with relatively low spatial resolution and blur. Different GAN based algorithms were developed to generate high-quality images from their low-quality counterparts. A combination of a residual VAE and a GAN was implemented to generate artificial retinal fundus images with their corresponding artificial blood vessel segmentation maps. This not only helps to generate as many new training images as needed but also helps to reduce the privacy issues of releasing personal medical data.
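    For concreteness, the standard GAN minimax objective from Goodfellow et al. is reproduced below; the thesis' specific adversarial losses may differ from this textbook form.

```latex
% Standard GAN objective: generator G fools discriminator D, which is
% trained to tell real samples from generated ones.
\min_{G}\max_{D} V(D,G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```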

    Procesado de retinografías basado en Deep Learning para la ayuda al diagnóstico de la Retinopatía Diabética

    Diabetic Retinopathy (DR) is a complication of diabetes and the most frequent cause of blindness among the working-age population of developed countries. However, when treated early, more than 90% of vision loss can be prevented. Retinal photographs captured during regular eye examinations are the standard method for detecting DR. Nevertheless, the worldwide increase in diabetes cases and the shortage of specialists make diagnosis difficult. Fundus images are generally obtained with fundus cameras under varied lighting conditions and angles. They are therefore prone to non-uniform illumination, poor contrast, low brightness, and lack of sharpness, which results in blurry images. Such blurry or poorly illuminated images can affect clinical diagnosis, so improving these insufficient-quality images can be very useful for avoiding misdiagnosis in automatic or manual screening systems. Recently, machine learning, and especially Deep Learning techniques, have revolutionized the field of image reconstruction. In this work we therefore propose a retinal image quality enhancement method based on Generative Adversarial Networks (GANs). The model consists of two convolutional neural networks: a generator of synthetic images that aims to fool a discriminator network trained to distinguish the generated high-quality images from real images. The model can work with high-resolution images, which makes it broadly beneficial for clinical images. The fundus image enhancement comprises a sharpness-correction phase and a second illumination-correction phase. For the development and validation of the proposed method, a proprietary database of 1000 images was used, divided into a training set of 800 images and a test set of 200 images, half of which were of insufficient quality for analysis. A multi-stage method was applied: first, blurry images were enhanced with a GAN; second, poorly illuminated images were enhanced, also with a GAN. Qualitatively, the results are satisfactory. Quantitative evaluation was addressed from two perspectives: full-reference and no-reference assessment. For no-reference assessment, the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), the Natural Image Quality Evaluator (NIQE), and entropy were used. For full-reference assessment, the peak signal-to-noise ratio (PSNR) and the Structural Similarity Index Measure (SSIM) were used. Full-reference evaluation serves as a guide for comparing good-quality images that were intentionally degraded, whereas no-reference evaluation is needed to assess the improvement on poor-quality images, for which no good-quality version is available.
    In the sharpness-enhancement phase, on the good-quality test images, the results show improvements of 6.22%, 3.33%, and 3.26% in terms of PSNR, SSIM, and entropy, respectively, while BRISQUE and NIQE do not improve. In the same phase, on the poor-quality test images, the results show improvements of 31.80%, 4.27%, and 3.89% in BRISQUE, NIQE, and entropy with respect to the original image. In the illumination-enhancement phase, the results on the good-quality set show improvements of 156.81%, 14.59%, 3.12%, and 2.28% in PSNR, SSIM, BRISQUE, and NIQE, while entropy does not improve; on the poor-quality set, the results reflect improvements of 50.62% and 8.33% in BRISQUE and entropy, while NIQE does not improve. Finally, a last experiment was carried out with both networks in series: the images first pass through the network that corrects the illumination, and their sharpness is then corrected by the second network. On the good-quality test images this yields improvements of 4.84%, 5.68%, 3.38%, and 2.57% over the original image in terms of PSNR, SSIM, NIQE, and entropy, although no improvement is observed in BRISQUE. On the poor-quality test images, this last experiment achieves improvements of 88.95%, 21.17%, and 2.46% in BRISQUE, NIQE, and entropy. The results show that the proposed method could be used as a first stage in automatic retinal image analysis systems to aid the diagnosis of various eye diseases.
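    A minimal sketch of the two full-reference measures (PSNR and SSIM) with scikit-image is shown below for reference; BRISQUE and NIQE require dedicated implementations and are not covered, and the function name is hypothetical.

```python
# Sketch: full-reference scores (PSNR, SSIM) with scikit-image.
# channel_axis requires scikit-image >= 0.19; BRISQUE/NIQE are not shown.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(reference: np.ndarray, enhanced: np.ndarray) -> dict:
    return {
        "psnr": peak_signal_noise_ratio(reference, enhanced, data_range=255),
        "ssim": structural_similarity(reference, enhanced,
                                      channel_axis=-1, data_range=255),
    }
```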