9 research outputs found

    Evaluation of Retinal Image Quality Assessment Networks in Different Color-spaces

    Retinal image quality assessment (RIQA) is essential for controlling the quality of retinal imaging and guaranteeing the reliability of diagnoses by ophthalmologists or automated analysis systems. Existing RIQA methods focus on the RGB color-space and are developed based on small datasets with binary quality labels (i.e., 'Accept' and 'Reject'). In this paper, we first re-annotate an Eye-Quality (EyeQ) dataset with 28,792 retinal images from the EyePACS dataset, based on a three-level quality grading system (i.e., 'Good', 'Usable' and 'Reject') for evaluating RIQA methods. Our RIQA dataset is characterized by its large-scale size, multi-level grading, and multi-modality. Then, we analyze the influence of different color-spaces on RIQA, and propose a simple yet efficient deep network, named Multiple Color-space Fusion Network (MCF-Net), which integrates the different color-space representations at both the feature level and the prediction level to predict image quality grades. Experiments on our EyeQ dataset show that MCF-Net obtains state-of-the-art performance, outperforming other deep learning methods. Furthermore, we also evaluate diabetic retinopathy (DR) detection methods on images of different quality, and demonstrate that the performance of automated diagnostic systems is highly dependent on image quality.
    Comment: Accepted by MICCAI 2019. Corrected two typos in Table 1: (1) in the training set, the number for "Usable + All" should be '1,876'; (2) in the testing set, the number for "Total + DR-0" should be '11,362'. Project page: https://github.com/hzfu/Eye
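The abstract describes fusing multiple color-space representations at both the feature level and the prediction level. A minimal sketch of the prediction-level half of that idea, assuming each color-space branch (RGB, HSV, LAB) has already produced class probabilities (the values below are hypothetical, not from the paper):

```python
import numpy as np

def fuse_predictions(per_space_probs, weights=None):
    """Prediction-level fusion: a weighted average of per-color-space class
    probabilities, a simplified stand-in for MCF-Net's fusion stage."""
    probs = np.stack(per_space_probs)               # (n_spaces, n_classes)
    if weights is None:
        weights = np.full(len(per_space_probs), 1.0 / len(per_space_probs))
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()                      # renormalize to a distribution

# Hypothetical softmax outputs of three color-space branches
# over the three quality grades (Good / Usable / Reject)
rgb = np.array([0.7, 0.2, 0.1])
hsv = np.array([0.6, 0.3, 0.1])
lab = np.array([0.5, 0.3, 0.2])
fused = fuse_predictions([rgb, hsv, lab])
print(fused.argmax())  # index of the fused quality grade
```

The actual MCF-Net learns the fusion weights jointly with the branch networks; the uniform weighting here only illustrates the mechanism.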

    A ResNet is All You Need? Modeling A Strong Baseline for Detecting Referable Diabetic Retinopathy in Fundus Images

    Deep learning is currently the state of the art for automated detection of referable diabetic retinopathy (DR) from color fundus photographs (CFP). While general interest is focused on improving results through methodological innovations, it is not clear how well these approaches perform compared to standard deep classification models trained with appropriate settings. In this paper we propose to model a strong baseline for this task based on a simple and standard ResNet-18 architecture. To this end, we built on top of prior art by training the model with a standard preprocessing strategy, but using images from several public sources and an empirically calibrated data-augmentation setting. To evaluate its performance, we covered multiple clinically relevant perspectives, including image- and patient-level DR screening, discriminating responses by input quality and DR grade, assessing model uncertainties, and analyzing its results in a qualitative manner. With no methodological innovation other than a carefully designed training, our ResNet model achieved an AUC = 0.955 (0.953 - 0.956) on a combined test set of 61,007 test images from different public datasets, which is in line with, or even better than, what other, more complex deep learning models have reported in the literature. Similar AUC values were obtained on 480 images from two separate in-house databases specially prepared for this study, which emphasizes its generalization ability. This confirms that standard networks can still be strong baselines for this task if properly trained.
    Comment: Accepted for publication at the 18th International Symposium on Medical Information Processing and Analysis (SIPAIM 2022).
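The baseline above is evaluated with AUC. For readers who want to reproduce that kind of evaluation on their own model scores, a minimal sketch of AUC via the rank-sum (Mann-Whitney U) formulation, ignoring tied scores for simplicity:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC of binary labels (1 = referable DR) given classifier scores,
    computed from the rank sum of the positive class. Ties are not
    averaged in this simplified sketch."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks by score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Mann-Whitney U statistic normalized to [0, 1]
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: two positives, two negatives
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Production evaluations (as in the paper) would also bootstrap confidence intervals, which is where ranges like 0.953 - 0.956 come from.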

    An Image Quality Selection and Effective Denoising on Retinal Images Using Hybrid Approaches

    Retinal image analysis has remained an essential research topic over the last decades. Several algorithms and techniques have been developed for the analysis of retinal images. Most of these techniques use benchmark retinal image datasets to evaluate performance without first assessing the quality of the retinal images; hence, the performance metrics reported by these approaches are uncertain. In this paper, image quality is assessed using a hybrid of the naturalness image quality evaluator and the perception-based image quality evaluator (hybrid NIQE-PIQE). Here, the quality score of the raw input image is evaluated using the hybrid NIQE-PIQE approach. Based on the quality score, a deep convolutional neural network (DCNN) categorizes the images into low-, medium- and high-quality images. The selected quality images are then pre-processed to remove the noise present in them. The green channel (G-channel) is extracted from the selected RGB images for noise filtering. Moreover, hybrid modified histogram equalization and homomorphic filtering (Hybrid G-MHE-HF) is utilized for enhanced noise filtering. The proposed scheme is implemented in MATLAB 2021a. Its performance is compared with that of other approaches in terms of accuracy, sensitivity, specificity, precision and F-score on the DRIMDB and DRIVE datasets. On the DRIMDB dataset, the proposed scheme achieves an accuracy of 0.9774, sensitivity of 0.9562, precision of 0.99, specificity of 0.99, and F-measure of 0.9776.
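Two pieces of the pipeline above are easy to illustrate: combining two no-reference quality scores into one hybrid score with quality buckets, and extracting the G-channel used for filtering. A minimal sketch, assuming NIQE and PIQE scores are already computed (both metrics score lower for better quality); the weighting and the thresholds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def quality_bucket(niqe, piqe, w=0.5, t_low=60.0, t_high=35.0):
    """Hybrid NIQE-PIQE score as a weighted mix (lower = better quality),
    bucketed into the paper's low/medium/high categories.
    Thresholds t_low and t_high are hypothetical."""
    score = w * niqe + (1 - w) * piqe
    if score > t_low:
        return "low"
    if score > t_high:
        return "medium"
    return "high"

def green_channel(rgb):
    """Extract the G-channel used for noise filtering: (H, W, 3) -> (H, W)."""
    return rgb[..., 1]

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 1] = 200                      # green-dominant test image
print(quality_bucket(30.0, 20.0))      # → high
print(green_channel(img)[0, 0])        # → 200
```

In the paper the bucketing itself is learned by a DCNN rather than fixed thresholds; the sketch only shows the scoring and channel-extraction mechanics.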

    The use of datasets of bad quality images to define fundus image quality

    Screening programs for sight-threatening diseases rely on the grading of large numbers of digital retinal images. As automatic image-grading technology evolves, there emerges a need to provide a rigorous definition of image quality with reference to the grading task. In this work, on two subsets of the CORD database of clinically gradable and matching non-gradable digital retinal images, a feature set based on statistical and task-specific morphological features has been identified. A machine learning technique has then been demonstrated to classify the images by their clinical gradability, offering a proxy for a rigorous definition of image quality.
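The statistical side of such a feature set can be sketched with a few standard image statistics; the CORD study's exact features are not reproduced here, so the features below are illustrative stand-ins that a classifier could consume:

```python
import numpy as np

def gradability_features(gray):
    """Simple statistical features of a grayscale fundus image, of the kind
    that could feed a gradability classifier (illustrative, not the
    CORD study's feature set)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]                              # drop empty bins
    return {
        "mean": float(gray.mean()),                 # overall brightness
        "std": float(gray.std()),                   # global contrast
        "entropy": float(-(p * np.log2(p)).sum()),  # Shannon entropy of intensities
    }
```

Task-specific morphological features (e.g. vessel visibility) would be appended to this vector before training the classifier.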

    A deep learning model to assess and enhance eye fundus image quality

    Engineering aims to design, build, and implement solutions that increase and/or improve quality of life. Likewise, medicine generates solutions for the same purposes, enabling these two knowledge areas to converge toward a common goal. In the thesis work "A Deep Learning Model to Assess and Enhance Eye Fundus Image Quality", a model was proposed and implemented that evaluates and enhances the quality of fundus images, which contributes to improving the efficiency and effectiveness of a subsequent diagnosis based on these images. On the one hand, for the evaluation of these images, a model based on a lightweight convolutional neural network architecture was developed, termed the Mobile Fundus Quality Network (MFQ-Net). This model has approximately 90% fewer parameters than state-of-the-art models. For its evaluation, the Kaggle public dataset was used with two sets of quality annotations, binary (good and bad) and three-class (good, usable and bad), obtaining an accuracy of 0.911 in the binary mode and 0.856 in the three-class mode for the classification of fundus image quality. On the other hand, a method was developed for eye fundus quality enhancement, termed Pix2Pix Fundus Oculi Quality Enhancement (P2P-FOQE). This method is based on three stages: pre-enhancement, a color adjustment; enhancement, with a Pix2Pix network (a Conditional Generative Adversarial Network) as the core of the method; and post-enhancement, a CLAHE adjustment for contrast and detail enhancement. This method was evaluated on a subset of the quality annotations for the Kaggle public database, which was re-classified into three categories (good, usable, and poor) by a specialist from the Fundación Oftalmológica Nacional. With this method, the quality of these images was improved for the good class by 72.33%. Likewise, image quality improved from the bad class to the usable class, and from the bad class to the good class, by 56.21% and 29.49% respectively.
    Research line: Computer vision for medical image analysis.
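The pre- and post-enhancement stages of such a three-stage pipeline can be sketched without the GAN core. Below, a gray-world color adjustment stands in for the thesis's (unspecified) pre-enhancement color adjustment, and global histogram equalization stands in for CLAHE, which additionally operates on local tiles with contrast clipping:

```python
import numpy as np

def gray_world(rgb):
    """Pre-enhancement sketch: gray-world color balancing. Scales each
    channel so all three share the same mean intensity."""
    means = rgb.reshape(-1, 3).mean(axis=0)          # per-channel means
    return np.clip(rgb * (means.mean() / means), 0.0, 255.0)

def hist_equalize(gray):
    """Post-enhancement sketch: global histogram equalization as a
    simplified stand-in for CLAHE (no tiling, no clip limit)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()                 # cumulative distribution
    return (cdf[gray.astype(int)] * 255).astype(np.uint8)

balanced = gray_world(np.full((2, 2, 3), 100.0))     # already balanced input
equalized = hist_equalize(np.array([[0, 0], [255, 255]], dtype=np.uint8))
```

The enhancement core would sit between these two stages, translating the color-balanced image through the trained Pix2Pix generator.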

    Automatic screening of diabetic retinopathy in fundus images using deep learning

    Diabetes is a chronic disease that currently affects more than 400 million adults in the world. This disease can cause several complications during the life of a patient. One of these complications is diabetic retinopathy. Being one of the leading causes of blindness in the working-age population, this complication is serious and requires medical prevention. The disease often appears without any symptoms, meaning that regular examinations with an ophthalmologist are required to enable its detection and treatment. This work is about automating the diagnosis of diabetic retinopathy using digital fundus images and deep learning. Indeed, deep learning and neural networks have recently been used in several fields, such as medical applications and computer vision. Convolutional neural networks perform applications such as image classification and image segmentation very well. Here, classification means labeling each image based on the presence or absence of diabetic retinopathy, and segmentation means extracting regions of interest in the image, such as blood vessels.