
    A method for quantifying sectoral optic disc pallor in fundus photographs and its association with peripapillary RNFL thickness

    Purpose: To develop an automatic method of quantifying optic disc pallor in fundus photographs and determine associations with peripapillary retinal nerve fibre layer (pRNFL) thickness. Methods: We used deep learning to segment the optic disc, fovea, and vessels in fundus photographs, and measured pallor. We assessed the relationship between pallor and pRNFL thickness derived from optical coherence tomography scans in 118 participants. Separately, we used images diagnosed by clinical inspection as pale (N=45) and assessed how measurements compared to healthy controls (N=46). We also developed automatic rejection thresholds and tested the software for robustness to camera type, image format, and resolution. Results: We developed software that automatically quantified disc pallor across several zones in fundus photographs. Pallor was associated with pRNFL thickness globally (β = -9.81 (SE = 3.16), p < 0.05), in the temporal inferior zone (β = -29.78 (SE = 8.32), p < 0.01), with the nasal/temporal ratio (β = 0.88 (SE = 0.34), p < 0.05), and in the whole disc (β = -8.22 (SE = 2.92), p < 0.05). Furthermore, pallor was significantly higher in the patient group. Lastly, we demonstrated the analysis to be robust to camera type, image format, and resolution. Conclusions: We developed software that automatically locates and quantifies disc pallor in fundus photographs and found associations between pallor measurements and pRNFL thickness. Translational relevance: We think our method will be useful for the identification and monitoring of diseases characterized by disc pallor/optic atrophy, including glaucoma and compression, and potentially in neurodegenerative disorders. Comment: 44 pages, 20 figures, 7 tables, submitted.
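    The zonal pallor measurement described above could be sketched along the following lines. This is a minimal illustration under stated assumptions, not the authors' software: pallor is approximated as the mean brightness of each angular sector of a segmented disc relative to the whole-disc mean, and the sector layout and `n_sectors` parameter are hypothetical.

```python
import numpy as np

def sectoral_pallor(image, cx, cy, radius, n_sectors=8):
    """Mean brightness per angular sector of the disc, relative to
    the whole-disc mean (illustrative proxy for sectoral pallor)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    disc = np.hypot(xs - cx, ys - cy) <= radius
    angles = np.arctan2(ys - cy, xs - cx)  # range -pi..pi
    sector_id = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    disc_mean = image[disc].mean()
    return np.array([image[disc & (sector_id == s)].mean() / disc_mean
                     for s in range(n_sectors)])

# Synthetic disc: uniform brightness, so every sector ratio is ~1.
img = np.full((64, 64), 0.5)
ratios = sectoral_pallor(img, cx=32, cy=32, radius=20)
```

On a real photograph, sectors with ratios markedly above the disc mean would flag localized pallor; the rejection thresholds and zone definitions in the paper are not reproduced here.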

    Learning the Retinal Anatomy From Scarce Annotated Data Using Self-Supervised Multimodal Reconstruction

    Deep learning is becoming the reference paradigm for approaching many computer vision problems. Nevertheless, training deep neural networks typically requires a significantly large amount of annotated data, which is not always available. A proven approach to alleviating the scarcity of annotated data is transfer learning. In practice, however, this technique typically relies on the availability of additional annotations, either from the same domain or from natural images. We propose a novel alternative that allows transfer learning from unlabelled data of the same domain, based on a multimodal reconstruction task. A neural network trained to generate one image modality from another must learn relevant patterns from the images to successfully solve the task. These learned patterns can then be used to solve additional tasks in the same domain, reducing the need for a large amount of annotated data. In this work, we apply this idea to the localization and segmentation of the most important anatomical structures of the eye fundus in retinography. The objective is to reduce the amount of annotated data required to solve the different tasks using deep neural networks. For that purpose, a neural network is pre-trained using the self-supervised multimodal reconstruction of fluorescein angiography from retinography. Then, the network is fine-tuned on the different target tasks performed on the retinography.
The obtained results demonstrate that the proposed self-supervised transfer learning strategy leads to state-of-the-art performance in all the studied tasks with a significant reduction of the required annotations. This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project, and by Ministerio de Economía, Industria y Competitividad, Government of Spain, through the DPI2015-69948-R research project. The authors also receive financial support from the ERDF and Xunta de Galicia (Spain) through Grupo de Referencia Competitiva, ref. ED431C 2016-047, and from the European Social Fund (ESF) of the EU and Xunta de Galicia (Spain) through the predoctoral grant contract ref. ED481A-2017/328. CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia (Spain), through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
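    The pretrain-then-fine-tune scheme can be illustrated at toy scale. The sketch below is a deliberately simplified stand-in for the paper's network: a linear map is "pretrained" on a reconstruction task between two synthetic modalities, and its weights are then reused as the initialization for a downstream task, which is the essence of the transfer strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "modalities": B is a fixed linear transform of A (a stand-in for
# the retinography -> fluorescein angiography reconstruction task).
T = rng.normal(size=(4, 4))
A = rng.normal(size=(100, 4))
B = A @ T

# Self-supervised pretraining: learn W so that A @ W approximates B.
W_pretrained, *_ = np.linalg.lstsq(A, B, rcond=None)

# Fine-tuning on a target task would start from the pretrained weights
# instead of a random initialization; here we just copy them.
W_finetuned = W_pretrained.copy()

reconstruction_error = np.linalg.norm(A @ W_pretrained - B)
```

The point of the sketch is only that the parameters learned on the label-free reconstruction task carry over to the target task; the actual work uses deep convolutional networks, not a linear map.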

    Digital ocular fundus imaging: a review

    Ocular fundus imaging plays a key role in monitoring the health status of the human eye. Currently, a large number of imaging modalities allow the assessment and/or quantification of ocular changes from a healthy status. This review focuses on the main digital fundus imaging modality, color fundus photography, with a brief overview of complementary techniques, such as fluorescein angiography. While focusing on two-dimensional color fundus photography, the authors address the evolution from nondigital to digital imaging and its impact on diagnosis. They also compare several studies performed along the transitional path of this technology. Retinal image processing and analysis, automated disease detection, and identification of the stage of diabetic retinopathy (DR) are addressed as well. The authors emphasize the problems of image segmentation, focusing on the major landmark structures of the ocular fundus: the vascular network, optic disk, and fovea. Several proposed approaches for the automatic detection of signs of disease onset and progression, such as microaneurysms, are surveyed. A thorough comparison is conducted among different studies with regard to the number of eyes/subjects, imaging modality, fundus camera used, field of view, and image resolution to identify the large variation in characteristics from one study to another. Similarly, the main features of the proposed classifications and algorithms for the automatic detection of DR are compared, thereby addressing computer-aided diagnosis and computer-aided detection for use in screening programs. Fundação para a Ciência e Tecnologia; FEDER; Programa COMPETE.
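    A recurring detail in the vessel-segmentation literature surveyed above is that vessels show the highest contrast in the green channel of a color fundus photograph. As a purely illustrative sketch (not a method from the review; the threshold rule is an assumption), a crude vessel-candidate mask can be built by flagging unusually dark green-channel pixels:

```python
import numpy as np

def vessel_candidates(rgb, k=1.0):
    """Crude vessel-candidate mask: vessels appear dark in the green
    channel, so flag pixels darker than (mean - k * std)."""
    green = rgb[..., 1].astype(float)
    return green < green.mean() - k * green.std()

# Synthetic fundus patch: bright background with one dark "vessel" row.
img = np.full((32, 32, 3), 200, dtype=np.uint8)
img[15, :, :] = 40
mask = vessel_candidates(img)
```

Real systems follow such candidate generation with matched filtering, morphology, or learned classifiers; this sketch only shows why the green channel is the usual starting point.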

    Optic Disc and Fovea Localisation in Ultra-widefield Scanning Laser Ophthalmoscope Images Captured in Multiple Modalities

    We propose a convolutional neural network for localising the centres of the optic disc (OD) and fovea in ultra-wide field of view scanning laser ophthalmoscope (UWFoV-SLO) images of the retina. Images captured in both reflectance and autofluorescence (AF) modes, and in central pole and eye-steered gazes, were used. The method achieved an OD localisation accuracy of 99.4% within one OD radius, and a fovea localisation accuracy of 99.1% within one OD radius, on a test set comprising 1790 images. The performance of fovea localisation in AF images was comparable to the variation between human annotators at this task. The laterality of the image (whether it shows the left or right eye) was inferred from the OD and fovea coordinates with an accuracy of 99.9%.
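    The laterality inference from OD and fovea coordinates can be sketched very simply, assuming the common anatomical convention that the optic disc lies nasal to the fovea, so the disc centre sits to the right of the fovea in a right-eye image. This convention, and the function below, are an illustration rather than the paper's exact rule:

```python
def infer_laterality(od_x, fovea_x):
    """Infer eye laterality from horizontal OD/fovea positions,
    assuming the disc is nasal to the fovea: disc right of the
    fovea -> right eye, disc left of the fovea -> left eye."""
    return "right" if od_x > fovea_x else "left"

laterality = infer_laterality(od_x=620, fovea_x=400)
```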

    Automatic Grading of Diabetic Retinopathy on a Public Database

    With the growing diabetes epidemic, retina specialists have to examine a tremendous number of fundus images for the detection and grading of diabetic retinopathy (DR). In this study, we propose an automatic grading system for diabetic retinopathy. First, red lesion detection is performed to generate a lesion probability map. The latter is then represented by 35 features combining location, size, and probability information, which are finally used for classification. A leave-one-out cross-validation using a random forest is conducted on a public database of 1200 images to classify the images into 4 grades. The proposed system achieved a classification accuracy of 74.1% and a weighted kappa value of 0.731, indicating significant agreement with the reference. These preliminary results show that automatic DR grading is feasible, with a performance comparable to that of human experts.
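    The leave-one-out evaluation protocol used above can be sketched at toy scale. The sketch substitutes a 1-nearest-neighbour classifier for the random forest and synthetic one-dimensional features for the 35 lesion features, so it illustrates only the protocol, not the system:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out cross-validation: each sample is classified by a
    1-nearest-neighbour rule trained on all remaining samples."""
    correct = 0
    for i in range(len(X)):
        train = np.arange(len(X)) != i          # hold out sample i
        dists = np.linalg.norm(X[train] - X[i], axis=1)
        pred = y[train][np.argmin(dists)]
        correct += pred == y[i]
    return correct / len(X)

# Two well-separated toy "grades": LOO classifies them perfectly.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loo_accuracy(X, y)
```

Leave-one-out is attractive for small databases like the 1200-image set here because every image is used for both training and testing without overlap within a fold.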

    Automatic analysis of retinal images to aid in the diagnosis and grading of diabetic retinopathy

    Diabetic retinopathy (DR) is the most common complication of diabetes mellitus and one of the leading causes of preventable blindness in the adult working population. Visual loss can be prevented in the early stages of DR, when treatments are effective. Therefore, early diagnosis is paramount. However, DR may be clinically asymptomatic until the advanced stage, when vision is already affected and treatment may become difficult. For this reason, diabetic patients should undergo regular eye examinations through screening programs. Traditionally, DR screening programs are run by trained specialists through visual inspection of retinal images. However, this manual analysis is time-consuming and expensive. With the increasing incidence of diabetes and the limited number of clinicians and sanitary resources, the early detection of DR becomes unviable. For this reason, computer-aided diagnosis (CAD) systems are required to assist specialists in a fast, reliable diagnosis, reducing the workload and the associated costs. We hypothesize that the application of novel, automatic algorithms for fundus image analysis could contribute to the early diagnosis of DR. Consequently, the main objective of the present Doctoral Thesis is to study, design, and develop novel methods based on the automatic analysis of fundus images to aid in the screening, diagnosis, and treatment of DR. In order to achieve this goal, we built a private database and used five public retinal databases: DRIMDB, DIARETDB1, DRIVE, Messidor, and Kaggle. The stages of fundus image processing covered in this Thesis are: retinal image quality assessment (RIQA), the location of the optic disc (OD) and the fovea, the segmentation of red lesions (RLs) and exudates (EXs), and DR severity grading. RIQA was studied with two different approaches. The first approach was based on the combination of novel, global features.
Results achieved 91.46% accuracy, 92.04% sensitivity, and 87.92% specificity using the private database. We developed a second approach to RIQA based on deep learning. We achieved 95.29% accuracy with the private database and 99.48% accuracy with the DRIMDB database. The location of the OD and the fovea was performed using a combination of saliency maps. The proposed methods were evaluated on the private database and the public databases DRIVE, DIARETDB1, and Messidor. For the OD, we achieved 100% accuracy for all databases except Messidor (99.50%). As for the fovea location, we also reached 100% accuracy for all databases except Messidor (99.67%). The joint segmentation of RLs and EXs was accomplished by decomposing the fundus image into layers. Results were computed per pixel and per image. Using the private database, 88.34% per-image accuracy (ACCi) was reached for RL detection and 95.41% ACCi for EX detection. An additional method was proposed for the segmentation of RLs based on superpixels. Evaluating this method with the private database, we obtained 84.45% ACCi. Results were validated using the DIARETDB1 database. Finally, we proposed a deep learning framework for automatic DR severity grading. The method was based on a novel attention mechanism which performs separate attention of the dark and the bright structures of the retina. The Kaggle DR detection dataset was used for development and validation. The International Clinical DR Scale was considered, which comprises 5 DR severity levels. Classification results for all classes achieved 83.70% accuracy and a Quadratic Weighted Kappa of 0.78. The methods proposed in this Doctoral Thesis form a complete, automatic DR screening system, contributing to the early detection of DR. In this way, diabetic patients could receive better attention for their ocular health, avoiding vision loss.
    In addition, the workload of specialists could be relieved while healthcare costs are reduced.
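    The Quadratic Weighted Kappa used above to report grading agreement is a standard metric on ordinal labels; a minimal implementation of the standard formulation (not code from the thesis) is:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic weighted kappa between two graders' ordinal labels.
    Disagreements are penalised by the squared distance between grades,
    normalised against chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    conf = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):                      # confusion matrix
        conf[i, j] += 1
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    expected = np.outer(conf.sum(1), conf.sum(0)) / conf.sum()
    return 1 - (w * conf).sum() / (w * expected).sum()

# Perfect agreement on the 5-level scale gives kappa = 1.
kappa = quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5)
```

The quadratic weighting makes the metric appropriate for the 5-level International Clinical DR Scale, where confusing adjacent grades is less severe than confusing distant ones.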