18 research outputs found

    Automatic Classification of Bright Retinal Lesions via Deep Network Features

    Full text link
    Diabetic retinopathy is diagnosed in a timely manner by experienced ophthalmologists from color eye fundus images, in order to recognize potential retinal features and identify early-blindness cases. In this paper, we propose to extract deep features from the last fully-connected layer of four different pre-trained convolutional neural networks. These features are then fed into a non-linear classifier to discriminate three classes of diabetic cases, i.e., normal, exudates, and drusen. Averaged across 1113 color retinal images collected from six publicly available annotated datasets, the deep-features approach performs better than the classical bag-of-words approach. The proposed approaches achieve an average accuracy between 91.23% and 92.00%, an improvement of more than 13% over traditional state-of-the-art methods. Comment: Preprint submitted to Journal of Medical Imaging | SPIE (Tue, Jul 28, 2017)
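
    A minimal sketch of the pipeline described above, assuming a VGG-16 backbone from torchvision as one of the (unnamed here) pre-trained networks and an RBF-kernel SVM as the non-linear classifier; the preprocessing and classifier settings are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: deep features from the last fully-connected layer of a pre-trained CNN,
# then a non-linear classifier. VGG-16 stands in for the paper's four networks.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

vgg = models.vgg16(weights="IMAGENET1K_V1")  # downloads ImageNet weights
vgg.eval()
# Keep everything up to (and including) the last hidden fully-connected layer (4096-d output).
feature_extractor = torch.nn.Sequential(
    vgg.features, vgg.avgpool, torch.nn.Flatten(),
    *list(vgg.classifier.children())[:-1],
)

preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_path: str) -> torch.Tensor:
    """Return the 4096-d activation of the last fully-connected layer for one fundus image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_extractor(img).squeeze(0)

# Three-class problem from the abstract: normal vs. exudates vs. drusen.
# X: stacked feature vectors, y: integer labels; an RBF-kernel SVM stands in for the
# unspecified "non-linear classifier".
# clf = SVC(kernel="rbf").fit(X, y)
```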

    Automatic Discrimination of Color Retinal Images using the Bag of Words Approach

    No full text
    Diabetic retinopathy (DR) and age-related macular degeneration (ARMD) are among the major causes of visual impairment all over the world. DR is mainly characterized by small red spots, namely microaneurysms, and bright lesions, specifically exudates, whereas ARMD is mainly identified by tiny yellow or white deposits called drusen. Since exudates might be the only visible signs of early diabetic retinopathy, there is an increasing demand for automatic diagnosis of retinopathy. Exudates and drusen may share similar appearances; as a result, discriminating between them plays a key role in improving screening performance. In this research, we investigate the role of the bag-of-words approach in the automatic diagnosis of diabetic retinopathy. Initially, the color retinal images are preprocessed in order to reduce intra- and inter-patient variability. Subsequently, SURF (Speeded Up Robust Features), HOG (Histogram of Oriented Gradients), and LBP (Local Binary Patterns) descriptors are extracted. We propose single-based and multiple-based methods to construct the visual dictionary; in the multiple-dictionary case, the histograms of word occurrences from each dictionary are combined into a single histogram. Finally, this histogram representation is fed into a support vector machine with a linear kernel for classification. The introduced approach is evaluated for automatic discrimination of normal color retinal images from abnormal ones with bright lesions such as drusen and exudates. The approach was evaluated on 430 color retinal images drawn from six publicly available datasets and one local dataset. The mean accuracies achieved are 97.2% and 99.77% for the single-based and multiple-based dictionaries, respectively.
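
    A condensed sketch of the single-dictionary bag-of-words variant, with ORB descriptors standing in for SURF (which requires the non-free opencv-contrib build); the dictionary size and preprocessing are assumptions, and the multiple-dictionary fusion is omitted.

```python
# Hedged sketch: local descriptors -> k-means visual dictionary -> word-occurrence histogram
# -> linear SVM, mirroring the bag-of-words pipeline described in the abstract.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

orb = cv2.ORB_create()  # stand-in for SURF

def local_descriptors(image_path: str) -> np.ndarray:
    """Extract ORB descriptors from a grayscale version of the fundus image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

def build_dictionary(image_paths, n_words=200) -> KMeans:
    """Cluster all local descriptors into n_words visual words (the single dictionary)."""
    all_desc = np.vstack([local_descriptors(p) for p in image_paths]).astype(np.float32)
    return KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

def word_histogram(image_path: str, dictionary: KMeans) -> np.ndarray:
    """Normalized histogram of visual-word occurrences, the representation fed to the SVM."""
    desc = local_descriptors(image_path).astype(np.float32)
    words = dictionary.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(dictionary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# X = np.array([word_histogram(p, dictionary) for p in image_paths]); y = labels
# clf = LinearSVC().fit(X, y)   # linear-kernel SVM, as in the abstract
```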

    Bright Lesion Detection in Color Fundus Images Based on Texture Features

    Get PDF
    In this paper, a computer-aided screening system for the detection of bright lesions, or exudates, in color fundus images is proposed. The screening system identifies regions that are suspicious for bright lesions. A texture feature extraction method is also presented to describe the characteristics of each region of interest. In the final stage, normal and abnormal images are classified using a support vector machine classifier. The proposed system achieves effective detection performance compared with several state-of-the-art methods.
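
    The abstract does not name its texture descriptor, so the sketch below uses Haralick-style GLCM statistics purely as one plausible way to describe a candidate region before the SVM stage; the feature set is an assumption.

```python
# Hedged sketch: GLCM texture statistics for a candidate region, then an SVM over regions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_texture_features(roi_gray: np.ndarray) -> np.ndarray:
    """Contrast/correlation/energy/homogeneity over a few offsets for one uint8 grayscale ROI."""
    glcm = graycomatrix(roi_gray, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])  # 16-d vector

# X = np.array([glcm_texture_features(r) for r in candidate_regions]); y = 0/1 labels
# clf = SVC(kernel="rbf").fit(X, y)   # normal vs. bright-lesion regions
```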

    Automatic screening and grading of age-related macular degeneration from texture analysis of fundus images

    Get PDF
    Age-related macular degeneration (AMD) is a disease that causes visual deficiency and irreversible blindness in the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performance, with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality.
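
    A rough sketch of the feature family found most relevant in this study: multiresolution local binary patterns, classified here with a random forest and scored by ROC AUC. The LBP parameters, channel choice, and label grouping are assumptions made for illustration.

```python
# Hedged sketch: multiresolution LBP histograms + random forest + ROC AUC for AMD screening.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def multiresolution_lbp(gray: np.ndarray, radii=(1, 2, 3)) -> np.ndarray:
    """Concatenate uniform-LBP histograms computed at several radii (the 'multiresolution' part)."""
    feats = []
    for r in radii:
        n_points = 8 * r
        lbp = local_binary_pattern(gray, n_points, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=np.arange(n_points + 3), density=True)
        feats.append(hist)
    return np.hstack(feats)

# X_train/X_test: stacked LBP vectors; y: 0 = no AMD, 1 = referable AMD (screening task)
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```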

    The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey.

    Get PDF
    Traditional dilated ophthalmoscopy can reveal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, and the latter is recognized as a worldwide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the various imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases is surveyed. Furthermore, current works are summarized and discussed, and projected future trends are outlined. The work reviewed in this survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.

    Técnicas de análise de imagens para detecção de retinopatia diabética [Image analysis techniques for the detection of diabetic retinopathy]

    Get PDF
    Advisors: Anderson de Rezende Rocha and Jacques Wainer. Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Diabetic Retinopathy (DR) is a long-term complication of diabetes and the leading cause of blindness among working-age adults. Regular eye examinations are necessary to diagnose DR at an early stage, when treatment has the best prognosis and visual loss can be delayed or even prevented. Driven by the growing prevalence of diabetes and by the increased risk diabetics have of developing eye diseases, several works with well-established and promising approaches have been proposed for automatic screening. However, most existing work focuses on lesion detection using visual characteristics specific to each type of lesion. Additionally, handcrafted solutions for referable diabetic retinopathy detection and DR stage identification still depend heavily on the lesions, whose repetitive detection is complex and cumbersome to implement, even when adopting a unified detection scheme. The current art for automated referral assessment relies on highly abstract data-driven approaches. Usually, those approaches receive an image and output a response, which may come from a single model or an ensemble, and are not easily explainable. Hence, this work aims at enhancing lesion detection and reinforcing referral decisions with advanced handcrafted two-tiered image representations. We also intended to compose sophisticated data-driven models for referable DR detection and to incorporate supervised feature learning with saliency-oriented mid-level image representations, resulting in a robust yet accountable automated screening approach. Ultimately, we aimed at integrating our software solutions with simple retinal imaging devices. In the lesion detection task, we propose advanced handcrafted image characterization approaches for effectively detecting different lesions. Our leading advances are centered on designing a novel coding technique for retinal images and on preserving information in the pooling process. Automatically deciding whether or not the patient should be referred to an ophthalmic specialist is a more difficult, and still hotly debated, research aim. We designed a simple and robust method for referral decisions that does not rely on lesion detection stages. We also propose a novel and effective data-driven model that significantly improves performance for DR screening. Our accountable data-driven model produces a reliable (local and global) response along with a heatmap/saliency map that enables pixel-level comprehension of importance. We explored this methodology to create a local descriptor that is encoded into a rich mid-level representation. Data-driven methods are the state of the art for diabetic retinopathy screening. However, saliency maps are essential not only to interpret the learning in terms of pixel importance but also to reinforce small discriminative characteristics that have the potential to enhance the diagnosis.
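
    A hedged illustration of the "response plus heatmap" idea: a plain class activation map computed from a global-average-pooled CNN. ResNet-18 and CAM are generic stand-ins; the thesis' own coding, pooling, and mid-level representation are not reproduced here.

```python
# Hedged sketch: class scores (global response) plus a per-pixel importance map (CAM).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])  # everything before pooling/fc

def response_and_heatmap(x: torch.Tensor):
    """x: (1, 3, H, W) normalized image. Returns class scores and a pixel-importance map."""
    with torch.no_grad():
        feats = backbone(x)                                   # (1, 512, h, w) feature maps
        pooled = feats.mean(dim=(2, 3))                       # global average pooling
        scores = model.fc(pooled)                             # global response (class scores)
        cls = scores.argmax(dim=1)
        weights = model.fc.weight[cls]                        # (1, 512) weights of winning class
        cam = (weights[:, :, None, None] * feats).sum(dim=1)  # weighted sum of feature maps
        cam = F.interpolate(cam[None], size=x.shape[-2:],
                            mode="bilinear", align_corners=False)[0]
    return scores, cam  # the heatmap shows which pixels drove the decision
```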

    Développement et validation d’un système automatique de classification de la dégénérescence maculaire liée à l’âge [Development and validation of an automatic system for the classification of age-related macular degeneration]

    Get PDF
    Abstract: Age-related macular degeneration (AMD) is the leading cause of visual deficiency and legal blindness in the elderly population in industrialized countries. The disease is a group of heterogeneous disorders affecting the macula. For eye examination, a commonly used modality is fundus photography, a fast and non-invasive procedure from which a diagnosis of the stage of the disease can already be established. A recommended classification for AMD is the simplified AREDS classification, which divides the disease into four categories: non-AMD, early, moderate, and advanced. This classification helps determine the optimal, specific treatment. It is based on quantitative criteria but also on qualitative ones, introducing inter- and intra-expert variability. Moreover, with the aging population and systematic screening, more cases of AMD must be examined and more images analyzed, rendering this task long and laborious for clinicians. To address this problem, automatic methods for AMD classification have been proposed to make the process fast and reproducible. However, no existing method performs AMD severity classification robustly with respect to image quality. This point is especially important in a telemedicine context, where acquisition conditions vary. The aim of this project is to develop and validate an automatic system for AMD classification that is robust to image quality. To do so, we worked with a database of 159 images representing the four AREDS categories at various levels of image quality. The labelling of these images was performed by one ophthalmology expert and served as a reference. A study on feature extraction was carried out to determine relevant features and to set the parameters for this application; we conclude that features based on texture, color, and visual context are the most interesting. A selection step was then applied to reduce the dimensionality of the feature space, which also allowed us to evaluate feature relevance when all the features are combined. Local binary patterns applied to the green channel are shown to be the most discriminant features for AMD classification. Finally, different systems for AMD classification were modeled and tested to assess how the proposed method classifies fundus images into the different categories. The results demonstrated robustness to image quality and showed that our method outperforms the methods proposed in the literature. Errors were noted on images presenting diabetic retinopathy, visible choroidal vessels, or excessive degradation caused by artefacts. In this project, we propose the first AMD severity classification that is robust to image quality.
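
    A small sketch of two steps highlighted above: computing local binary patterns on the green channel of a fundus image and reducing dimensionality with a feature-selection step before classification. The selector, its parameters, and the classifier are assumptions rather than the thesis' exact pipeline.

```python
# Hedged sketch: green-channel LBP histogram -> univariate feature selection -> SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def green_channel_lbp(rgb: np.ndarray, n_points=8, radius=1) -> np.ndarray:
    """Uniform-LBP histogram of the green channel (index 1) of an RGB fundus image."""
    green = rgb[:, :, 1]
    lbp = local_binary_pattern(green, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(n_points + 3), density=True)
    return hist  # 10-d descriptor for this (n_points, radius) setting

# X = np.array([green_channel_lbp(img) for img in images]); y = AMD category labels
# model = make_pipeline(SelectKBest(f_classif, k=5), SVC()).fit(X, y)
```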

    A Foundation LAnguage-Image model of the Retina (FLAIR): Encoding expert knowledge in text supervision

    Full text link
    Foundation vision-language models are currently transforming computer vision and are on the rise in medical imaging, fueled by their very promising generalization capabilities. However, the initial attempts to transfer this new paradigm to medical imaging have shown less impressive performance than that observed in other domains, due to the significant domain shift and the complex, expert domain knowledge inherent to medical-imaging tasks. Motivated by the need for domain-expert foundation models, we present FLAIR, a pre-trained vision-language model for universal retinal fundus image understanding. To this end, we compiled 37 open-access, mostly categorical fundus imaging datasets from various sources, with up to 97 different target conditions and 284,660 images. We integrate the experts' domain knowledge in the form of descriptive textual prompts during both pre-training and zero-shot inference, enhancing the less-informative categorical supervision of the data. This textual expert knowledge, which we compiled from the relevant clinical literature and community standards, describes the fine-grained features of the pathologies as well as the hierarchies and dependencies between them. We report comprehensive evaluations, which illustrate the benefit of integrating expert knowledge and the strong generalization capabilities of FLAIR under difficult scenarios with domain shifts or unseen categories. When adapted with a lightweight linear probe, FLAIR outperforms fully-trained, dataset-focused models, more so in the few-shot regimes. Interestingly, FLAIR outperforms by a large margin more generalist, larger-scale image-language models, which emphasizes the potential of embedding experts' domain knowledge and the limitations of generalist models in medical imaging. Comment: The pre-trained model is available at: https://github.com/jusiro/FLAI
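
    A hedged sketch of the zero-shot, prompt-based inference described above, using a generic CLIP model from Hugging Face as a stand-in; FLAIR's actual weights, prompt templates, and preprocessing live in the linked repository, and the prompts below are hypothetical.

```python
# Hedged sketch: CLIP-style zero-shot classification of a fundus image with descriptive prompts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Descriptive, expert-style prompts replace bare category names (hypothetical wording).
prompts = [
    "a retinal fundus photograph with no visible lesions",
    "a retinal fundus photograph with hard exudates and microaneurysms",
    "a retinal fundus photograph with drusen deposits in the macula",
]

image = Image.open("fundus.jpg")  # placeholder path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # image-text similarity scores
probs = logits.softmax(dim=-1)                  # zero-shot class probabilities
print(dict(zip(prompts, probs.squeeze().tolist())))
```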