
    Automatic Classification of Bright Retinal Lesions via Deep Network Features

    Diabetic retinopathy is diagnosed in a timely manner by experienced ophthalmologists from color eye fundus images, in order to recognize potential retinal features and identify early cases of blindness. In this paper, it is proposed to extract deep features from the last fully-connected layer of four different pre-trained convolutional neural networks. These features are then fed into a non-linear classifier to discriminate three classes of diabetic cases, i.e., normal, exudates, and drusen. Averaged across 1113 color retinal images collected from six publicly available annotated datasets, the deep-features approach performs better than the classical bag-of-words approach. The proposed approaches have an average accuracy between 91.23% and 92.00%, with more than 13% improvement over traditional state-of-the-art methods. (Preprint submitted to Journal of Medical Imaging | SPIE, Jul 28, 2017.)
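The pipeline this abstract describes (features from the last fully-connected layer of several pre-trained CNNs, concatenated and fed to a non-linear classifier) can be sketched as follows. This is a minimal illustration only: random ReLU projections stand in for the pre-trained networks, synthetic clusters stand in for the three classes (normal, exudates, drusen), and a nearest-centroid rule stands in for the paper's non-linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the last fully-connected layer of one pre-trained CNN:
# a fixed random projection followed by ReLU (NOT real learned weights).
def make_extractor(in_dim, feat_dim, seed):
    w = np.random.default_rng(seed).normal(size=(in_dim, feat_dim))
    return lambda x: np.maximum(x @ w, 0.0)

IN_DIM, FEAT_DIM = 64, 32
extractors = [make_extractor(IN_DIM, FEAT_DIM, s) for s in range(4)]

def deep_features(x):
    # Concatenate feature vectors from the four "networks".
    return np.concatenate([f(x) for f in extractors])

# Synthetic stand-in for the three classes: one well-separated cluster each.
centers = rng.normal(size=(3, IN_DIM)) * 3
X = np.vstack([c + rng.normal(scale=0.3, size=(20, IN_DIM)) for c in centers])
y = np.repeat([0, 1, 2], 20)

F = np.array([deep_features(x) for x in X])

# Nearest-centroid in feature space as a simple classifier stand-in.
cents = np.array([F[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((F[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print(f"features per image: {F.shape[1]}, training accuracy: {acc:.2f}")
```

The key design point the sketch preserves is that the classifier never sees raw pixels, only the fused 128-dimensional deep-feature vector.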

    Multilevel Deep Feature Generation Framework for Automated Detection of Retinal Abnormalities Using OCT Images.

    Optical coherence tomography (OCT) images coupled with many learning techniques have been developed to diagnose retinal disorders. This work aims to develop a novel framework for extracting deep features from 18 pre-trained convolutional neural networks (CNNs) and to attain high performance using OCT images. In this work, we have developed a new framework for automated detection of retinal disorders using transfer learning. This model consists of three phases: deep fused and multilevel feature extraction using 18 pre-trained networks and tent maximal pooling; feature selection with ReliefF; and classification using the optimized classifier. The novelty of this framework lies in generating features using widely used CNNs and selecting the most suitable features for classification. The features extracted by our proposed intelligent feature extractor are fed to iterative ReliefF (IRF) to automatically select the best feature vector. The quadratic support vector machine (QSVM) is utilized as the classifier in this work. We have developed our model using two public OCT image datasets, named database 1 (DB1) and database 2 (DB2). The proposed framework attains 97.40% and 100% classification accuracies on the two OCT datasets, DB1 and DB2, respectively. These results illustrate the success of our model.
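The feature-selection phase above is built on ReliefF, whose core idea can be shown in a few lines: a feature scores highly if it is similar for each sample's nearest same-class neighbor ("hit") and dissimilar for its nearest other-class neighbor ("miss"). The following is a minimal single-neighbor sketch on synthetic data (not the paper's iterative IRF variant, and the data is invented for illustration).

```python
import numpy as np

def relieff_scores(X, y):
    """Minimal ReliefF-style relevance scores, using one nearest
    hit and one nearest miss per sample (L1 distance)."""
    n, d = X.shape
    W = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                       # exclude the sample itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(diff, dist, np.inf))
        # Reward features that separate classes, penalize ones that vary within a class.
        W += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return W / n

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 10))
X[:, 0] += 2 * y            # only feature 0 carries the class signal
scores = relieff_scores(X, y)
best = int(np.argmax(scores))
print("most relevant feature:", best)
```

In the paper's framework, this scoring would be applied iteratively to the fused deep-feature vectors rather than to raw synthetic columns.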

    The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey.

    Traditional dilated ophthalmoscopy can reveal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, and the latter is recognized as a worldwide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the various imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases is surveyed. Furthermore, current works are summarized and discussed. Finally, projected future trends are outlined. This survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.

    Multi-scale convolutional neural network for automated AMD classification using retinal OCT images

    BACKGROUND AND OBJECTIVE: Age-related macular degeneration (AMD) is the most common cause of blindness in developed countries, especially in people over 60 years of age. The workload of specialists and the healthcare system in this field has increased in recent years mainly due to three reasons: 1) increased use of retinal optical coherence tomography (OCT) imaging technique, 2) prevalence of population aging worldwide, and 3) chronic nature of AMD. Recent advancements in the field of deep learning have provided a unique opportunity for the development of fully automated diagnosis frameworks. Considering the presence of AMD-related retinal pathologies in varying sizes in OCT images, our objective was to propose a multi-scale convolutional neural network (CNN) that can capture inter-scale variations and improve performance using a feature fusion strategy across convolutional blocks. METHODS: Our proposed method introduces a multi-scale CNN based on the feature pyramid network (FPN) structure. This method is used for the reliable diagnosis of normal and two common clinical characteristics of dry and wet AMD, namely drusen and choroidal neovascularization (CNV). The proposed method is evaluated on the national dataset gathered at Hospital (NEH) for this study, consisting of 12649 retinal OCT images from 441 patients, and the UCSD public dataset, consisting of 108312 OCT images from 4686 patients. RESULTS: Experimental results show the superior performance of our proposed multi-scale structure over several well-known OCT classification frameworks. This feature combination strategy has proved to be effective on all tested backbone models, with improvements ranging from 0.4% to 3.3%. In addition, gradual learning has proved to be effective in improving performance in two consecutive stages. In the first stage, the performance was boosted from 87.2%±2.5% to 92.0%±1.6% using pre-trained ImageNet weights. 
In the second stage, another performance boost from 92.0%±1.6% to 93.4%±1.4% was observed as a result of fine-tuning the previous model on the UCSD dataset. Lastly, generating heatmaps provided additional proof of the effectiveness of our multi-scale structure, enabling the detection of retinal pathologies appearing in different sizes. CONCLUSION: The promising quantitative results of the proposed architecture, along with qualitative evaluations through generated heatmaps, prove the suitability of the proposed method as a screening tool in healthcare centers, assisting ophthalmologists in making better diagnostic decisions.
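The fusion strategy across convolutional blocks that this abstract attributes its gains to can be illustrated schematically: descriptors pooled from feature maps at several spatial scales are concatenated, so pathologies of different sizes (small drusen vs. large CNV lesions) contribute at their natural resolution. The sketch below uses random arrays as stand-ins for the block outputs; it mirrors only the fusion step, not the FPN architecture or gradual-learning schedule.

```python
import numpy as np

def global_avg_pool(fmap):
    # fmap: (channels, H, W) -> per-channel descriptor of shape (channels,)
    return fmap.mean(axis=(1, 2))

def fuse_multiscale(fmaps):
    """Concatenate pooled descriptors from convolutional blocks at
    different scales into one fused feature vector."""
    return np.concatenate([global_avg_pool(f) for f in fmaps])

rng = np.random.default_rng(2)
# Hypothetical outputs of three blocks: deeper blocks are coarser but wider.
fmaps = [rng.normal(size=(16, 32, 32)),
         rng.normal(size=(32, 16, 16)),
         rng.normal(size=(64, 8, 8))]
fused = fuse_multiscale(fmaps)
print("fused descriptor length:", fused.shape[0])  # 16 + 32 + 64 = 112
```

A classifier head operating on the fused vector then sees evidence from every scale at once, which is the property the paper's inter-scale feature combination relies on.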

    Weakly-supervised detection of AMD-related lesions in color fundus images using explainable deep learning

    Background and Objectives: Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, AMD is the most frequent cause of blindness in developed countries. Although some promising treatments have been proposed that effectively slow down its development, their effectiveness diminishes significantly in the advanced stages. This emphasizes the importance of large-scale screening programs for early detection. Nevertheless, implementing such programs for a disease like AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. For the characterization of the disease, clinicians have to identify and localize certain retinal lesions. All this motivates the development of automatic diagnostic methods. In this sense, several works have achieved highly positive results for AMD detection using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms linking the diagnosis to its related lesions to help clinicians better understand the decisions of the models. This is especially relevant, since the absence of such mechanisms limits the application of automatic methods in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. Methods: In our proposal, a CNN with a custom architectural setting is trained end-to-end for the joint identification of AMD and its associated retinal lesions. With the proposed setting, the lesion identification is directly derived from independent lesion activation maps; then, the diagnosis is obtained from the identified lesions. The training is performed end-to-end using image-level labels. Thus, lesion-specific activation maps are learned in a weakly-supervised manner. 
The provided lesion information is of high clinical interest, as it allows clinicians to assess the developmental stage of the disease. Additionally, the proposed approach makes it possible to explain the diagnosis obtained by the models directly from the identified lesions and their corresponding activation maps. The training data necessary for the approach can be obtained without much extra work on the part of clinicians, since the lesion information is habitually present in medical records. This is an important advantage over other methods, including fully-supervised lesion segmentation methods, which require pixel-level labels whose acquisition is arduous. Results: The experiments conducted on 4 different datasets demonstrate that the proposed approach is able to identify AMD and its associated lesions with satisfactory performance. Moreover, the evaluation of the lesion activation maps shows that the models trained using the proposed approach are able to identify the pathological areas within the image and, in most cases, to correctly determine to which lesion they correspond. Conclusions: The proposed approach provides meaningful information (lesion identification and lesion activation maps) that conveniently explains and complements the diagnosis, and is of particular interest to clinicians in the diagnostic process. 
Moreover, the data needed to train the networks using the proposed approach is commonly easy to obtain, which represents an important advantage in fields with particularly scarce data, such as medical imaging. Funding: This work was funded by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Axencia Galega de Innovación (GAIN), Xunta de Galicia, ref. IN845D 2020/38; Conselleria de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, ref. ED431C 2020/24, the predoctoral grant ref. ED481A 2021/140, and the postdoctoral grant ref. ED481B-2022-025; CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, is funded by Conselleria de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaria Xeral de Universidades (20%).
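The inference logic described in this abstract, per-lesion activation maps from which lesion presence and then the AMD diagnosis are derived, can be sketched compactly. The arrays below are invented stand-ins for a CNN's activation-map outputs; the sketch shows only the derivation step (global max pooling per map, sigmoid, then "AMD if any lesion is identified"), not the weakly-supervised training itself.

```python
import numpy as np

def lesion_presence(activation_maps):
    # Global max pooling: a lesion counts as present if any spatial
    # location of its activation map responds strongly.
    return activation_maps.reshape(activation_maps.shape[0], -1).max(axis=1)

def diagnose(activation_maps, threshold=0.5):
    """Derive per-lesion decisions and the image-level diagnosis
    directly from the lesion activation maps."""
    probs = 1 / (1 + np.exp(-lesion_presence(activation_maps)))  # sigmoid
    lesions = probs > threshold
    return lesions, bool(lesions.any())  # AMD if any lesion is identified

rng = np.random.default_rng(3)
# Hypothetical maps for 4 lesion types over a 28x28 grid, mostly inactive.
maps = rng.normal(loc=-5.0, size=(4, 28, 28))
maps[1, 10:14, 10:14] = 4.0        # one strongly activated lesion region
lesions, amd = diagnose(maps)
print("lesions:", lesions.astype(int), "AMD:", amd)
```

Because the diagnosis is a deterministic function of the activation maps, each positive decision can be traced back to the exact image region that produced it, which is the explainability property the paper emphasizes.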