
    A deep learning framework for the detection and quantification of drusen and reticular pseudodrusen on optical coherence tomography

    Purpose - To develop and validate a deep learning (DL) framework for the detection and quantification of drusen and reticular pseudodrusen (RPD) on optical coherence tomography scans. Design - Development and validation of deep learning models for classification and feature segmentation. Methods - A DL framework was developed consisting of a classification model and an out-of-distribution (OOD) detection model for the identification of ungradable scans; a classification model to identify scans with drusen or RPD; and an image segmentation model to independently segment lesions as RPD or drusen. Data were obtained from 1284 participants in the UK Biobank (UKBB) with a self-reported diagnosis of age-related macular degeneration (AMD) and 250 UKBB controls. Drusen and RPD were manually delineated by five retina specialists. The main outcome measures were sensitivity, specificity, area under the ROC curve (AUC), kappa, accuracy and intraclass correlation coefficient (ICC). Results - The classification models performed strongly at their respective tasks (0.95, 0.93, and 0.99 AUC, respectively, for the ungradable scans classifier, the OOD model, and the drusen and RPD classification model). The mean ICC for drusen and RPD area vs. graders was 0.74 and 0.61, respectively, compared with 0.69 and 0.68 for intergrader agreement. Free-response ROC (FROC) curves showed that the model's sensitivity was close to human performance. Conclusions - The models achieved high classification and segmentation performance, similar to human performance. Application of this robust framework will further our understanding of RPD as a separate entity from drusen in both research and clinical settings. Comment: 26 pages, 7 figures
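
    As a rough illustration of how such a staged framework fits together (this is not the authors' implementation; the placeholder models, thresholds and scan size below are assumptions), a minimal Python sketch of the gradability check, OOD screen, drusen/RPD classification and lesion segmentation stages:

import numpy as np

def gradability_score(scan: np.ndarray) -> float:
    # Stand-in for the ungradable-scan classifier: returns P(gradable).
    return float(scan.mean() > 0.05)

def ood_score(scan: np.ndarray) -> float:
    # Stand-in for the out-of-distribution detector (higher = more OOD).
    return float(abs(scan.std() - 0.25))

def classify_lesions(scan: np.ndarray) -> dict:
    # Stand-in for the drusen/RPD presence classifier.
    return {"drusen": 0.9, "rpd": 0.2}

def segment_lesions(scan: np.ndarray) -> np.ndarray:
    # Stand-in segmentation mask: 0 = background, 1 = drusen, 2 = RPD
    # (this toy version only ever marks drusen).
    return (scan > 0.5).astype(np.uint8)

def analyse_scan(scan, grade_thr=0.5, ood_thr=0.5, cls_thr=0.5):
    # Scans failing the quality or OOD checks never reach the lesion models.
    if gradability_score(scan) < grade_thr or ood_score(scan) > ood_thr:
        return {"status": "ungradable"}
    probs = classify_lesions(scan)
    result = {"status": "gradable", "probs": probs}
    if max(probs.values()) >= cls_thr:
        mask = segment_lesions(scan)
        # En-face areas in pixels; convert using the scanner's pixel spacing.
        result["drusen_px"] = int((mask == 1).sum())
        result["rpd_px"] = int((mask == 2).sum())
    return result

print(analyse_scan(np.random.rand(496, 512)))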

    Deep learning in ophthalmology: The technical and clinical considerations

    The advent of computer graphic processing units, improvements in mathematical models and the availability of big data have allowed artificial intelligence (AI) using machine learning (ML) and deep learning (DL) techniques to achieve robust performance for broad applications in social media, the internet of things, the automotive industry and healthcare. DL systems in particular provide improved capability in image, speech and motion recognition as well as in natural language processing. In medicine, significant progress of AI and DL systems has been demonstrated in image-centric specialties such as radiology, dermatology, pathology and ophthalmology. New studies, including pre-registered prospective clinical trials, have shown that DL systems are accurate and effective in detecting diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), retinopathy of prematurity and refractive error, and in identifying cardiovascular risk factors and diseases, from digital fundus photographs. There is also increasing attention on the use of AI and DL systems in identifying disease features, progression and treatment response for retinal diseases such as neovascular AMD and diabetic macular edema using optical coherence tomography (OCT). Additionally, the application of ML to visual fields may be useful in detecting glaucoma progression. There are limited studies that incorporate clinical data, including electronic health records, in AI and DL algorithms, and no prospective studies demonstrating that AI and DL algorithms can predict the development of clinical eye disease. This article describes the global eye disease burden, unmet needs and common conditions of public health importance for which AI and DL systems may be applicable. Technical and clinical aspects of building a DL system to address those needs, and the potential challenges for clinical adoption, are discussed. AI, ML and DL will likely play a crucial role in clinical ophthalmology practice, with implications for screening, diagnosis and follow-up of the major causes of vision impairment in the setting of ageing populations globally

    Multi-scale convolutional neural network for automated AMD classification using retinal OCT images

    BACKGROUND AND OBJECTIVE: Age-related macular degeneration (AMD) is the most common cause of blindness in developed countries, especially in people over 60 years of age. The workload of specialists and the healthcare system in this field has increased in recent years, mainly for three reasons: 1) increased use of the retinal optical coherence tomography (OCT) imaging technique, 2) population ageing worldwide, and 3) the chronic nature of AMD. Recent advancements in the field of deep learning have provided a unique opportunity for the development of fully automated diagnosis frameworks. Considering the presence of AMD-related retinal pathologies of varying sizes in OCT images, our objective was to propose a multi-scale convolutional neural network (CNN) that can capture inter-scale variations and improve performance using a feature fusion strategy across convolutional blocks. METHODS: Our proposed method introduces a multi-scale CNN based on the feature pyramid network (FPN) structure. This method is used for the reliable diagnosis of normal scans and of two common clinical characteristics of dry and wet AMD, namely drusen and choroidal neovascularization (CNV). The proposed method is evaluated on the national dataset gathered at Noor Eye Hospital (NEH) for this study, consisting of 12649 retinal OCT images from 441 patients, and on the UCSD public dataset, consisting of 108312 OCT images from 4686 patients. RESULTS: Experimental results show the superior performance of our proposed multi-scale structure over several well-known OCT classification frameworks. This feature combination strategy proved effective on all tested backbone models, with improvements ranging from 0.4% to 3.3%. In addition, gradual learning proved effective in improving performance over two consecutive stages. In the first stage, performance was boosted from 87.2%±2.5% to 92.0%±1.6% using pre-trained ImageNet weights. In the second stage, a further performance boost from 92.0%±1.6% to 93.4%±1.4% was observed as a result of fine-tuning the previous model on the UCSD dataset. Lastly, generating heatmaps provided additional evidence for the effectiveness of our multi-scale structure, enabling the detection of retinal pathologies appearing in different sizes. CONCLUSION: The promising quantitative results of the proposed architecture, along with qualitative evaluations through generated heatmaps, demonstrate the suitability of the proposed method to be used as a screening tool in healthcare centers, assisting ophthalmologists in making better diagnostic decisions
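
    A minimal sketch of the multi-scale feature fusion idea described above: features from several backbone depths are projected to a common width with 1x1 convolutions, deeper maps are upsampled and summed with shallower ones FPN-style, and the fused map feeds a normal/drusen/CNV head. The tiny backbone, layer widths and input size are assumptions, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiScaleOCTNet(nn.Module):
    """Toy FPN-style classifier: multi-depth features are fused before pooling."""
    def __init__(self, num_classes: int = 3, width: int = 64):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        # 1x1 "lateral" projections bring every scale to the same channel width.
        self.lat1, self.lat2, self.lat3 = (nn.Conv2d(c, width, 1) for c in (32, 64, 128))
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # Top-down fusion: upsample deeper maps and add them to shallower ones,
        # so lesions of different sizes contribute to one shared representation.
        p3 = self.lat3(f3)
        p2 = self.lat2(f2) + F.interpolate(p3, size=f2.shape[-2:], mode="nearest")
        p1 = self.lat1(f1) + F.interpolate(p2, size=f1.shape[-2:], mode="nearest")
        pooled = F.adaptive_avg_pool2d(p1, 1).flatten(1)
        return self.head(pooled)  # logits for normal / drusen / CNV

logits = TinyMultiScaleOCTNet()(torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 3])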

    Development of Novel Diagnostic Tools for Dry Eye Disease using Infrared Meibography and In Vivo Confocal Microscopy

    Dry eye disease (DED) is a multifactorial disease of the ocular surface in which tear film instability, hyperosmolarity, neurosensory abnormalities, meibomian gland dysfunction, and ocular surface inflammation and damage play etiological roles. An estimated 5 to 50% of the world population, across different geographic locations, ages and genders, is currently affected by DED. The risk and occurrence of DED increase significantly with age, which makes dry eye a major and growing public health issue. DED not only impacts the patient's quality of vision and life, but also creates a socio-economic burden of millions of euros per year. DED diagnosis and monitoring can be challenging in clinical practice due to the multifactorial nature of the disease and the poor correlation between signs and symptoms. Key clinical tests and techniques for DED diagnosis include tear film break-up time, tear secretion (Schirmer's test), ocular surface staining, tear osmolarity measurement, and conjunctival impression cytology. However, these clinical diagnostic techniques are subjective, selective, require contact, and are unpleasant for the patient's eye. Currently, advances in different state-of-the-art imaging modalities provide non-invasive, non- or semi-contact, objective parameters that enable objective evaluation for DED diagnosis. Among these constantly evolving imaging modalities, some techniques have been developed to assess the morphology and function of the meibomian glands, and the microanatomy and alteration of different ocular surface tissues such as corneal nerves, immune cells, microneuromas, and conjunctival blood vessels. These clinical parameters cannot be measured by conventional clinical assessment alone. The combination of these imaging modalities with clinical feedback provides unparalleled quantitative information on the dynamic properties and functional parameters of different ocular surface tissues. Moreover, image-based biomarkers provide objective, specific, non- or minimal-contact diagnosis that is faster and less unpleasant for the patient's eye than clinical assessment techniques. The aim of this PhD thesis was to introduce novel deep learning-based computational methods to segment and quantify meibomian glands (in both upper and lower eyelids), corneal nerves, and dendritic cells. The developed methods use raw images exported directly from the clinical devices, without any image pre-processing, to generate segmentation masks, and then provide fully automatic morphometric quantification parameters for more reliable disease diagnosis. Notably, the developed methods provide complete segmentation and quantification information for faster disease characterization, and are the first (especially for meibomian glands and dendritic cells) to provide such a complete morphometric analysis. Taken together, we have developed deep learning-based automatic systems to segment and quantify the ocular surface tissues related to DED, namely meibomian glands, corneal nerves, and dendritic cells, to provide reliable and faster disease characterization. The developed systems overcome the current limitations of subjective image analysis and enable precise, accurate, reliable, and reproducible ocular surface tissue analysis. They have the potential to make an impact clinically and in the research environment by enabling faster disease diagnosis, facilitating new drug development, and standardizing clinical trials. Moreover, they will allow both researchers and clinicians to analyze meibomian glands, corneal nerves, and dendritic cells more reliably while significantly reducing the time needed to analyze patient images. Finally, the methods developed in this research significantly increase the efficiency of evaluating clinical images, thereby supporting and potentially improving the diagnosis and treatment of ocular surface disease
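
    As an illustration of the kind of fully automatic morphometric quantification the thesis derives from segmentation masks, a minimal meibography-style sketch; the specific metrics, the synthetic masks and the eyelid region are simplified assumptions, not the thesis pipeline.

import numpy as np
from scipy import ndimage

def gland_morphometrics(gland_mask: np.ndarray, eyelid_mask: np.ndarray) -> dict:
    # Connected-component labelling gives one label per gland.
    labels, n_glands = ndimage.label(gland_mask)
    gland_area = int(gland_mask.sum())
    eyelid_area = int(eyelid_mask.sum())
    return {
        "gland_count": int(n_glands),
        # Fraction of the tarsal (eyelid) area covered by glands, a common
        # dropout-style measure in meibography.
        "area_ratio": gland_area / max(eyelid_area, 1),
        "mean_gland_area_px": gland_area / max(n_glands, 1),
    }

# Tiny synthetic example: two gland blobs inside an eyelid region.
eyelid = np.ones((100, 200), dtype=bool)
glands = np.zeros_like(eyelid)
glands[20:80, 30:45] = True
glands[20:80, 60:75] = True
print(gland_morphometrics(glands, eyelid))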

    A Deep Learning Framework for the Detection and Quantification of Reticular Pseudodrusen and Drusen on Optical Coherence Tomography

    PURPOSE: The purpose of this study was to develop and validate a deep learning (DL) framework for the detection and quantification of reticular pseudodrusen (RPD) and drusen on optical coherence tomography (OCT) scans. METHODS: A DL framework was developed consisting of a classification model and an out-of-distribution (OOD) detection model for the identification of ungradable scans; a classification model to identify scans with drusen or RPD; and an image segmentation model to independently segment lesions as RPD or drusen. Data were obtained from 1284 participants in the UK Biobank (UKBB) with a self-reported diagnosis of age-related macular degeneration (AMD) and 250 UKBB controls. Drusen and RPD were manually delineated by five retina specialists. The main outcome measures were sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), kappa, accuracy, intraclass correlation coefficient (ICC), and free-response receiver operating characteristic (FROC) curves. RESULTS: The classification models performed strongly at their respective tasks (0.95, 0.93, and 0.99 AUC, respectively, for the ungradable scans classifier, the OOD model, and the drusen and RPD classification models). The mean ICC for the drusen and RPD area versus graders was 0.74 and 0.61, respectively, compared with 0.69 and 0.68 for intergrader agreement. FROC curves showed that the model's sensitivity was close to human performance. CONCLUSIONS: The models achieved high classification and segmentation performance, similar to human performance. TRANSLATIONAL RELEVANCE: Application of this robust framework will further our understanding of RPD as a separate entity from drusen in both research and clinical settings
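
    The ICC values reported above for model-versus-grader lesion areas can be computed with a standard two-way random-effects formulation. Below is a self-contained sketch; the ICC(2,1) variant and the toy area values are assumptions, since the abstract does not spell out the exact ICC computation used.

import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings has shape (n_subjects, n_raters), e.g. lesion area per scan."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical drusen areas (mm^2): column 0 = model, column 1 = one grader.
areas = np.array([[0.10, 0.12], [0.30, 0.28], [0.05, 0.07], [0.50, 0.45], [0.22, 0.25]])
print(round(icc2_1(areas), 3))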

    Explainable AI for retinal OCT diagnosis

    Artificial intelligence methods such as deep learning are leading to great progress in complex tasks that are usually associated with human intelligence and experience. Deep learning models have matched, if not bettered, human performance for medical diagnosis tasks including retinal diagnosis. Given a sufficient amount of data and computational resources, these models can perform classification and segmentation as well as related tasks such as image quality improvement. The adoption of these systems in actual healthcare centers has been limited by the lack of reasoning behind their decisions. This black-box nature, along with upcoming regulations for transparency and privacy, exacerbates the ethico-legal challenges faced by deep learning systems. Attribution methods are a way to explain the decisions of a deep learning model by generating a heatmap of the features that contribute most to the model's decision. These methods are generally compared in quantitative terms on standard machine learning datasets; however, their ability to generalize to specific data distributions such as retinal OCT has not been thoroughly evaluated. In this thesis, multiple attribution methods for explaining the decisions of deep learning models for retinal diagnosis are compared, and it is evaluated whether the methods considered best for explainability outperform methods with a relatively simpler theoretical background. A review of current deep learning models for retinal diagnosis and of the state-of-the-art explainability methods for medical diagnosis is provided. A commonly used deep learning model is trained on a large public dataset of OCT images and the attributions are generated using various methods. A quantitative and qualitative comparison of these approaches is carried out using several performance metrics and a large panel of experienced retina specialists. The initial quantitative metrics include the runtime of each method, RMSE, and Spearman's rank correlation for a single instance of the model. Two stronger metrics, robustness and sensitivity, are then presented; these evaluate the consistency among different instances of the same model and the ability to highlight the features with the most effect on the model output, respectively. Similarly, the initial qualitative analysis compares the heatmaps with a clinician's markings in terms of cosine similarity. Next, a panel of 14 clinicians rated the heatmaps of each method; their subjective feedback, reasons for preference, and general feedback about using such a system are also documented. It is concluded that explainability methods can make the decision process of deep learning models more transparent and that the choice of method should account for the preference of the domain experts. There is a high degree of acceptance among the clinicians surveyed for using such systems. Future directions regarding system improvements and enhancements are also discussed
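
    As a simplified stand-in for the attribution methods compared in the thesis, the sketch below computes a plain gradient-saliency heatmap for a toy CNN and scores it against a reference marking with cosine similarity, mirroring the qualitative comparison described above. The network, the input image and the "clinician mask" are placeholders, not the models or data used in the thesis.

import torch
import torch.nn as nn

# Toy 4-class OCT classifier standing in for the trained model.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))

def saliency_map(model, image, target_class):
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Attribution = magnitude of the gradient of the class score w.r.t. pixels.
    return image.grad.abs().squeeze()

def cosine_similarity(heatmap, marking):
    a, b = heatmap.flatten(), marking.flatten().float()
    return float(torch.dot(a, b) / (a.norm() * b.norm() + 1e-8))

oct_bscan = torch.rand(1, 1, 64, 64)          # placeholder OCT B-scan
clinician_mask = torch.rand(64, 64) > 0.9     # placeholder expert marking
heat = saliency_map(model, oct_bscan, target_class=0)
print(cosine_similarity(heat, clinician_mask))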

    Novel segmentation algorithm for high-throughput analysis of spectral domain-optical coherence tomography imaging of teleost retina

    Aim: Spectral Domain-Optical Coherence Tomography (SD-OCT) has become an essential tool to assess the health of ocular tissues in live subjects. The processing of SD-OCT images, in particular from non-mammalian species, is a labour-intensive manual process due to the lack of access to analytical programs. The work presented herein describes the development and implementation of a novel computer algorithm for quantitative analysis of SD-OCT images of live teleost eyes. We hypothesized that this algorithm, in comparison to manual segmentation of SD-OCT images, would allow more precise measurement, with significantly higher throughput capacity, of retinal architecture in live teleost ocular tissue. Methods: Automated segmentation of the retinal layers in SD-OCT images was developed using a novel algorithm based on thresholding, which operates on the pixel values contained in an image. The algorithm measured the thickness characteristics of the retina present in the input dataset to provide increased accuracy and repeatability of SD-OCT analysis over manual measurements. The program was also designed to allow adjustment of the thresholding variables to suit any specific image set. Heat-map software was created alongside the algorithm to plot the SD-OCT image measurements as a colour gradient. Results: Automated segmentation analysis of the retinal layers from SD-OCT images enabled analysis of a large volume of imaging data of teleost ocular structures in a short time. The algorithm was as accurate as manual measurements and provided repeatability, since measurements could be quickly reassessed to confirm previously determined results: it can generate hundreds of retinal thickness measurements per image for a large number of images in a given dataset, and because thresholding is a deterministic mathematical process for determining a range of values, each input always produces the same output. The measurements produced from this assessment were represented by the heat-map software, which directly converts the measurements taken from each processed image into a map of the changes in thickness across the whole retinal scan. Conclusions: Our work addresses the need for accurate and high-throughput SD-OCT data analysis for the retinal tissues of teleosts, where previously no such program existed. Our heat-mapping software enables visualization of retinal thickness across whole retinal scans, facilitating the comparison of specimens and the localization of areas of interest. Our novel algorithm provides the first tools to analyze SD-OCT scans of non-mammalian species at a faster rate than manual analysis, increasing the potential of future research output. The adaptability of our programs makes them potentially suitable for the analysis of SD-OCT scans from other non-mammalian species
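
    A minimal sketch of the thresholding idea described above, under the assumption that the retina appears as a bright band in each A-scan: for every image column, pixels above an intensity threshold bound the retina, the thickness is the span between the first and last such pixel, and the per-column thicknesses across B-scans are plotted as a heat map. The threshold, image sizes and synthetic data are illustrative only.

import numpy as np
import matplotlib.pyplot as plt

def column_thickness(bscan: np.ndarray, threshold: float) -> np.ndarray:
    """Retinal thickness (in pixels) for every column (A-scan) of one B-scan."""
    thickness = np.zeros(bscan.shape[1])
    for col in range(bscan.shape[1]):
        rows = np.flatnonzero(bscan[:, col] > threshold)
        if rows.size:
            thickness[col] = rows[-1] - rows[0] + 1
    return thickness

# Synthetic volume: 20 B-scans of 256x512 pixels with a bright "retina" band.
rng = np.random.default_rng(0)
volume = rng.random((20, 256, 512)) * 0.3
volume[:, 100:160, :] += 0.6

thickness_map = np.stack([column_thickness(b, threshold=0.5) for b in volume])
plt.imshow(thickness_map, cmap="inferno", aspect="auto")
plt.xlabel("A-scan position"); plt.ylabel("B-scan index")
plt.colorbar(label="thickness (pixels)")
plt.show()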

    Harmonic Analysis and Machine Learning

    This dissertation considers data representations that lie at the intersection of harmonic analysis and neural networks. The unifying theme of this work is the goal of robust and reliable machine learning. Our specific contributions include a new variant of scattering transforms based on a Haar-type directional wavelet, a new study of deep neural network instability in the context of remote sensing problems, and new empirical studies of biomedical applications of neural networks
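
    For context, scattering coefficients are conventionally defined as iterated wavelet-modulus operations followed by a low-pass average; in LaTeX (standard definition, with the dissertation's Haar-type directional wavelets understood as one particular choice of the wavelet family \psi_\lambda):

S_0 x = x \ast \phi_J, \qquad
S_1 x(\lambda_1) = \lvert x \ast \psi_{\lambda_1} \rvert \ast \phi_J, \qquad
S_2 x(\lambda_1, \lambda_2) = \bigl\lvert \, \lvert x \ast \psi_{\lambda_1} \rvert \ast \psi_{\lambda_2} \bigr\rvert \ast \phi_J,

    where \phi_J is a low-pass window at scale 2^J and each index \lambda encodes a scale and, for directional wavelets, an orientation.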

    Assessing neurodegeneration of the retina and brain with ultra-widefield retinal imaging

    The eye is embryologically, physiologically and anatomically linked to the brain. Emerging evidence suggests that neurodegenerative diseases, such as Alzheimer's disease (AD), manifest in the retina. Retinal imaging is a quick, non-invasive method to view the retina and its microvasculature. Features such as blood vessel calibre, tortuosity and the complexity of the vascular structure (measured through fractal analysis) are thought to reflect microvascular health and have been found to be associated with clinical signs of hypertension, diabetes, cardiovascular disease and cognitive decline. Small deposits of acellular debris called drusen in the peripheral retina have also been linked with AD, where histological studies show they can contain amyloid beta, a hallmark of AD. Age-related macular degeneration (AMD) is a neurodegenerative disorder of the retina and a leading cause of irreversible vision loss in the ageing population; an increasing number and size of drusen is characteristic of AMD disease progression. Ultra-widefield (UWF) retinal imaging with a scanning laser ophthalmoscope captures up to 80% of the retina in a single acquisition, allowing a larger area of the retina, particularly the periphery, to be assessed for signs of neurodegeneration than is possible with a conventional fundus camera. Quantification of changes to the microvasculature and drusen load could be used to derive early biomarkers of diseases that have vascular and neurodegenerative components, such as AD and other forms of dementia. Manually grading drusen in UWF images is a difficult, subjective and time-consuming process because the area imaged is large (around 700 mm2) and drusen appear as small spots ([…] 0.8 and < 0.9), achieving AUC 0.55-0.59, 0.78-0.82 and 0.82-0.85 in the central, perimacular and peripheral zones, respectively. Measurements of the retinal vasculature appearing in UWF images of cognitively healthy (CH) individuals and of patients diagnosed with mild cognitive impairment (MCI) or AD were obtained using a previously established pipeline. Following data cleaning, vascular measures were compared using multivariate generalised estimating equations (GEE), which account for the correlation between the two eyes of an individual, with correction for confounders (e.g. age). The vascular measures were repeated for a subset of images and analysed using GEE to assess the repeatability of the results. When comparing AD with CH, the analysis showed a statistically significant difference between measurements of arterioles in the inferonasal quadrant, but fractal analysis produced inconsistent results due to differences in the area sampled in which the fractal dimension was calculated. When looking at drusen load, there was a higher abundance of drusen in the inferonasal region of the peripheral retina in the CH and AD groups compared to the MCI group. Using GEE analysis, there was evidence of a significant difference in drusen count when comparing MCI to CH (p = 0.02) and MCI to AD (p = 0.03), but no evidence of a difference when comparing AD to CH. However, given the low sensitivity of the system (partly the result of only moderate agreement between human observers), a large proportion of drusen will not be detected, giving an underestimation of the true amount of drusen present in an image. Overcoming this limitation will involve training the system on larger datasets with annotations from additional observers to create a more consistent reference standard. Further validation could then be performed in the future to determine whether these promising pilot results persist, leading to candidate retinal biomarkers of AD
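
    As a hedged sketch of the eye-level analysis described above, the code below fits generalised estimating equations with an exchangeable working correlation so the two eyes of each participant are treated as correlated, adjusting for age. The variable names, the Poisson family for drusen counts and the simulated data are illustrative assumptions, not the thesis dataset or pipeline.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects = 60
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), 2),                  # two eyes per person
    "group": np.repeat(rng.choice(["CH", "MCI", "AD"], n_subjects), 2),
    "age": np.repeat(rng.normal(72, 6, n_subjects), 2),
})
df["drusen_count"] = rng.poisson(5 + (df["group"] == "MCI") * 2)     # synthetic outcome

# GEE clusters observations by subject, so per-eye measurements are not treated
# as independent; the exchangeable structure models within-subject correlation.
model = smf.gee("drusen_count ~ C(group) + age", groups="subject", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())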