
    Neuropathy Classification of Corneal Nerve Images Using Artificial Intelligence

    Nerve variations in the human cornea have been associated with changes in the neuropathy status of patients suffering from chronic diseases. For some diseases, such as diabetes, detecting neuropathy before symptoms become visible is important, whereas for others, such as multiple sclerosis, early prediction of disease worsening is crucial. While current methods fail to provide early diagnosis of neuropathy, in vivo corneal confocal microscopy offers very early insight into nerve damage by illuminating and magnifying the human cornea. This non-invasive method captures a sequence of images of the corneal sub-basal nerve plexus. Current practice, in which nerves are traced and classified manually, impedes the advancement of medical research in this domain. Since corneal nerve analysis for neuropathy is still in its early stages, there is a pressing need for automation. To address this limitation, we seek to automate the two stages of this process: nerve segmentation and neuropathy classification of the images. For nerve segmentation, we compare the performance of two existing solutions on multiple datasets to select the more appropriate method before proceeding to the classification stage. We then approach neuropathy classification of the images through artificial intelligence, using an Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machines, Naïve Bayes and k-nearest neighbors, and further compare these machine learning classifiers with deep learning. We found that nerve segmentation using convolutional neural networks improved sensitivity and false negative rate by at least 5% over the state-of-the-art software. For classification, ANFIS yielded the best accuracy, 93.7%, compared to the other classifiers. Furthermore, for this problem, machine learning approaches achieved higher classification accuracy than deep learning.
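
    As an illustration of the classification stage described above, the following is a minimal scikit-learn sketch that compares Support Vector Machines, Naïve Bayes and k-nearest neighbors by cross-validated accuracy. The feature matrix, labels and hyperparameters are placeholders, not the paper's data or settings, and ANFIS is omitted because it is not available in scikit-learn.

```python
# Minimal sketch of the classifier comparison stage, assuming nerve features
# (e.g. nerve length, density, branch count) have already been extracted from
# segmented CCM images into a matrix X with labels y (0 = healthy,
# 1 = neuropathy). The data below are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))        # placeholder features per image
y = rng.integers(0, 2, size=120)     # placeholder labels

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```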

    CS-Net: Channel and Spatial Attention Network for Curvilinear Structure Segmentation

    The detection of curvilinear structures in medical images, e.g., blood vessels or nerve fibers, is important in aiding the management of many diseases. In this work, we propose a general, unifying curvilinear structure segmentation network that works across different medical imaging modalities: optical coherence tomography angiography (OCT-A), color fundus imaging, and corneal confocal microscopy (CCM). Instead of a U-Net based convolutional neural network, we propose a novel network (CS-Net) that includes a self-attention mechanism in both the encoder and decoder. Two types of attention modules, spatial attention and channel attention, are used to adaptively integrate local features with their global dependencies. The proposed network has been validated on five datasets: two color fundus datasets, two corneal nerve datasets and one OCT-A dataset. Experimental results show that our method outperforms state-of-the-art methods; for example, the sensitivity of corneal nerve fiber segmentation was at least 2% higher than that of the competitors. As a complementary output, we have made manual annotations of the two corneal nerve datasets, which have been released for public access.
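
    For readers unfamiliar with the two attention types mentioned above, the following is a minimal PyTorch sketch of channel and spatial self-attention blocks in the spirit of such dual-attention designs; the layer sizes and details are assumptions and do not reproduce the authors' CS-Net implementation.

```python
# Minimal PyTorch sketch of channel attention and spatial attention applied to
# an encoder feature map; shapes and reduction ratio are illustrative only.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature channels using channel-wise affinities."""
    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, -1)                                     # B x C x HW
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)   # B x C x C
        return (attn @ flat).view(b, c, h, w) + x                   # residual

class SpatialAttention(nn.Module):
    """Re-weights spatial positions using position-wise affinities."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).transpose(1, 2)   # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                     # B x C' x HW
        v = self.value(x).view(b, -1, h * w)                   # B x C  x HW
        attn = torch.softmax(q @ k, dim=-1)                    # B x HW x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return out + x                                          # residual

feats = torch.randn(1, 64, 32, 32)
fused = SpatialAttention(64)(ChannelAttention()(feats))
print(fused.shape)   # torch.Size([1, 64, 32, 32])
```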

    Classification of Corneal Nerve Images Using Machine Learning Techniques

    Recent research shows that small nerve fiber damage is an early indicator of neuropathy. These small nerve fibers are present in the human cornea and can be visualized with a corneal confocal microscope, which acquires a series of images of the sub-basal nerve plexus. Before the images can be quantified for nerve loss, a human expert manually traces the nerves in each image and then classifies the image as showing neuropathy or not. Some nerve tracing algorithms are available in the literature, but none of them are reported as being used in clinical practice. An alternative practice is to classify the image for neuropathy visually, without quantification. In this paper, we evaluate the potential of various machine learning techniques for automating corneal nerve image classification. First, the images are down-sampled using a discrete wavelet transform, filtering, and a number of morphological operations. The resulting binary image is used to extract characteristic features, on which a classifier is trained. The trained classifier is then used to predict the state of the nerves in the images. Our experiments yield a classification accuracy of 0.91, reflecting the effectiveness of the proposed method.
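
    The preprocessing pipeline described above (wavelet-based down-sampling, filtering, morphological clean-up, and feature extraction from the resulting binary image) can be sketched roughly as follows; the wavelet, filter, threshold and feature choices are illustrative assumptions rather than the paper's exact parameters.

```python
# Minimal sketch of the preprocessing described above, assuming grayscale CCM
# images scaled to [0, 1]; all parameter choices are illustrative.
import numpy as np
import pywt
from skimage import filters, measure, morphology

def binarize_nerve_image(img):
    # 1) Down-sample with a single-level discrete wavelet transform,
    #    keeping the low-frequency approximation sub-band.
    approx, _ = pywt.dwt2(img, "haar")
    # 2) Smooth before thresholding.
    smoothed = filters.gaussian(approx, sigma=1.0)
    # 3) Threshold, then clean up with morphological operations.
    binary = smoothed > filters.threshold_otsu(smoothed)
    binary = morphology.remove_small_objects(binary, min_size=30)
    binary = morphology.binary_closing(binary, morphology.disk(2))
    return morphology.skeletonize(binary)

def nerve_features(skeleton):
    # Example features: nerve pixel density and a crude count of
    # connected nerve segments.
    density = skeleton.mean()
    n_segments = measure.label(skeleton).max()
    return np.array([density, n_segments])

img = np.random.rand(384, 384)   # placeholder for a CCM image
print(nerve_features(binarize_nerve_image(img)))
```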

    DeepGrading: Deep Learning Grading of Corneal Nerve Tortuosity

    Accurate estimation and quantification of corneal nerve fiber tortuosity in corneal confocal microscopy (CCM) is of great importance for disease understanding and clinical decision-making. However, grading corneal nerve tortuosity remains a great challenge due to the lack of agreement on the definition and quantification of tortuosity. In this paper, we propose a fully automated deep learning method that performs image-level tortuosity grading of corneal nerves, using both CCM images and segmented corneal nerves to improve grading accuracy while following interpretability principles. The proposed method consists of two stages. 1) A feature extraction backbone pre-trained on ImageNet is fine-tuned with a novel bilinear attention (BA) module for the prediction of regions of interest (ROIs) and a coarse grading of the image. The BA module enhances the network's ability to model long-range dependencies and the global context of nerve fibers by capturing second-order statistics of high-level features. 2) An auxiliary tortuosity grading network (AuxNet) produces an additional grading over the identified ROIs, and the coarse and auxiliary gradings are then fused for a more accurate final result. Experimental results show that our method surpasses existing methods in tortuosity grading and achieves an overall accuracy of 85.64% in four-level classification. We also validate it on a clinical dataset, where statistical analysis demonstrates a significant difference in tortuosity levels between the healthy control and diabetes groups. We have released a dataset of 1,500 CCM images with manual annotations of the four tortuosity levels for public access. The code is available at: https://github.com/iMED-Lab/TortuosityGrading.
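
    The second-order statistics captured by the BA module can be illustrated with a minimal bilinear-pooling classifier over a pre-trained backbone; the backbone choice (ResNet-18), normalization steps and layer sizes below are assumptions, not the released TortuosityGrading code.

```python
# Minimal PyTorch sketch of bilinear (second-order) pooling over high-level
# CNN features, the kind of statistic the BA module described above captures.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class BilinearGrader(nn.Module):
    def __init__(self, num_grades=4):
        super().__init__()
        # The paper fine-tunes an ImageNet pre-trained backbone; pass
        # weights="IMAGENET1K_V1" to load pre-trained weights in practice.
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Linear(512 * 512, num_grades)

    def forward(self, x):
        f = self.features(x)                        # B x 512 x H x W
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        gram = f @ f.transpose(1, 2) / (h * w)      # B x C x C second-order statistics
        gram = torch.sign(gram) * torch.sqrt(gram.abs() + 1e-8)   # signed square root
        gram = nn.functional.normalize(gram.view(b, -1), dim=1)   # L2 normalization
        return self.classifier(gram)                # four-level grade logits

logits = BilinearGrader()(torch.randn(1, 3, 224, 224))
print(logits.shape)   # torch.Size([1, 4])
```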

    Development of Novel Diagnostic Tools for Dry Eye Disease using Infrared Meibography and In Vivo Confocal Microscopy

    Dry eye disease (DED) is a multifactorial disease of the ocular surface in which tear film instability, hyperosmolarity, neurosensory abnormalities, meibomian gland dysfunction, and ocular surface inflammation and damage play distinct etiological roles. An estimated 5 to 50% of the world population, across demographic locations, ages and genders, is currently affected by DED. The risk and occurrence of DED increase significantly with age, which makes dry eye a major and growing public health issue. DED not only impacts the patient’s quality of vision and life, but also creates a socio-economic burden of millions of euros per year. Diagnosing and monitoring DED can be challenging in clinical practice due to its multifactorial nature and the poor correlation between signs and symptoms. Key clinical diagnostic tests and techniques for DED include tear film break-up time, tear secretion (Schirmer’s test), ocular surface staining, osmolarity measurement, and conjunctival impression cytology. However, these techniques are subjective, selective, require contact, and are unpleasant for the patient’s eye. New advances in state-of-the-art imaging modalities now provide non-invasive, non- or semi-contact, and objective parameters for DED diagnosis. Among these constantly evolving imaging modalities, some techniques assess the morphology and function of meibomian glands, and the microanatomy and alteration of different ocular surface tissues such as corneal nerves, immune cells, microneuromas, and conjunctival blood vessels. These parameters cannot be measured by conventional clinical assessment alone. Combining these imaging modalities with clinical feedback provides unparalleled quantitative information on the dynamic properties and functional parameters of the different ocular surface tissues. Moreover, image-based biomarkers provide an objective, specific, and non- or marginal-contact diagnosis that is faster and less unpleasant for the patient’s eye than clinical assessment techniques. The aim of this PhD thesis was to introduce novel deep learning-based computational methods to segment and quantify meibomian glands (in both upper and lower eyelids), corneal nerves, and dendritic cells. The developed methods use raw images, exported directly from the clinical devices without any pre-processing, to generate segmentation masks, and then provide fully automatic morphometric quantification for more reliable disease diagnosis. Notably, the developed methods provide complete segmentation and quantification information for faster disease characterization, and are the first (especially for meibomian glands and dendritic cells) to provide a complete morphometric analysis. Taken together, we have developed deep learning-based automatic systems to segment and quantify the ocular surface tissues related to DED, namely meibomian glands, corneal nerves, and dendritic cells, to provide reliable and faster disease characterization. The developed systems overcome the current limitations of subjective image analysis and enable precise, accurate, reliable, and reproducible ocular surface tissue analysis. They have the potential to make an impact both clinically and in the research environment by enabling faster disease diagnosis, facilitating new drug development, and standardizing clinical trials. Moreover, they will allow both researchers and clinicians to analyze meibomian glands, corneal nerves, and dendritic cells more reliably, while significantly reducing the time needed to analyze patient images. Finally, the methods developed in this research significantly increase the efficiency of evaluating clinical images, thereby supporting and potentially improving the diagnosis and treatment of ocular surface disease.
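
    As a rough illustration of the morphometric quantification step, the sketch below derives a few example parameters from a binary meibomian gland segmentation mask; the specific metrics and their names are assumptions, not the exact parameters reported in the thesis.

```python
# Minimal sketch of morphometric quantification from a binary segmentation
# mask (e.g. of meibomian glands) produced by a deep learning model; the
# parameters below are illustrative examples only.
import numpy as np
from skimage import measure

def quantify_glands(mask, eyelid_mask):
    """Compute simple morphometric parameters from binary masks."""
    labeled = measure.label(mask)
    props = measure.regionprops(labeled)
    return {
        "gland_count": len(props),
        "area_ratio": mask.sum() / max(eyelid_mask.sum(), 1),   # gland area / tarsal area
        "mean_gland_length_px": float(np.mean([p.major_axis_length for p in props])) if props else 0.0,
        "mean_gland_width_px": float(np.mean([p.minor_axis_length for p in props])) if props else 0.0,
    }

mask = np.zeros((256, 256), dtype=bool)
mask[40:200, 60:70] = True                 # placeholder "gland"
eyelid = np.ones_like(mask)
print(quantify_glands(mask, eyelid))
```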

    A review of artificial intelligence applications in anterior segment ocular diseases

    Background: Artificial intelligence (AI) has great potential for interpreting and analyzing images and processing large amounts of data. There is growing interest in investigating the applications of AI in anterior segment ocular diseases. This narrative review aims to assess the use of different AI-based algorithms for diagnosing and managing anterior segment entities. Methods: We reviewed the applications of different AI-based algorithms in the diagnosis and management of anterior segment entities, including keratoconus, corneal dystrophy, corneal grafts, corneal transplantation, refractive surgery, pterygium, infectious keratitis, cataracts, and disorders of the corneal nerves, conjunctiva, tear film, anterior chamber angle, and iris. The English-language databases PubMed/MEDLINE, Scopus, and Google Scholar were searched using the following keywords: artificial intelligence, deep learning, machine learning, neural network, anterior eye segment diseases, corneal disease, keratoconus, dry eye, refractive surgery, pterygium, infectious keratitis, anterior chamber, and cataract. Relevant articles were compared based on the use of AI models in the diagnosis and treatment of anterior segment diseases. Furthermore, we prepared a summary of the diagnostic performance of the AI-based methods for anterior segment ocular entities. Results: Various AI methods based on deep and machine learning can analyze data obtained from corneal imaging modalities with acceptable diagnostic performance. The manual methods currently available for diagnosing and treating eye diseases are complicated and time-consuming; AI methods could save time and prevent vision impairment in eyes with anterior segment diseases. Because many anterior segment diseases can cause irreversible complications and even vision loss, sufficient confidence in the results obtained from the designed model is crucial for decision-making by experts. Conclusions: AI-based models could serve as surrogates for manual data analysis with improved diagnostic performance. These methods could become reliable tools for diagnosing and managing anterior segment ocular diseases in remote areas in the near future. Future studies are expected to design algorithms that use less data, in a multitasking manner, for the detection and management of anterior segment diseases.

    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean" B-scans (multi-frame B-scans) and their corresponding "noisy" B-scans (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
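
    The training-pair construction and quantitative evaluation described above can be sketched as follows; the noise level, SNR definition, and image size are illustrative assumptions rather than the study's exact protocol.

```python
# Minimal sketch: a "noisy" B-scan is a clean (multi-frame) B-scan plus
# Gaussian noise, and results are scored with an SNR-style metric and SSIM.
import numpy as np
from skimage.metrics import structural_similarity

def add_gaussian_noise(clean, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    return np.clip(clean + rng.normal(scale=sigma, size=clean.shape), 0.0, 1.0)

def snr_db(reference, test):
    noise = reference - test
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

clean = np.random.rand(496, 768)        # placeholder multi-frame B-scan in [0, 1]
noisy = add_gaussian_noise(clean)       # network input; `clean` is the target
denoised = noisy                        # placeholder for the network's output

print("SNR (dB):", round(snr_db(clean, denoised), 2))
print("SSIM:", round(structural_similarity(clean, denoised, data_range=1.0), 3))
```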