3,123 research outputs found

    Detection of macular atrophy in age-related macular degeneration aided by artificial intelligence

    INTRODUCTION: Age-related macular degeneration (AMD) is a leading cause of irreversible visual impairment worldwide. The endpoint of AMD, in both its dry and wet forms, is macular atrophy (MA), which is characterized by the permanent loss of the retinal pigment epithelium (RPE) and the overlying photoreceptors. A recognized unmet need in AMD is the early detection of MA development. AREAS COVERED: Artificial intelligence (AI) has demonstrated a great impact on the detection of retinal diseases, especially through its robust ability to analyze the big data afforded by ophthalmic imaging modalities such as color fundus photography (CFP), fundus autofluorescence (FAF), near-infrared reflectance (NIR), and optical coherence tomography (OCT). Among these, OCT has shown great promise in identifying early MA using the new criteria established in 2018. EXPERT OPINION: There are few studies in which AI-OCT methods have been used to identify MA; however, the results are very promising compared with other imaging modalities. In this paper, we review the development and advances of ophthalmic imaging modalities and their combination with AI technology to detect MA in AMD. In addition, we emphasize the application of AI-OCT as an objective, cost-effective tool for the early detection and monitoring of the progression of MA in AMD.

    Automatic macular edema identification and characterization using OCT images

    © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by-nc-nd/4.0/). This version of the article, Samagaio, G., Estévez, A., Moura, J. de, Novo, J., Fernández, M. I., & Ortega, M. (2018), "Automatic macular edema identification and characterization using OCT images", has been accepted for publication in Computer Methods and Programs in Biomedicine, 163, 47–63. The Version of Record is available online at https://doi.org/10.1016/j.cmpb.2018.05.033.

    [Abstract]: Background and objective: The detection and characterization of intraretinal fluid accumulation constitute a crucial ophthalmological issue, as they provide useful information for the identification and diagnosis of the different types of Macular Edema (ME). These types are clinically defined, according to the clinical guidelines, as Serous Retinal Detachment (SRD), Diffuse Retinal Thickening (DRT) and Cystoid Macular Edema (CME). Their accurate identification and characterization facilitate the diagnostic process, determining the disease severity and therefore allowing clinicians to perform more precise analyses and select suitable treatments. Methods: This paper proposes a new fully automatic system for the identification and characterization of the three types of ME using Optical Coherence Tomography (OCT) images. In the case of SRD and CME edemas, multilevel image thresholding approaches were designed and combined with the application of ad-hoc clinical restrictions. The case of DRT edemas, given their complexity and fuzzy regional appearance, was approached with a learning strategy that exploits intensity, texture and clinical-based information to identify their presence. Results: The system provided satisfactory results, with F-measures of 87.54% and 91.99% for DRT and CME detection, respectively. In the case of SRD edemas, the system correctly detected all the cases included in the designed dataset. Conclusions: The proposed methodology offered accurate performance for the individual identification and characterization of the three different types of ME in OCT images. In fact, the method is capable of handling ME analysis even in cases of significant severity, with the simultaneous presence of the three ME types merged inside the retinal layers.

    This work is supported by the Instituto de Salud Carlos III, Government of Spain, and FEDER funds of the European Union through the PI14/02161 and DTS15/00153 research projects, and by the Ministerio de Economía y Competitividad, Government of Spain, through the DPI2015-69948-R research project. This work has also received financial support from the European Union (European Regional Development Fund, ERDF) and the Xunta de Galicia (Centro singular de investigación de Galicia accreditation 2016–2019, Ref. ED431G/01; Grupos de Referencia Competitiva, Ref. ED431C 2016-047).
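
    The SRD and CME stages described above rest on multilevel image thresholding followed by clinical rules. The sketch below illustrates that general idea with scikit-image's multi-Otsu thresholding; the file name, the number of intensity classes, and the simple area filter are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from skimage import io, filters, measure

# Hypothetical single B-scan; the paper works on full OCT image sets.
bscan = io.imread("bscan.png", as_gray=True)

# Multilevel (three-class) Otsu thresholding of the B-scan intensities.
thresholds = filters.threshold_multiotsu(bscan, classes=3)
regions = np.digitize(bscan, bins=thresholds)   # 0 = darkest class (fluid-like)

# Dark connected components become fluid candidates; the paper's ad-hoc
# clinical restrictions (position within the retinal layers, size, shape)
# would then be applied. A plain area filter stands in for them here.
labels = measure.label(regions == 0)
candidates = [r for r in measure.regionprops(labels) if r.area > 50]
print(f"{len(candidates)} hyporeflective candidate regions")
```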

    Annotated retinal optical coherence tomography images (AROI) database for joint retinal layer and fluid segmentation

    Optical coherence tomography (OCT) images of the retina provide a structural representation and give insight into the pathological changes present in age-related macular degeneration (AMD). Due to the three-dimensionality and complexity of the images, manual analysis of pathological features is difficult, time-consuming, and prone to subjectivity. Computer analysis of 3D OCT images is necessary to enable automated, objective, and repeatable quantitative measurement of these features. As supervised and semi-supervised learning-based automatic segmentation depends on the training data and the quality of annotations, we have created a new database of annotated retinal OCT images, the AROI database. It consists of 1136 images with annotations for pathological changes (fluid accumulation and related findings) and basic structures (layers) in patients with AMD. Inter- and intra-observer errors have been calculated in order to enable the validation of developed algorithms in relation to human variability. We have also performed automatic segmentation with the standard U-net architecture and two state-of-the-art architectures for medical image segmentation, to set a baseline for further algorithm development and to gain insight into the challenges of automatic segmentation. To facilitate and encourage further research in the field, we have made the AROI database openly available.
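
    Since the AROI annotations are meant for validating segmentation algorithms against inter- and intra-observer variability, a per-class Dice computation such as the minimal sketch below could serve that comparison; the label count and array shapes here are placeholders, not the AROI label specification.

```python
import numpy as np

def dice_per_class(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> list[float]:
    """Per-class Dice between two integer label maps of the same shape."""
    scores = []
    for c in range(num_classes):
        p, t = pred == c, truth == c
        denom = p.sum() + t.sum()
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom)
    return scores

# Example: compare a model prediction against one grader, or two graders
# against each other to estimate inter-observer variability (synthetic data).
grader_a = np.random.randint(0, 8, size=(512, 1024))   # placeholder annotations
grader_b = np.random.randint(0, 8, size=(512, 1024))
print(dice_per_class(grader_a, grader_b, num_classes=8))
```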

    Dual-Tree Complex Wavelet Input Transform for Cyst Segmentation in OCT Images Based on a Deep Learning Framework

    Optical coherence tomography (OCT) represents a non-invasive, high-resolution cross-sectional imaging modality. Macular edema is the swelling of the macular region. Segmentation of fluid or cyst regions in OCT images is essential to provide useful information for clinicians and prevent visual impairment. However, manual segmentation of fluid regions is a time-consuming and subjective procedure. Traditional and off-the-shelf deep learning methods fail to extract the exact location of the boundaries under complicated conditions, such as high noise levels and blurred edges. Therefore, developing a tailored automatic image segmentation method that exhibits good numerical and visual performance is essential for clinical application. The dual-tree complex wavelet transform (DTCWT) can extract rich information from different orientations of image boundaries and extract details that improve OCT fluid semantic segmentation results in difficult conditions. This paper presents a comparative study of using DTCWT subbands in the segmentation of fluids. To the best of our knowledge, no previous studies have focused on the various combinations of wavelet transforms and the role of each subband in OCT cyst segmentation. In this paper, we propose a semantic segmentation composite architecture based on a novel U-net and information from DTCWT subbands. We compare different combination schemes to take advantage of the hidden information in the subbands, and demonstrate the performance of the methods under original and noise-added conditions. Dice score, Jaccard index, and qualitative results are used to assess the performance of the subbands. The combination of subbands yielded high Dice and Jaccard values, outperforming the other methods, especially in the presence of a high level of noise.
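
    As a rough illustration of feeding DTCWT subband information into a segmentation network, the sketch below decomposes a B-scan with the open-source dtcwt package and stacks the level-1 subband magnitudes with the original image as extra input channels; the resizing step and channel layout are assumptions, not the paper's composite architecture.

```python
import numpy as np
import dtcwt
from skimage import io, transform as sktf

bscan = io.imread("bscan.png", as_gray=True).astype(np.float32)   # hypothetical input
pyramid = dtcwt.Transform2d().forward(bscan, nlevels=1)

# Level-1 highpasses: complex coefficients for six orientations at half resolution.
mags = np.abs(pyramid.highpasses[0])                    # shape (H/2, W/2, 6)
mags = sktf.resize(mags, bscan.shape + (6,), order=1)   # upsample back to image size

# Channels-last tensor: original B-scan plus six oriented detail magnitudes.
x = np.concatenate([bscan[..., None], mags], axis=-1)   # shape (H, W, 7)
print(x.shape)   # this tensor would be the input to a U-net-style network
```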

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits give U-net very high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at the image modalities and application areas where U-net has been applied.
    Comment: 42 pages, in IEEE Access.
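
    To make the encoder-decoder-with-skip-connections pattern that the review surveys concrete, here is a compact U-net-style network in PyTorch; the depth, channel widths, and single-channel input are illustrative choices and are not taken from the review.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)           # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)            # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```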

    Active and inactive microaneurysms identified and characterized by structural and angiographic optical coherence tomography

    Purpose: To characterize flow status within microaneurysms (MAs) and quantitatively investigate their relationship with regional macular edema in diabetic retinopathy (DR). Design: Retrospective, cross-sectional study. Participants: A total of 99 participants, including 23 with mild nonproliferative DR (NPDR), 25 with moderate NPDR, 34 with severe NPDR, and 17 with proliferative DR. Methods: In this study, 3x3-mm optical coherence tomography (OCT) and OCT angiography (OCTA) scans with a 400x400 sampling density from one eye of each participant were obtained using a commercial OCT system. Trained graders manually identified MAs and their location relative to the anatomic layers from cross-sectional OCT. Microaneurysms were first classified as active if a flow signal was present in the OCTA channel. Active MAs were then further classified into fully active and partially active MAs based on the flow perfusion status of the MA on en face OCTA. The presence of retinal fluid near MAs was compared between active and inactive types. We also compared OCT-based MA detection to fundus photography (FP)- and fluorescein angiography (FA)-based detection. Results: We identified 308 MAs (166 fully active, 88 partially active, 54 inactive) in 42 eyes using OCT and OCTA. Nearly half of the identified MAs straddled the inner nuclear layer and outer plexiform layer. Compared with partially active and inactive MAs, fully active MAs were more likely to be associated with local retinal fluid. The associated fluid volumes were larger with fully active MAs than with partially active and inactive MAs. OCT/OCTA detected all MAs found on FP. While not all MAs seen with FA were identified with OCT, some MAs seen with OCT were not visible with FA or FP. Conclusions: Co-registered OCT and OCTA can characterize MA activity, which could be a new means to study diabetic macular edema pathophysiology.
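
    The fully active / partially active / inactive grading described above depends on how much flow signal the OCTA channel shows inside each MA. The sketch below is only a generic illustration of that idea; the flow threshold, the perfusion cut-off, and the synthetic data are assumptions rather than the study's grading protocol.

```python
import numpy as np

def classify_ma(ma_mask: np.ndarray, octa_flow: np.ndarray,
                flow_thresh: float = 0.3, full_frac: float = 0.9) -> str:
    """ma_mask: boolean MA region; octa_flow: co-registered flow signal in [0, 1]."""
    flow_pixels = octa_flow[ma_mask] > flow_thresh
    perfused_fraction = flow_pixels.mean() if flow_pixels.size else 0.0
    if perfused_fraction == 0.0:
        return "inactive"                    # no flow signal inside the MA
    return "fully active" if perfused_fraction >= full_frac else "partially active"

# Toy example with synthetic data.
rng = np.random.default_rng(0)
mask = np.zeros((64, 64), dtype=bool)
mask[30:36, 30:36] = True                    # placeholder MA footprint
flow = rng.random((64, 64))                  # placeholder en face OCTA signal
print(classify_ma(mask, flow))
```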

    Deep Representation Learning with Limited Data for Biomedical Image Synthesis, Segmentation, and Detection

    Biomedical imaging requires accurate expert annotation and interpretation, which can aid medical staff and clinicians in automating differential diagnosis and addressing underlying health conditions. With the advent of deep learning, training on large image datasets has become the standard route to expert-level performance in non-invasive biomedical imaging tasks. However, when such large, publicly available datasets are lacking, training a deep learning model to learn intrinsic representations becomes harder. Representation learning with limited data has therefore introduced new learning techniques, such as generative adversarial networks, semi-supervised learning, and self-supervised learning, that can be applied to various biomedical applications. For example, ophthalmologists use color funduscopy (CF) and fluorescein angiography (FA) to diagnose retinal degenerative diseases. However, fluorescein angiography requires injecting a dye, which can cause adverse reactions in patients. To alleviate this, a non-invasive technique needs to be developed that can translate fundus images into fluorescein angiography. Similarly, color funduscopy and optical coherence tomography (OCT) are used to semantically segment the vasculature and fluid build-up in spatial and volumetric retinal imaging, which can help with the future prognosis of diseases. Although many automated techniques have been proposed for medical image segmentation, the main drawback remains the models' precision in pixel-wise predictions.

    Another critical challenge in the biomedical imaging field is accurately segmenting and quantifying the dynamic behavior of calcium signals in cells. Calcium imaging is a widely utilized approach to studying subcellular calcium activity and cell function; however, large datasets have created a profound need for fast, accurate, and standardized analyses of calcium signals. For example, image sequences of calcium signals in colonic pacemaker cells (interstitial cells of Cajal, ICC) suffer from motion artifacts and high periodic and sensor noise, making it difficult to accurately segment and quantify calcium signal events. Moreover, it is time-consuming and tedious to annotate such a large volume of calcium image stacks or videos and to extract their associated spatiotemporal maps.

    To address these problems, we propose various deep representation learning architectures that utilize limited labels and annotations to tackle the critical challenges in these biomedical applications. To this end, we detail our proposed semi-supervised, generative adversarial network, and transformer-based architectures for individual learning tasks such as retinal image-to-image translation, vessel and fluid segmentation from fundus and OCT images, breast micro-mass segmentation, and sub-cellular calcium event tracking from videos with spatiotemporal map quantification. We also illustrate two multi-modal, multi-task learning frameworks whose applications can be extended to other biomedical domains. The main idea is to incorporate each of these as individual modules into our proposed multi-modal frameworks to solve the existing challenges with 1) fluorescein angiography synthesis, 2) retinal vessel and fluid segmentation, 3) breast micro-mass segmentation, and 4) dynamic quantification of calcium imaging datasets.
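
    For the fluorescein angiography synthesis task mentioned above, a common formulation is a paired image-to-image translation objective that combines an adversarial term with an L1 reconstruction term. The sketch below uses placeholder single-layer networks and is a generic pix2pix-style illustration, not the dissertation's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks so the example runs end to end; a real system would use
# a U-net-style generator and a patch-based discriminator.
gen = nn.Conv2d(3, 3, 3, padding=1)           # fundus (RGB) -> FA-like image
disc = nn.Conv2d(6, 1, 3, padding=1)          # judges (fundus, FA) pairs patch-wise

def generator_loss(fundus, fa_real, lambda_l1=100.0):
    """Adversarial + L1 objective for paired fundus -> FA synthesis."""
    fa_fake = gen(fundus)
    pred_fake = disc(torch.cat([fundus, fa_fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * F.l1_loss(fa_fake, fa_real)

# Toy update with random tensors standing in for a paired training batch.
loss = generator_loss(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
loss.backward()
print(float(loss))
```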