53 research outputs found

    Geodesic Graph Cut Based Retinal Fluid Segmentation in Optical Coherence Tomography

    Full text link
    Age-related macular degeneration (AMD) is a leading cause of blindness in developed countries. Its most damaging form is characterized by the accumulation of fluid inside the retina, whose quantification is of utmost importance for evaluating disease progression. In this paper we propose an automated method for retinal fluid segmentation from 3D images acquired with optical coherence tomography (OCT). It combines a machine learning approach with an effective segmentation framework based on geodesic graph cut. After an image preprocessing step, an artificial neural network is trained on textural features to assign to each voxel a probability of belonging to fluid. The obtained probability maps are used to compute minimal geodesic distances from a set of identified seed points to the remaining unassigned voxels. Finally, the segmentation is solved optimally and efficiently using graph cut optimization. The method is evaluated on a clinical longitudinal dataset consisting of 30 OCT scans from 10 patients taken at 3 different stages of treatment. Manual annotations from two retinal specialists were taken as the gold standard. The segmentation method achieved a mean precision of 0.88 and recall of 0.83, with a combined F1 score of 0.85. The segmented fluid volumes were within the measured inter-observer variability. The results demonstrate that the proposed method is a promising step towards accurate quantification of retinal fluid.
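    The geodesic step described above — propagating minimal distances from seed voxels over a probability map — can be sketched with a Dijkstra-style front propagation. This is an illustrative 2D sketch, not the paper's implementation; the edge-cost function and the `beta` weight are assumed for illustration.

    ```python
    import heapq

    def geodesic_distances(prob, seeds, beta=10.0):
        """Minimal geodesic distance from seed pixels over a 2D probability map.

        The cost of stepping between 4-connected neighbours combines a unit
        spatial term with the probability difference, so shortest paths prefer
        regions of similar fluid probability (a sketch of the geodesic idea).
        """
        h, w = len(prob), len(prob[0])
        dist = [[float("inf")] * w for _ in range(h)]
        heap = []
        for r, c in seeds:
            dist[r][c] = 0.0
            heapq.heappush(heap, (0.0, r, c))
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > dist[r][c]:
                continue  # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    step = 1.0 + beta * abs(prob[nr][nc] - prob[r][c])
                    if d + step < dist[nr][nc]:
                        dist[nr][nc] = d + step
                        heapq.heappush(heap, (d + step, nr, nc))
        return dist
    ```

    The resulting distance maps would then feed the graph cut as unary terms; that final optimization step is not shown here.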

    SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT

    Full text link
    The Segment Anything Model (SAM) has gained significant attention in the field of image segmentation due to its impressive capabilities and prompt-based interface. While SAM has already been extensively evaluated in various domains, its adaptation to retinal OCT scans remains unexplored. To bridge this research gap, we conduct a comprehensive evaluation of SAM and its adaptations on a large-scale public dataset of OCT scans from the RETOUCH challenge. Our evaluation covers diverse retinal diseases, fluid compartments, and device vendors, comparing SAM against state-of-the-art retinal fluid segmentation methods. Through our analysis, we showcase adapted SAM's efficacy as a powerful segmentation model in retinal OCT scans, although it still lags behind established methods in some circumstances. The findings highlight SAM's adaptability and robustness, showcasing its utility as a valuable tool in retinal OCT image analysis and paving the way for further advancements in this domain.
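    Comparing SAM against baseline segmenters, as described above, comes down to overlap metrics between predicted and reference fluid masks. A minimal sketch of the standard Dice/F1 overlap on flat binary masks (the exact evaluation protocol of the RETOUCH challenge also includes volume-based measures, which are not shown here):

    ```python
    def dice_score(pred, gt):
        """Dice coefficient (equivalently F1) between two binary masks,
        given as flat sequences of 0/1 labels of equal length."""
        tp = sum(1 for p, g in zip(pred, gt) if p and g)
        denom = sum(pred) + sum(gt)
        # Two empty masks agree perfectly by convention.
        return 2.0 * tp / denom if denom else 1.0
    ```

    Running this per fluid compartment and per device vendor gives the kind of stratified comparison the abstract describes.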

    Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation

    Full text link
    Optical coherence tomography (OCT) has become the most important imaging modality in ophthalmology. A substantial amount of research has recently been devoted to the development of machine learning (ML) models for the identification and quantification of pathological features in OCT images. Among the several sources of variability the ML models have to deal with, a major factor is the acquisition device, which can limit the ML model's generalizability. In this paper, we propose to reduce the image variability across different OCT devices (Spectralis and Cirrus) by using CycleGAN, an unsupervised unpaired image transformation algorithm. The usefulness of this approach is evaluated in the setting of retinal fluid segmentation, namely intraretinal cystoid fluid (IRC) and subretinal fluid (SRF). First, we train a segmentation model on images acquired with a source OCT device. Then we evaluate the model on (1) source, (2) target and (3) transformed versions of the target OCT images. The presented transformation strategy shows an F1 score of 0.4 (0.51) for IRC (SRF) segmentation. Compared with traditional transformation approaches, this means an F1 score gain of 0.2 (0.12). Comment: *Contributed equally (order was defined by flipping a coin). Accepted for publication at the IEEE International Symposium on Biomedical Imaging (ISBI) 2019.
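    The key property that makes CycleGAN usable without paired Spectralis/Cirrus images is its cycle-consistency term: mapping an image to the other device style and back should recover the original. A minimal sketch of that L1 term, with `G` (source to target) and `F` (target to source) as stand-in generator functions supplied by the caller:

    ```python
    def cycle_consistency_loss(x, G, F):
        """Mean absolute error of the round trip F(G(x)) against x,
        the cycle-consistency term at the core of CycleGAN training.
        x is a flat sequence of pixel values; G and F are callables."""
        x_cycled = F(G(x))
        return sum(abs(a - b) for a, b in zip(x_cycled, x)) / len(x)
    ```

    In the full model this term is combined with adversarial losses for both generators; only the cycle term is shown here.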

    Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation

    Full text link
    Deep learning has become a valuable tool for the automation of certain medical image segmentation tasks, significantly relieving the workload of medical specialists. Some of these tasks require segmentation to be performed on a subset of the input dimensions, the most common case being 3D-to-2D. However, the performance of existing methods is strongly conditioned by the amount of labeled data available, as there is currently no data-efficient method, e.g. transfer learning, that has been validated on these tasks. In this work, we propose a novel convolutional neural network (CNN) and self-supervised learning (SSL) method for label-efficient 3D-to-2D segmentation. The CNN is composed of a 3D encoder and a 2D decoder connected by novel 3D-to-2D blocks. The SSL method consists of reconstructing image pairs of modalities with different dimensionality. The approach has been validated on two tasks with clinical relevance: the en-face segmentation of geographic atrophy and reticular pseudodrusen in optical coherence tomography. Results on different datasets demonstrate that the proposed CNN significantly improves the state of the art in scenarios with limited labeled data by up to 8% in Dice score. Moreover, the proposed SSL method allows further improvement of this performance by up to 23%, and we show that the SSL is beneficial regardless of the network architecture. Comment: To appear in MICCAI 2023. Code: https://github.com/j-morano/multimodal-ssl-fp
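    The essential operation of a 3D-to-2D connection is collapsing the depth axis of a feature volume into a 2D map that a 2D decoder can consume. The paper's blocks are learned; the sketch below uses a fixed pooling over depth purely to illustrate the dimensionality reduction, with the volume given as nested lists of shape (D, H, W):

    ```python
    def block_3d_to_2d(volume, mode="max"):
        """Collapse a 3D feature volume (D slices of H x W values) into a
        single H x W map by pooling along the depth axis. A fixed-pooling
        stand-in for learned 3D-to-2D blocks (illustrative only)."""
        reducer = max if mode == "max" else (lambda vals: sum(vals) / len(vals))
        depth = len(volume)
        h, w = len(volume[0]), len(volume[0][0])
        return [[reducer([volume[d][r][c] for d in range(depth)])
                 for c in range(w)]
                for r in range(h)]
    ```

    This mirrors the en-face setting above: the input OCT is volumetric, but the target segmentation lives in a single 2D projection plane.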

    Predicting Drusen Regression from OCT in Patients with Age-Related Macular Degeneration

    Get PDF
    Age-related macular degeneration (AMD) is a leading cause of blindness in developed countries. The presence of drusen is the hallmark of early/intermediate AMD, and their sudden regression is strongly associated with the onset of late AMD. In this work we propose a predictive model of drusen regression using optical coherence tomography (OCT) based features. First, a series of automated image analysis steps are applied to segment and characterize individual drusen and their development. Second, from a set of quantitative features, a random forest classifier is employed to predict the occurrence of individual drusen regression within the following 12 months. The predictive model is trained and evaluated on a longitudinal OCT dataset of 44 eyes from 26 patients using leave-one-patient-out cross-validation. The model achieved an area under the ROC curve of 0.81, with a sensitivity of 0.74 and a specificity of 0.73. The presence of hyperreflective foci and mean drusen signal intensity were found to be the two most important features for the prediction. This preliminary study shows that predicting drusen regression is feasible and is a promising step toward the identification of imaging biomarkers of impending regression.
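    The leave-one-patient-out protocol used above matters because the dataset has more eyes (44) than patients (26): splitting by sample would let two eyes of the same patient land in both train and test folds. A minimal sketch of the grouped split logic (function name is illustrative; libraries such as scikit-learn provide an equivalent grouped splitter):

    ```python
    def leave_one_patient_out(patient_ids):
        """Yield (train_indices, test_indices) pairs so that all samples
        (e.g. eyes) belonging to one patient form the test fold, preventing
        patient-level leakage between training and evaluation."""
        for p in sorted(set(patient_ids)):
            test = [i for i, pid in enumerate(patient_ids) if pid == p]
            train = [i for i, pid in enumerate(patient_ids) if pid != p]
            yield train, test
    ```

    Each fold's training indices would feed the random forest, with ROC statistics pooled across folds.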

    An amplified-target loss approach for photoreceptor layer segmentation in pathological OCT scans

    Full text link
    Segmenting anatomical structures such as the photoreceptor layer in retinal optical coherence tomography (OCT) scans is challenging in pathological scenarios. Supervised deep learning models trained with standard loss functions are usually able to characterize only the most common disease appearance from a training set, resulting in suboptimal performance and poor generalization when dealing with unseen lesions. In this paper we propose to overcome this limitation by means of an augmented target loss function framework. We introduce a novel amplified-target loss that explicitly penalizes errors within the central area of the input images, based on the observation that most of the challenging disease appearance is usually located in this area. We experimentally validated our approach using a data set with OCT scans of patients with macular diseases. We observe increased performance compared to the models that use only the standard losses. Our proposed loss function strongly supports the segmentation model to better distinguish photoreceptors in highly pathological scenarios. Comment: Accepted for publication at MICCAI-OMIA 201
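    The amplified-target idea above can be sketched as a spatially weighted binary cross-entropy where errors in the central columns of the image are up-weighted. The weighting scheme, `center_frac`, and `amp` values below are assumptions for illustration, not the paper's actual parameters; inputs are nested lists of shape (H, W) with predicted probabilities in (0, 1):

    ```python
    import math

    def amplified_target_loss(pred, target, center_frac=0.5, amp=4.0, eps=1e-7):
        """Binary cross-entropy with errors in the central image columns
        amplified by `amp` (a sketch of the amplified-target loss idea)."""
        h, w = len(pred), len(pred[0])
        lo = int(w * (1.0 - center_frac) / 2.0)  # left edge of central band
        hi = w - lo                              # right edge of central band
        total, weight_sum = 0.0, 0.0
        for r in range(h):
            for c in range(w):
                wt = amp if lo <= c < hi else 1.0
                p, t = pred[r][c], target[r][c]
                bce = -(t * math.log(p + eps)
                        + (1 - t) * math.log(1 - p + eps))
                total += wt * bce
                weight_sum += wt
        return total / weight_sum
    ```

    With this weighting, an identical mistake costs more in the central band than near the image borders, which is exactly the behavior the abstract motivates.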