
    Automated Segmentation of Retinal Optical Coherence Tomography Images

    Aim. Optical coherence tomography (OCT) is a fast, non-invasive medical imaging technique that enables investigation of the structure of each individual retinal layer. Segmenting OCT images into the distinct layers of the retina is crucial for the early detection of retinal diseases and the study of their progression. However, manual segmentation by clinicians is tedious, time-consuming, and variable with the grader's level of expertise. There is therefore a pressing need for an automated segmentation algorithm for retinal OCT images that is fast, accurate, and eases clinical decision making.

    Methods. Graph-theoretical methods were implemented to develop an automated segmentation algorithm for spectral-domain OCT (SD-OCT) images of the retina. As a pre-processing step, the best method for denoising the SD-OCT images prior to graph-based segmentation was determined by comparing simple Gaussian filtering with an advanced wavelet-based denoising technique. A shortest-path graph-search technique was then implemented to accurately delineate intra-retinal layer boundaries within the SD-OCT images. The results of the automated algorithm were validated against manual segmentation performed by an expert clinician using a specially designed graphical user interface (GUI).

    Results. The algorithm delineated seven intra-retinal boundaries, thereby segmenting six retinal layers and computing their thicknesses. Compared with normative layer thickness values from a published study, the automated thickness results showed no significant differences for any layer (p > 0.05) except layer 4 (p = 0.04). In a comparison against manual segmentation by an expert, the accuracy of the algorithm ranged from 74.58% (layer 2) to 98.90% (layer 5). Additionally, the comparison of the two denoising techniques revealed no significant advantage of the advanced wavelet-based technique over simple Gaussian filtering in the accuracy of boundary detection by the graph-based algorithm.

    Conclusion. An automated graph-based algorithm was developed and implemented in this thesis for the segmentation of seven intra-retinal boundaries and six layers in SD-OCT images, with performance comparable to manual segmentation by an expert clinician. The thesis also concludes that simple Gaussian filtering is sufficient for denoising in graph-based segmentation and that an advanced denoising technique is not required, which keeps the implementation simpler and more efficient in both time and memory.
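    To make the shortest-path idea concrete, the following is a minimal, hypothetical Python sketch (not the thesis implementation): each pixel is a graph node, edges connect a pixel to its three nearest neighbours in the next column, and edge costs are low where the vertical intensity gradient is strong, so the minimal-cost left-to-right path traces a layer boundary. The dynamic-programming search below is equivalent to Dijkstra on this restricted graph; the array name `bscan` and the gradient-based cost are assumptions.

```python
# Hedged sketch of shortest-path boundary delineation on a B-scan,
# assuming a 2-D numpy array `bscan` (rows = depth, cols = A-scans).
import numpy as np

def delineate_boundary(bscan: np.ndarray) -> np.ndarray:
    grad = np.gradient(bscan.astype(float), axis=0)          # vertical edges
    cost = 1.0 - (grad - grad.min()) / (np.ptp(grad) + 1e-9) # low cost at strong edges
    rows, cols = cost.shape
    acc = cost.copy()                      # accumulated path cost per pixel
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)  # allow a +-1 row step
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k            # best predecessor row in column c-1
    # trace the minimal-cost path back from the cheapest end pixel
    boundary = np.zeros(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary                        # one boundary row index per column
```

    In practice each of the seven boundaries would be found in turn, restricting the search region using boundaries already detected; the single-boundary version above only illustrates the graph-search core.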

    Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation

    Optical coherence tomography (OCT) has become the most important imaging modality in ophthalmology. A substantial amount of research has recently been devoted to developing machine learning (ML) models for the identification and quantification of pathological features in OCT images. Among the several sources of variability that ML models have to deal with, a major factor is the acquisition device, which can limit a model's generalizability. In this paper, we propose to reduce the image variability across different OCT devices (Spectralis and Cirrus) by using CycleGAN, an unsupervised, unpaired image transformation algorithm. The usefulness of this approach is evaluated in the setting of retinal fluid segmentation, namely intraretinal cystoid fluid (IRC) and subretinal fluid (SRF). First, we train a segmentation model on images acquired with a source OCT device. Then we evaluate the model on (1) source, (2) target, and (3) transformed versions of the target OCT images. The presented transformation strategy achieves an F1 score of 0.4 (0.51) for IRC (SRF) segmentation. Compared with traditional transformation approaches, this represents an F1 score gain of 0.2 (0.12).

    Comment: Authors contributed equally (order determined by coin flip). Accepted for publication at the IEEE International Symposium on Biomedical Imaging (ISBI) 2019.
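    A hedged sketch of the three-way evaluation protocol described in the abstract (the segmentation model, the CycleGAN generator, and the data are hypothetical stand-ins, not the authors' code): the same trained model is scored on raw target-device images and on target images mapped toward the source domain, using a per-class F1 score.

```python
# Sketch of per-class F1 evaluation with and without CycleGAN mapping.
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    # pred/truth: boolean masks for one fluid class (e.g. IRC or SRF)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return float(2 * tp / (2 * tp + fp + fn + 1e-9))

def evaluate(seg_model, to_source, images, masks, cls):
    """Score `seg_model` on raw target images and on CycleGAN-mapped ones.
    `seg_model` and `to_source` are assumed callables on 2-D numpy arrays."""
    raw, mapped = [], []
    for img, mask in zip(images, masks):
        truth = (mask == cls)
        raw.append(f1_score(seg_model(img) == cls, truth))
        mapped.append(f1_score(seg_model(to_source(img)) == cls, truth))
    return float(np.mean(raw)), float(np.mean(mapped))
```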

    Automatic Detection of Cone Photoreceptors in Split Detector Adaptive Optics Scanning Light Ophthalmoscope Images

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal, split-detector-based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaic, which is often not visualized in confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, obtaining an overall mean Dice's coefficient of 0.95 (standard deviation 0.03) when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully automated segmentation method to have been applied to split detector AOSLO images.
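    As a generic illustration of the filter-then-detect pattern that AFLD belongs to (this is not the published AFLD algorithm), cone-like bright blobs can be enhanced with a band-pass filter and then picked out as local maxima. The Gaussian sigmas and the threshold below are hypothetical and would need tuning to the image scale.

```python
# Illustrative sketch: difference-of-Gaussians band-pass + local maxima.
import numpy as np
from scipy import ndimage as ndi

def detect_cones(img: np.ndarray, s_small=1.5, s_large=4.0, thresh=0.1):
    # normalize to [0, 1] so the threshold is intensity-scale independent
    img = (img - img.min()) / (np.ptp(img) + 1e-9)
    # band-pass: keep structures between the two Gaussian scales (blobs)
    band = ndi.gaussian_filter(img, s_small) - ndi.gaussian_filter(img, s_large)
    # a pixel is a detection if it is the maximum of its 5x5 neighbourhood
    # and its band-pass response exceeds the threshold
    peaks = (band == ndi.maximum_filter(band, size=5)) & (band > thresh)
    return np.argwhere(peaks)   # (row, col) coordinates of detected cones
```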

    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH).

    Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (no signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean" B-scans (multi-frame B-scans) and their corresponding "noisy" B-scans (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index (MSSIM).

    Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). Across all ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM, computed against the corresponding multi-frame B-scans, increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised).

    Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
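    A minimal sketch of how the reported image-quality metrics can be computed, assuming `denoised` and `multiframe` are co-registered 2-D numpy B-scans and `bg`/`tissue` are boolean ROI masks; the paper's exact ROI definitions may differ.

```python
# Hedged sketch of SNR (dB), CNR, and MSSIM for paired B-scans.
import numpy as np
from skimage.metrics import structural_similarity

def snr_db(img: np.ndarray, bg: np.ndarray) -> float:
    # bg: boolean mask of a noise-only background region
    return float(20.0 * np.log10(img[~bg].mean() / (img[bg].std() + 1e-9)))

def cnr(img: np.ndarray, tissue: np.ndarray, bg: np.ndarray) -> float:
    # contrast of a tissue ROI against background, in noise units
    return float((img[tissue].mean() - img[bg].mean()) / (img[bg].std() + 1e-9))

def mssim(denoised: np.ndarray, multiframe: np.ndarray) -> float:
    # mean SSIM of the denoised scan against its multi-frame reference
    rng = float(multiframe.max() - multiframe.min())
    return float(structural_similarity(denoised, multiframe, data_range=rng))
```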