Automated Segmentation of Retinal Optical Coherence Tomography Images
Aim. Optical Coherence Tomography (OCT) is a fast, non-invasive medical imaging technique that allows investigation of each individual retinal layer. Segmentation of OCT images into the distinct layers of the retina plays a crucial role in the early detection of retinal diseases and the study of their progression. However, manual segmentation by clinicians is extremely tedious, time-consuming, and varies with the level of expertise. Hence, there is a pressing need for an automated segmentation algorithm for retinal OCT images that is fast, accurate, and eases clinical decision making.
Methods. Graph-theoretical methods have been implemented to develop an automated segmentation algorithm for spectral domain OCT (SD-OCT) images of the retina. As a pre-processing step, the best method for denoising the SD-OCT images prior to graph-based segmentation was determined by comparison between simple Gaussian filtering and an advanced wavelet-based denoising technique. A shortest-path based graph search technique was implemented to accurately delineate intra-retinal layer boundaries within the SD-OCT images. The results from the automated algorithm were also validated by comparison with manual segmentation done by an expert clinician using a specially designed graphical user interface (GUI).
Results. The algorithm delineated seven intra-retinal boundaries, thereby segmenting six layers of the retina and computing their thicknesses. Compared with normative layer thickness values from a published study, the automated thickness results showed no significant differences (p > 0.05) for all layers except layer 4 (p = 0.04). Furthermore, when the automated results were compared against manual segmentation by an expert, the accuracy of the algorithm ranged from 74.58% (layer 2) to 98.90% (layer 5). Additionally, the comparison of the two denoising techniques revealed no significant benefit of the advanced wavelet-based technique over simple Gaussian filtering on the accuracy of boundary detection by the graph-based algorithm.
Conclusion. An automated graph-based algorithm for segmenting seven intra-retinal boundaries and six layers in SD-OCT images was developed and implemented in this thesis, and it performs as well as manual segmentation by an expert clinician. The thesis also concludes that simple Gaussian filters are sufficient to denoise the images for graph-based segmentation and that an advanced denoising technique is not required, which makes the implementation simpler and more efficient in terms of time and memory requirements.
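The shortest-path idea behind such graph-based layer segmentation can be sketched in a few lines: treat each pixel as a node connected to its three nearest neighbours in the adjacent column, and find the minimum-cost left-to-right path through a cost image (for instance an inverted vertical-gradient map). The sketch below is illustrative and assumes a simple dynamic-programming formulation, not the exact graph construction used in the thesis:

```python
import numpy as np

def shortest_path_boundary(cost):
    """Delineate one layer boundary in a B-scan as the minimum-cost path
    crossing the image from the leftmost to the rightmost column, moving
    at most one pixel up or down per column (a column-graph shortest path
    solved by dynamic programming)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()  # accumulated cost-to-reach each node
    for c in range(1, cols):
        prev = acc[:, c - 1]
        # each node connects to the 3 nearest nodes in the previous column
        best_prev = np.minimum(prev, np.minimum(np.roll(prev, 1), np.roll(prev, -1)))
        best_prev[0] = min(prev[0], prev[1])     # undo wrap-around at the
        best_prev[-1] = min(prev[-1], prev[-2])  # top and bottom rows
        acc[:, c] = cost[:, c] + best_prev
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 2, -1, -1):
        r = path[-1]
        lo, hi = max(r - 1, 0), min(r + 2, rows)
        path.append(lo + int(np.argmin(acc[lo:hi, c])))
    return path[::-1]  # row index of the boundary in each column
```

A full pipeline would run this once per boundary on a gradient-based cost image, restricting the search region after each boundary is found.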
Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue
Ovarian cancer has the lowest survival rate among all gynecologic cancers predominantly due to late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real-time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluate a set of algorithms to segment OCT images of mouse ovaries. We examine five preprocessing techniques and seven segmentation algorithms. While all preprocessing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of
32% ± 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 94.8% ± 1.2% compared with manual segmentation. Even so, further optimization could improve performance for segmenting OCT images of the ovaries.
Funding: National Science Foundation Graduate Research Fellowship Program [DGE-1143953]; National Institutes of Health/National Cancer Institute [1R01CA195723]; University of Arizona Cancer Center [3P30CA023074].
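Two quantitative ingredients of such an evaluation, speckle smoothing with a Gaussian filter and overlap against a manual mask, can be sketched as follows. This is a minimal pure-NumPy version; the kernel radius and sigma are assumed values, not those used in the study:

```python
import numpy as np

def gaussian_smooth(img, sigma=1.5, radius=4):
    """Separable Gaussian filtering, the speckle-reduction step that the
    evaluation found most effective before segmentation."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # convolve rows, then columns (the Gaussian kernel is separable)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dice(a, b):
    """Overlap between a predicted and a manual binary mask."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Note that `mode="same"` zero-pads, so values near the image border are attenuated; a production implementation would use reflective padding.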
Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation
Optical coherence tomography (OCT) has become the most important imaging
modality in ophthalmology. A substantial amount of research has recently been
devoted to the development of machine learning (ML) models for the
identification and quantification of pathological features in OCT images. Among
the several sources of variability the ML models have to deal with, a major
factor is the acquisition device, which can limit the ML model's
generalizability. In this paper, we propose to reduce the image variability
across different OCT devices (Spectralis and Cirrus) by using CycleGAN, an
unsupervised unpaired image transformation algorithm. The usefulness of this
approach is evaluated in the setting of retinal fluid segmentation, namely
intraretinal cystoid fluid (IRC) and subretinal fluid (SRF). First, we train a
segmentation model on images acquired with a source OCT device. Then we
evaluate the model on (1) source, (2) target and (3) transformed versions of
the target OCT images. The presented transformation strategy shows an F1 score
of 0.4 (0.51) for IRC (SRF) segmentations. Compared with traditional
transformation approaches, this means an F1 score gain of 0.2 (0.12).
Comment: * Contributed equally (order was defined by flipping a coin). Accepted for publication in the IEEE International Symposium on Biomedical Imaging (ISBI) 2019.
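The constraint that makes unpaired training possible, cycle consistency, can be stated compactly: mapping a source-device image to the target style and back should reproduce the input. A minimal sketch, with `G` and `F` standing in for the two learned generators (their architectures are not reproduced here):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """CycleGAN's unpaired-training constraint: translating an image to
    the other device's style (G) and back again (F) should recover the
    input, measured here as a mean absolute (L1) reconstruction error."""
    return float(np.mean(np.abs(F(G(x)) - x)))
```

In the full CycleGAN objective this term is added (in both directions) to the two adversarial losses that push each generator's outputs toward the target device's image distribution.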
Automatic Detection of Cone Photoreceptors In Split Detector Adaptive Optics Scanning Light Ophthalmoscope Images
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals cone photoreceptor inner segment mosaics that are often not visualized in confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice’s coefficient of 0.95 (standard deviation 0.03) when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice’s coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
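The local-detection stage of such a method can be illustrated by a simple peak finder that marks pixels exceeding a threshold and dominating their 8-neighbourhood. This is a stand-in sketch only; the adaptive filtering that AFLD applies beforehand is not reproduced:

```python
import numpy as np

def local_maxima(img, threshold):
    """Mark pixels that exceed a threshold and are strictly greater than
    all 8 neighbours: candidate cone centres in a filtered image."""
    rows, cols = img.shape
    peaks = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            if (img[r, c] >= threshold
                    and img[r, c] == patch.max()
                    and (patch == img[r, c]).sum() == 1):  # strict maximum
                peaks.append((r, c))
    return peaks
```

Detected centres would then be matched against an expert grader's markings (e.g., within a small distance tolerance) to compute the Dice's coefficient reported above.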
A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head
Purpose: To develop a deep learning approach to de-noise optical coherence
tomography (OCT) B-scans of the optic nerve head (ONH).
Methods: Volume scans consisting of 97 horizontal B-scans were acquired
through the center of the ONH using a commercial OCT device (Spectralis) for
both eyes of 20 subjects. For each eye, single-frame (without signal
averaging), and multi-frame (75x signal averaging) volume scans were obtained.
A custom deep learning network was then designed and trained with 2,328 "clean
B-scans" (multi-frame B-scans), and their corresponding "noisy B-scans" (clean
B-scans + gaussian noise) to de-noise the single-frame B-scans. The performance
of the de-noising algorithm was assessed qualitatively, and quantitatively on
1,552 B-scans using the signal to noise ratio (SNR), contrast to noise ratio
(CNR), and mean structural similarity index metrics (MSSIM).
Results: The proposed algorithm successfully denoised unseen single-frame OCT
B-scans. The denoised B-scans were qualitatively similar to their corresponding
multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR
increased from … dB (single-frame) to … dB (denoised). For all the ONH tissues, the mean CNR increased from … (single-frame) to … (denoised). The MSSIM increased from … (single-frame) to … (denoised) when compared with the corresponding multi-frame B-scans.
Conclusions: Our deep learning algorithm can denoise a single-frame OCT
B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior
quality OCT B-scans with reduced scanning times and minimal patient discomfort.
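The training-pair construction described in the Methods, corrupting a multi-frame "clean" B-scan with additive Gaussian noise to synthesise its "noisy" counterpart, can be sketched as follows (`noise_sigma` is an assumed value; the paper's noise parameters may differ):

```python
import numpy as np

def make_training_pair(clean_bscan, noise_sigma=0.05, seed=None):
    """Build a (noisy, clean) training pair: the multi-frame averaged
    B-scan (intensities in [0, 1]) serves as the clean target, and the
    network input is the same scan corrupted with additive Gaussian
    noise, clipped back to the valid intensity range."""
    rng = np.random.default_rng(seed)
    noisy = clean_bscan + rng.normal(0.0, noise_sigma, clean_bscan.shape)
    return np.clip(noisy, 0.0, 1.0), clean_bscan
```

Training on such pairs lets the network learn the single-frame-to-multi-frame mapping without needing registered noisy/clean acquisitions for every input.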
M2U-net: Effective and efficient retinal vessel segmentation for real-world applications
In this paper, we present a novel neural network architecture for retinal vessel segmentation that improves over the state of the art on two benchmark datasets, is the first to run in real time on high-resolution images, and has memory and processing requirements small enough for deployment in mobile and embedded systems. The M2U-Net has a new encoder-decoder architecture inspired by the U-Net. It adds pretrained components of MobileNetV2 in the encoder and novel contractive bottleneck blocks in the decoder that, combined with bilinear upsampling, drastically reduce the parameter count to 0.55M, compared to 31.03M in the original U-Net. We evaluated its performance against a wide body of previously published results on three public datasets; on two of them, the M2U-Net achieves new state-of-the-art performance by a considerable margin. When implemented on a GPU, our method is the first to achieve real-time inference speeds on high-resolution fundus images. We also implemented the network on an ARM-based embedded system, where it segments images in 0.6 to 15 s depending on the resolution. The M2U-Net thus enables a number of applications of retinal vessel structure extraction, such as early diagnosis of eye diseases, retinal biometric authentication systems, and robot-assisted microsurgery.
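The parameter savings from the MobileNetV2-style components can be made concrete: a depthwise-separable convolution factorises a standard k × k convolution into a per-channel spatial filter plus a 1 × 1 pointwise mixing layer. The arithmetic below is illustrative only; the channel counts are assumptions, not M2U-Net's actual layer shapes:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Weights in a depthwise-separable convolution: one k x k filter
    per input channel, then a 1 x 1 pointwise layer mixing channels."""
    return k * k * c_in + c_in * c_out

# e.g. a hypothetical 3x3 layer with 64 -> 128 channels:
# standard:  3*3*64*128 = 73,728 weights
# separable: 3*3*64 + 64*128 = 8,768 weights (roughly 8.4x fewer)
```

Stacking many such factorised blocks is what allows the whole network to fit in 0.55M parameters while retaining a U-Net-like encoder-decoder shape.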