
    Methods for automated analysis of macular OCT data

    Optical coherence tomography (OCT) is fast becoming one of the most important modalities for imaging the eye. It provides high resolution, cross-sectional images of the retina in three dimensions, distinctly showing its many layers. These layers are critical for normal eye function, and vision loss may occur when they are altered by disease. Specifically, the thickness of individual layers can change over time, thereby making the ability to accurately measure these thicknesses an important part of learning about how different diseases affect the eye. Since manual segmentation of the layers in OCT data is time-consuming and tedious, automated methods are necessary to extract layer thicknesses. While a standard set of tools exists on the scanners to automatically segment the retina, the output is often limited, providing measurements restricted to only a few layers. Analysis of longitudinal data is also limited, with scans from the same subject often processed independently and registered using only a single landmark at the fovea. Quantification of other changes in the retina, including the accumulation of fluid, is also generally unavailable using the built-in software. In this thesis, we present four contributions for automatically processing OCT data, specifically for data acquired from the macular region of the retina. First, we present a layer segmentation algorithm to robustly segment the eight visible layers of the retina. Our approach combines the use of a random forest (RF) classifier, which produces boundary probabilities, with a boundary refinement algorithm to find surfaces maximizing the RF probabilities. Second, we present a pair of methods for processing longitudinal data from individual subjects: one combining registration and motion correction, and one for simultaneously segmenting the layers across all scans. Third, we develop a method for segmentation of microcystic macular edema, which appears as small, fluid-filled cystoid spaces within the retina. Our approach again uses an RF classifier to produce a robust segmentation. Finally, we present the development of macular flatspace (MFS), a computational domain used to put data from different subjects in a common coordinate system where each layer appears flat, thereby simplifying any automated processing. We present two applications of MFS: inhomogeneity correction to normalize the intensities within each layer, and layer segmentation by adapting and simplifying a graph formulation used previously.
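    The first contribution above combines pixel-wise boundary probabilities from a random forest with a surface search that maximizes those probabilities. The sketch below illustrates that general idea under simplified assumptions (generic pixel features, a single surface, a per-column dynamic program with a hard smoothness limit); it is not the thesis implementation, and the feature design, forest size, and max_jump parameter are placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_boundary_rf(features, labels):
        # features: (n_pixels, n_features); labels: 1 for pixels on the target
        # layer boundary, 0 elsewhere. The feature design is an assumption here.
        rf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
        rf.fit(features, labels)
        return rf

    def extract_surface(prob_map, max_jump=2):
        # prob_map: (rows, cols) boundary probabilities for one B-scan.
        # Returns one row index per column, maximizing the summed probability
        # while limiting the row change between adjacent columns to max_jump.
        rows, cols = prob_map.shape
        score = np.full((rows, cols), -np.inf)
        back = np.zeros((rows, cols), dtype=int)
        score[:, 0] = prob_map[:, 0]
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
                prev = lo + int(np.argmax(score[lo:hi, c - 1]))
                score[r, c] = score[prev, c - 1] + prob_map[r, c]
                back[r, c] = prev
        surface = np.zeros(cols, dtype=int)
        surface[-1] = int(np.argmax(score[:, -1]))
        for c in range(cols - 1, 0, -1):
            surface[c - 1] = back[surface[c], c]
        return surface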

    Retinal OCT Image Analysis Using Deep Learning

    Optical coherence tomography (OCT) is a noninvasive imaging modality which uses low-coherence light waves to take cross-sectional images of optical scattering media. OCT has been widely used in diagnosing retinal and neural diseases by imaging the human retina. The thicknesses of retinal layers are important biomarkers for neurological diseases like multiple sclerosis (MS). The peripapillary retinal nerve fiber layer (pRNFL) and ganglion cell plus inner plexiform layer (GCIP) thicknesses can be used to assess global disease progression in MS patients. Automated OCT image analysis tools are critical for quantitatively monitoring disease progression and exploring biomarkers. With the development of more powerful computational resources, deep-learning-based methods have achieved much better performance in accuracy, speed, and algorithm flexibility for many image analysis tasks. However, without task-specific modifications, these emerging deep learning methods are not satisfactory when directly applied to tasks like retinal layer segmentation. In this thesis, we present a set of novel deep-learning-based methods for OCT image analysis. Specifically, we focus on automated retinal layer segmentation from macular OCT images. The first problem we address is that existing deep learning methods do not incorporate explicit anatomical rules and cannot guarantee the layer segmentation hierarchy (pixels of an upper layer should have no overlap or gap with pixels of the layer beneath it). To solve this, we developed an efficient fully convolutional network that generates structured layer surfaces with correct topology and is also able to perform retinal lesion (cyst or edema) segmentation. The second problem we address is that segmentation uncertainty reduces the sensitivity for detecting mild retinal changes in MS patients over time. To solve this, we developed a longitudinal deep learning pipeline that considers both inter-slice and longitudinal segmentation priors to achieve a more consistent segmentation for monitoring patient-specific retinal changes. The third problem we address is that the performance of deep learning models degrades when test data are generated by different scanners (domain shift). We address this problem by developing a novel test-time domain adaptation method. Unlike existing solutions, our model can dynamically adapt to each test subject during inference without time-consuming retraining. Our deep networks achieve state-of-the-art segmentation accuracy, speed, and flexibility compared to existing methods.
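    The layer hierarchy constraint described above (each retinal surface must lie at or below the one above it) can be enforced by construction rather than learned. The PyTorch sketch below shows one such mechanism, predicting non-negative per-column gaps and accumulating them into ordered surface positions; the backbone features, head, and tensor shapes are illustrative assumptions, not the networks developed in this thesis.

    import torch
    import torch.nn as nn

    class OrderedSurfaceHead(nn.Module):
        def __init__(self, in_channels: int, n_surfaces: int):
            super().__init__()
            # Predict one "gap" map per surface from a 2D backbone's features.
            self.proj = nn.Conv2d(in_channels, n_surfaces, kernel_size=1)

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (B, C, rows, cols) feature map from any 2D backbone.
            # Returns surface row positions of shape (B, n_surfaces, cols),
            # non-decreasing across the surface index by construction.
            logits = self.proj(feats)                                 # (B, S, rows, cols)
            gaps = torch.nn.functional.softplus(logits).mean(dim=2)   # (B, S, cols), >= 0
            surfaces = torch.cumsum(gaps, dim=1)                      # ordering guaranteed
            return surfaces

    # Usage with dummy features (shapes are assumptions):
    head = OrderedSurfaceHead(in_channels=64, n_surfaces=9)
    feats = torch.randn(2, 64, 128, 256)
    surf = head(feats)
    assert torch.all(surf[:, 1:, :] >= surf[:, :-1, :])  # hierarchy holds everywhere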

    Deep Learning Approach for Automated Thickness Measurement of Epithelial Tissue and Scab using Optical Coherence Tomography

    Significance: In order to elucidate therapeutic treatments that accelerate wound healing, it is crucial to understand the process underlying skin wound healing, especially re-epithelialization. Detection of the epidermis and scab is important in the wound healing process because their thickness is a vital indicator of whether re-epithelialization is proceeding normally. Since optical coherence tomography (OCT) is a real-time, non-invasive imaging technique that can perform a cross-sectional evaluation of tissue microstructure, it is an ideal imaging modality for monitoring the thickness changes of epidermal and scab tissues during wound healing at micron-level resolution. Traditionally, segmentation of epidermal and scab regions has been performed manually, which is time-consuming and impractical in real time.
    Aim: To develop a deep-learning-based skin layer segmentation method for automated quantitative assessment of the thickness of in-vivo epidermis and scab tissues over a time course of healing within a murine model.
    Approach: Five convolutional neural networks (CNNs) were trained using manually labelled epidermis and scab region segmentations from 1000 OCT B-scan images (assisted by their corresponding angiographic information). The segmentation performance of the five architectures was compared qualitatively and quantitatively on a validation set.
    Results: Our results show higher accuracy and higher speed of the calculated thicknesses compared with human experts. The U-Net architecture performed better than the other deep neural network architectures, with an F1-score of 0.894, a mean IoU of 0.875, a Dice similarity coefficient of 0.933, and an average symmetric surface distance of 18.28 μm. Furthermore, our algorithm is able to provide abundant quantitative parameters of the wound based on its thickness mapping in different healing phases. Among them, normalized epidermis thickness is recommended as an essential hallmark to describe the re-epithelialization process of the mouse model.
    Conclusions: The automatic segmentation and thickness measurements across different phases of wound healing demonstrate that our pipeline provides a robust, quantitative, and accurate method that can serve as a standard model for further research into the effects of external pharmacological and physical factors.
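    For reference, the reported evaluation quantities can be computed from binary masks as sketched below. This is an assumed illustration of Dice, IoU, and a per-column thickness estimate (segmented pixel count times axial pixel size), not the authors' code; axial_res_um is a placeholder parameter.

    import numpy as np

    def dice_and_iou(pred: np.ndarray, gt: np.ndarray):
        # pred, gt: boolean masks of the same shape.
        inter = np.logical_and(pred, gt).sum()
        dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
        iou = inter / (np.logical_or(pred, gt).sum() + 1e-8)
        return dice, iou

    def mean_thickness_um(mask: np.ndarray, axial_res_um: float) -> float:
        # mask: (rows, cols) boolean B-scan segmentation. Thickness in each
        # A-scan column is the number of segmented pixels times the axial
        # resolution; columns without tissue are excluded from the mean.
        per_column_px = mask.sum(axis=0)
        cols_with_tissue = per_column_px[per_column_px > 0]
        if cols_with_tissue.size == 0:
            return 0.0
        return float(cols_with_tissue.mean() * axial_res_um)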