
    Methods for automated analysis of macular OCT data

    Optical coherence tomography (OCT) is fast becoming one of the most important modalities for imaging the eye. It provides high-resolution, cross-sectional images of the retina in three dimensions, distinctly showing its many layers. These layers are critical for normal eye function, and vision loss may occur when they are altered by disease. In particular, the thickness of individual layers can change over time, so the ability to measure these thicknesses accurately is important for understanding how different diseases affect the eye. Since manual segmentation of the layers in OCT data is time consuming and tedious, automated methods are necessary to extract layer thicknesses. While a standard set of tools exists on the scanners to automatically segment the retina, the output is often limited, providing measurements for only a few layers. Analysis of longitudinal data is also limited, with scans from the same subject often processed independently and registered using only a single landmark at the fovea. Quantification of other changes in the retina, including the accumulation of fluid, is also generally unavailable in the built-in software.

    In this thesis, we present four contributions for automatically processing OCT data, specifically data acquired from the macular region of the retina. First, we present a layer segmentation algorithm to robustly segment the eight visible layers of the retina. Our approach combines a random forest (RF) classifier, which produces boundary probabilities, with a boundary refinement algorithm that finds surfaces maximizing the RF probabilities. Second, we present a pair of methods for processing longitudinal data from individual subjects: one combining registration and motion correction, and one for simultaneously segmenting the layers across all scans. Third, we develop a method for segmenting microcystic macular edema, which appears as small, fluid-filled, cystoid spaces within the retina; our approach again uses an RF classifier to produce a robust segmentation. Finally, we present macular flatspace (MFS), a computational domain that puts data from different subjects into a common coordinate system in which each layer appears flat, thereby simplifying automated processing. We present two applications of MFS: inhomogeneity correction, which normalizes the intensities within each layer, and layer segmentation, which adapts and simplifies a graph formulation used in prior work.
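    The first contribution pairs a pixel-wise classifier with a surface-extraction step. As a rough illustration of that two-stage idea (not the thesis implementation), the sketch below trains scikit-learn's RandomForestClassifier on per-pixel features and then extracts a single boundary per B-scan with a Viterbi-style dynamic program that limits the jump between neighboring columns. The feature design, the 3-D refinement, and the multi-surface ordering constraints of the actual method are all omitted, and the function and parameter names (train_boundary_rf, extract_surface, max_jump) are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_boundary_rf(features, labels, n_trees=60):
    # features: (n_pixels, n_features); labels: 1 on the target boundary, 0 off it.
    rf = RandomForestClassifier(n_estimators=n_trees)
    rf.fit(features, labels)
    return rf

def extract_surface(prob, max_jump=2):
    # prob: (rows, cols) boundary-probability image for one B-scan.
    # Returns one row index per column, maximizing summed probability
    # subject to |row[c] - row[c-1]| <= max_jump (Viterbi-style DP).
    rows, cols = prob.shape
    cost = prob[:, 0].astype(float)
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        new_cost = np.empty(rows)
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            best = lo + int(np.argmax(cost[lo:hi]))
            new_cost[r] = cost[best] + prob[r, c]
            back[r, c] = best
        cost = new_cost
    surface = np.empty(cols, dtype=int)
    surface[-1] = int(np.argmax(cost))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```

    In this sketch, per-pixel boundary probabilities for one B-scan could come from rf.predict_proba(pixel_features)[:, 1].reshape(rows, cols) before calling extract_surface.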

    Doctor of Philosophy

    Image segmentation entails the partitioning of an image domain, usually in two or three dimensions, so that each partition or segment has some meaning relevant to the application at hand. Accurate image segmentation is a crucial challenge in many disciplines, including medicine, computer vision, and geology. In some applications, heterogeneous pixel intensities; noisy, ill-defined, or diffusive boundaries; and irregular shapes with high variability make it difficult to meet accuracy requirements. Various segmentation approaches tackle such challenges by casting segmentation as an energy-minimization problem and solving it with efficient optimization algorithms. These approaches are broadly classified as either region-based or edge (surface)-based, depending on the features on which they operate.

    The focus of this dissertation is the development of a surface-based energy model, the design of efficient optimization frameworks that incorporate this energy, and the solution of the energy-minimization problem using graph cuts. The dissertation comprises a set of four papers motivated by the efficient extraction of the left atrium wall from late gadolinium enhancement magnetic resonance imaging (LGE-MRI) volumes; the same energy formulations are also applied to other problems, including contact lens segmentation in optical coherence tomography (OCT) data and the extraction of geologic features in seismic data.

    Chapters 2 through 5 (papers 1 through 4) build a surface-based image segmentation model by progressively adding components that improve its accuracy and robustness. The first paper defines a parametric search space and its discrete formulation as a multilayer three-dimensional mesh model within which the segmentation takes place; it includes a generative intensity model, and optimization uses a graph formulation of the surface-net problem. The second paper proposes a Bayesian framework with a Markov random field (MRF) prior, giving rise to another class of surface nets that provides better segmentation with smooth boundaries. The third paper presents a maximum a posteriori (MAP) surface estimation framework that relies on a generative image model and incorporates global shape priors, in addition to the MRF, within the Bayesian formulation. The resulting surface thus not only depends on the learned model of shapes, but also accommodates irregularities in the test data through smooth deviations from these priors. The paper further proposes a new closed-form shape parameter estimation scheme for segmentation as part of the optimization process. Finally, the fourth paper (under review at the time of this document) presents an extensive analysis of the MAP framework along with improved mesh generation and generative intensity models, and demonstrates the effectiveness of the proposed method qualitatively, quantitatively, and clinically.

    Chapter 6, consisting of unpublished work, demonstrates the application of an MRF-based Bayesian framework to segment the coupled surfaces of contact lenses in optical coherence tomography images, along with an application to the extraction of geological structures in seismic volumes. Because seismic volume datasets are large, we also present fast, approximate surface-based energy-minimization strategies that achieve substantial speed-ups and reduced memory consumption.
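    Common to all four papers is the idea of writing segmentation as an energy with a data term and a pairwise (MRF-style) smoothness term and minimizing it with an s-t min-cut. The toy sketch below, assuming the PyMaxflow library, shows that idea for a binary labeling of a 2-D image; the dissertation's multilayer surface nets, shape priors, and MAP estimation are far richer than this model, and the function and parameter names (binary_mrf_segment, mu_fg, mu_bg, smoothness) are hypothetical.

```python
import numpy as np
import maxflow  # PyMaxflow

def binary_mrf_segment(image, mu_fg, mu_bg, smoothness=1.0):
    # image: 2-D float array. Labels each pixel fg/bg by minimizing
    #   sum_p D_p(l_p) + smoothness * sum_{p~q} [l_p != l_q]
    # exactly, via a single s-t min-cut.
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(image.shape)
    # Pairwise Potts term between 4-connected neighbors.
    g.add_grid_edges(nodes, smoothness)
    # Unary terms: squared distance to each (assumed known) class mean.
    d_fg = (image - mu_fg) ** 2
    d_bg = (image - mu_bg) ** 2
    g.add_grid_tedges(nodes, d_fg, d_bg)
    g.maxflow()
    return g.get_grid_segments(nodes)  # True where labeled foreground
```

    A call such as binary_mrf_segment(slice_, mu_fg=0.8, mu_bg=0.2, smoothness=0.1) would return a boolean foreground mask. The appeal of graph cuts, here and in the dissertation, is that the min-cut solution is globally optimal for binary labelings with submodular pairwise terms.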