
    Two-dimensional segmentation of the retinal vascular network from optical coherence tomography

    The automatic segmentation of the retinal vascular network from ocular fundus images has been performed by several research groups. Although different approaches have been proposed for traditional imaging modalities, only a few have addressed this problem for optical coherence tomography (OCT), and those focused on the optic nerve head region. Compared to color fundus photography and fluorescein angiography, two-dimensional ocular fundus reference images computed from three-dimensional OCT data present additional problems related to system lateral resolution, image contrast, and noise. Specifically, the combination of system lateral resolution and vessel diameter in the macular region makes the process particularly complex, which might partly explain the focus on the optic disc region. In this report, we describe a set of features computed from standard OCT data of the human macula that are used by a supervised-learning process (support vector machines) to automatically segment the vascular network. For a set of macular OCT scans of healthy subjects and diabetic patients, the proposed method achieves 98% accuracy, 99% specificity, and 83% sensitivity. The method was also tested on OCT data of the optic nerve head region, achieving similar results.
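    A rough sketch of this kind of pipeline (per-pixel features fed to an SVM, then evaluated with accuracy, specificity and sensitivity) is given below. It is a generic scikit-learn illustration, not the authors' implementation; the feature and label files are hypothetical placeholders for per-pixel feature vectors derived from the OCT fundus reference image and the corresponding binary vessel labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Hypothetical inputs: one row of features per fundus-reference-image pixel
# and a binary label per pixel (1 = vessel, 0 = background).
X_train, y_train = np.load("train_features.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_features.npy"), np.load("test_labels.npy")

# RBF-kernel SVM standing in for the abstract's supervised-learning step;
# the hyper-parameters here are placeholders, not the published settings.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Pixel-wise evaluation: accuracy, specificity, sensitivity.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
specificity = tn / (tn + fp)
sensitivity = tp / (tp + fn)
print(f"acc={accuracy:.3f}  spec={specificity:.3f}  sens={sensitivity:.3f}")
```

    Any per-pixel feature matrix with matching binary labels would fit this skeleton; the abstract's contribution lies in the specific OCT-derived features, which are not reproduced here.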

    Tracking and diameter estimation of retinal vessels using Gaussian process and Radon transform

    Extraction of blood vessels in retinal images is an important step for computer-aided diagnosis of ophthalmic pathologies. We propose an approach for blood vessel tracking and diameter estimation. We hypothesize that the curvature and the diameter of blood vessels are Gaussian processes (GPs). The local Radon transform, which is robust against noise, is subsequently used to compute the features and train the GPs. By learning the kernelized covariance matrix from training data, the vessel direction and its diameter are estimated. In order to detect bifurcations, multiple GPs are used and the difference between their corresponding predicted directions is quantified. The combination of Radon features and GPs performs well in the presence of noise. The proposed method successfully deals with typically difficult cases such as bifurcations and the central arterial reflex, and also tracks thin vessels with high accuracy. Experiments are conducted on the publicly available DRIVE, STARE, CHASEDB1, and high-resolution fundus databases, evaluating sensitivity, specificity, and the Matthews correlation coefficient (MCC). Experimental results on these datasets show that the proposed method reaches an average sensitivity of 75.67%, specificity of 97.46%, and MCC of 72.18%, which is comparable to the state of the art.
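    The general idea of combining local Radon-transform features with a Gaussian process can be illustrated with the toy sketch below (scikit-image and scikit-learn). The feature definition, kernel choice and training data are assumptions for illustration rather than the paper's actual pipeline, and angular wrap-around at 0/180 degrees is ignored for simplicity.

```python
import numpy as np
from skimage.transform import radon
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

ANGLES = np.arange(0.0, 180.0, 15.0)  # projection angles in degrees

def radon_features(patch):
    """Summarise a square local patch by its peak Radon response per angle."""
    sinogram = radon(patch, theta=ANGLES, circle=False)  # one column per angle
    return sinogram.max(axis=0)

def fit_direction_gp(patches, directions_deg):
    """Fit a GP mapping local Radon features to a vessel direction (degrees).

    `patches` is a list of 2-D arrays centred on already-tracked vessel points;
    `directions_deg` holds the corresponding local directions. Both are
    hypothetical training data for this sketch.
    """
    X = np.vstack([radon_features(p) for p in patches])
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, np.asarray(directions_deg))
    return gp

# During tracking, the GP predicts the direction at the next candidate patch
# together with an uncertainty that can flag ambiguous points (e.g. bifurcations):
# gp = fit_direction_gp(train_patches, train_directions)
# mean_dir, std_dir = gp.predict(radon_features(new_patch)[None, :], return_std=True)
```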

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches has been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities and contrast (staining) mechanisms, yet few generalise well across multiple image types, contrasts or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography and two-photon microscopy.

    The pixel-level segmentation scheme involves: (i) constructing a phase-invariant orientation field of the local spatial neighbourhood; (ii) combining local feature maps with intensity-based measures in a structural patch context; and (iii) using a complex supervised learning process to interpret the combination of all the elements in the patch in order to reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D.

    To process the temporal components of non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames; at the joint level, we construct a hierarchical approach so that each individual frame is registered to the global reference both intra- and inter-sequence. We then take a non-training approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy. In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations.

    Unlike retinal segmentation problems, where vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance, yet with different geometric properties, to be differentiated both from the background and from other structures. Notably, cellular structures such as Purkinje cells, neural dendrites and interneurons all display a certain elongation along their medial axes, yet each class has a characteristic shape captured by an orientation field that distinguishes it from other structures. To take this into consideration, we introduce a 5D orientation mapping to capture these orientation properties. This mapping is incorporated into the local feature map description prior to a learning machine.

    Extensive performance evaluation and validation of each of the techniques presented in this thesis are carried out. For retinal fundus images, we compute Receiver Operating Characteristic (ROC) curves on existing public databases (DRIVE & STARE) to assess and compare our algorithms against other benchmark methods. For 2D+t retinal angiography sequences, we compute "Centreline Error" metrics for our scheme and for other benchmark methods.
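    The ROC-based evaluation on DRIVE and STARE mentioned above is a standard procedure; as a minimal sketch (assuming a soft vesselness map, a manually annotated vessel mask and a field-of-view mask stored in hypothetical NumPy files), it might look like the following.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical inputs: per-pixel vessel probabilities from the classifier,
# a manually annotated binary vessel mask, and the field-of-view (FOV) mask
# that restricts evaluation to the retinal area, as is usual for DRIVE/STARE.
prob_map = np.load("vessel_probabilities.npy")        # floats in [0, 1]
gt_mask = np.load("ground_truth.npy").astype(bool)
fov_mask = np.load("fov_mask.npy").astype(bool)

scores = prob_map[fov_mask].ravel()
labels = gt_mask[fov_mask].ravel()

fpr, tpr, thresholds = roc_curve(labels, scores)       # points of the ROC curve
auc = roc_auc_score(labels, scores)
print(f"Area under the ROC curve: {auc:.4f}")
```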
    For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground truth and experimental rat cerebellar cortex two-photon microscopy tissue stacks.
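    The pairwise co-registration step described in this abstract lends itself to a short illustration. The sketch below covers only the RANSAC-fitted projective part using OpenCV; the ORB detector, matcher settings and reprojection threshold are assumptions for illustration, and the quadratic refinement used in the thesis is not reproduced.

```python
import cv2
import numpy as np

def register_pair(frame_ref, frame_mov, max_features=2000):
    """Map `frame_mov` onto `frame_ref` via a RANSAC-estimated projective homography."""
    orb = cv2.ORB_create(max_features)
    kp_ref, des_ref = orb.detectAndCompute(frame_ref, None)
    kp_mov, des_mov = orb.detectAndCompute(frame_mov, None)

    # Brute-force Hamming matching of binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_mov, des_ref), key=lambda m: m.distance)

    src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Projective model fitted robustly with RANSAC; the thesis refines this
    # further with a quadratic transformation, which is not reproduced here.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

    h, w = frame_ref.shape[:2]
    warped = cv2.warpPerspective(frame_mov, H, (w, h))
    return H, warped
```

    At the joint level, such pairwise estimates would then be combined through the hierarchical scheme the abstract describes, so that every frame is registered to a global reference both within and across sequences.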