    Fast colon centreline calculation using optimised 3D topological thinning

    Topological thinning can be used to accurately identify the central path through a computer model of the colon generated using computed tomography colonography. The central path can subsequently be used to simplify the task of navigation within the colon model. Unfortunately, standard topological thinning is an extremely inefficient process. We present an optimised version of topological thinning that significantly improves the performance of centreline calculation without compromising the accuracy of the result. This is achieved by using lookup tables to reduce the computational burden associated with the thinning process.
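
    The reported speed-up comes from replacing repeated connectivity tests with cached decisions per neighbourhood configuration. Below is a minimal Python sketch of that idea: each voxel's 26-neighbourhood is packed into an integer key and the deletability verdict is memoised in a lookup table. The helper is_simple_point is a hypothetical placeholder rather than the paper's deletion rule, and the whole sketch illustrates lookup-table thinning in general, not the published implementation.

```python
import numpy as np

def pack_neighbourhood(vol, x, y, z):
    """Pack the 26-neighbourhood of voxel (x, y, z) into an integer key."""
    key, bit = 0, 0
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                key |= int(vol[z + dz, y + dy, x + dx]) << bit
                bit += 1
    return key

def is_simple_point(key):
    """Placeholder test; a real rule would check that removing the centre
    voxel preserves object and background connectivity."""
    return bin(key).count("1") > 2  # illustrative stand-in only

deletable = {}  # memoised lookup table: neighbourhood key -> deletability

def peel_once(vol):
    """One thinning pass over interior object voxels using the cached table."""
    removed = 0
    zs, ys, xs = np.nonzero(vol[1:-1, 1:-1, 1:-1])
    for z, y, x in zip(zs + 1, ys + 1, xs + 1):
        key = pack_neighbourhood(vol, x, y, z)
        if key not in deletable:
            deletable[key] = is_simple_point(key)  # computed once, reused later
        if deletable[key]:
            vol[z, y, x] = 0
            removed += 1
    return removed
```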

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms are developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
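
    As a rough illustration of the expectation-maximization step at the core of the cortical segmentation, the Python sketch below runs a plain two-class Gaussian mixture EM on voxel intensities. The atlas priors, bias-field handling, and the explicit partial-volume correction that the dissertation describes are deliberately omitted; function and variable names are illustrative only.

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Two-class Gaussian mixture EM (e.g. grey vs. white matter) on a 1-D
    array of voxel intensities; returns hard labels and class parameters."""
    x = np.asarray(intensities, dtype=float)
    mu = np.percentile(x, [25.0, 75.0])           # initial class means
    sigma = np.array([x.std(), x.std()]) + 1e-6   # initial class std devs
    pi = np.array([0.5, 0.5])                     # mixing proportions

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel
        lik = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        resp = lik / lik.sum(axis=0, keepdims=True)

        # M-step: re-estimate means, std devs and mixing proportions
        for k in range(2):
            w = resp[k]
            mu[k] = (w * x).sum() / w.sum()
            sigma[k] = np.sqrt((w * (x - mu[k]) ** 2).sum() / w.sum()) + 1e-6
            pi[k] = w.mean()

    return resp.argmax(axis=0), mu, sigma
```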

    Vascular Tree Structure: Fast Curvature Regularization and Validation

    This work addresses the challenging problem of accurate vessel structure analysis in high resolution 3D biomedical images. Typical segmentation methods fail on recent micro-CT data sets resolving near-capillary vessels due to limitations of standard first-order regularization models. While regularization is needed to address noise and partial volume issues in the data, we argue that extraction of thin tubular structures requires higher-order curvature-based regularization. There are no standard segmentation methods regularizing surface curvature in 3D that could be applied to large 3D volumes. However, we observe that standard measures of vessel structure are more concerned with topology, bifurcation angles, and other parameters that can be addressed directly without segmentation. We propose a novel methodology for reconstructing the tree structure of vessels using a new centerline curvature regularization technique. Our high-order regularization model is based on a recent curvature estimation method. We developed a Levenberg-Marquardt optimization scheme and an efficient GPU-based implementation of our algorithm. We also propose a validation mechanism based on synthetic vessel images. Our preliminary results on real ultra-resolution micro-CT volumes are promising.
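
    The abstract names Levenberg-Marquardt as the optimizer but does not give the energy, so the sketch below is only a toy analogue in Python/SciPy: a noisy centreline polyline is smoothed by minimising a data-fidelity term plus a discrete second-difference (curvature-like) bending penalty with SciPy's LM solver. The weight lam and the residual layout are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_centerline(points, lam=5.0):
    """Smooth a polyline of 3D centreline samples (shape (n, 3)) with a
    Levenberg-Marquardt fit balancing data fidelity and a bending penalty."""
    points = np.asarray(points, dtype=float)
    n = len(points)

    def residuals(flat):
        p = flat.reshape(n, 3)
        data = (p - points).ravel()                                    # stay near the samples
        bend = np.sqrt(lam) * (p[:-2] - 2 * p[1:-1] + p[2:]).ravel()   # discrete curvature term
        return np.concatenate([data, bend])

    res = least_squares(residuals, points.ravel(), method="lm")
    return res.x.reshape(n, 3)
```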

    A Robust Level-Set Algorithm for Centerline Extraction

    We present a robust method for extracting 3D centerlines from volumetric datasets. We start from a 2D skeletonization method to locate voxels centered with respect to three orthogonal slicing directions. Next, we introduce a new detection criterion to extract the centerline voxels from the above skeletons, followed by a thinning, reconnection, and ranking step. Overall, the proposed method produces centerlines that are object-centered, connected, one voxel thick, and robust with respect to object noise; it handles arbitrary object topologies, requires only a simple pruning threshold, and is fast to compute. We compare our results with two other methods on a variety of real-world datasets.
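
    A minimal Python sketch of the first stage, the per-direction 2D skeletonization, is given below; voxels are kept when they lie on the skeleton in at least two of the three orthogonal slicing directions. It uses scikit-image's skeletonize, and the subsequent detection criterion, thinning, reconnection, and ranking steps of the method are not reproduced.

```python
import numpy as np
from skimage.morphology import skeletonize

def candidate_centerline_voxels(vol, min_votes=2):
    """Vote over 2D skeletons computed slice-by-slice along the three axes
    of a binary volume and keep voxels with at least `min_votes` votes."""
    vol = np.asarray(vol, dtype=bool)
    votes = np.zeros(vol.shape, dtype=np.uint8)
    for axis in range(3):                      # slice along each axis in turn
        for i in range(vol.shape[axis]):
            sl = [slice(None)] * 3
            sl[axis] = i
            votes[tuple(sl)] += skeletonize(vol[tuple(sl)]).astype(np.uint8)
    return votes >= min_votes
```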

    Doctor of Philosophy

    High arterial tortuosity, or twistedness, is a sign of many vascular diseases. Some ocular diseases are clinically diagnosed in part by assessment of increased tortuosity of ocular blood vessels. Increased arterial tortuosity is seen in other vascular diseases but is not commonly used for clinical diagnosis. This study develops the use of existing magnetic resonance angiography (MRA) image data to study arterial tortuosity in a range of arteries of hypertensive and intracranial aneurysm patients. The accuracy of several centerline extraction algorithms based on Dijkstra's algorithm was measured in numeric phantoms. The stability of the algorithms was measured in brain arteries. A centerline extraction algorithm was selected based on its accuracy. A centerline tortuosity metric was developed using a curve of tortuosity scores. This tortuosity metric was tested on phantoms and compared to observer-based tortuosity rankings on a test data set. The tortuosity metric was then used to measure the tortuosity of brain arteries from intracranial aneurysm and hypertension patients and to compare it with that of negative controls. A Dijkstra-based centerline extraction algorithm employing a distance-from-edge weighted center of mass (DFE-COM) cost function of the segmented arteries was selected based on generating 15/16 anatomically correct centerlines in a looping artery, compared to 15/16 for the center of mass (COM) cost function and 7/16 for the inverse modified distance from edge cost function. The DFE-COM cost function had a lower root mean square error in a lopsided phantom (0.413) than the COM cost function (0.879). The tortuosity metric successfully ordered electronic phantoms of arteries by tortuosity. The tortuosity metric detected an increase in arterial tortuosity in hypertensive patients in 13/13 (10/13 significant at α = 0.05). The metric detected increased tortuosity in a subset of the aneurysm patients with Loeys-Dietz syndrome (LDS) in 7/7 (three significant at α = 0.001). The tortuosity measurement combination of the centerline algorithm and the distance factor metric tortuosity curve was able to detect increases in arterial tortuosity in hypertensive and LDS patients. Therefore, the methods validated here can be used to study arterial tortuosity in other hypertensive population samples and in genetic subsets related to LDS.
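
    The distance factor metric referred to above is, in its simplest form, the ratio of a centreline's arc length to its chord length (1 for a straight vessel, larger for a more tortuous one). The Python sketch below computes that ratio and a sliding-window curve of scores; the windowed curve is only an assumed reading of the thesis's "curve of tortuosity scores", and the window size is arbitrary.

```python
import numpy as np

def distance_factor_metric(centerline):
    """Arc length divided by chord length of a polyline of 3D points."""
    p = np.asarray(centerline, dtype=float)               # shape (n, 3)
    arc = np.linalg.norm(np.diff(p, axis=0), axis=1).sum()
    chord = np.linalg.norm(p[-1] - p[0])
    return arc / chord

def tortuosity_curve(centerline, window=10):
    """Sliding-window distance-factor scores along the centreline."""
    p = np.asarray(centerline, dtype=float)
    return np.array([distance_factor_metric(p[i:i + window + 1])
                     for i in range(len(p) - window)])
```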

    GeoFlood: Large-Scale Flood Inundation Mapping Based on High-Resolution Terrain Analysis

    Recent floods from intense storms in the southern United States and the unusually active 2017 Atlantic hurricane season have highlighted the need for real-time flood inundation mapping using high-resolution topography. High-resolution topographic data derived from lidar technology reveal unprecedented topographic details and are increasingly available, providing extremely valuable information for improving inundation mapping accuracy. The enrichment of terrain details from these data sets, however, also brings challenges to the application of many classic approaches designed for lower-resolution data. Advanced methods need to be developed to better use lidar-derived terrain data for inundation mapping. We present a new workflow, GeoFlood, for flood inundation mapping using high-resolution terrain inputs that is simple and computationally efficient, thus serving the needs of emergency responders to rapidly identify possibly flooded locations. First, GeoNet, a method for automatic channel network extraction from high-resolution topographic data, is modified to produce a low-density, high-fidelity river network. Then, a Height Above Nearest Drainage (HAND) raster is computed to quantify the elevation difference between each land surface cell and the stream-bed cell to which it drains, using the network extracted from high-resolution terrain data. This HAND raster is then used to compute reach-averaged channel hydraulic parameters and synthetic stage-discharge rating curves. Inundation maps are generated from the HAND raster by obtaining a water depth for a given flood discharge from the synthetic rating curve. We evaluate our approach by applying it in the Onion Creek Watershed in Central Texas, comparing the inundation extent results to Federal Emergency Management Agency 100-yr floodplains obtained with detailed local hydraulic studies. We show that the inundation extent produced by GeoFlood overlaps with 60% to 90% of the Federal Emergency Management Agency floodplain coverage, demonstrating that it is able to capture the general inundation patterns and shows significant potential for informing real-time flood disaster preparedness and response.
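
    The HAND step described above reduces, per cell, to an elevation difference against the stream-bed cell that the land cell drains to, and the inundation test to a comparison of that difference with the water depth read from the synthetic rating curve. The Python sketch below assumes a prior flow-routing step (not shown) has already produced, for each cell, the row and column of its nearest drainage cell; it is an illustration of the HAND idea, not the GeoFlood code.

```python
import numpy as np

def hand_raster(dem, drain_row, drain_col):
    """Height Above Nearest Drainage: each cell's elevation minus the
    elevation of the stream-bed cell it drains to (drainage indices assumed
    to come from an upstream flow-direction analysis)."""
    dem = np.asarray(dem, dtype=float)
    return dem - dem[drain_row, drain_col]

def inundation_extent(hand, stage):
    """Flooded cells are those where the stage (water depth for the given
    discharge, taken from the synthetic rating curve) is at least HAND."""
    return hand <= stage
```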