
    Entropy-based particle correspondence for shape populations

    Statistical shape analysis of anatomical structures plays an important role in many medical image analysis applications, such as understanding the structural changes in anatomy at various stages of growth or disease. Establishing accurate correspondence across object populations is essential for such statistical shape analysis studies.

    Skeletal Shape Correspondence Through Entropy

    We present a novel approach to improving the shape statistics of medical image objects by generating correspondence of skeletal points. Each object's interior is modeled by an s-rep, i.e., by a sampled, folded, two-sided skeletal sheet with spoke vectors proceeding from the skeletal sheet to the boundary. The skeleton is divided into three parts: the up side, the down side, and the fold curve. The spokes on each part are treated separately and, using spoke interpolation, are shifted along that skeleton in each training sample so as to tighten the probability distribution on those spokes' geometric properties while sampling the object interior regularly. As with the surface/boundary-based correspondence method of Cates et al., entropy is used to measure both the tightness of the probability distribution and the regularity of the sampling, here of the spokes' geometric properties. Evaluation on synthetic and real-world lateral ventricle and hippocampus data sets demonstrates improvement in the performance of statistics using the resulting probability distributions. This improvement is greater than that achieved by an entropy-based correspondence method on the boundary points.
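
    The entropy in methods of this kind is typically estimated from a Gaussian model of the pooled per-shape feature vectors. The following is a minimal NumPy sketch of that estimate only; the feature construction (concatenated spoke properties) and the regularization term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_ensemble_entropy(features, reg=1e-6):
    """Approximate ensemble entropy of per-sample shape feature vectors.

    features : (n_samples, n_dims) array, one row per training shape
               (here assumed to be concatenated spoke properties).
    reg      : small ridge term keeping the covariance well conditioned.

    Under a Gaussian model, differential entropy is 0.5 * log det(Sigma)
    up to an additive constant, so lowering it tightens the population's
    probability distribution.
    """
    x = features - features.mean(axis=0, keepdims=True)
    n = x.shape[0]
    # Dual-space trick: the nonzero eigenvalues of (1/n) X X^T match those
    # of the full covariance, convenient when n_samples << n_dims.
    eigvals = np.linalg.eigvalsh(x @ x.T / n) + reg
    return 0.5 * np.sum(np.log(eigvals))
```

    Shifting the spokes so that this quantity decreases tightens the population distribution; a separate per-shape entropy term, not shown here, is what rewards regular sampling of the object interior.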

    Doctor of Philosophy in Computing

    Statistical shape analysis has emerged as an important tool for the quantitative analysis of anatomy in many medical imaging applications. The correspondence-based approach to evaluating shape variability is a popular method, based on comparing configurations of carefully placed landmarks on each shape. In recent years, methods for automatic placement of landmarks have enhanced the ability of this approach to capture the statistical properties of shape populations. However, biomedical shapes continue to present considerable difficulties in automatic correspondence optimization due to their inherent geometric complexity and the need to correlate shape change with underlying biological parameters. This dissertation addresses these technical difficulties and presents improved shape correspondence models. In particular, it builds on the particle-based modeling (PBM) framework described in Joshua Cates' 2010 Ph.D. dissertation. In the PBM framework, correspondences are modeled as a set of dynamic points, or a particle system, positioned automatically on shape surfaces by optimizing the entropy contained in the model, with the idea of balancing model simplicity against the accuracy of the particle system's representation of the shapes. This dissertation is a collection of four papers that extend the PBM framework to include shape regression and longitudinal analysis and add new methods to improve the modeling of complex shapes; it also includes a summary of two applications from the field of orthopaedics.

    Technical details of the PBM framework are provided in Chapter 2, after which the first topic, the study of shape change over time, is addressed (Chapters 3 and 4). In analyses of normative growth or disease progression, shape regression models allow characterization of the underlying biological process while also facilitating comparison of a sample against a normative model. The first paper introduces a shape regression model into the PBM framework to characterize shape variability due to an underlying biological parameter, and it further confirms the statistical significance of this relationship via systematic permutation testing. Simple regression models are, however, not sufficient to leverage the information provided by longitudinal studies. Longitudinal studies collect data at multiple time points for each participant and have the potential to provide a rich picture of the anatomical changes occurring during development, disease progression, or recovery. The second paper presents a linear mixed-effects (LME) shape model in order to fully leverage the high-dimensional, complex features provided by longitudinal data; the parameters of the LME shape model are estimated in a hierarchical manner within the PBM framework.

    The topic of the geometric complexity present in certain biological shapes is addressed next (Chapters 5 and 6). Certain biological shapes are inherently complex and highly variable, preventing correspondence-based methods from producing a faithful representation of the average shape. In the PBM framework, the use of Euclidean distances leads to incorrect particle system interactions, while a position-only representation leads to incorrect correspondences around sharp features across shapes. The third paper extends the PBM framework to use efficiently computed geodesic distances and also adds an entropy term based on the surface normal. The fourth paper further replaces the position-only representation with a more robust distance-from-landmark feature in the PBM framework to obtain isometry-invariant correspondences.

    Finally, the above methods are applied to two applications from the field of orthopaedics. The first application uses correspondences across an ensemble of human femurs to characterize morphological shape differences due to femoroacetabular impingement. The second application investigates the short bone phenotype apparent in mouse models of multiple osteochondromas; metaphyseal volume deviations are correlated with deviations in length to quantify the effect of the cancer on the apparent shortening of long bones (femur, tibia-fibula) in mouse models.
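
    For orientation, the entropy balance at the heart of the PBM framework and a generic linear mixed-effects shape model can be written schematically as below; the notation is mine and simplified to a scalar response, not the dissertation's exact formulation.

```latex
% Schematic PBM objective: ensemble entropy H(Z) traded off against the
% per-shape particle entropies H(X_k) that reward regular surface sampling.
Q = H(Z) - \sum_{k=1}^{N} H(X_k)

% Schematic linear mixed-effects shape model for subject i at time t_{ij}:
y_{ij} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t_{ij} + \varepsilon_{ij},
\qquad b_i \sim \mathcal{N}(0, D), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```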

    Groupwise shape correspondence with local features

    Statistical shape analysis of anatomical structures plays an important role in many medical image analysis applications, such as understanding the structural changes in anatomy at various stages of growth or disease. Establishing accurate correspondence across object populations is essential for such statistical shape analysis studies. However, anatomical correspondence is rarely a direct result of spatial proximity of sample points; rather, it depends on many other features such as local curvature, position with respect to blood vessels, or connectivity to other parts of the anatomy. This dissertation presents a novel method for computing point-based correspondence among populations of surfaces by combining the spatial location of the sample points with non-spatial local features. A framework for optimizing correspondence using arbitrary local features is developed, and the performance of the correspondence algorithm is objectively assessed using a set of evaluation metrics. The main focus of this research is correspondence across human cortical surfaces, with statistical analysis of cortical thickness, which is key to many neurological research problems, as the driving problem. I show that incorporating geometric (sulcal depth) and non-geometric (DTI connectivity) knowledge about the cortex significantly improves cortical correspondence compared to existing techniques. Furthermore, I present a framework that is the first to allow white matter fiber connectivity to be used for improving cortical correspondence.
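
    One way to read "combining spatial location with non-spatial local features" is as an augmented feature vector per correspondence point, whose entropy is then optimized across the population as in the position-only case. The sketch below makes that reading concrete; the feature names and weights are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def augmented_features(points, sulcal_depth, connectivity,
                       w_pos=1.0, w_depth=1.0, w_conn=1.0):
    """Stack spatial and non-spatial features for one cortical surface.

    points       : (n_pts, 3) correspondence point positions
    sulcal_depth : (n_pts,)   geometric feature at each point
    connectivity : (n_pts, c) non-geometric feature (e.g., DTI-derived)

    Returns a single flat vector; pooling one such vector per subject
    gives the matrix on which an entropy-based objective can operate.
    """
    cols = [w_pos * points,
            w_depth * sulcal_depth[:, None],
            w_conn * connectivity]
    return np.concatenate(cols, axis=1).ravel()
```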

    Fitting Skeletal Object Models Using Spherical Harmonics Based Template Warping

    We present a scheme that propagates a reference skeletal model (s-rep) into a particular instance of an object, thereby propagating the initial shape-related layout of the skeleton-to-boundary vectors, called spokes. The scheme represents the surfaces of the template and target objects by spherical harmonics and computes a warp between them via a thin plate spline. To form the propagated s-rep, it applies the warp to the spokes of the template s-rep and then refines the result statistically. This automatic approach promises to make s-rep fitting robust for complicated objects, making s-rep-based statistics broadly available. The improvement in fitting and in statistics is significant compared with previous methods, and the improvement in statistics is also significant compared with a state-of-the-art boundary-based method.
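
    A thin plate spline warp between two corresponded boundary samplings can be fit and then applied to the template's spoke endpoints. The sketch below uses SciPy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for the warp described above; the assumption that corresponded boundary points are already available (e.g., from the spherical harmonics parameterization) is mine, and the statistical refinement step is not shown.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_spokes(template_boundary, target_boundary, spoke_tails, spoke_tips):
    """Propagate s-rep spokes from a template to a target object.

    template_boundary, target_boundary : (m, 3) corresponded surface points
    spoke_tails, spoke_tips            : (k, 3) skeletal points and the
                                          boundary ends of the spokes.

    Returns warped tails and tips; the spoke vectors of the propagated
    model are tips - tails.
    """
    warp = RBFInterpolator(template_boundary, target_boundary,
                           kernel='thin_plate_spline')
    return warp(spoke_tails), warp(spoke_tips)
```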

    A Review of Subsequence Time Series Clustering

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One useful field within the domain of subsequence time series clustering is pattern recognition, which relies on sequences of time series data. This paper reviews definitions and background related to subsequence time series clustering. The reviewed literature is categorized into three groups: the preproof, interproof, and postproof periods. Moreover, various state-of-the-art approaches to subsequence time series clustering are discussed under each of these categories. The strengths and weaknesses of the employed methods are evaluated, and potential issues for future studies are identified.
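
    As a concrete reference point for what subsequence time series clustering means operationally, the sketch below extracts fixed-length sliding windows from a single series, z-normalizes them, and clusters them with k-means; the window length, step, and cluster count are illustrative placeholders, not choices advocated by the review.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_subsequences(series, window=32, step=1, k=5, seed=0):
    """Sliding-window subsequence clustering of a 1-D time series.

    Returns the start index of each subsequence and its cluster label.
    """
    series = np.asarray(series, dtype=float)
    idx = np.arange(0, len(series) - window + 1, step)
    subs = np.stack([series[i:i + window] for i in idx])
    # z-normalize each subsequence so clusters reflect shape, not level
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (
        subs.std(axis=1, keepdims=True) + 1e-12)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(subs)
    return idx, labels
```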

    The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation

    Automatic processing of three-dimensional image data acquired with computed tomography or magnetic resonance imaging plays an increasingly important role in medicine. For example, the automatic segmentation of anatomical structures in tomographic images makes it possible to generate three-dimensional visualizations of a patient’s anatomy and thereby supports surgeons during the planning of various kinds of surgery. Because organs in medical images often exhibit low contrast to adjacent structures, and because image quality may be hampered by noise or other image acquisition artifacts, the development of segmentation algorithms that are both robust and accurate is very challenging. To increase robustness, the use of model-based algorithms is mandatory, for example algorithms that incorporate prior knowledge about an organ’s shape into the segmentation process. Recent research has shown that Statistical Shape Models are especially appropriate for robust medical image segmentation. In these models, the typical shape of an organ is learned from a set of training examples. However, Statistical Shape Models have two major disadvantages: the construction of the models is relatively difficult, and the models are often used too restrictively, so that the resulting segmentation does not delineate the organ exactly. This thesis addresses both problems. The first part of the thesis introduces new methods for establishing correspondence between training shapes, which is a necessary prerequisite for shape model learning. The developed methods include consistent parameterization algorithms for organs with spherical and genus-1 topology, as well as a nonrigid mesh registration algorithm for shapes with arbitrary topology. The second part of the thesis presents a new shape-model-based segmentation algorithm that allows for an accurate delineation of organs. In contrast to existing approaches, it is possible to integrate not only linear shape models into the algorithm but also nonlinear shape models, which allow for a more specific description of an organ’s shape variation. The proposed segmentation algorithm is evaluated in three applications to medical image data: liver and vertebra segmentation in contrast-enhanced computed tomography scans, and prostate segmentation in magnetic resonance images.
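
    Statistical Shape Models of the kind referenced above are commonly constructed by applying principal component analysis to aligned, corresponded landmark configurations, so that any modeled shape is the mean plus a weighted sum of variation modes. The sketch below shows that standard construction, assuming correspondence and alignment have already been established; it is not the thesis's probabilistic extension.

```python
import numpy as np

def build_ssm(shapes, var_kept=0.98):
    """Build a linear statistical shape model via PCA.

    shapes : (n_shapes, n_landmarks * 3) aligned, corresponded shapes.

    Returns (mean, modes, stddevs): a new shape is reconstructed as
    mean + modes @ b, with each b[i] typically limited to +/- 3 stddevs.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    # keep enough modes to explain the requested fraction of variance
    keep = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, vt[:keep].T, np.sqrt(var[:keep])
```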

    Cortical Surface Registration and Shape Analysis

    A population analysis of human cortical morphometry is critical for gaining insights into brain development and degeneration. Such an analysis allows for the investigation of sulcal and gyral folding patterns. In general, it requires both a well-established cortical correspondence and a well-defined quantification of cortical morphometry. The highly folded and convoluted cortical structures make a reliable and consistent population analysis challenging. Three key challenges have been identified for such an analysis: 1) consistent sulcal landmark extraction from the cortical surface to guide better cortical correspondence, 2) correspondence establishment for a reliable and stable population analysis, and 3) quantification of cortical folding in a more reliable and biologically meaningful fashion. The main focus of this dissertation is to develop a fully automatic pipeline that supports a population analysis of local cortical folding changes. My proposed pipeline consists of three novel components I developed to overcome these challenges: 1) automatic sulcal curve extraction for stable and reliable anatomical landmark selection, 2) group-wise registration for establishing cortical shape correspondence across a population with no template selection bias, and 3) quantification of local cortical folding using a novel cortical-shape-adaptive kernel. To evaluate my methodological contributions, I applied all of them in an application to early postnatal brain development. I studied human cortical morphological development using the proposed quantification of local cortical folding from the neonatal stage to 1 year and 2 years of age, with quantitative developmental assessments. This study revealed a novel pattern of associations between cortical gyrification and cognitive development.
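
    The dissertation's cortical-shape-adaptive kernel is not reproduced here, but the general idea of quantifying local folding by averaging a per-vertex measure over a surface-following (geodesic) neighborhood rather than a Euclidean one can be sketched as follows. The curvature input, the graph-shortest-path approximation of geodesic distance, and the kernel width are all placeholder assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def local_folding(vertices, edges, curvature, sigma=10.0):
    """Smooth a per-vertex folding measure with a geodesic Gaussian kernel.

    vertices  : (n, 3) mesh vertex positions
    edges     : (m, 2) vertex index pairs of mesh edges
    curvature : (n,)   per-vertex folding measure (e.g., |mean curvature|)
    sigma     : kernel width in mesh units (placeholder value)
    """
    w = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    graph = csr_matrix(
        (np.r_[w, w],
         (np.r_[edges[:, 0], edges[:, 1]], np.r_[edges[:, 1], edges[:, 0]])),
        shape=(n, n))
    # Approximate geodesic distance as shortest paths along mesh edges.
    # Dense (n, n) output: fine for a sketch, not for a full cortical mesh.
    dist = dijkstra(graph, directed=False, limit=3 * sigma)
    kern = np.exp(-0.5 * (dist / sigma) ** 2)
    kern[~np.isfinite(dist)] = 0.0  # vertices beyond the cutoff get no weight
    return (kern * curvature[None, :]).sum(axis=1) / kern.sum(axis=1)
```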

    Efficient Point-Cloud Processing with Primitive Shapes

    This thesis presents methods for the efficient processing of point clouds based on primitive shapes. The set of considered simple parametric shapes consists of planes, spheres, cylinders, cones, and tori. The algorithms developed in this work are targeted at scenarios in which the occurring surfaces can be well represented by this set of shape primitives, which is the case in many man-made environments such as industrial compounds, cities, or building interiors. A primitive subsumes a set of corresponding points in the point cloud and serves as a proxy for them; primitives are therefore well suited to directly addressing the unavoidable oversampling of large point clouds, and they lay the foundation for efficient point-cloud processing algorithms. The first contribution of this thesis is a novel shape primitive detection method that is efficient even on very large and noisy point clouds. Several applications of the detected primitives are subsequently explored, resulting in a set of novel algorithms for primitive-based point-cloud processing in the areas of compression, recognition, and completion. Each of these applications directly exploits and benefits from one or more of the detected primitives' properties, such as approximation, abstraction, segmentation, and continuability.
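
    Primitive detection in this setting is typically RANSAC-style: repeatedly sample a minimal point set, fit a candidate primitive, and keep the candidate supported by the most inliers. The plane-only sketch below illustrates that pattern; the thesis's detector covers all five primitive types and exploits locality and point normals for efficiency, which this sketch does not.

```python
import numpy as np

def detect_plane_ransac(points, n_iter=500, tol=0.01, rng=None):
    """Detect the dominant plane in a point cloud with basic RANSAC.

    points : (n, 3) array. Returns (unit normal, offset d, inlier mask)
    for the plane n . x = d supported by the most points within tol.
    """
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ p0
        inliers = np.abs(points @ normal - d) < tol
        if inliers.sum() > best[2].sum():
            best = (normal, d, inliers)
    return best
```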

    Modelling discrepancy in Bayesian calibration of reservoir models

    Simulation models of physical systems such as oil field reservoirs are subject to numerous uncertainties, such as observation errors and inaccurate initial and boundary conditions. However, even after accounting for these uncertainties, a mismatch between the simulator output and the observations usually remains, and the model is still inadequate. This inability of computer models to reproduce real-life processes is referred to as model inadequacy. This thesis presents a comprehensive framework for modelling discrepancy in the Bayesian calibration and probabilistic forecasting of reservoir models. The framework implements data-driven approaches to handle the uncertainty caused by ignoring the modelling discrepancy in reservoir predictions, using two major hierarchical strategies: parametric and non-parametric hierarchical models. The central focus of this thesis is on an appropriate way of modelling discrepancy and on the importance of model selection in controlling overfitting, rather than on different solutions for different noise models. The thesis employs a model selection code to obtain the best candidate solutions for the form of the non-parametric error models. This enables us to, first, interpolate the error in the history period and, second, propagate it to unseen data (i.e. error generalisation). The error models, constructed by inferring the parameters of the selected models, can predict the response variable (e.g. oil rate) at any point in the input space (e.g. time) with a corresponding generalisation uncertainty. In the real field applications, the error models reliably track the uncertainty regardless of the sampling method and achieve a better model prediction score than models that ignore the discrepancy. All the case studies confirm improved prediction of the field variables when the discrepancy is modelled. As for the model parameters, the hierarchical error models exhibit less global bias with respect to the reference case; however, in the considered case studies, the evidence for better prediction of each individual model parameter through error modelling is inconclusive.
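
    The discrepancy being modelled here is conventionally written in the Kennedy-and-O'Hagan style, with observations decomposed into simulator output, a structured model-inadequacy term, and observation noise. The Gaussian-process prior shown for the inadequacy term is one common non-parametric choice and stands in for, rather than reproduces, the thesis's parametric and non-parametric hierarchical models.

```latex
% Observation y at input x (e.g. time); simulator \eta with calibration
% parameters \theta; \delta is the model-inadequacy (discrepancy) term.
y(x) = \eta(x, \theta) + \delta(x) + \epsilon,
\qquad \epsilon \sim \mathcal{N}(0, \sigma^2),
\qquad \delta(\cdot) \sim \mathcal{GP}\big(m(x),\, k(x, x')\big)
```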