6 research outputs found

    Gaussian Process Morphable Models

    Get PDF
    Statistical shape models (SSMs) represent a class of shapes as a normal distribution of point variations, whose parameters are estimated from example shapes. Principal component analysis (PCA) is applied to obtain a low-dimensional representation of the shape variation in terms of the leading principal components. In this paper, we propose a generalization of SSMs, called Gaussian Process Morphable Models (GPMMs). We model the shape variations with a Gaussian process, which we represent using the leading components of its Karhunen-Loeve expansion. To compute the expansion, we make use of an approximation scheme based on the Nystrom method. The resulting model can be seen as a continuous analogue of an SSM. However, while for SSMs the shape variation is restricted to the span of the example data, with GPMMs we can define the shape variation using any Gaussian process. For example, we can build shape models that correspond to classical spline models, and thus do not require any example data. Furthermore, Gaussian processes make it possible to combine different models. For example, an SSM can be extended with a spline model, to obtain a model that incorporates learned shape characteristics but is flexible enough to explain shapes that cannot be represented by the SSM. We introduce a simple algorithm for fitting a GPMM to a surface or image. This results in a non-rigid registration approach, whose regularization properties are defined by a GPMM. We show how we can obtain different registration schemes, including methods for multi-scale, spatially-varying or hybrid registration, by constructing an appropriate GPMM. As our approach strictly separates modelling from the fitting process, this is all achieved without changes to the fitting algorithm. We show the applicability and versatility of GPMMs on a clinical use case, where the goal is the model-based segmentation of 3D forearm images.
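A minimal sketch of the kernel-combination idea described in the abstract (toy data, not the paper's implementation): an empirical covariance built from a few example deformations is summed with a smooth squared-exponential kernel, and a low-rank Karhunen-Loeve representation is taken from the leading eigenpairs. All sizes, the toy training data, and the kernel parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D "shape" domain: n sample points on a line.
n = 50
x = np.linspace(0.0, 1.0, n)

# Statistical kernel: sample covariance of a few example deformations
# (random toy curves stand in for registered training shapes here).
examples = rng.normal(size=(5, n)) * np.sin(np.pi * x)
K_ssm = np.cov(examples, rowvar=False)

# Smooth "spline-like" kernel: squared-exponential with bandwidth sigma.
sigma, scale = 0.1, 0.05
K_rbf = scale * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))

# GPMMs allow combining models simply by summing their kernels.
K = K_ssm + K_rbf

# Low-rank Karhunen-Loeve representation: keep the r leading eigenpairs.
r = 10
w, V = np.linalg.eigh(K)              # ascending eigenvalues
w, V = w[::-1][:r], V[:, ::-1][:, :r]  # reorder to descending, truncate

# Sample a deformation: u = sum_i alpha_i * sqrt(w_i) * v_i, alpha_i ~ N(0,1).
alpha = rng.normal(size=r)
deformation = V @ (np.sqrt(np.clip(w, 0.0, None)) * alpha)
print(deformation.shape)  # (50,)
```

The sum kernel is exactly the "SSM extended with a spline model" construction: samples stay close to the learned variation but can deviate smoothly beyond its span.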

    Predefined pattern detection in large time series

    Get PDF
    Predefined pattern detection from time series is an interesting and challenging task. In order to reduce its computational cost and increase effectiveness, a number of time series representation methods and similarity measures have been proposed. Most of the existing methods focus on full sequence matching, that is, sequences with clearly defined beginnings and endings, where all data points contribute to the match. These methods, however, do not account for temporal and magnitude deformations in the data and prove ineffective in several real-world scenarios where noise and external phenomena introduce diversity in the class of patterns to be matched. In this paper, we present a novel pattern detection method, which is based on the notions of templates, landmarks, constraints and trust regions. We employ the Minimum Description Length (MDL) principle for the time series preprocessing step, which helps to preserve all the prominent features and prevents the template from overfitting. Templates are provided by common users or domain experts, and represent interesting patterns we want to detect from time series. Instead of utilising templates to match all the potential subsequences in the time series, we translate the time series and templates into landmark sequences, and detect patterns from the landmark sequence of the time series. By defining constraints within the template landmark sequence, we effectively extract all the landmark subsequences from the time series landmark sequence, and obtain a number of landmark segments (time series subsequences or instances). We model each landmark segment by scaling the template in both the temporal and magnitude dimensions. To suppress the influence of noise, we introduce the concept of trust region, which not only helps to achieve an improved instance model, but also helps to catch the accurate boundaries of instances of the given template. Based on the similarities derived from instance models, we introduce the probability density function to calculate a similarity threshold. The threshold can be used to judge whether a landmark segment is a true instance of the given template or not. To evaluate the effectiveness and efficiency of the proposed method, we apply it to two real-world datasets. The results show that our method is capable of detecting patterns under temporal and magnitude deformations with competitive performance.
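As a rough illustration of translating a series into a landmark sequence (landmarks are assumed here to be local extrema; the paper's actual landmark definition, constraints and trust regions are richer than this sketch):

```python
import numpy as np

def landmarks(series, min_prominence=0.0):
    """Return (index, value) landmarks: strict local extrema of a 1D series.

    min_prominence is a toy noise filter: an extremum is kept only if it
    deviates from the mean of its neighbours by at least this amount.
    """
    pts = []
    for i in range(1, len(series) - 1):
        left, mid, right = series[i - 1], series[i], series[i + 1]
        is_max = mid > left and mid > right
        is_min = mid < left and mid < right
        if (is_max or is_min) and abs(mid - (left + right) / 2) >= min_prominence:
            pts.append((i, mid))
    return pts

# Toy input: two periods of a sine wave -> 4 extrema (2 peaks, 2 troughs).
t = np.linspace(0, 4 * np.pi, 200)
lm = landmarks(np.sin(t))
print(len(lm))  # 4
```

Matching a template then operates on this much shorter landmark sequence instead of on every raw subsequence, which is the source of the method's efficiency.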

    3D approximation of scapula bone shape from 2D X-ray images using landmark-constrained statistical shape model fitting

    Get PDF
    Two-dimensional X-ray imaging is the dominant imaging modality in low-resource countries despite the existence of three-dimensional (3D) imaging modalities. This is because few hospitals in low-resource countries can afford 3D imaging systems, as their acquisition and operating costs are higher. However, 3D images are desirable in a range of clinical applications, for example surgical planning. The aim of this research was to develop a tool for 3D approximation of the scapula bone from 2D X-ray images using landmark-constrained statistical shape model fitting. First, X-ray stereophotogrammetry was used to reconstruct the 3D coordinates of points located on 2D X-ray images of the scapula, acquired from two perspectives. A suitable calibration frame was used to map the image coordinates to their corresponding 3D real-world coordinates. The 3D point localization yielded average errors of (0.14, 0.07, 0.04) mm in the X, Y and Z coordinates respectively, and an absolute reconstruction error of 0.19 mm. The second phase assessed the reproducibility of the scapula landmarks reported by Ohl et al. (2010) and Borotikar et al. (2015). Only three (the inferior angle, acromion and the coracoid process) of the eight reproducible landmarks considered were selected, as these were identifiable from the two different perspectives required for X-ray stereophotogrammetry in this project. For the last phase, an approximation of a scapula was produced with the aid of a statistical shape model (SSM) built from a training dataset of 84 CT scapulae. This involved constraining an SSM to the 3D reconstructed coordinates of the selected reproducible landmarks from 2D X-ray images. Comparison of the approximate model with a CT-derived ground truth 3D segmented volume resulted in surface-to-surface average distances of 4.28 mm and 3.20 mm, using three and sixteen landmarks respectively. Hence, increasing the number of landmarks produces a posterior model that makes better predictions of patient-specific reconstructions. An average Euclidean distance of 1.35 mm was obtained between the three selected landmarks on the approximation and the corresponding landmarks on the CT image. Conversely, a Euclidean distance of 5.99 mm was obtained between the three selected landmarks on the original SSM and the corresponding landmarks on the CT image. The Euclidean distances confirm that a posterior model moves closer to the CT image, hence it reduces the search space for a more exact patient-specific 3D reconstruction by other fitting algorithms.
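A hedged sketch of what constraining a shape model to a few landmarks can look like: a toy PCA-style model is fitted to simulated landmark observations by ridge-regularised least squares on the mode coefficients. The model, landmark indices and regularisation weight are all invented for illustration; the thesis itself constrains an SSM trained on 84 CT scapulae.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shape model: n 3D points flattened to length 3n, with k modes.
n, k = 100, 8
mu = rng.normal(size=3 * n)                         # mean shape (toy)
P = np.linalg.qr(rng.normal(size=(3 * n, k)))[0]    # orthonormal modes (toy)
sigma = np.linspace(5.0, 0.5, k)                    # per-mode std. deviations

# Landmark constraint: the x,y,z rows of 3 chosen model points.
lm_idx = np.concatenate([[3 * j, 3 * j + 1, 3 * j + 2] for j in (10, 40, 70)])
target = mu[lm_idx] + (P[lm_idx] * sigma) @ rng.normal(size=k)  # simulated obs.

# Fit mode coefficients b to the landmarks:
# minimise ||A b - r||^2 + lam * ||b||^2  (ridge regularisation).
A = P[lm_idx] * sigma
r = target - mu[lm_idx]
lam = 1e-2
b = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ r)

# Full-shape prediction implied by the landmark fit.
reconstruction = mu + (P * sigma) @ b
err = np.linalg.norm(reconstruction[lm_idx] - target)
```

The fitted coefficients pull the whole surface, not just the constrained points, towards the observations, which is why the posterior model shrinks the search space for subsequent fitting algorithms.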

    Structural health monitoring meets data mining

    Get PDF
    With the development of sensing and data processing techniques, monitoring physical systems in the field with a sensor network is becoming a feasible option for many domains. Such monitoring systems are referred to as Structural Health Monitoring (SHM) systems. By definition, SHM is the process of implementing a damage detection and characterisation strategy for engineering structures, which involves data collection, damage-sensitive feature extraction and statistical analysis. Most of the SHM process can be addressed by techniques from the Data Mining domain, so I conduct this research by combining these two fields. The monitoring system employed in this research is a sensor network installed on a Dutch highway bridge, which aims to monitor dynamic health aspects of the bridge and its long-term degradation. I have explored the specific focus of each sensor type at multiple scales, and analysed the dependencies between sensor types. Based on landmarks and constraints, I have proposed a novel predefined pattern detection method to select traffic events for modal analysis. I have analysed the influence of temperature and traffic mass on natural frequencies, and verified that natural frequencies decrease as temperature increases, but the influence of traffic mass is weaker than that of temperature. Chinese CSC; Dutch STW; Algorithms and the Foundations of Software Technology.
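The reported temperature effect can be illustrated with a toy regression (synthetic numbers, not the bridge data): fitting a line to frequency-versus-temperature samples recovers the negative slope that the thesis verifies on real measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the monitoring data (all values assumed):
# a natural frequency that drifts down slightly as temperature rises.
temp = rng.uniform(-5.0, 30.0, size=200)                        # deg C
freq = 2.45 - 0.003 * temp + rng.normal(scale=0.01, size=200)   # Hz

# Ordinary least squares: freq ~ slope * temp + intercept.
slope, intercept = np.polyfit(temp, freq, 1)
print(slope < 0)  # True: frequency decreases as temperature increases
```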

    Using landmarks as a deformation prior for hybrid image registration

    No full text