
    Gaussian Process Morphable Models

    Statistical shape models (SSMs) represent a class of shapes as a normal distribution of point variations, whose parameters are estimated from example shapes. Principal component analysis (PCA) is applied to obtain a low-dimensional representation of the shape variation in terms of the leading principal components. In this paper, we propose a generalization of SSMs, called Gaussian Process Morphable Models (GPMMs). We model the shape variations with a Gaussian process, which we represent using the leading components of its Karhunen-Loève expansion. To compute the expansion, we make use of an approximation scheme based on the Nyström method. The resulting model can be seen as a continuous analogue of an SSM. However, while for SSMs the shape variation is restricted to the span of the example data, with GPMMs we can define the shape variation using any Gaussian process. For example, we can build shape models that correspond to classical spline models, and thus do not require any example data. Furthermore, Gaussian processes make it possible to combine different models. For example, an SSM can be extended with a spline model, to obtain a model that incorporates learned shape characteristics, but is flexible enough to explain shapes that cannot be represented by the SSM. We introduce a simple algorithm for fitting a GPMM to a surface or image. This results in a non-rigid registration approach, whose regularization properties are defined by a GPMM. We show how we can obtain different registration schemes, including methods for multi-scale, spatially-varying or hybrid registration, by constructing an appropriate GPMM. As our approach strictly separates modelling from the fitting process, this is all achieved without changes to the fitting algorithm. We show the applicability and versatility of GPMMs on a clinical use case, where the goal is the model-based segmentation of 3D forearm images.
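    The Nyström step described in the abstract can be illustrated with a minimal numpy sketch: eigendecompose the kernel on a random subset of points and extend the eigenvectors to all points, yielding an approximate Karhunen-Loève basis. The squared-exponential kernel, subset size and function names below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        # Squared-exponential kernel between point sets x (n,d) and y (m,d).
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def nystrom_kl_expansion(points, n_components, sigma=1.0, rng=None):
        """Approximate the leading Karhunen-Loeve basis of a GP with the
        Nystrom method: eigendecompose the kernel matrix on a subset of
        points, then extend the eigenvectors to every point."""
        rng = np.random.default_rng(rng)
        m = min(5 * n_components, len(points))
        idx = rng.choice(len(points), size=m, replace=False)
        K_mm = gaussian_kernel(points[idx], points[idx], sigma)
        evals, evecs = np.linalg.eigh(K_mm)
        order = np.argsort(evals)[::-1][:n_components]  # largest first
        evals, evecs = evals[order], evecs[:, order]
        K_nm = gaussian_kernel(points, points[idx], sigma)
        # Nystrom extension of the eigenfunctions to all n points.
        basis = K_nm @ evecs / evals
        return evals * len(points) / m, basis
    ```

    A GPMM shape sample is then the mean shape plus a weighted sum of these basis columns, with weights drawn from independent standard normals scaled by the square roots of the eigenvalues.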

    Intensity-based Choroidal Registration Using Regularized Block Matching

    Detecting and monitoring changes in the human choroid play a crucial role in treating ocular diseases such as myopia. However, reliable segmentation of optical coherence tomography (OCT) images at the choroid-sclera interface (CSI) is notoriously difficult due to poor contrast, signal loss and OCT artefacts. In this paper we present blockwise registration of successive scans to improve stability even during complete loss of the CSI signal. First, we formulated the problem as the minimization of a regularized energy functional. Then, we tested our automated method for piecewise intensity-based choroidal rigid registration using regularized block matching (ICR) on 20 OCT 3D-volume scan-rescan data set pairs. Finally, we used these data set pairs to determine the precision of our method, while the accuracy was determined by comparing our results with those from manually annotated scans.
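    The core idea of a regularized block-matching energy can be sketched in one dimension: each block gets a candidate shift, the data term is the block's SSD under that shift, and a quadratic penalty couples neighbouring blocks. The dynamic-programming solver, parameter names and 1D setting below are my own illustrative framing, not the paper's formulation.

    ```python
    import numpy as np

    def regularized_block_shifts(scan, rescan, block=16, max_shift=5, lam=0.5):
        """Estimate a per-block shift between two 1D profiles by exactly
        minimizing  sum_b SSD(b, s_b) + lam * sum_b (s_b - s_{b-1})^2
        with dynamic programming over the candidate shifts."""
        shifts = np.arange(-max_shift, max_shift + 1)
        n_blocks = (len(scan) - 2 * max_shift) // block
        # Data term: SSD of each block under each candidate shift.
        data = np.empty((n_blocks, len(shifts)))
        for b in range(n_blocks):
            lo = max_shift + b * block
            ref = scan[lo:lo + block]
            for j, s in enumerate(shifts):
                data[b, j] = np.sum((ref - rescan[lo + s:lo + s + block]) ** 2)
        # Pairwise regularizer between consecutive blocks.
        pair = lam * (shifts[:, None] - shifts[None, :]) ** 2
        cost = data[0].copy()
        back = np.zeros((n_blocks, len(shifts)), dtype=int)
        for b in range(1, n_blocks):
            total = cost[:, None] + pair          # previous shift -> current shift
            back[b] = np.argmin(total, axis=0)
            cost = total[back[b], np.arange(len(shifts))] + data[b]
        # Backtrack the globally optimal shift sequence.
        out = [int(np.argmin(cost))]
        for b in range(n_blocks - 1, 0, -1):
            out.append(int(back[b][out[-1]]))
        return shifts[np.array(out[::-1])]
    ```

    The regularizer is what keeps the estimate stable when a block's data term is uninformative, e.g. during signal loss: that block then simply inherits the shift of its neighbours.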

    Variational Image Registration Using Inhomogeneous Regularization

    We present a generalization of the convolution-based variational image registration approach, in which different regularizers can be implemented by conveniently exchanging the convolution kernel, even if it is nonseparable or nonstationary. Nonseparable kernels pose a challenge because they cannot be efficiently implemented by separate 1D convolutions. We propose to use a low-rank tensor decomposition to efficiently approximate nonseparable convolution. Nonstationary kernels pose an even greater challenge because the convolution kernel depends on, and needs to be evaluated for, every point in the image. We propose to pre-compute the local kernels and efficiently store them in memory using the Tucker tensor decomposition model. In our experiments we use the nonseparable exponential kernel and a nonstationary landmark kernel. The exponential kernel replicates desirable properties of elastic image registration, while the landmark kernel incorporates local prior knowledge about corresponding points in the images. We examine the trade-off between the computational resources needed and the approximation accuracy of the tensor decomposition methods. Furthermore, we obtain very smooth displacement fields even in the presence of large landmark displacements.
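    The nonseparable-kernel idea can be sketched in 2D, where the low-rank tensor decomposition reduces to an SVD: a nonseparable kernel is approximated as a sum of a few outer products, and each term is applied as two cheap 1D convolutions. This is a minimal stand-in for the paper's tensor machinery (which uses the Tucker model for the nonstationary case); the radial exponential kernel and function names are illustrative.

    ```python
    import numpy as np

    def separable_terms(kernel2d, rank):
        """SVD-based low-rank approximation of a 2D kernel as `rank`
        separable (outer-product) terms."""
        u, s, vt = np.linalg.svd(kernel2d)
        return [(np.sqrt(s[i]) * u[:, i], np.sqrt(s[i]) * vt[i])
                for i in range(rank)]

    def conv_lowrank(image, terms):
        # Apply each separable term as a column convolution followed by
        # a row convolution, and sum the contributions.
        out = np.zeros(image.shape, dtype=float)
        for col, row in terms:
            tmp = np.apply_along_axis(np.convolve, 0, image, col, mode="same")
            out += np.apply_along_axis(np.convolve, 1, tmp, row, mode="same")
        return out
    ```

    For an n-by-n image and an m-by-m kernel this replaces the O(n^2 m^2) dense convolution by O(rank * n^2 m), which is the trade-off between resources and approximation accuracy the abstract refers to.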

    Inter-fractional Respiratory Motion Modelling from Abdominal Ultrasound: A Feasibility Study

    Motion management strategies are crucial for radiotherapy of mobile tumours in order to ensure proper target coverage, spare organs at risk and prevent interplay effects. We present a feasibility study for an inter-fractional, patient-specific motion model targeted at active beam scanning proton therapy. The model is designed to predict dense lung motion information from 2D abdominal ultrasound images. In a pretreatment phase, simultaneous ultrasound and magnetic resonance imaging are used to build a regression model. During dose delivery, abdominal ultrasound imaging serves as a surrogate for lung motion prediction. We investigated the performance of the motion model on five volunteer datasets. In two cases, the ultrasound probe was replaced after the volunteer had stood up between the two imaging sessions. The overall mean prediction error is 2.9 mm, and 3.4 mm after repositioning, and is therefore within a clinically acceptable range. These results suggest that the ultrasound-based regression model is a promising approach for inter-fractional motion management in radiotherapy.
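    The pretreatment regression step described above, mapping ultrasound surrogate features to dense motion, can be sketched as a ridge regression on vectorized motion fields. The linear model, feature representation and names below are assumptions for illustration; the paper's actual regression method may differ.

    ```python
    import numpy as np

    def fit_surrogate_model(us_features, motion_fields, alpha=1e-2):
        """Ridge regression from ultrasound surrogate features (n, p)
        to vectorized lung motion fields (n, q), fitted on the
        simultaneous ultrasound/MRI pretreatment acquisitions."""
        X = np.hstack([us_features, np.ones((len(us_features), 1))])  # bias column
        W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                            X.T @ motion_fields)
        return W

    def predict_motion(us_features, W):
        # At delivery time, only the ultrasound surrogate is observed.
        X = np.hstack([us_features, np.ones((len(us_features), 1))])
        return X @ W
    ```

    Reshaping each predicted row back to the motion-field grid yields the dense lung motion estimate for the corresponding ultrasound frame.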

    Dose-compatible grating-based phase-contrast mammography on mastectomy specimens using a compact synchrotron source

    With the introduction of screening mammography, the mortality rate of breast cancer has been reduced throughout the last decades. However, many women undergo unnecessary subsequent examinations due to inconclusive diagnoses from mammography. Two pathways appear especially promising to reduce the number of false-positive diagnoses. In a clinical study, mammography using synchrotron radiation was able to clarify the diagnosis in the majority of inconclusive cases. The second highly valued approach focuses on the application of phase-sensitive techniques such as grating-based phase-contrast and dark-field imaging. Feasibility studies have demonstrated a promising enhancement of diagnostic content, but suffer from dose concerns. Here we present dose-compatible grating-based phase-contrast and dark-field images as well as conventional absorption images acquired with monochromatic X-rays from a compact synchrotron source based on inverse Compton scattering. Images of freshly dissected mastectomy specimens show improved diagnostic content over ex-vivo clinical mammography images at lower or equal dose. We demonstrate increased contrast-to-noise ratio for monochromatic over clinical images for a well-defined phantom. Compact synchrotron sources could potentially serve as a clinical second-level examination.

    Object segmentation by fitting statistical shape models: a kernel-based approach with application to wisdom tooth segmentation from CBCT images

    Image segmentation is an important and challenging task in medical image analysis. Especially in low-quality images, segmentation algorithms have to cope with misleading background clutter, insufficient object boundaries and noise in the image. Statistical shape models are a powerful tool to tackle these problems. However, their construction as well as their application for segmentation remain challenging. In this thesis, we focus on the wisdom-tooth shape and its segmentation from Cone Beam Computed Tomography images. The large shape variation leads to difficult registration problems and an often too restrictive shape model, while the challenging appearance of the wisdom tooth makes the model fitting difficult. To tackle these problems, we build on kernel-based approaches to registration and shape modeling. We introduce a kernel which considers landmarks as an additional prior in image registration. This allows us to locally improve the registration accuracy. We present a Demons-like registration method with an inhomogeneous regularization which allows us to apply such a landmark kernel. For modeling the shape variation, we construct a kernel comprising a generic smoothness and an empirical sample covariance. With this combined kernel, we increase the flexibility of the statistical shape model. We make use of a reproducing kernel Hilbert space framework for registration, where we apply this combined kernel as reproducing kernel. To make the approach computationally feasible, we perform a low-rank approximation of the specific kernel function. Because of the heterogeneous appearance inside the wisdom tooth, fitting the statistical model to plain intensity images is difficult. We build a nonparametric appearance model, based on random forest regression, which abstracts the raw images to semantic probability maps. Hence, the misleading structures become semantic values, which greatly simplifies the shape model fitting.
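    The combined kernel mentioned above, a generic smoothness term plus an empirical sample covariance, can be sketched for a set of 1D landmark displacements. The squared-exponential choice for the smoothness term, the mixing weight and the names are illustrative assumptions, not the thesis's exact construction.

    ```python
    import numpy as np

    def combined_kernel(shapes, positions, sigma=1.0, w=0.5):
        """Mix a generic squared-exponential smoothness kernel over the
        landmark positions with the empirical sample covariance of the
        training shapes (shapes: (n_shapes, n_points))."""
        mean = shapes.mean(axis=0)
        emp = np.cov(shapes, rowvar=False)               # learned statistics
        d2 = (positions[:, None] - positions[None, :]) ** 2
        smooth = np.exp(-d2 / (2 * sigma ** 2))          # generic smoothness
        # Convex combination: w=1 is a pure spline-like model,
        # w=0 is a pure statistical shape model.
        return mean, w * smooth + (1 - w) * emp
    ```

    Because both summands are positive semi-definite, the combination is again a valid covariance, so the resulting model stays a proper Gaussian process while gaining flexibility beyond the span of the training shapes.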
