
    Distance metric learning for medical image registration

    Medical image registration has received considerable attention in medical imaging and computer vision because of the large variety of ways in which it can impact patient care. Over the years, many algorithms have been proposed for medical image registration. Medical imaging uses techniques to create images of parts of the human body for clinical purposes, and registration brings such images into spatial alignment. This thesis focuses on one small subset of registration algorithms: using machine learning techniques to train the similarity measure for use in medical image registration. This thesis is organized in the following manner. In Chapter 1 we introduce the idea of image registration, describe some applications in medical imaging, and mathematically formulate the three main components of any registration problem: geometric transformation, similarity measure, and optimization procedure. Finally, we describe how the ideas in this thesis fit into the field of medical image registration, and we describe some related work. In Chapter 2 we introduce the concept of machine learning and provide examples to illustrate machine learning algorithms. We then describe the k-nearest neighbors (kNN) algorithm and the relationship between Euclidean and Mahalanobis distance. Next we introduce distance metric learning and present two approaches for learning the Mahalanobis distance. Finally, we provide a description and visual comparison of two algorithms for distance metric learning. In Chapter 3 we describe how distance metric learning can be applied to the problem of medical image registration. Our goal is to learn the optimal similarity measure given a training dataset of correctly registered images. To assess the performance of the two distance metric learning algorithms, we test them using images from a series of patients. Moreover, we illustrate the sensitivity of one of the learning algorithms by examining the variability of the resulting target registration errors. We then present our experimental results of registering CT and MR images. Finally, in Chapter 4 we suggest some ideas for future work to improve our registration results and to speed up the algorithms.
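
    The relationship between the Euclidean and the Mahalanobis distance mentioned above can be made concrete with a short sketch. The snippet below is purely illustrative and not taken from the thesis: it assumes a learned positive semi-definite matrix M (here just a hand-picked diagonal placeholder) and shows that choosing the identity for M recovers the ordinary Euclidean distance.

```python
import numpy as np

def euclidean_distance(x, y):
    """Standard Euclidean distance between two feature vectors."""
    d = x - y
    return np.sqrt(d @ d)

def mahalanobis_distance(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)).

    M is assumed to be a positive semi-definite matrix produced by a
    distance metric learning algorithm; M = I recovers the Euclidean case.
    """
    d = x - y
    return np.sqrt(d @ M @ d)

# Toy 3-dimensional feature vectors and a hand-picked placeholder "learned" metric.
x = np.array([1.0, 2.0, 0.5])
y = np.array([0.5, 1.0, 1.5])
M = np.diag([2.0, 1.0, 0.5])

print(euclidean_distance(x, y))       # equals mahalanobis_distance(x, y, np.eye(3))
print(mahalanobis_distance(x, y, M))
```

    In a kNN-based scheme, such a learned distance would simply replace the Euclidean distance when ranking candidate patches.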

    Hessian-based Similarity Metric for Multimodal Medical Image Registration

    One of the fundamental elements of both traditional and certain deep learning medical image registration algorithms is measuring the similarity/dissimilarity between two images. In this work, we propose an analytical solution for measuring similarity between two different medical image modalities based on the Hessian of their intensities. First, assuming a functional dependence between the intensities of two perfectly corresponding patches, we investigate how their Hessians relate to each other. Second, we suggest a closed-form expression to quantify the deviation from this relationship, given arbitrary pairs of image patches. We propose a geometrical interpretation of the new similarity metric and an efficient implementation for registration. We demonstrate the robustness of the metric to intensity nonuniformities using synthetic bias fields. By integrating the new metric into an affine registration framework, we evaluate its performance for MRI and ultrasound registration in the context of image-guided neurosurgery using target registration error and computation time.
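
    The paper's closed-form expression is not reproduced here, but its ingredients can be sketched. The hypothetical example below computes per-pixel intensity Hessians of two patches by finite differences and compares the orientations of their dominant eigenvectors; the actual deviation measure derived from the functional-dependence assumption in the paper would replace this simple orientation score.

```python
import numpy as np

def hessian_2d(img):
    """Per-pixel 2x2 intensity Hessian of a 2D patch via finite differences."""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    mixed = 0.5 * (gxy + gyx)                      # symmetrize mixed derivatives
    return np.stack([np.stack([gxx, mixed], axis=-1),
                     np.stack([mixed, gyy], axis=-1)], axis=-2)   # shape (H, W, 2, 2)

def hessian_orientation_similarity(patch_a, patch_b):
    """Illustrative (not the paper's closed form) score: mean absolute cosine
    between the dominant Hessian eigenvectors of the two patches."""
    Ha, Hb = hessian_2d(patch_a), hessian_2d(patch_b)
    _, va = np.linalg.eigh(Ha)                     # eigenvalues ascending
    _, vb = np.linalg.eigh(Hb)
    ea, eb = va[..., :, -1], vb[..., :, -1]        # dominant eigenvectors
    cos = np.abs(np.sum(ea * eb, axis=-1))         # |cos| ignores sign/contrast flips
    return float(cos.mean())

# Example with two random 32x32 patches; real use would take corresponding patches.
print(hessian_orientation_similarity(np.random.rand(32, 32), np.random.rand(32, 32)))
```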

    Methods for Three-Dimensional Registration of Multimodal Abdominal Image Data

    Multimodal image registration benefits diagnosis, treatment planning, and the performance of image-guided procedures in the liver, since it enables the fusion of complementary information provided by pre- and intra-interventional data about tumor localization and access. Although various registration methods exist, approaches that are specifically optimized for multimodal abdominal scans are scarce. The work presented in this thesis aims to tackle this problem by focusing on the development, optimization, and evaluation of registration methods specifically for multimodal liver scans. The contributions to the field of medical image registration include a registration evaluation methodology that enables the comparison and optimization of linear and non-linear registration algorithms using a point-based accuracy measure. This methodology has been used to benchmark standard registration methods as well as novel approaches developed within the scope of this thesis. The results showed that the similarity measure employed during registration has a major impact on registration accuracy. Because of this influence, two alternative similarity metrics with the potential to handle multimodal image data are proposed and evaluated. The first metric relies on gradient information in the form of Histograms of Oriented Gradients (HOG), whereas the second employs a Siamese neural network to learn a similarity measure directly from the image data. The evaluation showed that both metrics can compete with state-of-the-art similarity measures in terms of registration accuracy. The HOG metric has the advantage that it does not require ground-truth data to estimate similarity; it is applicable to various data sets with the sole requirement of distinct gradients. The Siamese metric, however, is more robust to large rotations than the HOG metric. Training such a network requires registered ground-truth data, which can be difficult to obtain for multimodal images. Yet the results show that models trained on registered synthetic data can be applied to real patient data. The last part of this thesis focuses on methods that learn an entire registration process using neural networks, which offers the advantage of replacing the traditional, time-consuming iterative registration procedure. Within this thesis, the VoxelMorph network, originally proposed for monomodal, non-linear registration learning, is extended to affine and multimodal registration learning tasks. This extension includes the consideration of an image mask during metric evaluation as well as loss functions for multimodal data, such as the pretrained Siamese metric and a loss based on the comparison of deformation fields. Using the developed registration evaluation methodology, the performance of the original network and the extended variants is evaluated for monomodal and multimodal registration tasks on multiple data sets. With the extended network variants, it is possible to learn an entire multimodal registration process that corrects large image displacements. As with the Siamese metric, the results imply a general transferability of models trained on synthetic data to registration tasks involving real patient data. Given the lack of multimodal ground-truth data, this transfer represents an important step towards making deep-learning-based registration procedures clinically usable.
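
    As an illustration of the first of the two proposed metrics, the sketch below compares HOG descriptors of a fixed and a moving patch with a cosine similarity. The descriptor parameters and the cosine comparison are assumptions made for this example, not the exact formulation used in the thesis; the point is that a gradient-based descriptor makes the score largely independent of modality-specific absolute intensities.

```python
import numpy as np
from skimage.feature import hog   # scikit-image

def hog_descriptor(patch, orientations=9, cell=(8, 8), block=(2, 2)):
    """HOG feature vector of a 2D image patch (parameter values are illustrative)."""
    return hog(patch, orientations=orientations,
               pixels_per_cell=cell, cells_per_block=block,
               feature_vector=True)

def hog_similarity(patch_fixed, patch_moving):
    """Illustrative HOG-based similarity: cosine similarity of the two descriptors,
    so the score depends on gradient structure rather than raw intensities."""
    a = hog_descriptor(patch_fixed)
    b = hog_descriptor(patch_moving)
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(a @ b / denom)

# Example with 64x64 patches; sizes must be compatible with the cell/block settings.
print(hog_similarity(np.random.rand(64, 64), np.random.rand(64, 64)))
```

    The Siamese metric mentioned above would instead learn the patch embedding from (synthetic) registered training pairs before comparing the embeddings in a similar way.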

    Fast Predictive Multimodal Image Registration

    We introduce a deep encoder-decoder architecture for image deformation prediction from multimodal images. Specifically, we design an image-patch-based deep network that jointly learns (i) an image similarity measure and (ii) the relationship between image patches and deformation parameters. While our method can be applied to general image registration formulations, we focus on the Large Deformation Diffeomorphic Metric Mapping (LDDMM) registration model. By predicting the initial momentum of the shooting formulation of LDDMM, we preserve its mathematical properties and drastically reduce the computation time compared to optimization-based approaches. Furthermore, we create a Bayesian probabilistic version of the network that allows evaluation of registration uncertainty via sampling of the network at test time. We evaluate our method on a 3D brain MRI dataset using both T1- and T2-weighted images. Our experiments show that our method generates accurate predictions and that learning the similarity measure leads to more consistent registrations than relying on generic multimodal image similarity measures, such as mutual information. Our approach is an order of magnitude faster than optimization-based LDDMM.
    Comment: Accepted as a conference paper for ISBI 201
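
    A minimal sketch of the idea, assuming a PyTorch implementation with placeholder layer sizes, is given below: a small patch-based encoder-decoder predicts the initial momentum of the LDDMM shooting formulation, and keeping dropout active at test time (Monte Carlo dropout) mimics the sampling-based uncertainty evaluation described in the abstract. The architecture, patch size, and dropout-based Bayesian approximation are illustrative choices, not the authors' exact network.

```python
import torch
import torch.nn as nn

class MomentumPredictor(nn.Module):
    """Toy patch-based encoder-decoder (dimensions are illustrative, not the paper's):
    takes a two-channel multimodal 3D patch pair and predicts the 3-channel initial
    momentum of the LDDMM shooting formulation."""
    def __init__(self, p_drop=0.2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p_drop),                       # kept active for MC sampling
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Dropout3d(p_drop),
            nn.Conv3d(16, 3, 3, padding=1),             # momentum: one channel per axis
        )

    def forward(self, patch_pair):                      # (B, 2, D, H, W)
        return self.decoder(self.encoder(patch_pair))   # (B, 3, D, H, W)

def sample_momentum(model, patch_pair, n_samples=10):
    """Monte Carlo dropout sampling to approximate registration uncertainty."""
    model.train()                                       # keep dropout on at test time
    with torch.no_grad():
        samples = torch.stack([model(patch_pair) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

if __name__ == "__main__":
    mean_m, std_m = sample_momentum(MomentumPredictor(), torch.randn(1, 2, 16, 16, 16))
    print(mean_m.shape, std_m.shape)                    # both (1, 3, 16, 16, 16)
```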