    Intrasubject multimodal groupwise registration with the conditional template entropy

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information.
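
    As a concrete illustration of the metric's structure, here is a minimal numpy sketch under my own simplifying assumptions (histogram-based entropy estimates, a single non-iterative PCA-weighted template, invented function names); it is not the authors' implementation.

```python
import numpy as np

def conditional_entropy(img, template, bins=32):
    # H(img | template) from a joint histogram estimate (illustrative, no smoothing)
    joint, _, _ = np.histogram2d(template.ravel(), img.ravel(), bins=bins)
    p = joint / joint.sum()
    p_t = p.sum(axis=1)                                  # marginal over template bins
    h = 0.0
    for i in range(bins):
        row = p[i][p[i] > 0]
        if p_t[i] > 0 and row.size:
            h -= np.sum(row * np.log(row / p_t[i]))
    return h

def conditional_template_entropy(images, bins=32):
    # Template taken as a PCA-weighted average of the group (simplified, non-iterative)
    X = np.stack([im.ravel() for im in images])          # (N, n_pixels)
    Xc = X - X.mean(axis=1, keepdims=True)
    eigvec = np.linalg.svd(Xc @ Xc.T)[0][:, 0]           # first mode of the N x N covariance
    w = np.abs(eigvec) / np.abs(eigvec).sum()
    template = (w[:, None] * X).sum(axis=0).reshape(images[0].shape)
    return sum(conditional_entropy(im, template, bins) for im in images)

# The metric would be minimized over the group's transformation parameters.
group = [np.random.rand(64, 64) for _ in range(4)]
print(conditional_template_entropy(group))
```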

    Advances in Groupwise Image Registration

    BInGo: Bayesian Intrinsic Groupwise Registration via Explicit Hierarchical Disentanglement

    Multimodal groupwise registration aligns internal structures in a group of medical images. Current approaches to this problem involve developing similarity measures over the joint intensity profile of all images, which may be computationally prohibitive for large image groups and unstable under various conditions. To tackle these issues, we propose BInGo, a general unsupervised hierarchical Bayesian framework based on deep learning, to learn intrinsic structural representations to measure the similarity of multimodal images. In particular, a variational auto-encoder with a novel posterior is proposed, which facilitates disentangled learning of structural representations and spatial transformations, and characterizes the imaging process as arising from the common structure through shape transition and appearance variation. Notably, BInGo can be trained on small groups yet applied to large-scale groupwise registration at test time, significantly reducing computational costs. We compared BInGo with five iterative or deep learning methods on three public intrasubject and intersubject datasets, i.e., BraTS, MS-CMR of the heart, and Learn2Reg abdomen MR-CT, and demonstrated its superior accuracy and computational efficiency, even for very large group sizes (e.g., over 1,300 2D images from MS-CMR in each group).
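
    Read at face value, the model family is a hierarchical variational auto-encoder in which a shared structural code generates every image in the group through a per-image appearance model and spatial transformation. A deliberately generic evidence lower bound of that shape, in my own notation rather than the paper's, is:

```latex
\mathcal{L} = \mathbb{E}_{q(z,\,\phi_{1:N}\mid I_{1:N})}\!\left[\sum_{k=1}^{N}\log p_\theta\bigl(I_k \mid z, \phi_k\bigr)\right]
            - \mathrm{KL}\!\left(q(z,\,\phi_{1:N}\mid I_{1:N}) \,\middle\|\, p(z)\,p(\phi_{1:N})\right)
```

    Here z is the disentangled structural representation and \phi_k the spatial transformation of image I_k; registration amounts to inferring the \phi_k that bring each image into agreement with the shared structure.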

    𝒳-Metric: An N-Dimensional Information-Theoretic Framework for Groupwise Registration and Deep Combined Computing

    This paper presents a generic probabilistic framework for estimating the statistical dependency and finding the anatomical correspondences among an arbitrary number of medical images. The method builds on a novel formulation of the N-dimensional joint intensity distribution by representing the common anatomy as latent variables and estimating the appearance model with nonparametric estimators. Through connection to maximum likelihood and the expectation-maximization algorithm, an information-theoretic metric called the 𝒳-metric and a co-registration algorithm named 𝒳-CoReg are induced, allowing groupwise registration of the N observed images with computational complexity of O(N). Moreover, the method naturally extends to a weakly supervised scenario where anatomical labels of certain images are provided. This leads to a combined-computing framework implemented with deep learning, which performs registration and segmentation simultaneously and collaboratively in an end-to-end fashion. Extensive experiments were conducted to demonstrate the versatility and applicability of our model, including multimodal groupwise registration, motion correction for dynamic contrast-enhanced magnetic resonance images, and deep combined computing for multimodal medical images. Results show the superiority of our method in various applications in terms of both accuracy and efficiency, highlighting the advantage of the proposed representation of the imaging process.
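
    A rough rendering of the kind of latent-variable model described (common anatomy z, conditionally independent per-image appearance; the notation is mine, not the paper's):

```latex
p\bigl(I_1,\dots,I_N \mid \phi_{1:N}\bigr)
  = \prod_{x} \; \sum_{z=1}^{K} \pi_z(x) \prod_{k=1}^{N} p_k\bigl(I_k(\phi_k(x)) \mid z\bigr)
```

    The groupwise metric is then, up to sign and normalization, the log-likelihood of this model, optimized jointly over the transformations \phi_k and the nonparametric appearance models p_k with expectation-maximization; conditional independence given z keeps the cost of each update linear in N, consistent with the O(N) complexity stated above.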

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis, I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and they present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
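
    As a hedged sketch of the last two ideas (the notation and factorization are mine, not the thesis's), a generative segmentation model with a learned label prior can be written as:

```latex
p(\mathbf{y}, \mathbf{z}) = p_\theta(\mathbf{z}) \prod_{i} p\bigl(\mathbf{y}_i \mid z_i\bigr),
\qquad
p_\theta(\mathbf{z}) \propto \exp\bigl(-E_\theta(\mathbf{z})\bigr)
```

    Here \mathbf{y}_i are the observed voxel intensities (with missing channels simply dropped from the likelihood factor), z_i are latent tissue labels, and E_\theta is an energy over the label map parameterized by a convolutional neural network, so the Markov-random-field-style interactions are nonlinear and trainable by backpropagation.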

    Variational Registration of Multiple Images with the SVD based SqN Distance Measure

    Image registration, especially the quantification of image similarity, is an important task in image processing. Various approaches for the comparison of two images are discussed in the literature. However, although most of these approaches perform very well in a two-image scenario, their extension to a multiple-image scenario deserves attention. In this article, we discuss and compare registration methods for multiple images. Our key assumption is that information about the singular values of a feature matrix of the images can be used for alignment. We introduce, discuss, and relate three recent approaches from the literature: the Schatten q-norm based SqN distance measure, a rank-based approach, and a feature-volume based approach. We also present results for typical applications such as dynamic image sequences or stacks of histological sections. Our results indicate that the SqN approach is in fact a suitable distance measure for image registration. Moreover, our examples also indicate that the results obtained by SqN are superior to those obtained by its competitors.
    Comment: 12 pages, 5 figures; accepted at the conference "Scale Space and Variational Methods" in Hofgeismar, Germany, 2019.
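
    The SqN idea can be sketched in a few lines of numpy (my own simplification with invented names; the feature choice, normalization, and sign convention during optimization are assumptions, not the paper's exact definition): stack one normalized feature vector per image as the columns of a matrix and evaluate a Schatten q-norm of its singular values.

```python
import numpy as np

def sqn_measure(images, q=1.0, eps=1e-8):
    """Schatten-q-norm of a groupwise feature matrix (illustrative sketch)."""
    cols = []
    for im in images:
        v = im.ravel().astype(float)
        v -= v.mean()
        cols.append(v / (np.linalg.norm(v) + eps))     # zero-mean, unit-norm features
    F = np.stack(cols, axis=1)                         # (n_pixels, N) feature matrix
    s = np.linalg.svd(F, compute_uv=False)             # singular values
    return (s ** q).sum() ** (1.0 / q)

# With unit-norm columns the squared singular values always sum to N, so a
# well-aligned group (feature matrix close to rank one) gives a smaller
# Schatten-1 value; a registration scheme would minimize this over the
# transformation parameters.
group = [np.random.rand(48, 48) for _ in range(5)]
print(sqn_measure(group, q=1.0))
```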

    Reduction of motion effects in myocardial arterial spin labeling

    Purpose: To evaluate the accuracy and reproducibility of myocardial blood flow measurements obtained under different breathing strategies and motion correction techniques with arterial spin labeling. Methods: A prospective cardiac arterial spin labeling study was performed in 12 volunteers at 3 Tesla. Perfusion images were acquired twice under breath-hold, synchronized breathing, and free breathing. Motion detection based on the temporal intensity variation of a myocardial voxel, as well as image registration based on pairwise and groupwise approaches, were applied and evaluated on synthetic and in vivo data. A region of interest was drawn over the mean perfusion-weighted image for quantification. The original breath-hold datasets, analyzed with individual regions of interest for each perfusion-weighted image, were taken as reference values. Results: Perfusion measurements in the reference breath-hold datasets were in line with those reported in the literature. In the original datasets, prior to motion correction, myocardial blood flow quantification was significantly overestimated due to contamination of the myocardial perfusion signal by the high-intensity signal of the blood pool. These effects were minimized with motion detection or registration. Synthetic data showed that the accuracy of the perfusion measurements was higher with the use of registration, in particular with the pairwise approach, which proved to be more robust to motion. Conclusion: Satisfactory results were obtained for the free-breathing strategy after pairwise registration, with higher accuracy and robustness (in synthetic datasets) and higher intrasession reproducibility together with lower myocardial blood flow variability across subjects (in in vivo datasets). Breath-hold and synchronized breathing after motion correction provided similar results, but these breathing strategies can be difficult for patients to perform.
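
    The motion-detection step described above (temporal intensity variation of a myocardial voxel) lends itself to a very small sketch; the thresholding rule, robust statistics, and function names below are my own assumptions, not the study's protocol.

```python
import numpy as np

def detect_motion_frames(series, voxel, n_mad=3.0):
    """Flag frames of a perfusion-weighted series whose intensity at a chosen
    myocardial voxel deviates strongly from its temporal median (illustrative).

    series: array of shape (n_frames, ny, nx); voxel: (row, col) inside the myocardium.
    """
    trace = series[:, voxel[0], voxel[1]].astype(float)
    med = np.median(trace)
    mad = np.median(np.abs(trace - med)) + 1e-8        # robust spread estimate
    return np.abs(trace - med) > n_mad * mad           # True = likely motion-corrupted

def mean_pwi_excluding_motion(series, voxel):
    keep = ~detect_motion_frames(series, voxel)
    return series[keep].mean(axis=0)                   # mean perfusion-weighted image

# Usage: average only the frames judged motion-free before drawing the ROI.
series = np.random.rand(30, 96, 96)
pwi = mean_pwi_excluding_motion(series, voxel=(48, 40))
```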

    Efficient convolution-based pairwise elastic image registration on three multimodal similarity metrics

    This paper proposes a complete convolutional formulation for 2D multimodal pairwise image registration problems based on free-form deformations. We have reformulated in terms of discrete 1D convolutions the evaluation of spatial transformations, the regularization term, and their gradients for three different multimodal registration metrics, namely normalized cross-correlation, mutual information, and normalized mutual information. A sufficient condition on the metric gradient is provided for further extension to other metrics. The proposed approach has been tested, as a proof of concept, on contrast-enhanced first-pass perfusion cardiac magnetic resonance images. Execution times have been compared with the corresponding execution times of the classical tensor-product formulation, both on CPU and GPU. The speed-up achieved by using convolutions instead of tensor products depends on the image size and the number of control points considered: the larger those magnitudes, the greater the reduction in execution time. Furthermore, the speed-up is more significant when gradient operations constitute the major bottleneck in the optimization process.
    Funding: Ministerio de Economía, Industria y Competitividad (grants TEC2017-82408-R and PID2020-115339RB-I00); ESAOTE Ltd (grant 18IQBM).
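
    The core trick can be illustrated with a short numpy sketch for one displacement component of a cubic B-spline free-form deformation on a regular grid: the separable tensor product collapses into two passes of a 1D convolution over the zero-stuffed control-point grid. Function names, grid alignment, and boundary handling here are my own simplifications, not the paper's formulation, which also covers the metric and regularizer gradients.

```python
import numpy as np

def bspline3(t):
    """Cubic B-spline kernel evaluated at t (illustrative)."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4 - 6 * t[m1] ** 2 + 3 * t[m1] ** 3) / 6
    out[m2] = (2 - t[m2]) ** 3 / 6
    return out

def ffd_displacement(coeffs, spacing):
    """Dense displacement field from a 2D grid of control-point coefficients.

    The separable tensor product B(u)B(v) is applied as two passes of a 1D
    convolution: zero-stuff the coefficient grid to pixel resolution, then
    filter along each axis with the sampled cubic B-spline kernel.
    """
    # sampled 1D kernel at pixel resolution (support of 4 control-point cells)
    x = np.arange(-2 * spacing + 1, 2 * spacing) / spacing
    k = bspline3(x)
    # zero-stuff control coefficients onto the pixel grid
    up = np.zeros((coeffs.shape[0] * spacing, coeffs.shape[1] * spacing))
    up[::spacing, ::spacing] = coeffs
    # two 1D convolutions instead of a 2D tensor-product evaluation
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

# Usage: one displacement component from an 8x8 control grid, 10-pixel spacing.
disp_x = ffd_displacement(np.random.randn(8, 8), spacing=10)
```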