71 research outputs found

    $\mathcal{X}$-Metric: An N-Dimensional Information-Theoretic Framework for Groupwise Registration and Deep Combined Computing

    This paper presents a generic probabilistic framework for estimating the statistical dependency and finding the anatomical correspondences among an arbitrary number of medical images. The method builds on a novel formulation of the $N$-dimensional joint intensity distribution by representing the common anatomy as latent variables and estimating the appearance model with nonparametric estimators. Through connection to maximum likelihood and the expectation-maximization algorithm, an information-theoretic metric called the $\mathcal{X}$-metric and a co-registration algorithm named $\mathcal{X}$-CoReg are induced, allowing groupwise registration of the $N$ observed images with computational complexity of $\mathcal{O}(N)$. Moreover, the method naturally extends to a weakly-supervised scenario where anatomical labels of certain images are provided. This leads to a combined-computing framework implemented with deep learning, which performs registration and segmentation simultaneously and collaboratively in an end-to-end fashion. Extensive experiments were conducted to demonstrate the versatility and applicability of our model, including multimodal groupwise registration, motion correction for dynamic contrast-enhanced magnetic resonance images, and deep combined computing for multimodal medical images. Results show the superiority of our method in various applications in terms of both accuracy and efficiency, highlighting the advantage of the proposed representation of the imaging process.
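    To make the maximum-likelihood connection in the abstract concrete, the objective induced by a latent common-anatomy model typically takes the following form; this is a generic sketch assuming a discrete latent anatomy label $z$ and conditional independence of the observed intensities given $z$, and the exact appearance model and metric used by $\mathcal{X}$-CoReg may differ:
    \[
    \hat{\boldsymbol{\phi}} \;=\; \arg\max_{\boldsymbol{\phi}} \sum_{x \in \Omega} \log \sum_{z} p(z) \prod_{n=1}^{N} p\bigl(I_n(\phi_n(x)) \mid z\bigr),
    \]
    where $I_1,\dots,I_N$ are the observed images, $\phi_n$ are the spatial transformations into the common space $\Omega$, and $z$ is the latent common anatomy. Because each image enters the likelihood only through its own conditional term, an EM-style update touches every image once per iteration, which is consistent with the stated $\mathcal{O}(N)$ complexity.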

    BInGo: Bayesian Intrinsic Groupwise Registration via Explicit Hierarchical Disentanglement

    Multimodal groupwise registration aligns internal structures in a group of medical images. Current approaches to this problem involve developing similarity measures over the joint intensity profile of all images, which may be computationally prohibitive for large image groups and unstable under various conditions. To tackle these issues, we propose BInGo, a general unsupervised hierarchical Bayesian framework based on deep learning, which learns intrinsic structural representations to measure the similarity of multimodal images. In particular, a variational auto-encoder with a novel posterior is proposed, which facilitates disentanglement learning of structural representations and spatial transformations, and characterizes the imaging process from the common structure with shape transition and appearance variation. Notably, BInGo can learn from small image groups yet be applied to large-scale groupwise registration at test time, thus significantly reducing computational costs. We compared BInGo with five iterative or deep learning methods on three public intrasubject and intersubject datasets, i.e., BraTS, MS-CMR of the heart, and Learn2Reg abdomen MR-CT, and demonstrated its superior accuracy and computational efficiency, even for very large group sizes (e.g., over 1300 2D images per group from MS-CMR).
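    For orientation, a disentangled variational auto-encoder of the kind described here is usually trained by maximizing an evidence lower bound; the sketch below is a generic form assuming a latent structural code $z$ and spatial transformations $\boldsymbol{\phi}$, not the specific hierarchical posterior introduced by BInGo:
    \[
    \log p(I_{1:N}) \;\ge\; \mathbb{E}_{q(z,\boldsymbol{\phi}\mid I_{1:N})}\!\bigl[\log p(I_{1:N}\mid z,\boldsymbol{\phi})\bigr] \;-\; \mathrm{KL}\!\bigl(q(z,\boldsymbol{\phi}\mid I_{1:N}) \,\Vert\, p(z,\boldsymbol{\phi})\bigr),
    \]
    where the reconstruction term models the imaging process from the common structure (appearance variation) and the transformations account for shape transition; disentangling $z$ from $\boldsymbol{\phi}$ is what allows similarity to be measured in the structural representation rather than over the joint intensity profile of all images.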