
    Diffeomorphic random sampling using optimal information transport

    In this article we explore an algorithm for diffeomorphic random sampling of nonuniform probability distributions on Riemannian manifolds. The algorithm is based on optimal information transport (OIT), an analogue of optimal mass transport (OMT). Our framework uses the deep geometric connections between the Fisher-Rao metric on the space of probability densities and the right-invariant information metric on the group of diffeomorphisms. The resulting sampling algorithm is a promising alternative to OMT, in particular because our formulation is semi-explicit and free of the nonlinear Monge-Ampère equation. Compared to Markov chain Monte Carlo methods, we expect our algorithm to stand up well when a large number of samples from a low-dimensional nonuniform distribution is needed. Comment: 8 pages, 3 figures.
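
    A hedged sketch of the underlying sampling principle, in notation of my own choosing rather than the paper's: OIT produces a diffeomorphism \varphi of the manifold M that pushes the uniform (normalized volume) density forward to the target density \mu,

        \varphi_* \mathrm{vol} = \mu \,\mathrm{vol}, \qquad \text{i.e.} \quad |D\varphi^{-1}(x)| = \mu(x),

    so sampling reduces to drawing u_1, \dots, u_N uniformly on M and returning x_i = \varphi(u_i), which are then distributed according to \mu. Each additional sample costs only one evaluation of \varphi.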

    Diffeomorphic density registration

    In this book chapter we study the Riemannian geometry of the density registration problem: given two densities (not necessarily probability densities) defined on a smooth finite-dimensional manifold, find a diffeomorphism that transforms one into the other. This problem is motivated by the medical imaging application of tracking organ motion due to respiration in thoracic CT imaging, where the fundamental physical property of conservation of mass naturally leads to modeling CT attenuation as a density. We study the intimate link between Riemannian metrics on the space of diffeomorphisms and those on the space of densities. We finally develop novel, computationally efficient algorithms and demonstrate their applicability to registering RCCT thoracic images. Comment: 23 pages, 6 figures; chapter for a book on medical image analysis.
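
    For orientation, the density action that encodes conservation of mass can be written explicitly (standard notation, not quoted from the chapter): a diffeomorphism \varphi acts on a density I by

        (\varphi, I) \mapsto |D\varphi^{-1}| \, (I \circ \varphi^{-1}),

    which preserves total mass, \int_M |D\varphi^{-1}| \,(I \circ \varphi^{-1}) \, dx = \int_M I \, dx. Registration then amounts to finding a \varphi that brings the transformed source density close to the target while being penalized by a Riemannian metric on the diffeomorphism group.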

    Diffeomorphic density matching by optimal information transport

    We address the following problem: given two smooth densities on a manifold, find an optimal diffeomorphism that transforms one density into the other. Our framework builds on connections between the Fisher-Rao information metric on the space of probability densities and right-invariant metrics on the infinite-dimensional manifold of diffeomorphisms. This optimal information transport, and modifications thereof, allows us to construct numerical algorithms for density matching. The algorithms are inherently more efficient than those based on optimal mass transport or diffeomorphic registration. Our methods have applications in medical image registration, texture mapping, image morphing, non-uniform random sampling, and mesh adaptivity. Some of these applications are illustrated in examples. Comment: 35 pages.
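
    The explicit geometry behind this approach is a standard fact worth recalling (my summary, not the paper's wording): under the square-root map \mu \mapsto \sqrt{\mu}, probability densities land on the unit sphere in L^2(M), and the Fisher-Rao distance becomes the spherical arc length

        d_{FR}(\mu_0, \mu_1) = \arccos \int_M \sqrt{\mu_0 \, \mu_1} \, dx,

    with explicit great-circle geodesics. Having closed-form geodesics on the density side helps explain why matching algorithms built on this metric can avoid the fully nonlinear Monge-Ampère equation that appears in optimal mass transport.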

    Higher-Order Momentum Distributions and Locally Affine LDDMM Registration

    To achieve sparse parametrizations that allow intuitive analysis, we aim to represent deformation with a basis of interpretable elements that have enough descriptive capacity to represent the deformation compactly. To accomplish this, we introduce in this paper higher-order momentum distributions in the LDDMM registration framework. While the zeroth-order momenta previously used in LDDMM describe only local displacement, the first-order momenta proposed here provide a basis that allows local description of affine transformations and hence a compact description of non-translational movement within a globally non-rigid deformation. The resulting representation contains directly interpretable information from both mathematical and modeling perspectives. We develop the mathematical construction of the registration framework with higher-order momenta, we show the implications for sparse image registration and deformation description, and we provide examples of how the parametrization enables registration with a very low number of parameters. The capacity and interpretability of the parametrization using higher-order momenta lead to natural modeling of articulated movement, and the method promises to be useful for quantifying ventricle expansion and progressing atrophy during Alzheimer's disease.
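
    To make the role of the first-order momenta concrete, here is a minimal numerical sketch (my own construction; the Gaussian kernel, array shapes, and sign conventions are assumptions and differ between LDDMM implementations). The zeroth-order momenta a_i contribute pure displacements through the kernel values, while the first-order momenta A_i enter through the kernel gradient and therefore generate approximately affine motion in a neighbourhood of each control point x_i.

        import numpy as np

        def velocity(x, points, a, A, sigma=1.0):
            """Velocity at a single location x induced by control points
            `points` (N, d) carrying zeroth-order momenta `a` (N, d) and
            first-order momenta `A` (N, d, d), with a scalar Gaussian kernel.
            Schematic sketch only."""
            diff = x[None, :] - points                              # (N, d)
            k = np.exp(-np.sum(diff**2, axis=1) / (2.0 * sigma**2)) # kernel values K(x, x_i)
            grad_k = -diff / sigma**2 * k[:, None]                  # gradient of the kernel w.r.t. x
            v0 = k @ a                                              # sum_i K(x, x_i) a_i
            v1 = np.einsum('ijk,ik->j', A, grad_k)                  # sum_i A_i grad K(x, x_i)
            return v0 + v1

    Setting all A_i to zero recovers the usual displacement-only parametrization; a single control point with nonzero A_i already produces a locally affine velocity field around it.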

    A Geometric Variational Approach to Bayesian Inference

    We propose a novel Riemannian geometric framework for variational inference in Bayesian models based on the nonparametric Fisher-Rao metric on the manifold of probability density functions. Under the square-root density representation, the manifold can be identified with the positive orthant of the unit hypersphere in L2, and the Fisher-Rao metric reduces to the standard L2 metric. Exploiting this Riemannian structure, we formulate the task of approximating the posterior distribution as a variational problem on the hypersphere based on the alpha-divergence. Compared with approaches based on the Kullback-Leibler divergence, this provides a tighter lower bound on the marginal distribution, together with a corresponding upper bound that is unavailable in the KL-based setting. We propose a novel gradient-based algorithm for the variational problem, built on Fréchet derivative operators motivated by the geometry of the Hilbert sphere, and examine its properties. Through simulations and real-data applications, we demonstrate the utility of the proposed geometric framework and algorithm on several Bayesian models.
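
    The geometric identification the abstract refers to can be stated in one line (standard material, not quoted from the paper): writing \psi = \sqrt{p}, every density corresponds to a point on the unit Hilbert sphere since \|\psi\|_{L^2}^2 = \int p = 1, the Fisher-Rao metric pulls back to the L^2 metric, and the geodesic distance is

        d(p, q) = \arccos \langle \sqrt{p}, \sqrt{q} \rangle_{L^2} = \arccos \int \sqrt{p \, q}.

    Because geodesics on the sphere are explicit great circles, a gradient step can be taken along the exponential map, e.g. \psi \mapsto \cos(\epsilon\|g\|)\,\psi + \sin(\epsilon\|g\|)\, g/\|g\| for a tangent direction g and step size \epsilon (a schematic update, not necessarily the exact scheme used by the authors).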

    Diffeomorphic Deformation via Sliced Wasserstein Distance Optimization for Cortical Surface Reconstruction

    Mesh deformation is a core task in 3D mesh reconstruction, but defining an efficient discrepancy between predicted and target meshes remains an open problem. A prevalent approach in current deep learning is the set-based approach, which measures the discrepancy between two surfaces by comparing two randomly sampled point clouds from the two meshes with the Chamfer pseudo-distance. Nevertheless, the set-based approach has limitations: there is no theoretical guarantee for choosing the number of points in the sampled point clouds, and the Chamfer divergence is only a pseudo-metric and has quadratic complexity. To address these issues, we propose a novel metric for learning mesh deformation. The metric is defined by the sliced Wasserstein distance on meshes represented as probability measures, which generalizes the set-based approach. By leveraging the space of probability measures, we gain flexibility in encoding meshes using diverse forms of measures, such as continuous, empirical, and discrete measures via the varifold representation. Once meshes are encoded as probability measures, we can compare them using the sliced Wasserstein distance, an effective optimal transport distance with linear computational complexity and a fast statistical rate for approximating mesh surfaces. Furthermore, we employ a neural ordinary differential equation (ODE) to deform the input surface into the target shape by modeling the trajectories of points on the surface. Our experiments on cortical surface reconstruction demonstrate that our approach surpasses competing methods across multiple datasets and metrics.
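
    For reference, a minimal Monte Carlo estimator of the sliced Wasserstein distance between two equally sized, uniformly weighted point clouds (an illustrative sketch only; the paper's varifold encoding, mesh sampling, and neural ODE deformation are not reproduced here). Each random direction reduces the problem to one dimension, where optimal transport is solved exactly by sorting.

        import numpy as np

        def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=0):
            """Estimate SW_p between point clouds X (n, d) and Y (n, d)
            of equal size with uniform weights."""
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            theta = rng.normal(size=(n_projections, d))
            theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # directions on the unit sphere
            proj_x = np.sort(X @ theta.T, axis=0)                  # 1D projections, sorted
            proj_y = np.sort(Y @ theta.T, axis=0)
            return np.mean(np.abs(proj_x - proj_y) ** p) ** (1.0 / p)

    The dominant cost per projection is the sort, so the estimator scales near-linearly in the number of points, in contrast to the quadratic pairwise comparisons behind the Chamfer divergence.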

    Spectral convergence of the connection Laplacian from random samples

    Spectral methods based on the eigenvectors and eigenvalues of discrete graph Laplacians, such as Diffusion Maps and Laplacian Eigenmaps, are often used for manifold learning and nonlinear dimensionality reduction. It was previously shown by Belkin and Niyogi (2007) that the eigenvectors and eigenvalues of the graph Laplacian converge to the eigenfunctions and eigenvalues of the Laplace-Beltrami operator of the manifold in the limit of infinitely many data points sampled independently from the uniform distribution over the manifold. Recently, we introduced Vector Diffusion Maps and showed that the connection Laplacian of the tangent bundle of the manifold can be approximated from random samples. In this paper, we present a unified framework for approximating other connection Laplacians over the manifold by considering its principal bundle structure. We prove that the eigenvectors and eigenvalues of these Laplacians converge in the limit of infinitely many independent random samples. We generalize the spectral convergence results to the case where the data points are sampled from a non-uniform distribution, and to manifolds with and without boundary.
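
    As background for the convergence statement, the scalar construction underlying Diffusion Maps and Laplacian Eigenmaps can be written in a few lines (a sketch only; the connection Laplacians studied in the paper additionally carry the bundle structure via local tangent-space alignments, which this scalar version omits):

        import numpy as np

        def graph_laplacian(X, eps):
            """Random-walk graph Laplacian from samples X (n, d) with a
            Gaussian kernel of bandwidth eps. Its low-lying eigenvalues and
            eigenvectors approximate (up to a kernel-dependent constant) the
            Laplace-Beltrami spectrum when the samples are drawn uniformly
            from the manifold, n -> infinity, and eps -> 0."""
            sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
            W = np.exp(-sq / eps)                            # Gaussian affinities
            D = W.sum(axis=1)                                # degrees
            return (np.eye(len(X)) - W / D[:, None]) / eps   # (I - D^{-1} W) / eps

    For non-uniform sampling densities, an additional density normalization of the kernel (as in the alpha-normalized Diffusion Maps construction) is needed before the limit recovers the Laplace-Beltrami operator; the paper's results cover this non-uniform setting as well.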