    Bridge Simulation and Metric Estimation on Landmark Manifolds

    We present an inference algorithm and connected Monte Carlo-based estimation procedures for metric estimation from landmark configurations distributed according to the transition distribution of a Riemannian Brownian motion arising from the Large Deformation Diffeomorphic Metric Mapping (LDDMM) metric. The distribution possesses properties similar to the regular Euclidean normal distribution, but its transition density is governed by a high-dimensional PDE with no closed-form solution in the nonlinear case. We show how the density can be numerically approximated by Monte Carlo sampling of conditioned Brownian bridges, and we use this to estimate parameters of the LDDMM kernel, and thus the metric structure, by maximum likelihood.
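The conditioned Brownian bridges mentioned above can be illustrated in the flat Euclidean setting, where the bridge is a Brownian motion guided to hit a fixed endpoint. The sketch below simulates such a bridge by Euler-Maruyama discretization of the guided SDE; the function name and parameters are illustrative and not taken from the paper, and the manifold/LDDMM case replaces this simple drift with a guiding term adapted to the metric.

```python
import numpy as np

def sample_brownian_bridge(x0, xT, T=1.0, n_steps=100, sigma=1.0, rng=None):
    """Euler-Maruyama simulation of a Brownian bridge from x0 to xT.

    Discretizes the guided SDE  dX_t = (xT - X_t) / (T - t) dt + sigma dW_t,
    whose solution is a Brownian motion conditioned to reach xT at time T.
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for k in range(n_steps - 1):
        t = k * dt
        drift = (np.asarray(xT) - x) / (T - t)   # pulls the path toward xT
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        path.append(x.copy())
    path.append(np.asarray(xT, dtype=float))      # bridge hits xT exactly
    return np.stack(path)
```

Averaging a functional of many such sampled bridges is the Monte Carlo step that approximates the otherwise intractable transition density.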

    Annual Research Report 2021


    Automatic differentiation in machine learning: a survey

    Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings. Comment: 43 pages, 5 figures
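A minimal sketch of one of the techniques the survey covers is forward-mode AD via dual numbers: each value carries its derivative, and arithmetic propagates both by the chain rule. The `Dual` class below is illustrative, not from the survey, and supports only addition and multiplication.

```python
class Dual:
    """Forward-mode AD: a value paired with its derivative (tangent)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) exactly (to machine precision) by seeding dot = 1."""
    return f(Dual(x, 1.0)).dot
```

Unlike symbolic differentiation, no expression for f' is ever built, and unlike numerical differencing there is no truncation error; this distinction between autodiff, symbolic, and numerical differentiation is exactly what the survey sets out to clarify.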

    Fast and Robust Shortest Paths on Manifolds Learned from Data

    We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data. This amounts to solving a system of ordinary differential equations (ODEs) subject to boundary conditions. Here, standard solvers perform poorly because they require well-behaved Jacobians of the ODE, and manifolds learned from data usually imply unstable and ill-conditioned Jacobians. Instead, we propose a fixed-point iteration scheme for solving the ODE that avoids Jacobians. This enhances the stability of the solver while reducing the computational cost. In experiments involving both Riemannian metric learning and deep generative models, we demonstrate significant improvements in speed and stability over general-purpose state-of-the-art solvers as well as over specialized solvers. Comment: Accepted at Artificial Intelligence and Statistics (AISTATS) 201

    An Infinitesimal Probabilistic Model for Principal Component Analysis of Manifold Valued Data

    We provide a probabilistic and infinitesimal view of how the principal component analysis procedure (PCA) can be generalized to analysis of nonlinear manifold-valued data. Starting with the probabilistic PCA interpretation of the Euclidean PCA procedure, we show how PCA can be generalized to manifolds in an intrinsic way that does not resort to linearization of the data space. The underlying probability model is constructed by mapping a Euclidean stochastic process to the manifold using stochastic development of Euclidean semimartingales. The construction uses a connection and bundles of covariant tensors to allow global transport of principal eigenvectors, and the model is thereby an example of how principal fiber bundles can be used to handle the lack of a global coordinate system and orientations that characterizes manifold-valued statistics. We show how curvature implies non-integrability of the equivalent of Euclidean principal subspaces, and how the stochastic flows provide an alternative to explicit construction of such subspaces. We describe estimation procedures for inference of parameters and prediction of principal components, and we give examples of properties of the model on embedded surfaces.
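The Euclidean starting point of the abstract, probabilistic PCA, models data as x = Wz + mu + noise and admits a closed-form maximum-likelihood fit via the eigendecomposition of the sample covariance (Tipping & Bishop). The sketch below shows that Euclidean baseline only; the paper's contribution is its intrinsic manifold generalization, which this code does not attempt.

```python
import numpy as np

def ppca(X, q):
    """Closed-form ML fit of probabilistic PCA.

    Model: x = W z + mu + eps, with z ~ N(0, I_q) and eps ~ N(0, sigma^2 I_d).
    Returns ML estimates of W (d x q), mu (d,), and sigma^2.
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)       # sample covariance, (d, d)
    evals, evecs = np.linalg.eigh(S)       # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]
    sigma2 = evals[q:].mean()              # noise = mean discarded variance
    # scale the top-q eigenvectors by the excess variance over the noise floor
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, mu, sigma2
```

In the manifold setting this latent-plus-noise decomposition has no global coordinate expression, which is why the paper replaces it with stochastic development of the Euclidean process onto the manifold.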