    Regression on fixed-rank positive semidefinite matrices: a Riemannian approach

    The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks.
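    A fixed-rank PSD constraint is commonly handled by writing W = G Gᵀ with a tall factor G, so that gradient steps on G stay on the set of rank-r PSD matrices and cost only linear time in the dimension. The sketch below is a simplified, factorization-based stand-in for the paper's Riemannian algorithm (the data, targets, rank, and step size are placeholders), applied to learning a distance d_W(x, y) = (x − y)ᵀ W (x − y) by least squares.

```python
import numpy as np

# Hypothetical toy setup: learn W = G @ G.T (rank r, PSD) so that the squared
# Mahalanobis distance d_W(x, y) = (x - y)^T W (x - y) regresses target values.
rng = np.random.default_rng(0)
d, r, n = 50, 5, 1000                     # dimension, rank, number of pairs
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
targets = rng.uniform(size=n)             # placeholder regression targets

G = rng.normal(scale=0.1, size=(d, r))    # factor of W = G G^T
lr = 0.01

for epoch in range(100):
    diffs = X - Y                         # (n, d)
    proj = diffs @ G                      # (n, r): cost linear in d
    preds = np.sum(proj**2, axis=1)       # d_W(x_i, y_i) for each pair
    resid = preds - targets
    # Euclidean gradient w.r.t. G of the mean squared loss; descending along it
    # keeps W = G G^T positive semidefinite with rank at most r by construction.
    grad = 4.0 / n * diffs.T @ (resid[:, None] * proj)
    G -= lr * grad
```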

    The Riemannian Geometry of Deep Generative Models

    Deep generative models learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion of distance between points on the manifold. Second, we develop an algorithm for parallel translation of a tangent vector along a path on the manifold. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, are surprisingly close to zero curvature. The practical implication is that linear paths in the latent space closely approximate geodesics on the generated manifold. However, further investigation into this phenomenon is warranted, to identify if there are other architectures or datasets where curvature plays a more prominent role. We believe that exploring the Riemannian geometry of deep generative models, using the tools developed in this paper, will be an important step in understanding the high-dimensional, nonlinear spaces these models learn.
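    One standard way to approximate such geodesics, and roughly what a reader might implement from this description, is to discretize a latent curve between two codes and minimize the energy of its image under the generator. The sketch below assumes a stand-in decoder `g` and placeholder endpoints `z_a`, `z_b`; it illustrates path-energy minimization rather than the paper's exact algorithm.

```python
import torch

# Approximate a geodesic between g(z_a) and g(z_b) by a discrete latent path
# whose image in data space has minimal energy (sum of squared segment lengths).
torch.manual_seed(0)
g = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 784))       # stand-in generator
z_a, z_b = torch.randn(2), torch.randn(2)               # placeholder endpoints

T = 16                                                   # number of segments
# Initialize with the straight line in latent space; interior points are free.
alphas = torch.linspace(0, 1, T + 1)[1:-1].unsqueeze(1)
z_mid = (1 - alphas) * z_a + alphas * z_b
z_mid.requires_grad_(True)

opt = torch.optim.Adam([z_mid], lr=1e-2)
for _ in range(500):
    path = torch.cat([z_a.unsqueeze(0), z_mid, z_b.unsqueeze(0)], dim=0)
    x = g(path)                                          # curve in data space
    energy = ((x[1:] - x[:-1]) ** 2).sum()               # discrete path energy
    opt.zero_grad()
    energy.backward()
    opt.step()
```

    If the learned manifold is nearly flat, the optimized latent path stays close to its straight-line initialization, which is the observation the abstract reports.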

    GeĢomeĢtrie sur les distances et meilleure image euclidienne avec distances pondeĢreĢes

    The Distance Geometry Problem (DGP) and Nonlinear Mapping (NLM) are two well-established problems: the Distance Geometry Problem asks for a Euclidean realization of an incomplete set of distances in a Euclidean space, whereas Nonlinear Mapping is a weighted Least Square Scaling (LSS) method. We show how all these methods (LSS, NLM, DGP) can be assembled into a common framework, each identified as an instance of an optimization problem with a particular choice of weight matrix. We study the continuity of the solutions (which are point clouds) as the weight matrix varies, and the compactness of the set of solutions (after centering). We finally study a numerical example, showing that solving the optimization problem is far from simple and that the numerical solution produced by a given procedure may be trapped in a local minimum.
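    As a rough illustration of the common framework (the exact objective and weights below are my assumption, not quoted from the paper), the sketch minimizes a weighted stress sum over pairs of w_ij (‖p_i − p_j‖ − d_ij)² by plain gradient descent: w_ij = 1/d_ij gives an NLM/Sammon-style instance, while 0/1 weights give an incomplete-distance DGP instance. As the abstract notes, such a descent can easily stall in a local minimum.

```python
import numpy as np

# Weighted stress minimization over a point cloud P in R^k:
#   minimize sum_{i != j} w_ij * (||p_i - p_j|| - d_ij)^2
rng = np.random.default_rng(1)
n, k = 30, 2
D = np.abs(rng.normal(size=(n, n)))       # placeholder target distances
D = (D + D.T) / 2
np.fill_diagonal(D, 0)
W = 1.0 / np.where(D > 0, D, 1.0)         # NLM-style weights w_ij = 1 / d_ij
np.fill_diagonal(W, 0)

P = rng.normal(size=(n, k))               # random initial point cloud
lr = 1e-3
for _ in range(2000):
    diff = P[:, None, :] - P[None, :, :]                 # (n, n, k)
    dist = np.linalg.norm(diff, axis=-1) + np.eye(n)     # +eye avoids 0-division
    coef = W * (dist - D) / dist
    np.fill_diagonal(coef, 0)
    grad = 4 * (coef[:, :, None] * diff).sum(axis=1)     # gradient w.r.t. P
    P -= lr * grad
P -= P.mean(axis=0)                                      # center the solution
```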

    Intersubject Regularity in the Intrinsic Shape of Human V1

    Previous studies have reported considerable intersubject variability in the three-dimensional geometry of the human primary visual cortex (V1). Here we demonstrate that much of this variability is due to extrinsic geometric features of the cortical folds, and that the intrinsic shape of V1 is similar across individuals. V1 was imaged in ten ex vivo human hemispheres using high-resolution (200 μm) structural magnetic resonance imaging at high field strength (7 T). Manual tracings of the stria of Gennari were used to construct a surface representation, which was computationally flattened into the plane with minimal metric distortion. The intrinsic shape of V1 was determined from the boundary of the planar representation of the stria. An ellipse provided a simple parametric shape model that was a good approximation to the boundary of flattened V1. The aspect ratio of the best-fitting ellipse was found to be consistent across subjects, with a mean of 1.85 and a standard deviation of 0.12. Optimal rigid alignment of size-normalized V1 produced greater overlap than that achieved by previous studies using different registration methods. A shape analysis of published macaque data indicated that the intrinsic shape of macaque V1 is also stereotyped, and similar to the human V1 shape. Previous measurements of the functional boundary of V1 in human and macaque are in close agreement with these results.
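    The aspect-ratio measurement can be sketched as follows; this is a simplified stand-in for the paper's ellipse fit (principal axes of the boundary point cloud rather than a full parametric fit), applied to a synthetic boundary.

```python
import numpy as np

# Estimate the aspect ratio of a closed planar boundary (e.g. flattened V1)
# from the principal axes of its point cloud. For points sampled on a true
# ellipse this recovers the semi-axis ratio; for real boundaries it is only
# a first-order approximation to a least-squares ellipse fit.
def aspect_ratio(boundary_xy: np.ndarray) -> float:
    """boundary_xy: (n, 2) array of boundary coordinates in the flattened plane."""
    centered = boundary_xy - boundary_xy.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))           # ascending
    return float(np.sqrt(eigvals[1] / eigvals[0]))

# Toy check: an ellipse with semi-axes 1.85 and 1.0 should give roughly 1.85.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([1.85 * np.cos(t), 1.0 * np.sin(t)], axis=1)
print(aspect_ratio(pts))
```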