
    A survey and comparison of contemporary algorithms for computing the matrix geometric mean

    In this paper we present a survey of algorithms for computing matrix geometric means and derive new second-order optimization algorithms to compute the Karcher mean. These new algorithms are constructed using the standard definition of the Riemannian Hessian. The survey includes the Ando–Li–Mathias (ALM) list of desired properties for a geometric mean, the analytical expression for the mean of two matrices, algorithms based on centroid computation in Euclidean (flat) space, and Riemannian optimization techniques to compute the Karcher mean (preceded by a short introduction to differential geometry). A change of metric is considered in the optimization techniques to reduce the complexity of the structures used in these algorithms. Numerical experiments are presented to compare the existing and the newly developed algorithms. We conclude that first-order algorithms are currently best suited for this optimization problem as the size and/or number of the matrices increases.
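    As a concrete illustration, the analytical expression for the mean of two matrices mentioned above is the standard formula A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). Below is a minimal NumPy/SciPy sketch of that formula; the function name and the test construction are our own, not from the paper:

        import numpy as np
        from scipy.linalg import sqrtm

        def geometric_mean_two(A, B):
            """Two-matrix geometric mean:
            A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
            As = sqrtm(A)                      # principal square root of SPD A
            As_inv = np.linalg.inv(As)
            G = As @ sqrtm(As_inv @ B @ As_inv) @ As
            return (G + G.conj().T) / 2        # symmetrize away round-off

        # Sanity check: A # B is the unique SPD solution X of the
        # Riccati equation X A^{-1} X = B.
        rng = np.random.default_rng(0)
        X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
        A = X @ X.T + 4 * np.eye(4)            # well-conditioned SPD test matrices
        B = Y @ Y.T + 4 * np.eye(4)
        G = geometric_mean_two(A, B)
        assert np.allclose(G @ np.linalg.inv(A) @ G, B)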

    Geometric matrix midranges

    We define geometric matrix midranges for positive definite Hermitian matrices and study the midrange problem from a number of perspectives. Special attention is given to the midrange of two positive definite matrices before considering the extension of the problem to N > 2 matrices. We compare matrix midrange statistics with the scalar and vector midrange problems and note the special significance of the matrix problem from a computational standpoint. We also study various aspects of geometric matrix midrange statistics from the viewpoints of linear algebra, differential geometry, and convex optimization. (Funding: EC H2020 European Research Council (ERC), grant 670645.)
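    For context, the scalar analogue referred to above is simple to state: on the log scale the geometric midrange of positive numbers is the midpoint of the data range, i.e. sqrt(min(x) * max(x)), the minimizer of the worst-case log distance max_i |log m - log x_i|. A minimal sketch under that formulation (the matrix extension studied in the paper is substantially harder):

        import numpy as np

        def scalar_geometric_midrange(x):
            """Midpoint of the data on the log scale: the minimizer over m > 0 of
            max_i |log(m) - log(x_i)|, which equals sqrt(min(x) * max(x))."""
            x = np.asarray(x, dtype=float)
            return float(np.sqrt(x.min() * x.max()))

        print(scalar_geometric_midrange([1.0, 4.0, 16.0]))  # 4.0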

    Fusing Kernels using Geometric Mean of Kernel Matrices


    Computing the matrix geometric mean: Riemannian versus Euclidean conditioning, implementation techniques, and a Riemannian BFGS method

    This paper addresses the problem of computing the Riemannian center of mass of a collection of symmetric positive definite matrices. We show in detail that the Riemannian Hessian of the underlying optimization problem is never very ill conditioned in practice, which explains why the Riemannian steepest descent approach has been observed to perform well. We also show theoretically and empirically that this property is not shared by the Euclidean Hessian. We then present a limited-memory Riemannian BFGS method to handle this computational task. We also provide methods to produce efficient numerical representations of geometric objects that are required for Riemannian optimization methods on the manifold of symmetric positive definite matrices. Through empirical results and a computational complexity analysis, we demonstrate the robust behavior of the limited-memory Riemannian BFGS method and the efficiency of our implementation compared to state-of-the-art algorithms.
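    For reference, the Riemannian steepest descent baseline discussed above can be sketched in a few lines under the affine-invariant metric. This is a simple first-order iteration, not the paper's limited-memory Riemannian BFGS method; the function name and default parameters are ours:

        import numpy as np
        from scipy.linalg import sqrtm, logm, expm

        def karcher_mean(mats, step=1.0, tol=1e-10, max_iter=100):
            """Riemannian steepest descent for the Karcher mean of SPD matrices.

            Update: X <- X^(1/2) expm(step * mean_i logm(X^(-1/2) A_i X^(-1/2))) X^(1/2),
            i.e. an exponential-map step along minus the Riemannian gradient.
            """
            X = sum(mats) / len(mats)          # arithmetic mean as starting point
            for _ in range(max_iter):
                Xs = sqrtm(X)
                Xs_inv = np.linalg.inv(Xs)
                # Mean of the log-maps of the data points, pulled back to X
                S = sum(logm(Xs_inv @ A @ Xs_inv) for A in mats) / len(mats)
                if np.linalg.norm(S) < tol:    # gradient norm stopping criterion
                    break
                X = Xs @ expm(step * S) @ Xs
                X = (X + X.conj().T) / 2       # symmetrize away round-off
            return X

    With step = 1 this reduces to the fixed-point iteration commonly used as a first-order baseline for the Karcher mean.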

    Estimating the Condition Number of the Fréchet Derivative of a Matrix Function
