379 research outputs found

    Approximate Joint Diagonalization within the Riemannian Geometry Framework

    We consider the approximate joint diagonalization (AJD) problem, related to the well-known blind source separation (BSS) problem, within the Riemannian geometry framework. We define a new manifold, named the special polar manifold, equivalent to the set of full-rank matrices whose Gram matrix has unit determinant. The Riemannian trust-region optimization algorithm allows us to define a new method to solve the AJD problem. This method is compared to the previously published NoJOB and UWEDGE algorithms by means of simulations and shows comparable performance. This Riemannian optimization approach thus shows promising results. Since it is also very flexible, it can easily be extended to block AJD or joint BSS.
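The AJD criterion that such methods minimize can be made concrete in a few lines. The sketch below (NumPy; the noiseless BSS model and all names are illustrative, not the paper's implementation) evaluates the standard off-diagonal least-squares cost and checks that the inverse of the mixing matrix drives it to zero on noiseless data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 4, 10

# Noiseless BSS model: each target matrix is C_k = A D_k A^T with D_k diagonal.
A = rng.standard_normal((n, n))
Cs = [A @ np.diag(rng.uniform(1.0, 5.0, n)) @ A.T for _ in range(K)]

def ajd_cost(B, Cs):
    """Sum over k of the squared off-diagonal entries of B C_k B^T."""
    total = 0.0
    for C in Cs:
        M = B @ C @ B.T
        total += np.sum(M ** 2) - np.sum(np.diag(M) ** 2)
    return total

B_true = np.linalg.inv(A)            # exact joint diagonalizer in this model
B_rand = rng.standard_normal((n, n))
print(ajd_cost(B_true, Cs))          # ~0 on noiseless data
print(ajd_cost(B_rand, Cs))          # large for an arbitrary matrix
```

With noisy matrices no exact diagonalizer exists, which is what makes AJD an optimization problem on a matrix manifold in the first place.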

    Approximate joint diagonalization with Riemannian optimization on the general linear group

    We consider the classical problem of approximate joint diagonalization of matrices, which can be cast as an optimization problem on the general linear group. We propose a versatile Riemannian optimization framework for solving this problem, unifying existing methods and creating new ones. We use two standard Riemannian metrics (the left- and right-invariant metrics), which have opposite features regarding the structure of solutions and the model. We introduce the Riemannian optimization tools (gradient, retraction, vector transport) in this context for the two standard non-degeneracy constraints (the oblique and non-holonomic constraints). We also develop tools beyond the classical Riemannian optimization framework to handle the non-Riemannian quotient manifold induced by the non-holonomic constraint with the right-invariant metric. We illustrate our theoretical developments with numerical experiments on both simulated data and a real electroencephalographic recording.
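As a rough illustration of AJD as optimization over invertible matrices, the sketch below runs plain relative-gradient descent with a unit-row (oblique-style) normalization. This is one of the simplest members of the multiplicative-update family such a framework unifies, not the paper's algorithm; the step size, iteration count, and normalization choice are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 3, 8
A = rng.standard_normal((n, n))
Cs = np.stack([A @ np.diag(rng.uniform(1.0, 3.0, n)) @ A.T for _ in range(K)])

def off_cost(B):
    M = B @ Cs @ B.T                                   # stack of B C_k B^T
    return float(np.sum(M ** 2) - np.sum(M.diagonal(axis1=1, axis2=2) ** 2))

B = np.eye(n)
c0 = off_cost(B)
mu = 0.05                                              # assumed step size
mask = 1.0 - np.eye(n)                                 # zeroes the diagonal
for _ in range(300):
    M = B @ Cs @ B.T
    # Relative gradient of the off-diagonal cost: 4 * sum_k off(M_k) M_k
    Xi = 4.0 * np.einsum('kij,kjl->il', M * mask, M)
    norm = np.linalg.norm(Xi)
    if norm < 1e-12:
        break
    B = (np.eye(n) - mu * Xi / norm) @ B               # multiplicative update
    B /= np.linalg.norm(B, axis=1, keepdims=True)      # unit-row normalization
print(c0, off_cost(B))                                 # cost drops sharply
```

The multiplicative update B <- (I - mu * Xi) B keeps the iterate inside the general linear group for small steps, and the row normalization is one simple way to rule out the degenerate solution B = 0.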

    Spectral methods for multimodal data analysis

    Spectral methods have proven themselves an important and versatile tool in a wide range of problems in the fields of computer graphics, machine learning, pattern recognition, and computer vision, where many important problems boil down to constructing a Laplacian operator and finding a few of its eigenvalues and eigenfunctions. Classical examples include the computation of diffusion distances on manifolds in computer graphics, Laplacian eigenmaps, and spectral clustering in machine learning. In many cases, one has to deal with multiple data spaces simultaneously. For example, clustering multimedia data in machine learning applications involves various modalities or "views" (e.g., text and images), and finding correspondence between shapes in computer graphics is an operation performed between two or more modalities.
    In this thesis, we develop a generalization of spectral methods to deal with multiple data spaces and apply it to problems from the domains of computer graphics, machine learning, and image processing. Our main construction is based on simultaneous diagonalization of Laplacian operators. We present an efficient numerical technique for computing joint approximate eigenvectors of two or more Laplacians in challenging noisy scenarios, which also appears to be the first general non-smooth manifold optimization method. Finally, we use the relation between joint approximate diagonalizability and approximate commutativity of operators to define a structural similarity measure for images. We use this measure to perform structure-preserving color manipulations of a given image.
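The relation between joint diagonalizability and commutativity invoked here is easy to probe numerically: two symmetric operators are exactly jointly diagonalizable if and only if they commute, so the commutator norm ||L1 L2 - L2 L1|| serves as a crude proxy for joint approximate diagonalizability. A small sketch (the graphs and helper names are illustrative, not from the thesis):

```python
import numpy as np

def ring_laplacian(n, hops):
    """Circulant Laplacian of a ring where node i links to i +/- h for h in hops."""
    W = np.zeros((n, n))
    for h in hops:
        for i in range(n):
            W[i, (i + h) % n] = W[i, (i - h) % n] = 1.0
    return np.diag(W.sum(axis=1)) - W

def commutator_norm(L1, L2):
    return np.linalg.norm(L1 @ L2 - L2 @ L1)

L1 = ring_laplacian(8, [1])        # plain 8-cycle
L2 = ring_laplacian(8, [1, 2])     # 8-cycle with second-neighbour chords
# Circulant matrices share the discrete Fourier basis, so they commute exactly:
print(commutator_norm(L1, L2))     # 0.0

# One extra edge (0, 3) breaks the shared structure, hence commutativity:
P = np.zeros((8, 8))
P[0, 0] = P[3, 3] = 1.0
P[0, 3] = P[3, 0] = -1.0           # Laplacian of the single edge (0, 3)
print(commutator_norm(L1, L2 + P)) # > 0
```

In the approximate setting the commutator is small but nonzero, and one looks for a single basis that nearly diagonalizes both operators at once.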

    mSPD-NN: A Geometrically Aware Neural Framework for Biomarker Discovery from Functional Connectomics Manifolds

    Connectomics has emerged as a powerful tool in neuroimaging and has spurred recent advancements in statistical and machine learning methods for connectivity data. Despite connectomes inhabiting a matrix manifold, most analytical frameworks ignore the underlying data geometry. This is largely because simple operations, such as mean estimation, do not have easily computable closed-form solutions. We propose a geometrically aware neural framework for connectomes, the mSPD-NN, designed to estimate the geodesic mean of a collection of symmetric positive definite (SPD) matrices. The mSPD-NN consists of bilinear fully connected layers with tied weights and uses a novel loss function to optimize the matrix-normal equation arising from Fréchet mean estimation. Via experiments on synthetic data, we demonstrate the efficacy of the mSPD-NN against common alternatives for SPD mean estimation, providing competitive performance in terms of scalability and robustness to noise. We illustrate the real-world flexibility of the mSPD-NN in multiple experiments on rs-fMRI data and demonstrate that it uncovers stable biomarkers associated with subtle network differences among patients with ADHD-ASD comorbidities and healthy controls.
    Comment: Accepted into IPMI 202
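For context, the geodesic (Fréchet) mean that the mSPD-NN estimates is classically computed by an iterative "Karcher flow". The baseline sketch below is that classical iteration, not the mSPD-NN, and the initialization, step, and tolerances are assumptions made for illustration:

```python
import numpy as np

def _sym_fun(S, f):
    """Apply a scalar function f to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def karcher_mean(mats, iters=50, tol=1e-10):
    """Karcher flow: classical fixed-point iteration for the geodesic mean."""
    X = np.mean(mats, axis=0)                     # arithmetic mean to start
    for _ in range(iters):
        Xh = _sym_fun(X, np.sqrt)
        Xih = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
        # Riemannian gradient: average of the log-maps of the data at X
        T = np.mean([_sym_fun(Xih @ C @ Xih, np.log) for C in mats], axis=0)
        if np.linalg.norm(T) < tol:
            break
        X = Xh @ _sym_fun(T, np.exp) @ Xh         # exponential-map step
    return X

# Sanity check on commuting (diagonal) matrices: the geodesic mean is the
# entrywise geometric mean, here diag(2, 8).
print(np.round(karcher_mean([np.diag([1.0, 4.0]), np.diag([4.0, 16.0])]), 6))
```

The lack of a closed form for this mean in the non-commuting case is precisely the motivation the abstract gives for a learned, geometry-aware estimator.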

    A Fixed-Point Algorithm for Estimating Power Means of Positive Definite Matrices

    The estimation of means of data points lying on the Riemannian manifold of symmetric positive-definite (SPD) matrices is of great utility in classification problems and is currently heavily studied. The power means of SPD matrices with exponent p in the interval [-1, 1] interpolate between the harmonic (p = -1) and the arithmetic (p = 1) mean, while the geometric (Karcher) mean corresponds to their limit at p = 0. In this article we present a simple fixed-point algorithm for estimating means along this whole continuum. The convergence rate of the proposed algorithm for p = ±0.5 deteriorates very little with the number and dimension of the input points. Along the whole continuum it is also robust with respect to the dispersion of the points on the manifold. The proposed algorithm thus allows efficient estimation of the whole family of power means, including the geometric mean.
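The power mean in question is defined (following Lim and Pálfia) as the unique SPD fixed point of X = mean_i (X #_p C_i), where #_p is the p-weighted geodesic. The sketch below iterates this defining equation directly; it is a naive baseline, not the faster fixed-point algorithm proposed in the article, and the iteration cap and tolerance are arbitrary:

```python
import numpy as np

def _sym_pow(S, p):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def power_mean(mats, p, iters=200, tol=1e-12):
    """Iterate the defining equation X <- mean_i (X #_p C_i), for p in (0, 1]."""
    X = np.mean(mats, axis=0)
    for _ in range(iters):
        Xh, Xih = _sym_pow(X, 0.5), _sym_pow(X, -0.5)
        # Geodesic: X #_p C = X^(1/2) (X^(-1/2) C X^(-1/2))^p X^(1/2)
        X_new = np.mean([Xh @ _sym_pow(Xih @ C @ Xih, p) @ Xh for C in mats],
                        axis=0)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

mats = [np.diag([1.0, 8.0]), np.diag([4.0, 2.0])]
# p = 1 recovers the arithmetic mean; for commuting matrices the power mean
# reduces to the entrywise scalar power mean, e.g. diag(2.25, 4.5) at p = 0.5.
print(np.round(power_mean(mats, 0.5), 6))
print(power_mean(mats, 1.0))
```

Power means for p < 0 can be obtained from this same routine through the duality P_p(C_1, ..., C_K) = P_{-p}(C_1^{-1}, ..., C_K^{-1})^{-1}.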