
    Riemannian pursuit for big matrix recovery

    Low-rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions, and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices. Copyright © 2014 by the International Machine Learning Society (IMLS). All rights reserved.
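
    To make the pursuit scheme concrete, here is a minimal sketch of the rank-increasing idea, not the authors' algorithm: it replaces the nonlinear Riemannian conjugate gradient solver with plain gradient steps followed by a truncated-SVD retraction, and all names and parameters (fixed_rank_subproblem, max_rank, lr) are illustrative.

        import numpy as np

        def fixed_rank_subproblem(X, M, mask, r, steps=50, lr=1.0):
            # Stand-in for the paper's Riemannian CG solver: gradient steps on
            # 0.5 * ||mask * (X - M)||^2, retracted to the rank-r set by truncated SVD.
            for _ in range(steps):
                X = X - lr * (mask * (X - M))
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                X = (U[:, :r] * s[:r]) @ Vt[:r]
            return X

        def riemannian_pursuit_sketch(M, mask, max_rank=10, tol=1e-8):
            # Solve a sequence of fixed-rank problems, growing the rank until
            # the observed entries are fit to the desired tolerance.
            X = np.zeros_like(M)
            for r in range(1, max_rank + 1):
                X = fixed_rank_subproblem(X, M, mask, r)
                if np.linalg.norm(mask * (X - M)) <= tol * np.linalg.norm(mask * M):
                    break
            return X

    For an entrywise observation mask, the step size lr = 1.0 is safe because the gradient of the data-fit term is 1-Lipschitz.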

    A survey and comparison of contemporary algorithms for computing the matrix geometric mean

    In this paper we present a survey of various algorithms for computing matrix geometric means and derive new second-order optimization algorithms to compute the Karcher mean. These new algorithms are constructed using the standard definition of the Riemannian Hessian. The survey includes the ALM list of desired properties for a geometric mean, the analytical expression for the mean of two matrices, algorithms based on the centroid computation in Euclidean (flat) space, and Riemannian optimization techniques to compute the Karcher mean (preceded by a short introduction to differential geometry). A change of metric is considered in the optimization techniques to reduce the complexity of the structures used in these algorithms. Numerical experiments are presented to compare the existing and the newly developed algorithms. We conclude that first-order algorithms are currently best suited for this optimization problem as the size and/or number of the matrices increases. Copyright © 2012, Kent State University.
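
    As a reference point for the first-order methods the survey favours, the sketch below implements the classical fixed-point iteration for the Karcher mean of symmetric positive definite matrices under the affine-invariant metric, X ← X^{1/2} exp((1/N) Σ_i log(X^{-1/2} A_i X^{-1/2})) X^{1/2}. It is illustrative code, not one of the paper's algorithms, and the helper names are hypothetical.

        import numpy as np

        def _sym_fun(X, f):
            # Apply a scalar function to a symmetric matrix via its eigendecomposition.
            w, V = np.linalg.eigh(X)
            return (V * f(w)) @ V.T

        def karcher_mean_sketch(mats, iters=100, tol=1e-12):
            X = sum(mats) / len(mats)                          # arithmetic mean as initial guess
            for _ in range(iters):
                Xh = _sym_fun(X, np.sqrt)                      # X^{1/2}
                Xih = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))  # X^{-1/2}
                # Riemannian gradient direction: average of logs of whitened matrices.
                S = sum(_sym_fun(Xih @ A @ Xih, np.log) for A in mats) / len(mats)
                X = Xh @ _sym_fun(S, np.exp) @ Xh              # unit-step geodesic update
                if np.linalg.norm(S) < tol:                    # gradient norm as stopping test
                    break
            return X

    The stopping test uses the fact that the Karcher mean is the point where this Riemannian gradient vanishes, i.e. the minimizer of the sum of squared geodesic distances to the inputs.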

    Tensor completion in hierarchical tensor representations

    Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of low-rank matrices from incomplete information. Here we consider a further extension to the reconstruction of tensors of low multilinear rank in recently introduced hierarchical tensor formats from a small number of measurements. Hierarchical tensors are a flexible generalization of the well-known Tucker representation and have the advantage that the number of degrees of freedom of a low-rank tensor does not scale exponentially with the order of the tensor. While corresponding tensor decompositions can be computed efficiently via successive applications of (matrix) singular value decompositions, some important properties of the singular value decomposition do not extend from the matrix to the tensor case. This results in major computational and theoretical difficulties in designing and analyzing algorithms for low-rank tensor recovery. For instance, a canonical analogue of the tensor nuclear norm is NP-hard to compute in general, in stark contrast to the matrix case. In this book chapter we consider versions of iterative hard thresholding schemes adapted to hierarchical tensor formats. One variant builds on methods from Riemannian optimization and uses a retraction mapping from the tangent space of the manifold of low-rank tensors back to this manifold. We provide first partial convergence results based on a tensor version of the restricted isometry property (TRIP) of the measurement map. Moreover, an estimate of the number of measurements is provided that ensures the TRIP of a given tensor rank with high probability for Gaussian measurement maps. (Revised version, to be published in Compressed Sensing and Its Applications, edited by H. Boche, R. Calderbank, G. Kutyniok, and J. Vybiral.)
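
    The chapter's schemes work in hierarchical tensor formats; as a simplified illustration of iterative hard thresholding for tensors, the sketch below instead uses a Tucker-style truncation built from successive matrix SVDs as the thresholding operator. The measurement model (a dense matrix acting on the vectorized tensor), the step size, and all names are assumptions for illustration, not the chapter's construction.

        import numpy as np

        def mode_product(T, M, mode):
            # Multiply tensor T by matrix M along the given mode.
            return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

        def truncate_multilinear(T, ranks):
            # Tucker-style truncation: project each mode onto the span of the
            # leading singular vectors of the corresponding unfolding.
            X = T
            for mode, r in enumerate(ranks):
                unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
                U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
                X = mode_product(X, U[:, :r] @ U[:, :r].T, mode)
            return X

        def tensor_iht_sketch(A, y, shape, ranks, steps=200):
            # Iterative hard thresholding: a gradient step on 0.5*||y - A vec(X)||^2,
            # then truncation back to low multilinear rank.
            lr = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
            X = np.zeros(shape)
            for _ in range(steps):
                G = A.T @ (y - A @ X.ravel())
                X = truncate_multilinear(X + lr * G.reshape(shape), ranks)
            return X

    The Riemannian variant mentioned in the abstract would instead project the gradient onto the tangent space of the low-rank manifold and apply a retraction, rather than truncating the full gradient step.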

    Orthorexic tendencies are linked with difficulties with emotion identification and regulation.

    Background: Orthorexia nervosa (ON) is characterised by an unhealthy obsession with healthy eating, and while it is not recognised as an eating disorder (or any disorder), current research is exploring its similarities to and differences from such disorders. The literature has shown that individuals with eating disorders have difficulties identifying and describing emotions (known as alexithymia) as well as regulating them. However, no research to date has examined whether people with orthorexic tendencies also have difficulties with emotions. In this paper, we refer to people with orthorexic tendencies but do not assume that their healthy eating is at a pathological level needing clinical attention. Methods: The current study examined this by asking 196 healthy adults with an interest in healthy eating to complete four questionnaires measuring ON (ORTO-15, reduced to the ORTO-7CS), eating psychopathology (EAT-26), alexithymia (TAS-20) and emotion dysregulation (DERS-16). Results: We found that difficulties identifying and regulating emotions were associated with symptoms of ON, similar to what is found in other eating disorders. We suggest that ON behaviours may be used as a coping strategy to feel in control among participants with poor emotion-regulation abilities. Conclusions: Our results show that individuals with ON tendencies may share difficulties with emotions similar to those seen in other eating disorders. While important, our results are limited by the way we measured ON behaviours, and we recommend that further research replicate our findings once a better and more specific tool is developed and validated to screen for ON characteristics more accurately.

    Geometric methods on low-rank matrix and tensor manifolds

    In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first are optimization problems, such as matrix and tensor completion, linear systems, and eigenvalue problems. These can be solved by numerical optimization on manifolds, using so-called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type of problem is ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
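
    To illustrate the second problem type, here is a minimal sketch of one step of the first-order projector-splitting (KSL) integrator of Lubich and Oseledets for dynamical low-rank approximation, in the simplified setting where the right-hand side of dY/dt = dA is a known matrix treated as constant over the step; the function name and interface are hypothetical.

        import numpy as np

        def projector_splitting_step(U, S, Vt, dA, h):
            # One KSL step keeping Y = U @ S @ Vt at fixed rank r.
            K = U @ S + h * dA @ Vt.T                 # K-step: advance K = U S
            U1, S_hat = np.linalg.qr(K)
            S_tilde = S_hat - h * U1.T @ dA @ Vt.T    # S-step: integrated backwards in time
            L = Vt.T @ S_tilde.T + h * dA.T @ U1      # L-step: advance L = V S^T
            V1, S1t = np.linalg.qr(L)
            return U1, S1t.T, V1.T                    # factors of Y(t + h)

    The backward S-step with the minus sign is the characteristic feature of this splitting and is what makes the integrator robust to small singular values of the approximation.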