
    Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

    The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization.
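
    As a concrete illustration of the convex relaxation described above, here is a minimal sketch (not the authors' code) that recovers a low-rank matrix from random Gaussian linear measurements by minimizing the nuclear norm with the generic modeling tool cvxpy; all sizes and the measurement ensemble are illustrative.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n1, n2, r, m = 20, 20, 2, 240        # matrix size, rank, number of measurements

    # Ground-truth rank-r matrix and a Gaussian measurement ensemble, one of the
    # random ensembles for which the restricted isometry property holds with
    # overwhelming probability.
    X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
    A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
    b = A @ X_true.ravel(order="F")      # column-major, to match cvxpy's vec

    X = cp.Variable((n1, n2))
    prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), [A @ cp.vec(X) == b])
    prob.solve()
    print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))

    Once m comfortably exceeds the r(n1 + n2 - r) degrees of freedom of a rank-r matrix, this program typically recovers X_true to high accuracy, mirroring the role of ℓ1 minimization in compressed sensing.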

    Random matrix theory and symmetric spaces

    In this review we discuss the relationship between random matrix theories and symmetric spaces. We show that the integration manifolds of random matrix theories, the eigenvalue distribution, and the Dyson and boundary indices characterizing the ensembles are in strict correspondence with symmetric spaces and the intrinsic characteristics of their restricted root lattices. Several important results can be obtained from this identification. In particular, the Cartan classification of triplets of symmetric spaces with positive, zero and negative curvature gives rise to a new classification of random matrix ensembles. The review is organized into two main parts. In Part I the theory of symmetric spaces is reviewed with particular emphasis on the ideas relevant for appreciating the correspondence with random matrix theories. In Part II we discuss various applications of symmetric spaces to random matrix theories and in particular the new classification of disordered systems derived from the classification of symmetric spaces. We also review how the mapping from integrable Calogero-Sutherland models to symmetric spaces can be used in the theory of random matrices, with particular consequences for quantum transport problems. We conclude by indicating some interesting new directions of research based on these identifications.
    Comment: 161 pages, LaTeX, no figures. Revised version with major additions in the second part of the review. Version accepted for publication in Physics Reports.
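
    For readers who want a concrete handle on the Dyson index mentioned above, here is a small, self-contained numerical illustration (not taken from the review): sampling the Gaussian ensembles with beta = 1 (GOE, real symmetric) and beta = 2 (GUE, complex Hermitian), whose spectra both approach the Wigner semicircle while their level repulsion scales as |λ_i - λ_j|^β.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # GOE (Dyson index beta = 1): symmetrized real Gaussian matrix.
    G = rng.standard_normal((n, n))
    H_goe = (G + G.T) / np.sqrt(2)

    # GUE (Dyson index beta = 2): Hermitized complex Gaussian matrix.
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H_gue = (G + G.conj().T) / 2

    for name, H in [("GOE", H_goe), ("GUE", H_gue)]:
        # After 1/sqrt(n) scaling the spectrum fills the semicircle on [-2, 2].
        ev = np.linalg.eigvalsh(H) / np.sqrt(n)
        print(name, "spectral range:", round(ev.min(), 2), round(ev.max(), 2))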

    Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies

    Given the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, the goal of this paper is to establish deterministic conditions under which exact recovery of the low-rank and sparse components becomes possible. This fundamental identifiability issue arises with traffic anomaly detection in backbone networks, and subsumes compressed sensing as well as the timely low-rank plus sparse matrix recovery tasks encountered in matrix decomposition problems. Leveraging the ability of the ℓ1 and nuclear norms to recover sparse and low-rank matrices, a convex program is formulated to estimate the unknowns. Analysis and simulations confirm that this convex program can recover the unknowns for sufficiently low-rank and sparse enough components, along with a compression matrix possessing an isometry property when restricted to operate on sparse vectors. When the low-rank, sparse, and compression matrices are drawn from certain random ensembles, it is established that exact recovery is possible with high probability. First-order algorithms are developed to solve the nonsmooth convex optimization problem with provable iteration complexity guarantees. Insightful tests with synthetic and real network data corroborate the effectiveness of the novel approach in unveiling traffic anomalies across flows and time, and its ability to outperform existing alternatives.
    Comment: 38 pages, submitted to the IEEE Transactions on Information Theory.
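
    The abstract does not spell out the convex program, but based on its description a natural formulation minimizes a weighted sum of the nuclear norm of the low-rank component and the entry-wise ℓ1 norm of the sparse component, subject to the observation model. The sketch below sets this up with cvxpy for a toy traffic scenario; the sizes, the routing-style compression matrix R, and the weight lam are assumptions for illustration.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    L, T, F, r = 20, 30, 60, 2           # links, time slots, flows, rank

    R = rng.standard_normal((L, F)) / np.sqrt(L)     # known fat compression matrix
    X_true = rng.standard_normal((L, r)) @ rng.standard_normal((r, T))
    A_true = np.zeros((F, T))                        # sparse anomaly matrix
    A_true.flat[rng.choice(F * T, size=10, replace=False)] = 5.0
    Y = X_true + R @ A_true                          # observed superposition

    X = cp.Variable((L, T))
    A = cp.Variable((F, T))
    lam = 0.2                                        # illustrative weight
    cp.Problem(cp.Minimize(cp.norm(X, "nuc") + lam * cp.sum(cp.abs(A))),
               [Y == X + R @ A]).solve()
    print("low-rank error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))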

    Multilinear Subspace Clustering

    In this paper we present a new model and an algorithm for unsupervised clustering of 2-D data such as images. We assume that the data comes from a union of multilinear subspaces (UOMS) model, which is a specific structured case of the much studied union of subspaces (UOS) model. For segmentation under this model, we develop the Multilinear Subspace Clustering (MSC) algorithm and evaluate its performance on the YaleB and Olivetti image data sets. We show that MSC is highly competitive with existing algorithms employing the UOS model in terms of clustering performance while enjoying improved computational complexity.
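
    The abstract does not detail the MSC algorithm itself, so as background here is a generic union-of-subspaces baseline of the kind MSC is compared against: a sparse self-representation affinity (in the spirit of sparse subspace clustering) followed by spectral clustering. Everything here, including the lasso penalty alpha, is an illustrative assumption rather than the authors' method.

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.linear_model import Lasso

    def uos_cluster(X, n_clusters, alpha=0.01):
        """Cluster the columns of X (features x samples) lying near a union of subspaces."""
        n = X.shape[1]
        C = np.zeros((n, n))
        for i in range(n):
            # Express column i as a sparse combination of the other columns;
            # nonzero coefficients tend to select points from the same subspace.
            idx = [j for j in range(n) if j != i]
            lasso = Lasso(alpha=alpha, max_iter=5000).fit(X[:, idx], X[:, i])
            C[i, idx] = lasso.coef_
        W = np.abs(C) + np.abs(C).T                  # symmetric affinity matrix
        return SpectralClustering(n_clusters=n_clusters,
                                  affinity="precomputed").fit_predict(W)

    For 2-D data such as images, exploiting the multilinear (UOMS) structure is what lets MSC avoid the cost of vectorizing each image and running a UOS method like the one above.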

    Stable Principal Component Pursuit

    In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.
    Comment: 5-page paper submitted to ISIT 2010.
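
    A minimal sketch of the relaxed PCP program described above, written with cvxpy: minimize ||L||_* + lam ||S||_1 subject to ||M - L - S||_F <= delta. The weight lam = 1/sqrt(n) follows the PCP literature, while the noise budget delta and all problem sizes are illustrative assumptions.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, r, sigma = 40, 2, 0.01
    L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    S_true = np.zeros((n, n))
    S_true.flat[rng.choice(n * n, size=40, replace=False)] = 10.0   # gross sparse errors
    M = L_true + S_true + sigma * rng.standard_normal((n, n))       # plus small noise

    L_var = cp.Variable((n, n))
    S_var = cp.Variable((n, n))
    lam = 1 / np.sqrt(n)
    delta = sigma * n             # rough budget, since ||noise||_F is about sigma * n
    cp.Problem(cp.Minimize(cp.norm(L_var, "nuc") + lam * cp.sum(cp.abs(S_var))),
               [cp.norm(M - L_var - S_var, "fro") <= delta]).solve()
    print("low-rank error:",
          np.linalg.norm(L_var.value - L_true) / np.linalg.norm(L_true))

    Consistent with the stability result, the recovery error of L_var degrades gracefully with the noise level sigma rather than breaking down.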