
    Stable low-rank matrix recovery via null space properties

    The problem of recovering a matrix of low rank from an incomplete and possibly noisy set of linear measurements arises in a number of areas. In order to derive rigorous recovery results, the measurement map is usually modeled probabilistically. We derive sufficient conditions on the minimal amount of measurements ensuring recovery via convex optimization. We establish our results via certain properties of the null space of the measurement map. In the setting where the measurements are realized as Frobenius inner products with independent standard Gaussian random matrices, we show that $10 r (n_1 + n_2)$ measurements are enough to uniformly and stably recover an $n_1 \times n_2$ matrix of rank at most $r$. We then significantly generalize this result by only requiring independent mean-zero, variance-one entries with four finite moments, at the cost of replacing $10$ by some universal constant. We also study the case of recovering Hermitian rank-$r$ matrices from measurement matrices proportional to rank-one projectors. For $m \geq C r n$ rank-one projective measurements onto independent standard Gaussian vectors, we show that nuclear norm minimization uniformly and stably reconstructs Hermitian rank-$r$ matrices with high probability. Next, we partially de-randomize this by establishing an analogous statement for projectors onto independent elements of a complex projective 4-design, at the cost of a slightly higher sampling rate $m \geq C r n \log n$. Moreover, if the Hermitian matrix to be recovered is known to be positive semidefinite, then we show that the nuclear norm minimization approach may be replaced by minimizing the $\ell_2$-norm of the residual subject to the positive semidefinite constraint. Then no estimate of the noise level is required a priori. We discuss applications in quantum physics and the phase retrieval problem.
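
    The Gaussian recovery procedure described above can be sketched in a few lines. The following is a minimal illustration under assumptions of my own (NumPy and CVXPY, small dimensions chosen so that $m = 10 r (n_1 + n_2)$ is well below $n_1 n_2$), not the authors' code:

```python
import numpy as np
import cvxpy as cp

# Sizes chosen so that m = 10*r*(n1+n2) is well below n1*n2.
n1, n2, r = 30, 30, 1
m = 10 * r * (n1 + n2)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-r target

# Each measurement is a Frobenius inner product with an i.i.d. standard
# Gaussian matrix; stacking the flattened matrices row-wise gives one operator A.
A = rng.standard_normal((m, n1 * n2))
b = A @ X0.ravel(order="F")  # column-major, to match cp.vec below

# Nuclear norm minimization subject to the (noiseless) measurements.
X = cp.Variable((n1, n2))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [A @ cp.vec(X, order="F") == b])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```

    In the noisy case one would replace the equality constraint with a residual bound such as `cp.norm2(A @ cp.vec(X, order="F") - b) <= eta`, matching the stable-recovery statements of the abstract.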

    Sparse recovery on Euclidean Jordan algebras

    This paper is concerned with the problem of sparse recovery on Euclidean Jordan algebras (SREJA), which includes the sparse signal recovery problem and the low-rank symmetric matrix recovery problem as special cases. We introduce the notions of the restricted isometry property (RIP), the null space property (NSP), and s-goodness for linear transformations in s-SREJA, all of which provide sufficient conditions for s-sparse recovery via nuclear norm minimization on Euclidean Jordan algebras. Moreover, we show that both s-goodness and the NSP are necessary and sufficient conditions for exact s-sparse recovery via nuclear norm minimization on Euclidean Jordan algebras. Applying these characterizations, we establish exact and stable recovery results for solving SREJA problems via nuclear norm minimization.
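
    As a concrete instance of the vector special case, consider the Jordan algebra $\mathbb{R}^n$ with the componentwise product, where the nuclear norm reduces to the $\ell_1$-norm and s-sparse recovery becomes basis pursuit. A minimal sketch (my own illustration, assuming NumPy and CVXPY; not from the paper):

```python
import numpy as np
import cvxpy as cp

# Vector special case of SREJA: on R^n with the componentwise product, the
# nuclear norm is the l1-norm, so s-sparse recovery is basis pursuit.
n, s, m = 200, 5, 60
rng = np.random.default_rng(1)

x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)  # s-sparse target

A = rng.standard_normal((m, n))  # Gaussian measurement map
b = A @ x0

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```

    The low-rank symmetric matrix case is the analogous program on the Jordan algebra of symmetric matrices, with the nuclear norm of the matrix in place of `cp.norm1`.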

    Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

    This paper studies the long-existing idea of adding a nice smooth function to "smooth" a non-differentiable objective function in the context of sparse optimization, in particular, the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$ under conditions on the sensing operator such as its null space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ almost whenever $\alpha \ge 10\|x^0\|_\infty$. The same relation also holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$, if $\alpha \ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties on $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
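
    For reference, a minimal NumPy sketch of the textbook linearized Bregman iteration for this augmented model (my own illustration, not the paper's code; the step-size bound is the standard dual-gradient-ascent condition, and the choice of $\alpha$ follows the abstract's rule $\alpha \ge 10\|x^0\|_\infty$):

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: sign(v) * max(|v| - t, 0), componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, iters=20000):
    """Linearized Bregman iteration for
        min ||x||_1 + 1/(2*alpha) * ||x||_2^2   s.t.   A x = b.
    This is dual gradient ascent: x(v) = alpha * shrink(v, 1) with v = A^T y,
    so any step tau < 2 / (alpha * ||A||_2^2) guarantees convergence."""
    tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        x = alpha * shrink(v, 1.0)
        v += tau * (A.T @ (b - A @ x))
    return alpha * shrink(v, 1.0)

# Demo: recover a sparse vector, taking alpha = 10 * ||x0||_inf per the abstract.
rng = np.random.default_rng(2)
n, m, s = 200, 80, 5
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x0

x = linearized_bregman(A, b, alpha=10 * np.abs(x0).max())
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```

    With $\alpha$ at or above the abstract's threshold, the minimizer of the augmented model (nearly) coincides with the $\ell_1$ minimizer, which is what the demo checks.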