
    In-network Sparsity-regularized Rank Minimization: Algorithms and Applications

    Given a limited number of entries from the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, recovery of the low-rank and sparse components is a fundamental task subsuming compressed sensing, matrix completion, and principal components pursuit. This paper develops algorithms for distributed sparsity-regularized rank minimization over networks, when the nuclear norm and $\ell_1$-norm are used as surrogates to the rank and number of nonzero entries of the sought matrices, respectively. While nuclear-norm minimization has well-documented merits when centralized processing is viable, the non-separability of the singular-value sum challenges its distributed minimization. To overcome this limitation, an alternative characterization of the nuclear norm is adopted which leads to a separable, yet non-convex, cost minimized via the alternating-direction method of multipliers. The novel distributed iterations entail reduced-complexity per-node tasks and affordable message passing among single-hop neighbors. Interestingly, upon convergence the distributed (non-convex) estimator provably attains the global optimum of its centralized counterpart, regardless of initialization. Several application domains are outlined to highlight the generality and impact of the proposed framework. These include unveiling traffic anomalies in backbone networks, predicting network-wide path latencies, and mapping the RF ambiance using wireless cognitive radios. Simulations with synthetic and real network data corroborate the convergence of the novel distributed algorithm and its centralized performance guarantees.

    Comment: 30 pages, submitted for publication in the IEEE Trans. Signal Processing
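
    As a concrete handle on the "alternative characterization of the nuclear norm" the abstract alludes to, the standard separable identity is $\|X\|_* = \min \{ \tfrac{1}{2}(\|P\|_F^2 + \|Q\|_F^2) : PQ^T = X \}$, attained by $P = U\sqrt{S}$, $Q = V\sqrt{S}$ from the SVD $X = USV^T$. The Python sketch below verifies this numerically; the matrix sizes and variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 20))  # illustrative size

# SVD-based nuclear norm and the minimizing factors P = U*sqrt(S), Q = V*sqrt(S).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
nuclear_norm = s.sum()

P = U * np.sqrt(s)     # scale each left singular vector by sqrt(sigma_i)
Q = Vt.T * np.sqrt(s)  # likewise for the right singular vectors

assert np.allclose(P @ Q.T, X)  # a valid factorization of X
surrogate = 0.5 * (np.linalg.norm(P, "fro") ** 2 + np.linalg.norm(Q, "fro") ** 2)
print(nuclear_norm, surrogate)  # the two values agree
```

    Because each node can hold its own rows of P and share only small factors with one-hop neighbors, this factorized form is what makes the distributed ADMM iterations separable.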

    Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization

    In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an SPQN regularized vector minimization problem are equivalent to those of an SPQN regularized matrix minimization reformulation. We also show that any local minimizer of the SPQN regularized matrix minimization problems must be a first-order stationary point. Moreover, we derive lower bounds for the nonzero singular values of the first-order stationary points, and hence also of the local minimizers, of the SPQN regularized matrix minimization problems. Iterative reweighted singular value minimization (IRSVM) methods are then proposed to solve these problems, whose subproblems are shown to have a closed-form solution. In contrast to the analogous methods for SPQN regularized vector minimization problems, the convergence analysis of these methods is significantly more challenging. We develop a novel approach to establishing the convergence of these methods, which makes use of the expression of a specific solution of their subproblems and avoids the intricate issue of finding the explicit expression of the Clarke subdifferential of the objective of their subproblems. In particular, we show that any accumulation point of the sequence generated by the IRSVM methods is a first-order stationary point of the problems. Our computational results demonstrate that the IRSVM methods generally outperform some recently developed state-of-the-art methods in terms of solution quality and/or speed.

    Comment: This paper has been withdrawn by the author due to a major revision and corrections
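
    To make the closed-form subproblems concrete, here is a minimal sketch of the reweighting idea for the simplest instance, $\min_X \tfrac{1}{2}\|X - Y\|_F^2 + \lambda \sum_i \sigma_i(X)^p$ with $0 < p < 1$; the function name, smoothing parameter eps, and this quadratic data-fit term are illustrative assumptions, not the paper's general IRSVM setup. Each iteration solves a weighted singular-value shrinkage problem, which has a closed form because the weights come out in non-decreasing order along the descending singular values.

```python
import numpy as np

def irsvm_denoise(Y, lam=0.1, p=0.5, eps=1e-3, iters=50):
    """Iteratively reweighted singular-value shrinkage (sketch) for
        min_X 0.5*||X - Y||_F^2 + lam * sum_i sigma_i(X)**p,  0 < p < 1.
    """
    # The quadratic data-fit term is unitarily invariant around Y, so the
    # iterates share Y's singular vectors; only singular values update.
    Uy, sy, Vty = np.linalg.svd(Y, full_matrices=False)
    s = sy.copy()  # current estimate of the singular values of X
    for _ in range(iters):
        # Reweighting: smaller singular values receive larger weights and
        # are shrunk harder -- this is what promotes low rank when p < 1.
        w = p * (s + eps) ** (p - 1.0)
        # Weighted shrinkage step (closed-form subproblem solution).
        s = np.maximum(sy - lam * w, 0.0)
    return (Uy * s) @ Vty

# Illustrative use: recover a low-rank matrix from a noisy observation.
rng = np.random.default_rng(0)
L = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))
Y = L + 0.1 * rng.standard_normal((40, 30))
X_hat = irsvm_denoise(Y)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))
```

    For a general smooth loss the subproblem is typically solved around a proximal linearization instead; this denoising case keeps the whole sketch closed-form.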

    KYP Lemma for Non-Strict Inequalities and the Associated Minimax Theorem

    Several variations of the classical Kalman-Yakubovich-Popov Lemma, as well as the associated minimax theorem, are presented.

    Comment: 24 pages