
    In-network Sparsity-regularized Rank Minimization: Algorithms and Applications

    Given a limited number of entries from the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, recovery of the low-rank and sparse components is a fundamental task subsuming compressed sensing, matrix completion, and principal components pursuit. This paper develops algorithms for distributed sparsity-regularized rank minimization over networks, when the nuclear norm and the ℓ1-norm are used as surrogates for the rank and the number of nonzero entries of the sought matrices, respectively. While nuclear-norm minimization has well-documented merits when centralized processing is viable, the non-separability of the singular-value sum challenges its distributed minimization. To overcome this limitation, an alternative characterization of the nuclear norm is adopted which leads to a separable, yet non-convex, cost minimized via the alternating direction method of multipliers. The novel distributed iterations entail reduced-complexity per-node tasks and affordable message passing among single-hop neighbors. Interestingly, upon convergence the distributed (non-convex) estimator provably attains the global optimum of its centralized counterpart, regardless of initialization. Several application domains are outlined to highlight the generality and impact of the proposed framework. These include unveiling traffic anomalies in backbone networks, predicting network-wide path latencies, and mapping the RF ambiance using wireless cognitive radios. Simulations with synthetic and real network data corroborate the convergence of the novel distributed algorithm and its centralized performance guarantees. Comment: 30 pages, submitted for publication to the IEEE Trans. Signal Process.
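
    A minimal, centralized sketch of the separable nuclear-norm surrogate the paper builds on (not the paper's distributed in-network algorithm): ||X||_* equals the minimum of (1/2)(||P||_F^2 + ||Q||_F^2) over factorizations X = P Q^T, so the low-rank part can be handled through the factors P and Q while the sparse term A gets an ℓ1 soft-threshold. All names (Y, mask, R, lam_star, lam_1) and the plain alternating proximal-gradient solver below are illustrative assumptions.

    import numpy as np

    def recover_low_rank_plus_sparse(Y, mask, R, rank=5, lam_star=1.0, lam_1=0.1,
                                     step=1e-3, iters=2000, seed=0):
        """Sketch of min_{P,Q,A} 0.5*||mask*(Y - P Q^T - R A)||_F^2
           + (lam_star/2)*(||P||_F^2 + ||Q||_F^2) + lam_1*||A||_1."""
        rng = np.random.default_rng(seed)
        L, T = Y.shape
        P = rng.standard_normal((L, rank))
        Q = rng.standard_normal((T, rank))
        A = np.zeros((R.shape[1], T))
        for _ in range(iters):
            resid = mask * (P @ Q.T + R @ A - Y)          # residual on observed entries only
            gP = resid @ Q + lam_star * P                 # gradient w.r.t. P
            gQ = resid.T @ P + lam_star * Q               # gradient w.r.t. Q
            gA = R.T @ resid                              # gradient of the smooth part w.r.t. A
            P, Q = P - step * gP, Q - step * gQ
            Z = A - step * gA
            A = np.sign(Z) * np.maximum(np.abs(Z) - step * lam_1, 0.0)  # l1 prox (soft-threshold)
        return P @ Q.T, A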

    Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation

    Many modern computer vision and machine learning applications rely on solving difficult optimization problems that involve non-differentiable objective functions and constraints. The alternating direction method of multipliers (ADMM) is a widely used approach for solving such problems. Relaxed ADMM is a generalization of ADMM that often achieves better performance, but its efficiency depends strongly on algorithm parameters that must be chosen by an expert user. We propose an adaptive method that automatically tunes the key algorithm parameters to achieve optimal performance without user oversight. Inspired by recent work on adaptivity, the proposed adaptive relaxed ADMM (ARADMM) is derived by assuming a Barzilai-Borwein style linear gradient. A detailed convergence analysis of ARADMM is provided, and numerical results on several applications demonstrate fast practical convergence. Comment: CVPR 2017
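
    A minimal sketch of relaxed ADMM on the lasso, showing where the relaxation parameter alpha and the penalty rho enter the iteration; these are the parameters ARADMM tunes automatically. The paper's Barzilai-Borwein style adaptive rule is not reproduced here; a simple residual-balancing update of rho stands in for it, and all names are illustrative.

    import numpy as np

    def relaxed_admm_lasso(A, b, lam=0.1, rho=1.0, alpha=1.5, iters=200):
        """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via relaxed ADMM (splitting x - z = 0)."""
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))          # x-update
            x_hat = alpha * x + (1 - alpha) * z                                      # over-relaxation
            z_old = z
            z = np.sign(x_hat + u) * np.maximum(np.abs(x_hat + u) - lam / rho, 0.0)  # z-update (soft-threshold)
            u = u + x_hat - z                                                        # scaled dual update
            # stand-in adaptivity: balance primal/dual residuals (NOT the ARADMM rule)
            r, s = np.linalg.norm(x - z), rho * np.linalg.norm(z - z_old)
            if r > 10 * s:
                rho *= 2.0; u = u / 2.0
            elif s > 10 * r:
                rho /= 2.0; u = u * 2.0
        return z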

    Signal Decomposition Using Masked Proximal Operators

    We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of the losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., they handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature. Comment: The manuscript has 61 pages, 22 figures, and 2 tables. Also hosted at https://web.stanford.edu/~boyd/papers/sig_decomp_mprox.html. For code, see https://github.com/cvxgrp/signal-decomposition
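
    A hedged sketch of what a masked proximal operator can look like for two simple component losses, assuming the generic form prox(v) = argmin_x phi(x) + (rho/2)*||mask * (x - v)||_2^2; the exact definition and scaling used in the paper may differ, and the loss choices are illustrative.

    import numpy as np

    def masked_prox_sum_squares(v, mask, rho=1.0):
        """Masked prox of phi(x) = ||x||_2^2 (a 'small' component loss).
        Observed entries (mask True): x_i = rho*v_i / (2 + rho); unobserved: x_i = 0."""
        x = np.zeros_like(v)
        x[mask] = rho * v[mask] / (2.0 + rho)
        return x

    def masked_prox_nonneg(v, mask, rho=1.0):
        """Masked prox of the indicator of the nonnegative orthant.
        Observed entries are projected onto [0, inf); unobserved entries are
        unconstrained by the argument, so we simply take 0 (feasible)."""
        x = np.zeros_like(v)
        x[mask] = np.maximum(v[mask], 0.0)
        return x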

    Robust Recognition using L1-Principal Component Analysis

    The wide availability of visual data via social media and the internet, coupled with the demands of the security community, has led to increased interest in visual recognition. Recent research has focused on improving the accuracy of recognition techniques in environments where variability is well controlled. However, applications such as identity verification often operate in unconstrained environments. There is therefore a need for more robust recognition techniques that can operate on data with considerable noise. Many statistical recognition techniques rely on principal component analysis (PCA). However, PCA suffers in the presence of outliers due to the occlusions and noise often encountered in unconstrained settings. In this thesis we address this problem by using L1-PCA to minimize the effect of outliers in the data. L1-PCA is applied to several statistical recognition techniques, including eigenfaces and Grassmannian learning. Several popular face databases are used to show that L1-Grassmann manifolds not only outperform traditional L2-Grassmann manifolds for face and facial expression recognition, but are also more robust to noise and occlusions. Additionally, a high-performance GPU implementation of L1-PCA is developed using CUDA that is several times faster than CPU implementations.
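
    A minimal sketch of one common L1-PCA heuristic, a fixed-point iteration that maximizes ||X^T w||_1 over unit-norm directions w; the thesis and its CUDA implementation may use a different (or exact) L1-PCA algorithm, and the function below is only illustrative.

    import numpy as np

    def l1_principal_component(X, iters=100, seed=0):
        """Approximate first L1 principal component of X (features x samples)."""
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(X.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            s = np.sign(X.T @ w)           # signs of sample projections on current direction
            s[s == 0] = 1.0                # break ties away from zero
            w_new = X @ s
            w_new /= np.linalg.norm(w_new)
            if np.allclose(w_new, w):      # fixed point reached
                break
            w = w_new
        return w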