
    Successive Concave Sparsity Approximation for Compressed Sensing

    In this paper, based on a successively accuracy-increasing approximation of the $\ell_0$ norm, we propose a new algorithm for the recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that the series of approximations asymptotically coincides with the $\ell_1$ and $\ell_0$ norms as the approximation accuracy moves from the worst fitting to the best fitting. When measurements are noise-free, we propose an optimization scheme that leads to a sequence of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator over a range of sparsity levels wider than those handled by state-of-the-art algorithms. Comment: Submitted to IEEE Transactions on Signal Processing.
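
    The successive-approximation scheme lends itself to a compact sketch. The Python sketch below is illustrative only: it assumes the exponential surrogate $f_\sigma(t) = 1 - e^{-|t|/\sigma}$ as the concave family, uses a reweighted soft-threshold as a stand-in for the paper's closed-form operator, and the names sigmas, lam, and inner_iters are this sketch's own.

```python
# Minimal sketch of successive concave sparsity approximation via
# iterative thresholding. The penalty f_sigma(t) = 1 - exp(-|t|/sigma)
# approaches the l0 indicator as sigma -> 0; we anneal sigma and, at
# each level, run proximal-gradient steps whose per-entry threshold
# uses the linearization weight f'_sigma(|x_i|) = exp(-|x_i|/sigma)/sigma.
# (Stand-in operator; the paper derives an exact closed form for a
# particular member of the class.)
import numpy as np

def scsa_thresholding(A, y, sigmas, inner_iters=100, lam=0.05):
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    for sigma in sigmas:                  # successively tighter l0 surrogates
        for _ in range(inner_iters):
            g = A.T @ (A @ x - y)         # gradient of 0.5*||Ax - y||^2
            z = x - g / L                 # gradient step
            # reweighted soft-threshold: small entries get large weights,
            # so they are pushed toward zero more aggressively
            w = np.exp(-np.abs(x) / sigma) / sigma
            tau = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)
    return x

# toy usage: recover a 10-sparse vector from 80 random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = scsa_thresholding(A, y, sigmas=np.geomspace(1.0, 0.01, 8))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```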

    Comparison of several reweighted l1-algorithms for solving cardinality minimization problems

    Reweighted l1-algorithms have attracted a lot of attention in the field of applied mathematics. A unified framework for such algorithms was recently proposed by Zhao and Li. In this paper we construct a few new examples of reweighted l1-methods, based on certain concave approximations of the l0-norm function, and focus on the numerical comparison between some new and existing reweighted l1-algorithms. We show how the change of parameters in reweighted algorithms may affect their performance in finding the solution of the cardinality minimization problem. In our experiments, the problem data were generated according to different statistical distributions, and we test the algorithms on different sparsity levels of the solution. Our numerical results demonstrate that the reweighted l1-method is an efficient method for locating the solution of the cardinality minimization problem.
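
    As a concrete point of reference, the sketch below implements one classic member of this family, iteratively reweighted $\ell_1$ with weights $w_i = 1/(|x_i| + \epsilon)$ (the Candès-Wakin-Boyd choice). The LP reformulation and all parameter names are this sketch's own, not necessarily those compared in the paper.

```python
# Minimal sketch of a reweighted l1-algorithm for the cardinality
# minimization problem min ||x||_0 s.t. Ax = b. Each pass solves a
# weighted l1 program, recast as a linear program in variables (x, t).
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, b, w):
    """Solve min sum_i w_i |x_i| s.t. Ax = b as an LP in (x, t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])             # objective: sum_i w_i t_i
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])             #  x <= t  and  -x <= t
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])          # Ax = b, t unconstrained here
    bounds = [(None, None)] * n + [(0, None)] * n    # x free, t >= 0
    res = linprog(c, A_ub, b_ub, A_eq, b, bounds=bounds, method="highs")
    return res.x[:n]

def reweighted_l1(A, b, iters=5, eps=1e-3):
    n = A.shape[1]
    w = np.ones(n)                                   # first pass: plain l1
    x = weighted_l1(A, b, w)
    for _ in range(iters - 1):
        w = 1.0 / (np.abs(x) + eps)                  # concave-surrogate weights
        x = weighted_l1(A, b, w)
    return x
```

    The weight update is the gradient of the concave surrogate sum_i log(|x_i| + eps), which is exactly the kind of concave approximation of the l0-norm the paper's framework covers.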

    Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices

    Inspired by several recent developments in regularization theory, optimization, and signal processing, we present and analyze a numerical approach to multi-penalty regularization in spaces of sparsely represented functions. The sparsity prior is motivated by the largely expected geometrical/structured features of high-dimensional data, which may not be well represented in the framework of typically more isotropic Hilbert spaces. In this paper, we are particularly interested in regularizers that are able to correctly model and separate the multiple components of additively mixed signals. This situation is rather common, as pure signals may be corrupted by additive noise. To this end, we consider a regularization functional composed of a data-fidelity term, in which signal and noise are additively mixed, a non-smooth, non-convex sparsity-promoting term, and a penalty term to model the noise. We propose an iterative alternating algorithm based on simple iterative thresholding steps to minimize the functional, and analyze its convergence. By means of this algorithm, we explore the effect of choosing different regularization parameters and penalization norms on the quality of recovering the pure signal and separating it from additive noise. For a given fixed noise level, numerical experiments confirm a significant improvement in performance compared to standard one-parameter regularization methods. Using high-dimensional data analysis methods such as Principal Component Analysis, we are able to show the correct geometrical clustering of regularized solutions around the expected solution. Finally, for the compressive sensing problems considered in our experiments, we provide a guideline for the choice of regularization norms and parameters. Comment: 32 pages.
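
    The alternating structure is easy to sketch. The code below minimizes a two-penalty functional 0.5*||A(u+v) - y||^2 + alpha*||u||_1 + 0.5*beta*||v||_2^2 by alternating a thresholding step in u with a closed-form step in v. This is a convex stand-in: the paper treats non-smooth, non-convex sparsity terms, and the parameter names here are this sketch's own.

```python
# Minimal sketch of alternating iterative thresholding for a
# multi-penalty functional: u carries the sparse signal component,
# v the noise-like component.
import numpy as np

def soft(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def multi_penalty(A, y, alpha, beta, outer=50, inner=20):
    m, n = A.shape
    u = np.zeros(n)                       # sparse component
    v = np.zeros(n)                       # noise component
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant for the u-step
    ridge = np.linalg.inv(A.T @ A + beta * np.eye(n)) @ A.T
    for _ in range(outer):
        # u-step: iterative soft-thresholding against the residual y - A v
        r = y - A @ v
        for _ in range(inner):
            u = soft(u - A.T @ (A @ u - r) / L, alpha / L)
        # v-step: closed-form ridge solution given the current u
        v = ridge @ (y - A @ u)
    return u, v
```

    Sweeping alpha and beta over a grid and clustering the resulting (u, v) pairs mirrors, in miniature, the parameter-choice exploration described above.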

    Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization

    Generalized Linear Models (GLMs), where a random vector $\mathbf{x}$ is observed through a noisy, possibly nonlinear, function of a linear transform $\mathbf{z}=\mathbf{Ax}$, arise in a range of applications in nonlinear filtering and regression. Approximate Message Passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms $\mathbf{A}$. However, the algorithms can easily diverge for general $\mathbf{A}$. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system-limit approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the Alternating Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that, for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are also presented that demonstrate the robustness of the method for non-convex penalties as well.
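
    The double-loop linearization of the LSL-BFE is too involved to reproduce here, but the ADMM building block of the inner loop has a standard shape. The sketch below runs plain ADMM on a LASSO-type objective 0.5*||Ax - y||^2 + lam*||x||_1; it is meant only to show the least-squares / thresholding / dual-ascent splitting, and rho, lam, and iters are this sketch's own parameters.

```python
# Minimal sketch of the ADMM splitting used as an inner-loop building
# block: an exact least-squares step, a proximal (soft-threshold) step,
# and a dual update.
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, iters=200):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual variable
    # the x-update is a least-squares solve; factor once, reuse every iteration
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))                   # least-squares step
        t = x + u
        z = np.sign(t) * np.maximum(np.abs(t) - lam / rho, 0.0)  # prox of lam*||.||_1
        u = u + x - z                                   # dual ascent
    return z
```

    Factoring the normal equations once keeps each iteration cheap; the extra least-squares solve is the structural difference from plain GAMP that the abstract points out.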

    On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A Quantitative Analysis

    Despite the linearity of its encoding, compressed sensing may be used to provide a limited form of data protection when random encoding matrices are used to produce sets of low-dimensional measurements (ciphertexts). In this paper we quantify by theoretical means the resistance of the least complex form of this kind of encoding against known-plaintext attacks. For both standard compressed sensing with antipodal random matrices and recent multiclass encryption schemes based on it, we show how the number of candidate encoding matrices that match a typical plaintext-ciphertext pair is so large that the search for the true encoding matrix is inconclusive. Such results on the practical ineffectiveness of known-plaintext attacks underlie the fact that even closely-related signal recovery under encoding matrix uncertainty is doomed to fail. Practical attacks are then exemplified by applying compressed sensing with antipodal random matrices as a multiclass encryption scheme to signals such as images and electrocardiographic tracks, showing that the information on the true encoding matrix extracted from a plaintext-ciphertext pair leads to no significant increase in signal recovery quality. This theoretical and empirical evidence clarifies that, although not perfectly secure, both standard compressed sensing and multiclass encryption schemes feature a noteworthy level of security against known-plaintext attacks, therefore increasing their appeal as negligible-cost encryption methods for resource-limited sensing applications. Comment: IEEE Transactions on Information Forensics and Security, accepted for publication. Article in press.
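
    The counting argument can be demonstrated on a toy scale. The sketch below assumes a quantized ciphertext (so that many rows fall in the same bin) and dimensions small enough to enumerate every antipodal row; the paper derives the candidate count for realistic sizes by theoretical means, and n, q, and the seed here are arbitrary choices of this sketch.

```python
# Toy illustration of the known-plaintext counting argument: given one
# known plaintext x and a quantized ciphertext entry, count how many
# antipodal rows a in {-1,+1}^n are consistent with the pair.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, q = 12, 0.5                        # signal length, quantization step
x = rng.standard_normal(n)            # known plaintext
a_true = rng.choice([-1.0, 1.0], n)   # secret encoding row
bin_true = int(np.round(a_true @ x / q))  # quantized ciphertext bin

count = sum(int(np.round(np.dot(a, x) / q)) == bin_true
            for a in itertools.product([-1.0, 1.0], repeat=n))
print(f"{count} of {2**n} antipodal rows match the plaintext-ciphertext pair")
```

    Whenever more than one row matches, an attacker observing the pair cannot single out the true row, which is the mechanism behind the inconclusive search described above.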