
    Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with Prior Information

Commonly employed reconstruction algorithms in compressed sensing (CS) use the L2 norm as the metric for the residual error. However, it is well known that least squares (LS) based estimators are highly sensitive to outliers in the measurement vector, leading to poor performance when the noise no longer follows the Gaussian assumption but is instead better characterized by heavier-than-Gaussian-tailed distributions. In this paper, we propose a robust iterative hard thresholding (IHT) algorithm for reconstructing sparse signals in the presence of impulsive noise. To address this problem, we use a Lorentzian cost function instead of the L2 cost function employed by the traditional IHT algorithm. We also modify the algorithm to incorporate prior signal information in the recovery process. Specifically, we study the case of CS with partially known support. The proposed algorithm is a fast method with computational load comparable to the LS-based IHT, whilst having the advantage of robustness against heavy-tailed impulsive noise. Sufficient conditions for stability are studied and a reconstruction error bound is derived. We also derive sufficient conditions for stable sparse signal recovery with partially known support. Theoretical analysis shows that including prior support information relaxes the conditions for successful reconstruction. Simulation results demonstrate that the Lorentzian-based IHT algorithm significantly outperforms commonly employed sparse reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments. Numerical results also demonstrate that the inclusion of partially known support improves the performance of the proposed algorithm, thereby requiring fewer samples to yield an approximate reconstruction.
    Comment: 28 pages, 9 figures, accepted in IEEE Transactions on Signal Processing.
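    To make the residual re-weighting concrete, here is a minimal NumPy sketch of an IHT-style iteration with a Lorentzian data-fidelity gradient. This is an illustration assembled from the abstract, not the authors' reference implementation; the function name, the scale gamma, the step size mu, and the simple known-support handling are all illustrative assumptions.

```python
import numpy as np

def lorentzian_iht(A, y, s, gamma=1.0, mu=None, n_iter=200, known_support=None):
    """Sketch of IHT with a Lorentzian fidelity term (illustrative, not the
    paper's exact algorithm). A: (m, n) matrix, y: (m,) measurements,
    s: sparsity level, known_support: optional indices assumed in the support."""
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size (assumption)
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - A @ x
        # Gradient direction of sum(log(1 + (r_i/gamma)^2)) up to a constant:
        # large residuals are down-weighted, suppressing impulsive outliers.
        g = A.T @ (r / (gamma**2 + r**2))
        x = x + mu * g
        # Hard thresholding: keep the s largest entries, plus any indices
        # from the partially known support.
        keep = np.argsort(np.abs(x))[-s:]
        if known_support is not None:
            keep = np.union1d(keep, known_support)
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```

    The key difference from plain IHT is the factor 1/(gamma^2 + r^2) in the gradient, which caps the influence any single corrupted measurement can exert on the update.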

    Jump-sparse and sparse recovery using Potts functionals

We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals which is based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions, respectively. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted ℓ1 minimization (sparse signals).
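    As background for the jump-sparse case, the classical L2 Potts problem on a 1D signal can be minimized exactly by dynamic programming. The sketch below shows that standard O(n^2) scheme; it is the building block such methods rest on, not the paper's inverse-Potts ADMM, which additionally handles blur and incomplete data.

```python
import numpy as np

def potts_1d(f, gam):
    """Exact DP for the classical 1D Potts problem
       min_u  gam * (#jumps of u) + ||u - f||_2^2,
    returning the piecewise-constant (jump-sparse) minimizer."""
    n = len(f)
    # Prefix sums give the squared error of fitting f[l..r] by its mean in O(1).
    s1 = np.concatenate(([0.0], np.cumsum(f)))
    s2 = np.concatenate(([0.0], np.cumsum(f**2)))

    def seg_err(l, r):  # l, r inclusive, 0-based
        m = r - l + 1
        seg_sum = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - seg_sum**2 / m

    B = np.full(n + 1, np.inf)   # B[r] = optimal value for the first r samples
    B[0] = -gam                  # cancels the jump penalty of the first segment
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            v = B[l - 1] + gam + seg_err(l - 1, r - 1)
            if v < B[r]:
                B[r], jump[r] = v, l - 1
    # Backtrack the optimal partition; fill each segment with its mean.
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u
```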

    Robust compressive sensing of sparse signals: A review

Compressive sensing generally relies on the L2-norm for data fidelity, whereas in many applications robust estimators are needed. Among the scenarios in which robust performance is required, applications where the sampling process is performed in the presence of impulsive noise, i.e. measurements are corrupted by outliers, are of particular importance. This article overviews robust nonlinear reconstruction strategies for sparse signals based on replacing the commonly used L2-norm by M-estimators as data fidelity functions. The derived methods outperform existing compressed sensing techniques in impulsive environments, while achieving good performance in light-tailed environments, thus offering a robust framework for CS.
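    A minimal illustration of the data-fidelity swap the review describes: replace the L2 loss with the Huber M-estimator and solve by iteratively reweighted least squares. The function name, the threshold delta, and the omission of any sparsity term are illustrative simplifications, not a method from the article.

```python
import numpy as np

def huber_irls(A, y, delta=1.0, n_iter=20):
    """IRLS sketch: swap the L2 fidelity ||y - Ax||_2^2 for the Huber
    M-estimator (quadratic for small residuals, linear for outliers).
    Shows the robust-fidelity idea only; no sparsity term here."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # plain LS initializer
    for _ in range(n_iter):
        r = y - A @ x
        a = np.maximum(np.abs(r), 1e-12)
        w = np.where(a <= delta, 1.0, delta / a)   # Huber IRLS weights
        sw = np.sqrt(w)
        # Weighted LS step: min_x sum_i w_i (y_i - a_i^T x)^2.
        x = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
    return x
```

    Outlying measurements receive weight delta/|r_i| instead of 1, so their influence grows only linearly rather than quadratically with the residual.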

    Harnessing the Power of Sample Abundance: Theoretical Guarantees and Algorithms for Accelerated One-Bit Sensing

One-bit quantization with time-varying sampling thresholds (also known as random dithering) has recently found significant utilization potential in statistical signal processing applications due to its relatively low power consumption and low implementation cost. In addition to such advantages, an attractive feature of one-bit analog-to-digital converters (ADCs) is their superior sampling rates as compared to their conventional multi-bit counterparts. This characteristic endows one-bit signal processing frameworks with what one may refer to as sample abundance. We show that sample abundance plays a pivotal role in many signal recovery and optimization problems that are formulated as (possibly non-convex) quadratic programs with linear feasibility constraints. Of particular interest to our work are low-rank matrix recovery and compressed sensing applications that take advantage of one-bit quantization. We demonstrate that the sample abundance paradigm allows for the transformation of such problems to merely linear feasibility problems by forming large-scale overdetermined linear systems -- thus removing the need for handling costly optimization constraints and objectives. To make the proposed computational cost savings achievable, we offer enhanced randomized Kaczmarz algorithms to solve these highly overdetermined feasibility problems and provide theoretical guarantees in terms of their convergence, sample size requirements, and overall performance. Several numerical results are presented to illustrate the effectiveness of the proposed methodologies.
    Comment: arXiv admin note: text overlap with arXiv:2301.0346
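    For the feasibility step, here is a minimal sketch of the classical randomized Kaczmarz method for linear inequalities, assuming the one-bit comparisons c_i * (a_i^T x - tau_i) >= 0 have been rewritten as A x >= b. The paper proposes enhanced variants; this baseline only shows the halfspace projection such methods build on, and the sampling scheme and names are assumptions.

```python
import numpy as np

def kaczmarz_feasibility(A, b, n_iter=5000, seed=None):
    """Randomized Kaczmarz for the linear feasibility problem A x >= b
    (each row is one halfspace constraint). Rows are sampled with
    probability proportional to ||a_i||^2; a violated constraint is
    enforced by projecting onto its boundary hyperplane."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        slack = A[i] @ x - b[i]
        if slack < 0:                            # constraint i is violated
            x = x - (slack / row_norms2[i]) * A[i]
    return x
```

    Each update touches a single row, which is what makes the approach attractive for the highly overdetermined systems that sample abundance produces.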

    Composite Minimization: Proximity Algorithms and Their Applications

Image and signal processing problems of practical importance, such as incomplete data recovery and compressed sensing, are often modeled as nonsmooth optimization problems whose objective functions are the sum of two terms, each of which is the composition of a prox-friendly function with a matrix. There is therefore a practical need to solve such optimization problems. Besides the nondifferentiability of the objective functions and the large dimension of the underlying images and signals, the sum of the objective functions is not, in general, prox-friendly, which makes solving the problems challenging. Many algorithms have been proposed in the literature to attack these problems by making use of the prox-friendly functions in the problems. However, the efficiency of these algorithms relies heavily on the underlying structures of the matrices, particularly for large scale optimization problems. In this dissertation, we propose a novel algorithmic framework that exploits the availability of the prox-friendly functions without requiring any structural information of the matrices. This makes our algorithms suitable for large scale optimization problems of interest. We also prove the convergence of the developed algorithms. This dissertation has three main parts.
    In part 1, we consider the minimization of functions that are the sum of the compositions of prox-friendly functions with matrices. We characterize the solutions to the associated optimization problems as the solutions of fixed point equations that are formulated in terms of the proximity operators of the duals of the prox-friendly functions. By making use of the flexibility provided by this characterization, we develop a block Gauss-Seidel iterative scheme for finding a solution to the optimization problem and prove its convergence. We discuss the connection of our developed algorithms with some existing ones and point out the advantages of our proposed scheme.
    In part 2, we give a comprehensive study of the computation of the proximity operator of the ℓp-norm with 0 ≤ p < 1. Nonconvexity and nonsmoothness have been recognized as important features of many optimization problems in image and signal processing. The nonconvex, nonsmooth ℓp-regularization has been recognized as an efficient tool to identify the sparsity of the wavelet coefficients of an image or signal under investigation. To solve an ℓp-regularized optimization problem, the proximity operator of the ℓp-norm needs to be computed in an accurate and computationally efficient way. We first study the general properties of the proximity operator of the ℓp-norm. Then, we derive the explicit forms of the proximity operator of the ℓp-norm for p ∈ {0, 1/2, 2/3, 1}. Using these explicit forms and the properties of the proximity operator of the ℓp-norm, we develop an efficient algorithm to compute the proximity operator of the ℓp-norm for any p between 0 and 1.
    In part 3, the usefulness of the results developed in the previous two parts is demonstrated in two types of applications, namely image restoration and compressed sensing, with a comparison against results from some existing algorithms. For image restoration, the results developed in part 1 are applied to solve the ℓ2-TV and ℓ1-TV models. The resulting restored images have higher peak signal-to-noise ratios, and the developed algorithms require less CPU time, than state-of-the-art algorithms. In addition, for compressed sensing applications, our algorithm has smaller ℓ2- and ℓ∞-errors and shorter computation times than state-of-the-art algorithms. For compressed sensing with ℓp-regularization, our numerical simulations show smaller ℓ2- and ℓ∞-errors than those from ℓ0-regularization and ℓ1-regularization. In summary, our numerical simulations indicate that not only can our developed algorithms be applied to a wide variety of important optimization problems, but they are also more accurate and computationally efficient than state-of-the-art algorithms.
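    For the two cases in part 2 with the simplest closed forms, the proximity operators of the ℓ1- and ℓ0-penalties are soft and hard thresholding. A minimal NumPy sketch follows; per the abstract, p = 1/2 and p = 2/3 also admit explicit forms, but they are omitted here.

```python
import numpy as np

def prox_l1(v, lam):
    """Soft thresholding: prox of lam * ||x||_1 (the p = 1 case),
    applied componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l0(v, lam):
    """Hard thresholding: prox of lam * ||x||_0 (the p = 0 case).
    An entry survives only if keeping it, at quadratic cost v^2/2,
    beats the penalty lam, i.e. |v| > sqrt(2 * lam)."""
    return np.where(np.abs(v) > np.sqrt(2.0 * lam), v, 0.0)
```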

    Sparse Signal Inversion with Impulsive Noise by Dual Spectral Projected Gradient Method

We consider sparse signal inversion with impulsive noise. There are three major ingredients. The first is the regularizing properties; we discuss the convergence rate of regularized solutions. The second is the numerical solution, which is challenging because both the fidelity and the regularization term lack differentiability; moreover, for ill-conditioned problems, sparsity regularization is often unstable. We propose a novel dual spectral projected gradient (DSPG) method, which combines the dual problem of multiparameter regularization with a spectral projected gradient method to solve the nonsmooth l1 + l1 optimization functional. We show that one can overcome the nondifferentiability and instability by adding a smooth l2 regularization term to the original optimization functional. The advantage of the proposed functional is that its convex dual reduces to a constrained smooth functional; moreover, it is stable even for ill-conditioned problems. The spectral projected gradient algorithm is used to compute the minimizers, and we prove its convergence. The third ingredient is numerical simulation. Experiments on compressed sensing and image inpainting are performed to demonstrate the efficiency of the proposed approach.
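    A minimal sketch of the spectral projected gradient building block that DSPG relies on, with safeguarded Barzilai-Borwein ("spectral") step sizes. The nonmonotone line search and the paper's dual multiparameter construction are omitted; the function names and defaults are illustrative assumptions.

```python
import numpy as np

def spg(grad, project, x0, n_iter=100, alpha0=1.0,
        alpha_min=1e-10, alpha_max=1e10):
    """Spectral projected gradient sketch: projected gradient steps with
    the Barzilai-Borwein step size. grad: gradient of the smooth
    objective; project: Euclidean projection onto the feasible set."""
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = project(x - alpha * g)
        g_new = grad(x_new)
        s, yk = x_new - x, g_new - g
        sy = s @ yk
        # BB step alpha = <s, s> / <s, y>, safeguarded to [alpha_min, alpha_max].
        alpha = np.clip((s @ s) / sy, alpha_min, alpha_max) if sy > 0 else alpha_max
        x, g = x_new, g_new
    return x

# Usage sketch: minimize ||Ax - y||^2 over the box [0, 1]^n:
#   x = spg(lambda x: 2 * A.T @ (A @ x - y),
#           lambda z: np.clip(z, 0.0, 1.0), np.zeros(n))
```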