
    Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition

    Hyperspectral images (HSIs) are often corrupted by a mixture of several types of noise during the acquisition process, e.g., Gaussian noise, impulse noise, dead lines, stripes, and many others. Such complex noise can degrade the quality of the acquired HSIs and limit the precision of subsequent processing. In this paper, we present a novel tensor-based HSI restoration approach that fully identifies the intrinsic structures of the clean HSI part and the mixed noise part, respectively. Specifically, for the clean HSI part, we use a tensor Tucker decomposition to describe the global correlation among all bands, and an anisotropic spatial-spectral total variation (SSTV) regularization to characterize the piecewise smooth structure in both the spatial and spectral domains. For the mixed noise part, we adopt an $\ell_1$ norm regularization to detect the sparse noise, including stripes, impulse noise, and dead pixels. Although TV regularization is able to remove Gaussian noise, a Frobenius norm term is further used to model the heavy Gaussian noise encountered in some real-world scenarios. We then develop an efficient algorithm for solving the resulting optimization problem using the augmented Lagrange multiplier (ALM) method. Finally, extensive experiments on simulated and real-world noisy HSIs are carried out to demonstrate the superiority of the proposed method over existing state-of-the-art ones. Comment: 15 pages, 20 figures.
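    To make the described model concrete, a plausible form of the restoration objective is sketched below; the symbols (observed HSI $\mathcal{Y}$, clean HSI $\mathcal{X}$ with Tucker core $\mathcal{C}$ and factor matrices $U_i$, sparse noise $\mathcal{S}$, Gaussian noise $\mathcal{N}$, and weights $\tau, \lambda, \beta$) are assumed notation rather than taken from the paper.

        \min_{\mathcal{X}, \mathcal{S}, \mathcal{N}, \mathcal{C}, \{U_i\}} \; \tau \|\mathcal{X}\|_{\mathrm{SSTV}} + \lambda \|\mathcal{S}\|_1 + \beta \|\mathcal{N}\|_F^2
        \quad \text{s.t.} \quad \mathcal{Y} = \mathcal{X} + \mathcal{S} + \mathcal{N}, \qquad \mathcal{X} = \mathcal{C} \times_1 U_1 \times_2 U_2 \times_3 U_3,

    where the Tucker factors capture the global spectral correlation, the SSTV term enforces spatial-spectral piecewise smoothness, the $\ell_1$ term absorbs sparse noise, and the Frobenius term absorbs heavy Gaussian noise.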

    A Compact Formulation for the $\ell_{2,1}$ Mixed-Norm Minimization Problem

    Parameter estimation from multiple measurement vectors (MMVs) is a fundamental problem in many signal processing applications, e.g., spectral analysis and direction-of-arrival estimation. Recently, this problem has been addressed using prior information in the form of a jointly sparse signal structure. A prominent approach for exploiting joint sparsity considers mixed-norm minimization, in which, however, the problem size grows with the number of measurements and with the desired resolution. In this work we derive an equivalent, compact reformulation of the $\ell_{2,1}$ mixed-norm minimization problem which provides new insights on the relation between different existing approaches for jointly sparse signal reconstruction. The reformulation builds upon a compact parameterization, which models the row-norms of the sparse signal representation as the parameters of interest, resulting in a significant reduction of the MMV problem size. Given the sparse vector of row-norms, the jointly sparse signal can be computed from the MMVs in closed form. For the special case of uniform linear sampling, we present an extension of the compact formulation for gridless parameter estimation by means of semidefinite programming. Furthermore, in this case we derive from our compact problem formulation the exact equivalence between the $\ell_{2,1}$ mixed-norm minimization and the atomic-norm minimization. Additionally, for the case of irregular sampling or a large number of samples, we present a low-complexity, grid-based implementation based on the coordinate descent method.
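    For reference, the $\ell_{2,1}$ mixed-norm MMV problem discussed above can be written in the following standard form; the particular symbols (measurement matrix $Y$, dictionary $A$, jointly sparse signal matrix $X$, and regularization weight $\lambda$) are assumed for illustration.

        \min_{X} \; \tfrac{1}{2} \|A X - Y\|_F^2 + \lambda \|X\|_{2,1}, \qquad \|X\|_{2,1} = \sum_{n} \|x_{n,\cdot}\|_2,

    where each column of $Y$ is one measurement vector and the row-norms $\|x_{n,\cdot}\|_2$ are exactly the quantities that the compact reformulation treats as the parameters of interest.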

    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
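    As a concrete instance of the convex framework surveyed here, a canonical demixing program takes the following form; the symbols ($b$ for the observed superposition, $x_1, x_2$ for the components, $f_1, f_2$ for convex complexity measures, and $\lambda$ for a tuning weight) are illustrative rather than taken from the article.

        \min_{x_1, x_2} \; f_1(x_1) + \lambda f_2(x_2) \quad \text{s.t.} \quad b = x_1 + x_2,

    where, for example, $f_1$ could be the $\ell_1$ norm to promote a sparse foreground and $f_2$ the nuclear norm to promote a low-rank background, as in background subtraction.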

    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$'s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth function, our method generalizes it to the case of arbitrary $n$. Our method makes explicit use of the regularity of $F$ in the forward step, and the proximity operators of the $G_i$'s are applied in parallel in the backward step. This allows the generalized forward-backward to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of $F$. Examples on inverse problems in imaging demonstrate the advantage of the proposed method in comparison to other splitting algorithms. Comment: 24 pages, 4 figures.
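    A minimal sketch of the iteration structure described above is given below, assuming one auxiliary variable per $G_i$, weights summing to one, a step size gamma, and a relaxation parameter lam; the function names and signatures are hypothetical illustrations, not the paper's reference implementation.

        import numpy as np

        def generalized_forward_backward(grad_f, proxes, x0, gamma=1.0, lam=1.0,
                                         weights=None, n_iter=200):
            # grad_f : callable returning the gradient of the smooth term F
            # proxes : list of callables, proxes[i](v, t) = prox of t*G_i at v
            # x0     : initial point (NumPy array)
            n = len(proxes)
            w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
            x = x0.copy()
            z = [x0.copy() for _ in range(n)]      # one auxiliary variable per G_i
            for _ in range(n_iter):
                g = grad_f(x)                      # forward (gradient) step on F
                for i in range(n):                 # backward (prox) steps, parallelizable
                    z[i] = z[i] + lam * (proxes[i](2 * x - z[i] - gamma * g, gamma / w[i]) - x)
                x = sum(wi * zi for wi, zi in zip(w, z))
            return x

    For instance, with $F(x) = \tfrac12\|Ax - b\|_2^2$ and two non-smooth terms (an $\ell_1$ penalty and a nonnegativity constraint), grad_f would map $x$ to $A^T(Ax - b)$, one prox would be soft-thresholding, and the other a projection onto the nonnegative orthant.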

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low-rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
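    All of the regularizers listed above fit the following template, given here as a sketch of the generic problem class rather than a formula taken from the chapter; the symbols (observations $y$, forward operator $\Phi$, regularizer $J$, and weight $\lambda$) are assumed notation.

        \min_{x \in \mathbb{R}^N} \; \tfrac{1}{2} \|y - \Phi x\|_2^2 + \lambda J(x),

    where $J$ is a low-complexity, partly smooth prior such as $\|x\|_1$ (sparsity), a mixed $\ell_{1,2}$ norm (group sparsity), $\|\nabla x\|_1$ (total variation), or the nuclear norm (low rank); the forward-backward scheme mentioned in item (iii) alternates a gradient step on the quadratic data-fidelity term with a proximal step on $\lambda J$.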