Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation
In this paper, we aim at recovering an unknown signal x0 from noisy
measurements y = Phi*x0 + w, where Phi is an ill-conditioned or singular linear
operator and w accounts for some noise. To regularize such an ill-posed inverse
problem, we impose an analysis sparsity prior. More precisely, the recovery is
cast as a convex optimization program where the objective is the sum of a
quadratic data fidelity term and a regularization term formed of the L1-norm of
the correlations between the sought after signal and atoms in a given
(generally overcomplete) dictionary. The L1-sparsity analysis prior is weighted
by a regularization parameter lambda > 0. In this paper, we prove that any
minimizer of this problem is a piecewise-affine function of the observations y
and the regularization parameter lambda. As a byproduct, we exploit these
properties to get an objectively guided choice of lambda. In particular, we
develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE)
and show that it is an unbiased and reliable estimator of an appropriately
defined risk. The latter encompasses special cases such as the prediction risk,
the projection risk and the estimation risk. We apply these risk estimators to
the special case of L1-sparsity analysis regularization. We also discuss
implementation issues and propose fast algorithms to solve the L1 analysis
minimization problem and to compute the associated GSURE. We finally illustrate
the applicability of our framework to parameter selection on several imaging
problems.
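A minimal numerical sketch of the analysis-sparsity problem described above, min_x 0.5*||Phi*x - y||^2 + lambda*||D*x||_1, using a Chambolle-Pock primal-dual iteration in NumPy. The solver choice, step sizes, and problem setup are illustrative assumptions, not the paper's own fast algorithms.

```python
import numpy as np

def analysis_l1(Phi, D, y, lam, n_iter=1000):
    """Chambolle-Pock primal-dual scheme for
    min_x 0.5*||Phi @ x - y||^2 + lam*||D @ x||_1."""
    n = Phi.shape[1]
    L = np.linalg.norm(D, 2)            # spectral norm of the analysis operator
    tau = sigma = 0.9 / L               # step sizes with tau*sigma*L^2 < 1
    # prox of the quadratic data term: (I + tau*Phi^T Phi)^{-1}(v + tau*Phi^T y)
    M = np.linalg.inv(np.eye(n) + tau * (Phi.T @ Phi))
    Pty = Phi.T @ y
    x = np.zeros(n)
    x_bar = np.zeros(n)
    u = np.zeros(D.shape[0])            # dual variable for D @ x
    for _ in range(n_iter):
        # dual step: prox of (lam*||.||_1)^* is projection onto [-lam, lam]
        u = np.clip(u + sigma * (D @ x_bar), -lam, lam)
        x_new = M @ (x - tau * (D.T @ u) + tau * Pty)
        x_bar = 2 * x_new - x
        x = x_new
    return x
```

With Phi the identity and D a first-difference matrix, this instance reduces to 1D total-variation denoising, a standard example of analysis sparsity.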
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision.
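To make the sparse coding and dictionary learning ideas concrete, here is a toy NumPy sketch: ISTA for the sparse coding step and a plain alternating scheme with a gradient dictionary update. The step sizes, iteration counts, and update rule are illustrative assumptions; the monograph covers far more refined algorithms.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, x, lam, n_iter=200):
    """ISTA for the sparse coding step: min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + step * (D.T @ (x - D @ a)), lam * step)
    return a

def learn_dictionary(X, n_atoms, lam, n_outer=20, lr=0.1, seed=0):
    """Toy alternating minimization: sparse coding, then a gradient step on D."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    for _ in range(n_outer):
        A = np.stack([sparse_code(D, x, lam) for x in X.T], axis=1)
        D = D + lr * (X - D @ A) @ A.T          # gradient step on the fit term
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, A
```

Starting ISTA from zero with this step size, the objective decreases monotonically, which makes the sparse coding step easy to sanity-check.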
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
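As an illustration of the forward-backward proximal splitting mentioned in (iii), applied to the low-rank regularizer discussed above, the following NumPy sketch minimizes a masked least-squares term plus a nuclear-norm penalty via singular value thresholding. The problem setup and parameters are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def svt(Z, t):
    """Prox of the nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def forward_backward_lowrank(Y, mask, lam, n_iter=200):
    """Forward-backward splitting for
    min_X 0.5*||mask * (X - Y)||_F^2 + lam*||X||_*  (matrix completion style)."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = mask * (X - Y)       # forward (gradient) step; Lipschitz constant 1
        X = svt(X - grad, lam)      # backward (proximal) step
    return X
```

The same forward-backward template covers the sparsity case by replacing the singular-value thresholding with entrywise soft-thresholding.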
Sparsity-Homotopy Perturbation Inversion Method with Wavelets and Applications to Black-Scholes Model and Todaro Model
Sparsity regularization plays an important role in parameter reconstruction. Compared with traditional regularization methods, it achieves better performance when the parameters to be reconstructed are sparse, but it cannot, by itself, reconstruct smooth parameters. To overcome this difficulty, we combine sparsity regularization with a wavelet method that transforms smooth parameters into sparse ones. We use a sparsity-homotopy perturbation inversion method to improve accuracy and stability, and apply the proposed method to reconstruct parameters for a Black-Scholes option pricing model and a Todaro model. Numerical experiments show that the proposed method is convergent and stable.
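A small NumPy sketch of the core idea of transforming a smooth parameter into a sparse one with wavelets, here using an orthonormal Haar transform; the specific wavelet, test signal, and energy measure are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def haar_decompose(x):
    """Orthonormal Haar wavelet decomposition; len(x) must be a power of two."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        det = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)     # approximation coefficients
        coeffs.append(det)
    coeffs.append(a)
    return np.concatenate(coeffs[::-1])

# A smooth parameter is not sparse in the sample domain, but its Haar
# coefficients are dominated by a few entries, so sparsity regularization
# can act on the wavelet coefficients instead.
t = np.linspace(0.0, 1.0, 64)
smooth_param = np.exp(-t)                 # smooth, non-sparse "parameter"
w = haar_decompose(smooth_param)
energy = np.sort(w ** 2)[::-1]
top8_fraction = energy[:8].sum() / energy.sum()
```

Because the transform is orthonormal, energy is preserved (Parseval), and almost all of it concentrates in a handful of coefficients for a smooth input.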
HPM-Based Dynamic Sparse Grid Approach for Perona-Malik Equation
The Perona-Malik equation is a well-known edge-preserving image denoising model, formulated as a nonlinear two-dimensional partial differential equation. Based on the homotopy perturbation method (HPM) and multiscale interpolation theory, a dynamic sparse grid method for the Perona-Malik equation is constructed in this paper. Compared with traditional multiscale numerical techniques, the proposed method is independent of the basis function. A dynamic choice scheme for the external grid points is proposed to eliminate the artifacts introduced by the partitioning technique. To reduce the computational cost caused by changes of the external grid points, the Newton interpolation technique is employed instead of the traditional Lagrange interpolation operator, and the condition number of the discretized matrix equations is taken into account in the choice of the external grid points. With the new numerical scheme, the time complexity of the sparse grid method for image denoising is decreased from O(4^(3J)) to O(4^(J+2j)), with j ≪ J. Experimental results show that the dynamic choice scheme for the external grid points effectively eliminates the boundary effect, and that efficiency is greatly improved compared with classical interval wavelet numerical methods.
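For reference, the underlying Perona-Malik diffusion itself can be sketched with a standard explicit finite-difference scheme in NumPy. This is the classical model only, not the paper's HPM-based dynamic sparse grid method; the periodic boundary handling, step size, and conduction parameter are illustrative assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, dt=0.2, K=0.1):
    """Explicit finite-difference Perona-Malik diffusion (periodic boundary).
    The conduction g(d) = 1/(1 + (d/K)^2) shuts diffusion down across edges."""
    u = img.astype(float)
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy step image, the scheme smooths the flat regions while the large gradient across the step keeps the edge essentially intact.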
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
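One of the major greedy algorithms surveyed in this line of work is Orthogonal Matching Pursuit (OMP). A minimal NumPy sketch, with the dictionary and sparsity level as illustrative assumptions:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to approximate y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

For an orthonormal dictionary, OMP recovers a k-sparse coefficient vector exactly, since the correlations coincide with the residual coefficients.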
Blind Source Separation: the Sparsity Revolution
Over the last few years, the development of multi-channel sensors has motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed, as testified by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. We give here some essential insights into the use of sparsity in source separation and outline the essential role of morphological diversity as a source of diversity or contrast between the sources. This paper overviews a sparsity-based BSS method coined Generalized Morphological Component Analysis (GMCA) that takes advantage of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient blind source separation method. In remote sensing applications, the specificity of hyperspectral data should be accounted for; we extend the proposed GMCA framework to deal with such data. More generally, GMCA provides a basis for multivariate data analysis across a wide range of classical multivariate data restoration problems. Numerical results are given for color image denoising and inpainting. Finally, GMCA is applied to simulated ESA/Planck data and is shown to give effective astrophysical component separation.
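A single-channel toy illustrating the morphological diversity principle behind (G)MCA: separating spikes (sparse in the identity basis) from an oscillation (sparse in an orthonormal DCT basis) by alternating thresholding with a decreasing threshold. This is a heavily simplified sketch, not the multichannel GMCA algorithm; the bases, signals, and threshold schedule are illustrative assumptions.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis as the columns of an n x n matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n))
    B = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    B[:, 0] *= np.sqrt(1.0 / n)
    B[:, 1:] *= np.sqrt(2.0 / n)
    return B

def mca_separate(y, n_iter=50, lam0=1.0):
    """Split y into spikes (sparse in identity) + oscillation (sparse in DCT)
    by alternating hard thresholding with a linearly decreasing threshold."""
    n = len(y)
    B = dct_basis(n)
    spikes = np.zeros(n)
    smooth = np.zeros(n)
    for it in range(n_iter):
        lam = lam0 * (1 - it / n_iter)
        # spike component: threshold the residual in the sample domain
        r = y - smooth
        spikes = np.where(np.abs(r) > lam, r, 0.0)
        # oscillatory component: threshold the residual's DCT coefficients
        c = B.T @ (y - spikes)
        smooth = B @ np.where(np.abs(c) > lam, c, 0.0)
    return spikes, smooth
```

Each component is captured in the basis where it is sparse, which is exactly the morphological contrast the abstract describes.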
Variational models and numerical algorithms for selective image segmentation
This thesis deals with the numerical solution of nonlinear partial differential equations and their application in image processing. The differential equations treated here arise from the minimization of variational models for image restoration (such as denoising) and object recognition (such as segmentation). Image denoising aims at restoring a digital image that has been contaminated by noise, while segmentation is a fundamental task in image analysis responsible for partitioning an image into sub-regions, or representing it in a form that is more meaningful and easier to analyze, such as extracting one or more specific objects of interest based on relevant information or a desired feature. Although there has been a lot of research on image restoration, the performance of existing methods is still poor, especially when the images have a high level of noise or when the algorithms are slow. The segmentation task is an even more challenging problem due to the difficulty of delineating, even manually, the contours of the objects of interest; the problems are often due to low contrast, fuzzy contours, intensities similar to those of adjacent objects, or objects with no real contours. The first objective of this work is to develop fast image restoration and segmentation methods that provide better denoising together with fast and robust segmentation. The contribution presented here is the development of a restarted homotopy analysis method designed to be easily adaptable to various types of image processing problems. As a second research objective we propose a framework for selective image segmentation which partitions an image based on information known in advance about the object(s) to be extracted (for example, the left kidney is the target to be extracted in a CT image and the prior knowledge is a few markers in this object of interest).
This kind of segmentation appears especially in medical applications, where experts usually estimate and manually draw the boundaries of the organs based on their experience. Our aim is to introduce automatic segmentation of the object of interest as a contribution not only to the way doctors and surgeons diagnose and operate but to other fields as well. The proposed methods succeed in segmenting different objects and perform well on different types of images, not only two-dimensional but three-dimensional images as well.
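As a crude stand-in for the marker-driven selective segmentation described above (not the thesis's variational models), here is a simple region-growing sketch in NumPy that expands from marker pixels while the intensity stays within a tolerance; the marker format and tolerance rule are illustrative assumptions.

```python
import numpy as np
from collections import deque

def grow_from_markers(img, markers, tol):
    """Selective segmentation toy: grow a region from marker pixels,
    accepting 4-neighbours whose intensity is within tol of the marker mean."""
    seg = np.zeros(img.shape, dtype=bool)
    ref = np.mean([img[r, c] for r, c in markers])   # reference intensity
    queue = deque(markers)
    for r, c in markers:
        seg[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not seg[rr, cc] and abs(img[rr, cc] - ref) <= tol):
                seg[rr, cc] = True
                queue.append((rr, cc))
    return seg
```

Because growth only proceeds through connected pixels, a second object with the same intensity elsewhere in the image is left untouched, which is the "selective" behaviour the markers are meant to encode.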