Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization
Convex optimization with sparsity-promoting convex regularization is a
standard approach for estimating sparse signals in noise. In order to promote
sparsity more strongly than convex regularization, it is also standard practice
to employ non-convex optimization. In this paper, we take a third approach. We
utilize a non-convex regularization term chosen such that the total cost
function (consisting of data consistency and regularization terms) is convex.
Therefore, sparsity is more strongly promoted than in the standard convex
formulation, but without sacrificing the attractive aspects of convex
optimization (unique minimum, robust algorithms, etc.). We use this idea to
improve the recently developed 'overlapping group shrinkage' (OGS) algorithm
for the denoising of group-sparse signals. The algorithm is applied to the
problem of speech enhancement with favorable results in terms of both SNR and
perceptual quality.
Comment: 14 pages, 11 figures
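As context for the abstract above, the sketch below shows the basic (convex) OGS majorization-minimization iteration for translation-invariant group shrinkage; the paper's contribution replaces the l2 group norm with a tuned non-convex penalty that keeps the total cost convex, which is not shown here. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def ogs_denoise(y, lam, K, n_iter=25, eps=1e-10):
    """Overlapping group shrinkage (convex baseline) via
    majorization-minimization.

    Minimizes 0.5*||y - x||^2 + lam * sum_i ||x_{i..i+K-1}||_2,
    where the groups are all length-K windows of the signal.
    """
    x = y.astype(float).copy()
    ones = np.ones(K)
    for _ in range(n_iter):
        # l2 energy of every overlapping length-K group (zero-padded ends);
        # r[m] is the norm of the group ending at sample m
        r = np.sqrt(np.convolve(x**2, ones, mode='full'))
        # each sample belongs to K groups; sum their inverse norms
        w = np.convolve(1.0 / (r + eps), ones, mode='valid')
        # MM update: pointwise shrinkage toward zero
        x = y / (1.0 + lam * w)
    return x

# toy usage: denoise a signal with one active group
rng = np.random.default_rng(0)
clean = np.zeros(256)
clean[100:110] = rng.normal(0, 3, 10)
noisy = clean + rng.normal(0, 1, 256)
denoised = ogs_denoise(noisy, lam=0.6, K=5)
```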
A Method for Finding Structured Sparse Solutions to Non-negative Least Squares Problems with Applications
Demixing problems in many areas such as hyperspectral imaging and
differential optical absorption spectroscopy (DOAS) often require finding
sparse nonnegative linear combinations of dictionary elements that match
observed data. We show how aspects of these problems, such as misalignment of
DOAS references and uncertainty in hyperspectral endmembers, can be modeled by
expanding the dictionary with grouped elements and imposing a structured
sparsity assumption that the combinations within each group should be sparse or
even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain
good solutions using convex or greedy methods, such as non-negative least
squares (NNLS) or orthogonal matching pursuit. We use penalties related to the
Hoyer measure, which is the ratio of the $\ell_1$ and $\ell_2$ norms, as sparsity
penalties to be added to the objective in NNLS-type models. For solving the
resulting nonconvex models, we propose a scaled gradient projection algorithm
that requires solving a sequence of strongly convex quadratic programs. We
discuss its close connections to convex splitting methods and difference of
convex programming. We also present promising numerical results for example
DOAS analysis and hyperspectral demixing problems.
Comment: 38 pages, 14 figures
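To make the penalty in the abstract above concrete, here is a simplified projected-gradient sketch for the nonconvex $\ell_1/\ell_2$-penalized NNLS objective. This is not the paper's scaled gradient projection method (which solves a sequence of strongly convex quadratic programs); it just takes plain gradient steps on the smooth part plus the ratio penalty's gradient and projects onto the nonnegative orthant. All names, the initialization, and the step size are assumptions for illustration.

```python
import numpy as np

def pg_nnls_l1_over_l2(A, b, lam=0.1, n_iter=300, eps=1e-12):
    """Simplified projected-gradient sketch (not the paper's algorithm) for

        min_{x >= 0}  0.5*||A x - b||^2 + lam * ||x||_1 / ||x||_2.
    """
    x = np.maximum(A.T @ b, 0.0) + eps      # crude nonnegative initialization
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of smooth term
    for _ in range(n_iter):
        n1 = x.sum()                        # l1 norm (valid since x >= 0)
        n2 = np.linalg.norm(x) + eps        # l2 norm
        # gradient of the ratio penalty n1/n2 on the nonnegative orthant
        grad_pen = 1.0 / n2 - n1 * x / n2**3
        g = A.T @ (A @ x - b) + lam * grad_pen
        x = np.maximum(x - g / L, 0.0)      # project onto x >= 0
    return x
```

The step size 1/L only accounts for the quadratic data term; near x = 0 the ratio penalty is nonsmooth, which is one reason the paper's QP-based scaled gradient projection is the more robust choice.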
Sharp Oracle Inequalities for Square Root Regularization
We study a set of regularization methods for high-dimensional linear
regression models. These penalized estimators have the square root of the
residual sum of squared errors as their loss function, and any weakly decomposable
norm as penalty function. This fit measure is chosen because of its property
that the estimator does not depend on the unknown standard deviation of the
noise. On the other hand, a generalized weakly decomposable norm penalty is
very useful in being able to deal with different underlying sparsity
structures. We can choose a different sparsity-inducing norm depending on how
we want to interpret the unknown parameter vector. Structured sparsity
norms, as defined in Micchelli et al. [18], are special cases of weakly
decomposable norms; therefore we also include the square root LASSO (Belloni et
al. [3]), the group square root LASSO (Bunea et al. [10]) and a new method
called the square root SLOPE (in a similar fashion to the SLOPE from Bogdan et
al. [6]). For this collection of estimators, our results provide sharp oracle
inequalities together with the Karush-Kuhn-Tucker conditions. We discuss some
examples of estimators. In a simulation study we illustrate some advantages of
the square root SLOPE.
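As a minimal illustration of the square-root loss discussed above, the sketch below poses the square root LASSO as a convex program in cvxpy; the pivotal root-RSS loss is what makes the tuning parameter independent of the unknown noise level. This assumes cvxpy is installed; the variable names and the particular lam scaling (a constant times the common theoretical rate sqrt(log p / n)) are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cvxpy as cp

def sqrt_lasso(A, y, lam):
    """Square root LASSO:
        min_beta  ||y - A beta||_2 / sqrt(n) + lam * ||beta||_1.

    The root-RSS loss makes a good choice of lam independent of the
    (unknown) noise standard deviation.
    """
    n, p = A.shape
    beta = cp.Variable(p)
    loss = cp.norm(y - A @ beta, 2) / np.sqrt(n)
    prob = cp.Problem(cp.Minimize(loss + lam * cp.norm(beta, 1)))
    prob.solve()
    return beta.value

# toy usage with a sparse ground truth
rng = np.random.default_rng(1)
n, p = 100, 200
A = rng.normal(size=(n, p))
beta0 = np.zeros(p)
beta0[:5] = 2.0
y = A @ beta0 + rng.normal(scale=0.5, size=n)
beta_hat = sqrt_lasso(A, y, lam=1.5 * np.sqrt(np.log(p) / n))
```

Replacing the $\ell_1$ penalty with a sorted-$\ell_1$ (SLOPE) norm, which is also convex and weakly decomposable, would yield the square root SLOPE variant studied in the paper.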