142 research outputs found
A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.
Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes,
see (http://ee.sharif.ir/~SLzero). File replaced because Fig. 5 was erroneously
missing.
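
The smoothed-L0 iteration summarized above is compact enough to sketch. The
following Python is an illustrative reimplementation, not the authors'
reference MATLAB code (linked above); the function name and parameter defaults
are assumptions, not the paper's:

import numpy as np

def sl0(A, x, sigma_min=1e-4, sigma_decay=0.5, mu=2.0, L=3):
    # Seek a sparse s with A s = x by maximizing the smooth L0 surrogate
    # F_sigma(s) = sum_i exp(-s_i^2 / (2 sigma^2)) while annealing sigma -> 0.
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(L):
            delta = s * np.exp(-s**2 / (2 * sigma**2))  # ascent direction
            s = s - mu * delta
            s = s - A_pinv @ (A @ s - x)  # project back onto {s : A s = x}
        sigma *= sigma_decay              # sharper surrogate, closer to true L0
    return s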
A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori or their
direct computation is too intensive, and they have thus to be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm associated with a memory gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
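
The abstract does not spell out the iteration, so the sketch below is only a
plausible reading of an online memory-gradient MM update for penalized least
squares, with a smooth hyperbolic sparsity-promoting penalty standing in for
the penalization; the function name, penalty choice, and all parameters are
assumptions, not the paper's exact algorithm:

import numpy as np

def online_mm_memory_gradient(stream, n, lam=0.1, delta=1e-2, n_iter=1000):
    # Minimize a running estimate of E[(y - h.T x)^2] + lam * sum_i sqrt(x_i^2
    # + delta^2), where (h, y) pairs arrive one at a time and the unknown
    # second-order statistics are estimated online.
    x = np.zeros(n)
    x_prev = np.zeros(n)
    R = np.zeros((n, n))                      # running estimate of E[h h^T]
    r = np.zeros(n)                           # running estimate of E[y h]
    for k in range(1, n_iter + 1):
        h, y = next(stream)                   # new (regressor, observation) pair
        R += (np.outer(h, h) - R) / k         # online mean updates
        r += (y * h - r) / k
        w = lam / np.sqrt(x**2 + delta**2)    # majorant curvature of the penalty
        grad = 2 * (R @ x - r) + w * x        # gradient of the estimated cost
        D = np.column_stack([-grad, x - x_prev])  # memory-gradient subspace
        B = 2 * R + np.diag(w)                # curvature of the quadratic majorant
        u = -np.linalg.pinv(D.T @ B @ D) @ (D.T @ grad)  # best step in subspace
        x_prev, x = x, x + D @ u
    return x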
Nonlinear regularization techniques for seismic tomography
The effects of several nonlinear regularization techniques are discussed in
the framework of 3D seismic tomography. Traditional, linear, L2 penalties
are compared to so-called sparsity promoting L1 and L0 penalties,
and a total variation penalty. Which of these algorithms is judged optimal
depends on the specific requirements of the scientific experiment. If the
correct reproduction of model amplitudes is important, classical damping
towards a smooth model using an L2 norm works almost as well as
minimizing the total variation but is much more efficient. If gradients (edges
of anomalies) should be resolved with a minimum of distortion, we prefer L1
damping of Daubechies-4 wavelet coefficients. It has the additional
advantage of yielding a noiseless reconstruction, contrary to simple L2
minimization ('Tikhonov regularization'), which should be avoided. In some of
our examples, the L0 method produced notable artifacts. In addition, we
show how nonlinear L1 methods for finding sparse models can be
competitive in speed with the widely used L2 methods, certainly under
noisy conditions, so that there is no need to shun L1 penalizations.
Comment: 23 pages, 7 figures. Typographical error corrected in accelerated
algorithms (14) and (20).
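
The trade-off described above is easy to reproduce on a toy linear inverse
problem. The sketch below contrasts the classical L2 solution ('Tikhonov')
with an L1-penalized one computed by iterative soft thresholding (ISTA); it is
illustrative only, and unlike the paper it applies the L1 penalty directly to
the model parameters rather than to Daubechies-4 wavelet coefficients:

import numpy as np

def tikhonov(A, y, alpha):
    # Classical linear L2 damping: x = argmin ||Ax - y||^2 + alpha ||x||^2.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def ista_l1(A, y, lam, n_iter=500):
    # Sparsity-promoting L1 penalty: x = argmin ||Ax - y||^2 + lam ||x||_1.
    x = np.zeros(A.shape[1])
    tau = 1.0 / np.linalg.norm(A, 2)**2       # step size <= 1/||A||^2
    for _ in range(n_iter):
        x = x + tau * (A.T @ (y - A @ x))     # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - tau * lam, 0.0)  # soft threshold
    return x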
Homotopy based algorithms for L0-regularized least-squares
Sparse signal restoration is usually formulated as the minimization of a
quadratic cost function ||y - Ax||², where A is a dictionary and x is an
unknown sparse vector. It is well known that imposing an L0 constraint
leads to an NP-hard minimization problem. The convex relaxation approach has
received considerable attention, where the L0-norm is replaced by the
L1-norm. Among the many efficient L1 solvers, the homotopy
algorithm minimizes ||y - Ax||² + λ||x||₁ with respect to x for a
continuum of λ's. It is inspired by the piecewise regularity of the
L1-regularization path, also referred to as the homotopy path. In this
paper, we address the minimization of ||y - Ax||² + λ||x||₀ for a
continuum of λ's and propose two heuristic search algorithms for
L0-homotopy. Continuation Single Best Replacement is a forward-backward
greedy strategy extending the Single Best Replacement algorithm, previously
proposed for L0-minimization at a given λ. The adaptive search of
the λ-values is inspired by L1-homotopy. L0 Regularization
Path Descent is a more complex algorithm exploiting the structural properties
of the L0-regularization path, which is piecewise constant with respect
to λ. Both algorithms are empirically evaluated for difficult inverse
problems involving ill-conditioned dictionaries. Finally, we show that they
can be easily coupled with usual methods of model order selection.
Comment: 38 pages
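
To make the greedy move concrete, here is a deliberately brute-force sketch of
the Single Best Replacement idea at a fixed λ; the continuation over a
continuum of λ's and the efficient recursive least-squares updates used in the
paper are omitted, and the function name is an assumption:

import numpy as np

def single_best_replacement(A, y, lam, max_iter=100):
    # Greedy descent on ||y - Ax||^2 + lam * ||x||_0: at each iteration, insert
    # or remove the single atom that most decreases the cost; stop when no
    # single move improves it.
    m = A.shape[1]

    def cost(S):
        if not S:
            return y @ y
        idx = sorted(S)
        z, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ z
        return resid @ resid + lam * len(S)

    support = set()
    best = cost(support)
    for _ in range(max_iter):
        moves = [support ^ {i} for i in range(m)]   # single insert or remove
        costs = [cost(S) for S in moves]
        j = int(np.argmin(costs))
        if costs[j] >= best:
            break
        support, best = moves[j], costs[j]
    return support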