142 research outputs found

    A fast approach for overcomplete sparse decomposition based on smoothed L0 norm

    In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined Sparse Component Analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the $\ell_1$ norm using Linear Programming (LP) techniques, our algorithm tries to directly minimize the $\ell_0$ norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
    Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes, see http://ee.sharif.ir/~SLzero. File replaced, because Fig. 5 was missing erroneously.
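
    The core idea of SL0 can be sketched in a few lines: approximate the $\ell_0$ norm by a smooth Gaussian surrogate $f_\sigma(x) = \sum_i \exp(-x_i^2 / 2\sigma^2)$, so that $n - f_\sigma(x)$ approaches $\|x\|_0$ as $\sigma \to 0$; maximize $f_\sigma$ over the feasible set $\{x : Ax = y\}$ by a few projected gradient steps; and gradually shrink $\sigma$ (graduated non-convexity). Below is a minimal Python sketch of that loop. It is not the authors' reference code (the official MATLAB implementation is at http://ee.sharif.ir/~SLzero), and the parameter values (mu, sigma_decrease, n_inner) are illustrative choices.

    import numpy as np

    def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, n_inner=3):
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ y                      # minimum-l2-norm feasible starting point
        sigma = 2.0 * np.max(np.abs(x))
        while sigma > sigma_min:
            for _ in range(n_inner):
                # gradient-ascent step on f_sigma (the sigma^2 factor is absorbed into mu)
                delta = x * np.exp(-x**2 / (2.0 * sigma**2))
                x = x - mu * delta
                # project back onto the feasible set {x : Ax = y}
                x = x - A_pinv @ (A @ x - y)
            sigma *= sigma_decrease
        return x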

    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, so they must be estimated online from the observed signals. For batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g. a sparsity-promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the data fidelity term is a least-squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm, associated with a memory-gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
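
    The flavor of such an online MM subspace scheme can be conveyed with a short sketch. Everything below is an assumption-laden toy version, not the paper's algorithm: the penalty is taken to be the smooth hyperbolic function $\psi(x) = \sum_i \sqrt{x_i^2 + \delta^2}$ with its classical Geman-Reynolds quadratic majorant, the second-order data statistics are plain running averages, and the subspace is a memory-gradient one (current gradient plus previous displacement), as mentioned in the abstract.

    import numpy as np

    def mm_memory_gradient_step(x, x_prev, R, r, lam, delta=1e-2):
        # gradient of 0.5*x'Rx - r'x + lam*psi(x) at the current iterate
        w = 1.0 / np.sqrt(x**2 + delta**2)
        grad = R @ x - r + lam * x * w
        # curvature of the quadratic majorant: data Hessian + Geman-Reynolds diagonal
        B = R + lam * np.diag(w)
        # memory-gradient subspace: steepest descent + previous displacement
        D = np.column_stack([-grad, x - x_prev])
        # exact minimizer of the majorant within span(D): a 2x2 linear system
        u = np.linalg.pinv(D.T @ B @ D) @ (-D.T @ grad)
        return x + D @ u

    def online_fit(stream, dim, lam=0.1):
        R = np.zeros((dim, dim)); r = np.zeros(dim)
        x = np.zeros(dim); x_prev = np.zeros(dim)
        for n, (a, y) in enumerate(stream, start=1):
            R += (np.outer(a, a) - R) / n   # running estimate of E[a a^T]
            r += (y * a - r) / n            # running estimate of E[y a]
            x, x_prev = mm_memory_gradient_step(x, x_prev, R, r, lam), x
        return x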

    Nonlinear regularization techniques for seismic tomography

    The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional linear $\ell_2$ penalties are compared to so-called sparsity-promoting $\ell_1$ and $\ell_0$ penalties, and to a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an $\ell_2$ norm works almost as well as minimizing the total variation, but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer $\ell_1$ damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple $\ell_2$ minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the $\ell_0$ method produced notable artifacts. In addition, we show how nonlinear $\ell_1$ methods for finding sparse models can be competitive in speed with the widely used $\ell_2$ methods, certainly under noisy conditions, so that there is no need to shun $\ell_1$ penalizations.
    Comment: 23 pages, 7 figures. Typographical error corrected in accelerated algorithms (14) and (20).
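
    For readers who want to experiment with this comparison, the standard workhorse behind the nonlinear $\ell_1$ penalty is iterative soft thresholding, and the accelerated algorithms referred to in the comment are of FISTA type, which add a momentum term. The sketch below solves $\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\|x\|_1$ and is a generic textbook version, not the paper's seismic-tomography code; applying it to wavelet-domain damping would additionally require the Daubechies-4 transform.

    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def fista(A, y, lam, n_iter=200):
        L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(n_iter):
            # gradient step on the data fit, then the prox of lam*||.||_1
            x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
            # Nesterov-style momentum that accelerates plain soft thresholding
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x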

    Homotopy based algorithms for $\ell_0$-regularized least-squares

    Sparse signal restoration is usually formulated as the minimization of a quadratic cost function $\|y - Ax\|_2^2$, where $A$ is a dictionary and $x$ is an unknown sparse vector. It is well known that imposing an $\ell_0$ constraint leads to an NP-hard minimization problem. The convex relaxation approach has received considerable attention, where the $\ell_0$ norm is replaced by the $\ell_1$ norm. Among the many efficient $\ell_1$ solvers, the homotopy algorithm minimizes $\|y - Ax\|_2^2 + \lambda\|x\|_1$ with respect to $x$ for a continuum of $\lambda$'s. It is inspired by the piecewise regularity of the $\ell_1$-regularization path, also referred to as the homotopy path. In this paper, we address the minimization problem $\|y - Ax\|_2^2 + \lambda\|x\|_0$ for a continuum of $\lambda$'s and propose two heuristic search algorithms for $\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward greedy strategy extending the Single Best Replacement algorithm, previously proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of the $\lambda$-values is inspired by $\ell_1$-homotopy. $\ell_0$ Regularization Path Descent is a more complex algorithm exploiting the structural properties of the $\ell_0$-regularization path, which is piecewise constant with respect to $\lambda$. Both algorithms are empirically evaluated on difficult inverse problems involving ill-conditioned dictionaries. Finally, we show that they can easily be coupled with usual methods of model order selection.
    Comment: 38 pages.
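
    To make the forward-backward greedy idea concrete, here is a minimal sketch of the Single Best Replacement move that the continuation algorithm extends: try flipping the membership of every index in the current support (an insertion or a removal), keep the single flip that most decreases $\|y - Ax\|_2^2 + \lambda\|x\|_0$, and stop when no flip improves the cost. This is a from-scratch illustration, not the authors' code; the least-squares subproblem is re-solved from scratch for clarity, where an efficient implementation would update it recursively.

    import numpy as np

    def support_cost(A, y, support, lam):
        """Best residual achievable on the given support, plus the l0 price."""
        if not support:
            return float(y @ y)
        As = A[:, sorted(support)]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        resid = y - As @ coef
        return float(resid @ resid) + lam * len(support)

    def single_best_replacement(A, y, lam):
        support = set()
        cost = support_cost(A, y, support, lam)
        while True:
            best_flip, best_cost = None, cost
            for i in range(A.shape[1]):
                trial = support ^ {i}       # flip index i into or out of the support
                c = support_cost(A, y, trial, lam)
                if c < best_cost:
                    best_flip, best_cost = trial, c
            if best_flip is None:           # no single flip improves the cost
                return support
            support, cost = best_flip, best_cost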