
    Computational Methods for Sparse Solution of Linear Inverse Problems

    The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
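
    Among the algorithm families such surveys typically cover are greedy pursuits. As a rough, generic illustration (not a method specific to this paper), the sketch below implements plain orthogonal matching pursuit in NumPy: repeatedly pick the dictionary column most correlated with the residual, then refit by least squares on the selected columns. The function name `omp`, the random dictionary, and the sparsity level `k` are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit sketch.

    A : (m, n) dictionary whose columns are the elementary signals
    y : (m,) target signal
    k : number of atoms to select (assumed sparsity level)
    """
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# usage: a random dictionary and a 3-sparse target (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 70, 150]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = omp(A, y, k=3)
```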

    Optimal Rates of Convergence for Noisy Sparse Phase Retrieval via Thresholded Wirtinger Flow

    This paper considers the noisy sparse phase retrieval problem: recovering a sparse signal $x \in \mathbb{R}^p$ from noisy quadratic measurements $y_j = (a_j' x)^2 + \epsilon_j$, $j = 1, \ldots, m$, with independent sub-exponential noise $\epsilon_j$. The goals are to understand the effect of the sparsity of $x$ on the estimation precision and to construct a computationally feasible estimator that achieves the optimal rates. Inspired by the Wirtinger Flow [12] proposed for noiseless and non-sparse phase retrieval, a novel thresholded gradient descent algorithm is proposed and shown to adaptively achieve the minimax optimal rates of convergence over a wide range of sparsity levels when the $a_j$'s are independent standard Gaussian random vectors, provided that the sample size is sufficiently large compared to the sparsity of $x$.
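
    The abstract does not spell out the update rule, so the sketch below is only a toy illustration of the general idea: a gradient step on the squared loss over the quadratic measurements $y_j \approx (a_j' x)^2$, followed by a thresholding step to promote sparsity. The spectral-style initialization, fixed step size, and soft-thresholding rule here are ad hoc assumptions rather than the authors' tuned procedure, and the signal is only identifiable up to a global sign.

```python
import numpy as np

def soft_threshold(z, tau):
    """Elementwise soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def thresholded_gd(A, y, mu=0.1, tau=0.05, iters=500):
    """Toy thresholded gradient descent for y_j ~ (a_j' x)^2.

    A : (m, p) matrix with rows a_j; y : (m,) quadratic measurements.
    Step size, threshold, and initialization are ad hoc, not the
    carefully tuned choices analyzed in the paper.
    """
    m, p = A.shape
    # crude spectral-style initialization: leading eigenvector of (1/m) sum_j y_j a_j a_j'
    Y = (A * y[:, None]).T @ A / m
    _, V = np.linalg.eigh(Y)
    scale = np.sqrt(max(np.mean(y), 0.0))   # rough estimate of ||x||
    x = V[:, -1] * scale
    for _ in range(iters):
        Ax = A @ x
        # gradient of (1/(2m)) * sum_j ((a_j' x)^2 - y_j)^2
        grad = (2.0 / m) * (A.T @ ((Ax ** 2 - y) * Ax))
        # gradient step (scaled by the squared norm estimate) plus soft thresholding
        x = soft_threshold(x - (mu / max(scale ** 2, 1e-12)) * grad, tau)
    return x
```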

    Coordinate descent algorithms for lasso penalized regression

    Imposition of a lasso penalty shrinks parameter estimates toward zero and performs continuous model selection. Lasso penalized regression is capable of handling linear regression problems where the number of predictors far exceeds the number of cases. This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty. The previously known $\ell_2$ algorithm is based on cyclic coordinate descent. Our new $\ell_1$ algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary $\ell_1$ regression. Each algorithm relies on a tuning constant that can be chosen by cross-validation. In some regression problems it is natural to group parameters and penalize parameters group by group rather than separately. If the group penalty is proportional to the Euclidean norm of the parameters of the group, then it is possible to majorize the norm and reduce parameter estimation to $\ell_2$ regression with a lasso penalty. Thus, the existing algorithm can be extended to novel settings. Each of the algorithms discussed is tested via either simulated or real data or both. The Appendix proves that a greedy form of the $\ell_2$ algorithm converges to the minimum value of the objective function. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/07-AOAS147.
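
    As a generic illustration of the cyclic coordinate descent idea for the lasso objective $\tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$, the sketch below soft-thresholds one coordinate at a time while maintaining a running residual. It is a textbook-style update written in NumPy, not the paper's implementation, and the greedy and group-penalty variants are not shown.

```python
import numpy as np

def soft_threshold(z, lam):
    """Scalar soft thresholding."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cyclic_cd(X, y, lam, iters=100):
    """Minimal cyclic coordinate descent sketch for
    (1/2) * ||y - X b||_2^2 + lam * ||b||_1.
    """
    n, p = X.shape
    b = np.zeros(p)
    resid = y - X @ b                 # running residual y - X b
    col_sq = (X ** 2).sum(axis=0)     # squared column norms (assumed nonzero)
    for _ in range(iters):
        for j in range(p):
            # correlation of column j with the partial residual excluding coordinate j
            rho = X[:, j] @ resid + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            resid += X[:, j] * (b[j] - b_new)   # update the running residual in place
            b[j] = b_new
    return b
```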