Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
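To make the setup concrete, here is a minimal NumPy sketch of orthogonal matching pursuit (OMP), one of the greedy pursuit methods covered by surveys of this kind: it builds up an approximation of a target y from at most k columns (atoms) of a dictionary A, refitting by least squares after each selection. The dictionary, target, and sparsity level below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: approximate y with at most k atoms of A."""
    residual = y.copy()
    support = []                       # indices of selected atoms
    coef = np.zeros(0)
    for _ in range(k):
        # greedy step: pick the atom most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # projection step: least-squares refit on all selected atoms
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# illustrative use: recover a 3-sparse coefficient vector
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 40, 120]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
```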
Optimal Rates of Convergence for Noisy Sparse Phase Retrieval via Thresholded Wirtinger Flow
This paper considers the noisy sparse phase retrieval problem: recovering a sparse signal $x \in \mathbb{R}^p$ from noisy quadratic measurements $y_j = (a_j^\top x)^2 + \epsilon_j$, $j = 1, \ldots, m$, with independent sub-exponential noise $\epsilon_j$. The goals are to understand the effect of the sparsity of $x$ on the estimation precision and to construct a computationally feasible estimator to achieve the optimal rates. Inspired by the Wirtinger Flow [12] proposed for noiseless and non-sparse phase retrieval, a novel thresholded gradient descent algorithm is proposed and it is shown to adaptively achieve the minimax optimal rates of convergence over a wide range of sparsity levels when the $a_j$'s are independent standard Gaussian random vectors, provided that the sample size is sufficiently large compared to the sparsity of $x$.
Comment: 28 pages, 4 figures
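As a minimal sketch of the kind of thresholded gradient descent the abstract describes, assuming measurements $y_j = (a_j^\top x)^2 + \epsilon_j$ as above: each iteration takes a gradient step on the empirical squared loss and then hard-thresholds, keeping only the k largest-magnitude coordinates. The spectral initialization, fixed step size, and keep-k rule are simplifications for illustration, not the paper's exact algorithm, which uses soft thresholding with a data-driven level and comes with precise step-size conditions.

```python
import numpy as np

def thresholded_gd(A, y, k, steps=500, eta=0.05):
    """Hard-thresholded gradient descent for y_j ~ (a_j' x)^2
    (a simplified stand-in for thresholded Wirtinger flow)."""
    m, p = A.shape
    # crude spectral initialization from M = (1/m) sum_j y_j a_j a_j';
    # for Gaussian a_j its top eigenvalue is roughly 3 * ||x||^2
    M = (A.T * y) @ A / m
    eigvals, eigvecs = np.linalg.eigh(M)
    x = eigvecs[:, -1] * np.sqrt(max(eigvals[-1], 0.0) / 3.0)
    for _ in range(steps):
        Ax = A @ x
        # gradient of the mean squared loss (1/m) sum_j ((a_j' x)^2 - y_j)^2
        grad = (4.0 / m) * (A.T @ ((Ax ** 2 - y) * Ax))
        x = x - eta * grad
        # hard threshold: zero all but the k largest-magnitude entries
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x
```

Note the sign ambiguity inherent to phase retrieval: any such method can only recover $x$ up to a global sign flip.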
Coordinate descent algorithms for lasso penalized regression
Imposition of a lasso penalty shrinks parameter estimates toward zero and
performs continuous model selection. Lasso penalized regression is capable of
handling linear regression problems where the number of predictors far exceeds
the number of cases. This paper tests two exceptionally fast algorithms for
estimating regression coefficients with a lasso penalty. The previously known $\ell_2$ algorithm is based on cyclic coordinate descent. Our new $\ell_1$ algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary $\ell_1$ regression. Each algorithm relies on a tuning constant that
can be chosen by cross-validation. In some regression problems it is natural to
group parameters and penalize parameters group by group rather than separately.
If the group penalty is proportional to the Euclidean norm of the parameters of
the group, then it is possible to majorize the norm and reduce parameter
estimation to $\ell_2$ regression with a lasso penalty. Thus, the existing
algorithm can be extended to novel settings. Each of the algorithms discussed
is tested via either simulated or real data or both. The Appendix proves that a
greedy form of the $\ell_2$ algorithm converges to the minimum value of the
objective function.
Comment: Published at http://dx.doi.org/10.1214/07-AOAS147 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
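As a minimal sketch of the cyclic coordinate descent underlying the $\ell_2$ algorithm above: for squared-error loss plus a lasso penalty, each coordinate minimization has a closed-form soft-thresholding solution, and a running residual keeps every update cheap. The variable names and fixed sweep count are illustrative; the greedy $\ell_1$ variant and the group-penalty extension are not shown.

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, sweeps=100):
    """Cyclic coordinate descent for (1/2)*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.astype(float).copy()         # running residual y - X b
    col_sq = (X ** 2).sum(axis=0)      # precomputed ||x_j||^2
    for _ in range(sweeps):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue               # skip all-zero predictors
            # inner product with the partial residual (add x_j * b_j back in)
            rho = X[:, j] @ r + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)   # keep the residual consistent
            b[j] = b_new
    return b
```

Here the penalty level lam plays the role of the tuning constant that, as the abstract notes, can be chosen by cross-validation.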
- …