
    Templates for Convex Cone Problems with Applications to Sparse Signal Recovery

    This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, $\|Wx\|_1$ where $W$ is arbitrary, or a combination thereof. The paper also introduces a number of technical contributions, such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable, and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
    Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
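
    The four-step recipe above can be made concrete on the simplest compressed sensing problem, basis pursuit: minimize $\|x\|_1$ subject to $Ax = b$. The sketch below is not the TFOCS code (the released software is a MATLAB package); it is a minimal Python illustration of the smoothed-dual idea, with the function names, the fixed smoothing parameter `mu`, and the iteration count all chosen here for exposition. Adding $(\mu/2)\|x\|^2$ to the objective makes the dual differentiable, and the dual gradient has Lipschitz constant $\|A\|^2/\mu$, so an accelerated first-order method applies directly. (The paper's continuation scheme, which gradually decreases the smoothing, is omitted.)

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def smoothed_bp_dual(A, b, mu=1e-2, iters=2000):
    """Basis pursuit (min ||x||_1 s.t. Ax = b) via its smoothed dual.

    Add (mu/2)*||x||^2 to the objective so the dual becomes smooth,
    then maximize the dual with Nesterov-style accelerated gradient ascent.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2 / mu      # Lipschitz constant of the dual gradient
    lam, lam_prev, t_prev = np.zeros(m), np.zeros(m), 1.0
    x = np.zeros(n)
    for _ in range(iters):
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
        y = lam + ((t_prev - 1.0) / t) * (lam - lam_prev)    # momentum point
        # primal point attaining the inner minimum of the Lagrangian
        x = soft_threshold(A.T @ y, 1.0) / mu
        lam_prev, lam, t_prev = lam, y + (b - A @ x) / L, t  # ascent step
    return x

# toy demo: recover a 5-sparse vector from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120))
x0 = np.zeros(120)
x0[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
print(np.linalg.norm(smoothed_bp_dual(A, A @ x0) - x0))
```

    Smaller `mu` makes the smoothed problem closer to basis pursuit but worsens the conditioning of the dual, which is exactly the trade-off the paper's continuation scheme is designed to manage.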

    Discussion: The Dantzig selector: Statistical estimation when p is much larger than n

    Discussion of "The Dantzig selector: Statistical estimation when pp is much larger than nn" [math/0506081]Comment: Published in at http://dx.doi.org/10.1214/009053607000000424 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org

    The Dantzig selector: Statistical estimation when p is much larger than n

    In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, which we call the Dantzig selector, defined as a solution to the $\ell_1$-regularization problem
    $$\min_{\tilde{\beta} \in \mathbf{R}^p} \|\tilde{\beta}\|_{\ell_1} \quad \text{subject to} \quad \|X^* r\|_{\ell_\infty} \le (1 + t^{-1}) \sqrt{2 \log p} \cdot \sigma,$$
    where $r$ is the residual vector $y - X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability,
    $$\|\hat{\beta} - \beta\|_{\ell_2}^2 \le C^2 \cdot 2 \log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).$$
    Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program which, in fact, can easily be recast as a convenient linear program (LP).
    Comment: This paper is discussed in [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135]; rejoinder in [arXiv:0803.3136]. Published at http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
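
    The closing remark, that the Dantzig selector can be recast as a linear program, is easy to make concrete. The sketch below is a minimal illustration, not the authors' solver: it splits $\tilde{\beta} = u - v$ with $u, v \ge 0$, so the $\ell_1$ objective becomes linear and the sup-norm constraint becomes $2p$ linear inequalities, and hands the result to an off-the-shelf LP solver. The function name `dantzig_selector` and the choice of SciPy's HiGHS backend are assumptions made here for exposition.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, delta):
    """Dantzig selector as an LP: min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= delta.

    Split b = u - v with u, v >= 0; then ||b||_1 = sum(u) + sum(v) and the
    infinity-norm constraint unrolls into 2p linear inequalities.
    """
    n, p = X.shape
    G = X.T @ X                        # p x p Gram matrix
    Xty = X.T @ y
    c = np.ones(2 * p)                 # objective: sum(u) + sum(v)
    # X^T X (u - v) <= X^T y + delta   and   X^T X (v - u) <= delta - X^T y
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([Xty + delta, delta - Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v
```

    Per the constraint in the abstract, a natural choice of the right-hand side is `delta = (1 + 1/t) * np.sqrt(2 * np.log(p)) * sigma`.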