Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
This paper develops a general framework for solving a variety of convex cone
problems that frequently arise in signal processing, machine learning,
statistics, and other fields. The approach works as follows: first, determine a
conic formulation of the problem; second, determine its dual; third, apply
smoothing; and fourth, solve using an optimal first-order method. A merit of
this approach is its flexibility: for example, all compressed sensing problems
can be solved via this approach. These include models with objective
functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or
a combination thereof. In addition, the paper introduces a number of
technical contributions such as a novel continuation scheme, a novel approach
for controlling the step size, and some new results showing that the smoothed and
unsmoothed problems are sometimes formally equivalent. Combined with our
framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with
state-of-the-art methods for solving intensively studied problems such as the
LASSO. Further, numerical experiments show that one can solve the Dantzig
selector problem, for which no efficient large-scale solvers exist, in a few
hundred iterations. Finally, the paper is accompanied with a software release.
This software is not a single, monolithic solver; rather, it is a suite of
programs and routines designed to serve as building blocks for constructing
complete algorithms.
Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
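The four-step recipe described above (conic formulation, dual, smoothing, optimal first-order method) can be sketched on the simplest compressed-sensing instance, basis pursuit ($\min \|x\|_1$ subject to $Ax=b$). The sketch below is a minimal illustration of the smoothed-dual idea only, not the TFOCS implementation; the function name, the smoothing parameter `mu`, and the iteration count are assumptions:

```python
import numpy as np

def smoothed_bp(A, b, mu=0.1, iters=2000):
    """Sketch of the smoothed-dual approach for basis pursuit:
    add (mu/2)||x||^2 to ||x||_1, which makes the dual smooth, then
    run an accelerated ("optimal") first-order method on the dual."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2 / mu  # Lipschitz constant of the dual gradient
    lam = np.zeros(m)                   # dual variable for the constraint Ax = b
    lam_prev = lam.copy()
    for k in range(1, iters + 1):
        # Nesterov-style momentum step
        z = lam + (k - 1) / (k + 2) * (lam - lam_prev)
        # primal minimizer of the smoothed Lagrangian: soft-threshold A^T z
        g = A.T @ z
        x = np.sign(g) * np.maximum(np.abs(g) - 1.0, 0.0) / mu
        lam_prev = lam
        lam = z + (b - A @ x) / L       # dual gradient ascent step
    g = A.T @ lam
    return np.sign(g) * np.maximum(np.abs(g) - 1.0, 0.0) / mu
```

As `mu` shrinks, the smoothed solution approaches the basis-pursuit solution, at the cost of a larger Lipschitz constant and hence more iterations; the continuation scheme the abstract mentions addresses exactly this trade-off.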
Discussion: The Dantzig selector: Statistical estimation when p is much larger than n
Discussion of "The Dantzig selector: Statistical estimation when p is much
larger than n" [math/0506081]
Comment: Published in at http://dx.doi.org/10.1214/009053607000000424 the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
The Dantzig selector: Statistical estimation when p is much larger than n
In many important statistical applications, the number of variables or
parameters $p$ is much larger than the number of observations $n$. Suppose then
that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a
parameter vector of interest, $X$ is a data matrix with possibly far fewer rows
than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0,\sigma^2)$. Is it
possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate
$\beta$, we introduce a new estimator--we call it the Dantzig selector--which
is a solution to the $\ell_1$-regularization problem
$$\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad
\|X^*r\|_{\ell_{\infty}}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,$$ where $r = y - X\tilde{\beta}$ is
the residual vector and $t$ is a positive scalar. We show
that if $X$ obeys a uniform uncertainty principle (with unit-normed columns)
and if the true parameter vector $\beta$ is sufficiently sparse (which here
roughly guarantees that the model is identifiable), then with very large
probability,
$$\|\hat{\beta}-\beta\|_{\ell_2}^2\leq C^2\cdot 2\log p\cdot\Bigl(\sigma^2+\sum_i\min(\beta_i^2,\sigma^2)\Bigr).$$
Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be
much smaller than $p$, our estimator achieves a loss within a logarithmic
factor of the ideal mean squared error one would achieve with an oracle which
would supply perfect information about which coordinates are nonzero, and which
are above the noise level. In multivariate regression and from a model
selection viewpoint, our result says that it is possible nearly to select the
best subset of variables by solving a very simple convex program, which, in
fact, can easily be recast as a convenient linear program (LP).
Comment: This paper discussed in: [arXiv:0803.3124], [arXiv:0803.3126],
[arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135].
Rejoinder in [arXiv:0803.3136]. Published in at
http://dx.doi.org/10.1214/009053606000001523 the Annals of Statistics
(http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
(http://www.imstat.org)
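The LP recast mentioned at the end of the abstract can be sketched directly: with the split $\tilde{\beta} = u - v$, $u, v \geq 0$, minimizing $\|\tilde{\beta}\|_{\ell_1}$ under $\|X^*(y - X\tilde{\beta})\|_{\ell_\infty} \leq \lambda$ becomes a standard-form LP. The sketch below (a hypothetical `dantzig_selector` helper, not the authors' code) uses SciPy's `linprog`; the scalar `lam` stands in for the bound $(1+t^{-1})\sqrt{2\log p}\cdot\sigma$:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve  min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= lam
    as an LP via beta = u - v with u, v >= 0 (illustrative sketch)."""
    n, p = X.shape
    G = X.T @ X
    Xty = X.T @ y
    c = np.ones(2 * p)  # objective: sum(u) + sum(v) = ||beta||_1
    # |X^T y - G(u - v)| <= lam, written as two one-sided inequalities:
    #   -G u + G v <= lam - X^T y   and   G u - G v <= lam + X^T y
    A_ub = np.block([[-G, G], [G, -G]])
    b_ub = np.concatenate([lam - Xty, lam + Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:p] - uv[p:]
```

In the noiseless case $y = X\beta$, the true $\beta$ is feasible for any $\lambda \geq 0$, so the LP's optimal value is at most $\|\beta\|_{\ell_1}$; this gives a cheap sanity check on any implementation.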