Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
This paper develops a general framework for solving a variety of convex cone
problems that frequently arise in signal processing, machine learning,
statistics, and other fields. The approach works as follows: first, determine a
conic formulation of the problem; second, determine its dual; third, apply
smoothing; and fourth, solve using an optimal first-order method. A merit of
this approach is its flexibility: for example, all compressed sensing problems
can be solved via this approach. These include models with objective
functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or
a combination thereof. In addition, the paper introduces a number of
technical contributions, such as a novel continuation scheme, a novel approach
for controlling the step size, and new results showing that the smoothed and
unsmoothed problems are sometimes formally equivalent. Combined with our
framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with
state-of-the-art methods for solving intensively studied problems such as the
LASSO. Further, numerical experiments show that one can solve the Dantzig
selector problem, for which no efficient large-scale solvers exist, in a few
hundred iterations. Finally, the paper is accompanied with a software release.
This software is not a single, monolithic solver; rather, it is a suite of
programs and routines designed to serve as building blocks for constructing
complete algorithms.
Comment: The TFOCS software is available at http://tfocs.stanford.edu. This
version has updated references.
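The four-step recipe above (conic formulation, dual, smoothing, optimal first-order method) is what TFOCS automates. As a loose illustration of the final ingredient only, here is a minimal proximal-gradient (ISTA) sketch for the LASSO on a tiny hypothetical problem; it is not the paper's dual-smoothing algorithm, and all data below are made up.

```python
# Minimal proximal-gradient (ISTA) sketch for the LASSO
#   min_x 0.5*||A x - b||_2^2 + lam*||x||_1,
# one example of an optimal-first-order-style building block.
# Illustrative only: tiny hypothetical data, plain Python.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam, step, iters=500):
    n = len(A[0])
    x = [0.0] * n
    At = [[A[i][j] for i in range(len(A))] for j in range(n)]  # transpose of A
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]       # residual A x - b
        g = matvec(At, r)                                      # gradient A^T (A x - b)
        # gradient step followed by soft-thresholding, the prox of lam*||.||_1
        x = [max(abs(v) - step * lam, 0.0) * (1.0 if v > 0 else -1.0)
             for v in (xi - step * gi for xi, gi in zip(x, g))]
    return x

A = [[1.0, 0.0], [0.0, 1.0]]   # hypothetical measurement matrix
b = [1.0, 0.2]
x = ista(A, b, lam=0.5, step=1.0)
print(x)  # for A = I this is soft-thresholding of b: roughly [0.5, 0.0]
```

With an identity measurement matrix the iteration reduces to one soft-thresholding step, which makes the prox operator easy to check by hand.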
The Discrete Dantzig Selector: Estimating Sparse Linear Models via Mixed Integer Linear Optimization
We propose a novel high-dimensional linear regression estimator: the Discrete
Dantzig Selector, which minimizes the number of nonzero regression coefficients
subject to a budget on the maximal absolute correlation between the features
and residuals. Motivated by the significant advances in integer optimization
over the past 10-15 years, we present a Mixed Integer Linear Optimization
(MILO) approach to obtain certifiably optimal global solutions to this
nonconvex optimization problem. The current state of algorithmics in integer
optimization makes our proposal substantially more computationally attractive
than the least squares subset selection framework based on integer quadratic
optimization, recently proposed in [8], and the continuous nonconvex quadratic
optimization framework of [33]. We propose new discrete first-order methods
which, when paired with state-of-the-art MILO solvers, lead to good solutions
for the Discrete Dantzig Selector problem for a given computational budget. We
illustrate that our integrated approach provides globally optimal solutions in
significantly shorter computation times, when compared to off-the-shelf MILO
solvers. We demonstrate both theoretically and empirically that in a wide range
of regimes the statistical properties of the Discrete Dantzig Selector are
superior to those of popular ℓ1-based approaches. We illustrate that
our approach can handle problem instances with p = 10,000 features with
certifiable optimality, making it a highly scalable combinatorial variable
selection approach in sparse linear modeling.
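The estimator described above minimizes the number of nonzero coefficients subject to an ℓ∞ bound on the feature–residual correlations. A MILO solver certifies optimality at scale; purely as an illustration of the combinatorial problem being solved, here is a brute-force enumeration over supports for a tiny hypothetical instance (an orthonormal design, which makes the per-support fit trivial). This is not the paper's MILO formulation or its discrete first-order method.

```python
# Brute-force illustration of the Discrete Dantzig Selector:
#   minimize ||beta||_0  subject to  ||X^T (y - X beta)||_inf <= delta.
# Hypothetical toy instance (p = 3, orthonormal X); real instances use MILO.
from itertools import combinations

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def feasible(X, y, beta, delta):
    # check ||X^T (y - X beta)||_inf <= delta
    r = [yi - ri for yi, ri in zip(y, matvec(X, beta))]
    Xt = [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]
    return max(abs(c) for c in matvec(Xt, r)) <= delta

def discrete_dantzig_bruteforce(X, y, delta):
    p = len(X[0])
    for k in range(p + 1):                      # smallest support first
        for S in combinations(range(p), k):
            # For this toy orthonormal X, the fit on S just copies y;
            # a general solver would fit beta on S via a small LP.
            beta = [y[j] if j in S else 0.0 for j in range(p)]
            if feasible(X, y, beta, delta):
                return beta
    return None

X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # hypothetical design
y = [1.0, 0.05, 0.8]
beta = discrete_dantzig_bruteforce(X, y, delta=0.1)
print(beta)  # [1.0, 0.0, 0.8]: two nonzeros suffice at this tolerance
```

Enumerating supports is exponential in p; the point of the MILO formulation is that modern integer-optimization solvers prune this search with certificates of optimality.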
Performance Analysis of Sparse Recovery Based on Constrained Minimal Singular Values
The stability of sparse signal reconstruction is investigated in this paper.
We design efficient algorithms to verify the sufficient condition for unique
sparse recovery. One of our algorithms produces results comparable with
the state-of-the-art technique and performs orders of magnitude faster. We show
that the ℓ1-constrained minimal singular value (ℓ1-CMSV) of the
measurement matrix determines, in a very concise manner, the recovery
performance of ℓ1-based algorithms such as the Basis Pursuit, the Dantzig
selector, and the LASSO estimator. Compared with performance analysis involving
the Restricted Isometry Constant, the arguments in this paper are much less
complicated and provide more intuition on the stability of sparse signal
recovery. We also show that, with high probability, the subgaussian ensemble
generates measurement matrices with ℓ1-CMSVs bounded away from zero, as
long as the number of measurements is relatively large. To compute the
ℓ1-CMSV and its lower bound, we design two algorithms based on the
interior point method and the semidefinite relaxation.
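The quantity in question is the smallest value of ||Az||_2/||z||_2 over vectors z obeying the ℓ1 constraint ||z||_1^2 ≤ s·||z||_2^2. The paper computes it with interior-point and semidefinite-relaxation algorithms; as a much cruder illustration of the definition only, here is a naive Monte Carlo search over feasible vectors, which yields an upper bound on the ℓ1-CMSV. The matrix and the parameter s below are hypothetical.

```python
# Naive Monte Carlo upper bound on the l1-CMSV
#   rho_s(A) = min { ||A z||_2 / ||z||_2 : z != 0, ||z||_1^2 <= s*||z||_2^2 }.
# Random search only ever finds feasible candidates, so it gives an upper
# bound; the paper's interior-point/SDP algorithms compute certified values.
import math
import random

def l1_cmsv_upper(A, s, trials=20000, seed=0):
    rng = random.Random(seed)
    n = len(A[0])
    best = float("inf")
    for _ in range(trials):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        l1 = sum(abs(v) for v in z)
        l2 = math.sqrt(sum(v * v for v in z))
        if l1 * l1 > s * l2 * l2:        # outside the l1-constraint set
            continue
        Az = [sum(a * v for a, v in zip(row, z)) for row in A]
        best = min(best, math.sqrt(sum(w * w for w in Az)) / l2)
    return best

A = [[1.0, 0.3, 0.0], [0.0, 1.0, 0.3]]   # hypothetical 2x3 measurement matrix
print(l1_cmsv_upper(A, s=1.5))            # an upper bound on rho_1.5(A)
```

Note that this A has a nontrivial nullspace, but the nullspace direction violates the ℓ1 constraint at s = 1.5, so the constrained minimum stays bounded away from zero; this is exactly the mechanism the ℓ1-CMSV captures.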