One-bit compressive sensing with norm estimation
Consider the recovery of an unknown signal $x$ from quantized linear
measurements. In the one-bit compressive sensing setting, one typically assumes
that $x$ is sparse, and that the measurements are of the form
$y_i = \operatorname{sign}(\langle a_i, x \rangle)$. Since such
measurements give no information on the norm of $x$, recovery methods from
such measurements typically assume that $\|x\|_2 = 1$. We show that if one
allows more generally for quantized affine measurements of the form
$y_i = \operatorname{sign}(\langle a_i, x \rangle + b_i)$, and if the vectors
$a_i$ are random, an appropriate choice of the affine shifts $b_i$ allows
norm recovery to be easily incorporated into existing methods for one-bit
compressive sensing. Additionally, we show that for arbitrary fixed $x$ in
the annulus $r \le \|x\|_2 \le R$, one may estimate the norm up to additive error $\delta$ from
such binary measurements through a single evaluation of the inverse Gaussian
error function. Finally, all of our recovery guarantees can be made universal
over sparse vectors, in the sense that with high probability, one set of
measurements and thresholds can successfully estimate all sparse vectors
within a Euclidean ball of known radius.

Comment: 20 pages, 2 figures
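The norm-estimation idea in the abstract can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's algorithm: it takes standard Gaussian measurement vectors and a single constant affine shift $\tau$ (the paper's shifts $b_i$ may be chosen differently). Since $\langle a_i, x \rangle \sim N(0, \|x\|_2^2)$, the fraction of positive one-bit measurements concentrates around $\Phi(\tau / \|x\|_2)$, so a single inverse Gaussian CDF evaluation recovers the norm.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n, m = 100, 20000

# Signal with true norm 2.0 (unknown to the estimator).
x = rng.standard_normal(n)
x *= 2.0 / np.linalg.norm(x)

tau = 3.0                           # constant affine shift (illustrative choice)
A = rng.standard_normal((m, n))     # Gaussian measurement vectors
y = np.sign(A @ x + tau)            # one-bit affine measurements

# P(y_i = +1) = Phi(tau / ||x||_2), hence ||x||_2 ~= tau / Phi^{-1}(p_hat).
p_hat = float(np.mean(y > 0))
norm_est = tau / NormalDist().inv_cdf(p_hat)
print(norm_est)                     # close to the true norm 2.0
```

The estimate's accuracy degrades when $p$ is near 0 or 1 (i.e., when $\tau$ is large relative to $\|x\|_2$), which is one reason the paper's guarantee is stated over an annulus $r \le \|x\|_2 \le R$.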
A Path Algorithm for Constrained Estimation
Many least squares problems involve affine equality and inequality
constraints. Although there are a variety of methods for solving such problems,
most statisticians find constrained estimation challenging. The current paper
proposes a new path following algorithm for quadratic programming based on
exact penalization. Similar penalties arise in regularization in model
selection. Classical penalty methods solve a sequence of unconstrained problems
that put greater and greater stress on meeting the constraints. In the limit as
the penalty constant tends to $\infty$, one recovers the constrained solution.
In the exact penalty method, squared penalties are replaced by absolute value
penalties, and the solution is recovered for a finite value of the penalty
constant. The exact path following method starts at the unconstrained solution
and follows the solution path as the penalty constant increases. In the
process, the solution path hits, slides along, and exits from the various
constraints. Path following in lasso penalized regression, in contrast, starts
with a large value of the penalty constant and works its way downward. In both
settings, inspection of the entire solution path is revealing. Just as with the
lasso and generalized lasso, it is possible to plot the effective degrees of
freedom along the solution path. For a strictly convex quadratic program, the
exact penalty algorithm can be framed entirely in terms of the sweep operator
of regression analysis. A few well chosen examples illustrate the mechanics and
potential of path following.

Comment: 26 pages, 5 figures
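The contrast between classical squared penalties and exact absolute-value penalties can be seen on a one-dimensional toy problem. This is a minimal sketch, not the paper's sweep-operator path algorithm: minimize $(x-2)^2$ subject to $x \le 1$, whose constrained optimum is $x^* = 1$. The squared penalty only approaches the constraint as the penalty constant grows, while the absolute-value penalty lands on it exactly at a finite penalty constant.

```python
from scipy.optimize import minimize_scalar

# Toy problem: minimize (x - 2)^2 subject to x <= 1; constrained optimum x* = 1.
def solve(penalty, rho):
    res = minimize_scalar(lambda x: (x - 2.0) ** 2 + rho * penalty(x),
                          bounds=(-10.0, 10.0), method="bounded")
    return res.x

squared = lambda x: max(x - 1.0, 0.0) ** 2   # classical quadratic penalty
exact   = lambda x: max(x - 1.0, 0.0)        # exact absolute-value penalty

for rho in [1.0, 2.0, 10.0, 100.0]:
    print(f"rho={rho:6.1f}  squared -> {solve(squared, rho):.4f}"
          f"  exact -> {solve(exact, rho):.4f}")

# Squared penalty: minimizer is (2 + rho)/(1 + rho), reaching 1 only as rho -> infinity.
# Exact penalty: minimizer is max(2 - rho/2, 1), equal to 1 for every rho >= 2.
```

The finite-rho exactness is what lets the path algorithm start at the unconstrained solution and track the minimizer as the penalty constant increases, rather than driving the constant to infinity.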