
    One-bit compressive sensing with norm estimation

    Consider the recovery of an unknown signal $x$ from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that $x$ is sparse, and that the measurements are of the form $\operatorname{sign}(\langle a_i, x \rangle) \in \{\pm 1\}$. Since such measurements give no information on the norm of $x$, recovery methods from such measurements typically assume that $\| x \|_2 = 1$. We show that if one allows more generally for quantized affine measurements of the form $\operatorname{sign}(\langle a_i, x \rangle + b_i)$, and if the vectors $a_i$ are random, an appropriate choice of the affine shifts $b_i$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. Additionally, we show that for an arbitrary fixed $x$ in the annulus $r \leq \| x \|_2 \leq R$, one may estimate the norm $\| x \|_2$ up to additive error $\delta$ from $m \gtrsim R^4 r^{-2} \delta^{-2}$ such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors, in the sense that with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors $x$ within a Euclidean ball of known radius.
    Comment: 20 pages, 2 figures
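    The norm estimate in the second result has a simple mechanism: if the $a_i$ are standard Gaussian, then $\langle a_i, x \rangle \sim N(0, \| x \|_2^2)$, so with a constant shift $b_i = \tau$ the probability of observing $+1$ is $\Phi(\tau / \| x \|_2)$, and inverting the empirical frequency of $+1$s recovers the norm. The sketch below is only a numerical illustration under these assumptions (Gaussian measurements, a constant dither $\tau$; the paper's choice of shifts and error analysis are more general), using the inverse Gaussian CDF `scipy.stats.norm.ppf`, which equals the inverse error function up to rescaling.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n, m = 200, 50_000          # ambient dimension, number of one-bit measurements
x = np.zeros(n)
x[:5] = rng.normal(size=5)  # a 5-sparse signal whose norm is unknown a priori
tau = 2.0                   # constant affine shift (hypothetical choice)

A = rng.normal(size=(m, n))  # rows a_i ~ N(0, I)
y = np.sign(A @ x + tau)     # one-bit affine measurements sign(<a_i, x> + tau)

# Since <a_i, x> ~ N(0, ||x||_2^2), we have P(y_i = +1) = Phi(tau / ||x||_2).
# Invert the empirical frequency of +1 through the inverse Gaussian CDF.
p_hat = np.mean(y > 0)
norm_est = tau / norm.ppf(p_hat)

print(f"true norm:      {np.linalg.norm(x):.4f}")
print(f"estimated norm: {norm_est:.4f}")
```

    Heuristically, the inversion degrades as the empirical frequency approaches $1$, i.e., when $\tau / \| x \|_2$ is large, which is consistent with the $R^4 r^{-2}$ factor in the abstract's measurement bound.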

    A Path Algorithm for Constrained Estimation

    Many least squares problems involve affine equality and inequality constraints. Although there is a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current paper proposes a new path following algorithm for quadratic programming based on exact penalization. Similar penalties arise in $l_1$ regularization in model selection. Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints; in the limit as the penalty constant tends to $\infty$, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the constrained solution is recovered for a finite value of the penalty constant. The exact path following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the lasso and generalized lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well chosen examples illustrate the mechanics and potential of path following.
    Comment: 26 pages, 5 figures
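    To make the exact penalty idea concrete, the sketch below minimizes a small least squares objective under one equality constraint ($\mathbf{1}^T \beta = 1$) and nonnegativity constraints by adding absolute-value penalties $\rho\,|\mathbf{1}^T \beta - 1| + \rho \sum_j \max(-\beta_j, 0)$ and increasing $\rho$. This is only a numerical illustration of the finite-$\rho$ exactness property using generic scipy optimizers; the paper's algorithm instead follows the solution path in closed form via the sweep operator. The specific constraints and the grid of penalty constants are hypothetical choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 30, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

def f(b):                       # least squares objective
    r = y - X @ b
    return 0.5 * r @ r

def exact_penalty(b, rho):      # absolute-value (exact) penalties
    eq = abs(b.sum() - 1.0)             # equality constraint 1'b = 1
    ineq = np.maximum(-b, 0.0).sum()    # inequality constraints b >= 0
    return f(b) + rho * (eq + ineq)

# Reference: the constrained solution, computed directly with SLSQP.
ref = minimize(f, np.ones(p) / p, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda b: b.sum() - 1.0}],
               bounds=[(0, None)] * p).x

# Start at the unconstrained solution and re-solve for increasing rho.
b = np.linalg.lstsq(X, y, rcond=None)[0]
for rho in [0.1, 1.0, 10.0, 100.0]:
    b = minimize(lambda b: exact_penalty(b, rho), b, method="Nelder-Mead",
                 options={"xatol": 1e-10, "fatol": 1e-10}).x
    print(f"rho={rho:7.1f}  distance to constrained solution: "
          f"{np.linalg.norm(b - ref):.2e}")
```

    In the output, the distance to the constrained solution drops to numerical zero once $\rho$ passes a finite threshold, in contrast to a squared penalty, which reaches the constrained solution only in the limit $\rho \to \infty$.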