    Computing one-bit compressive sensing via double-sparsity constrained optimization

    One-bit compressive sensing is popular in signal processing and communications because of its low storage cost and hardware complexity. However, recovery remains challenging because only one-bit (sign) information about the measurements is available. In this paper, we formulate one-bit compressed sensing as a double-sparsity constrained optimization problem. We establish first-order optimality conditions for this nonconvex and discontinuous problem via the newly introduced τ-stationarity and, based on them, propose a gradient projection subspace pursuit (GPSP) approach with global convergence and a fast convergence rate. Numerical experiments against other leading solvers demonstrate the high efficiency of the proposed algorithm in both computation time and quality of signal recovery.
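    The gradient-projection component of this kind of approach can be pictured with a much simpler routine. The sketch below is a generic projected-gradient loop with hard thresholding on a one-sided quadratic sign-consistency loss; the loss, step size, initialisation, and names such as `one_bit_pgd` are illustrative assumptions, not the paper's GPSP method or its double-sparsity model. It recovers a unit-norm s-sparse signal from one-bit measurements y = sign(Ax).

```python
import numpy as np

def hard_threshold(x, s):
    """Project onto the sparsity set: keep the s largest-magnitude entries of x."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def one_bit_pgd(A, y, s, step=0.05, iters=300):
    """Illustrative sketch (not GPSP): minimise sum_i max(0, -y_i*(Ax)_i)^2
    subject to ||x||_0 <= s by a gradient step followed by hard thresholding.
    One-bit data carry no scale information, so x is renormalised each iteration."""
    x = hard_threshold(A.T @ y, s)            # simple correlation-based start
    x /= max(np.linalg.norm(x), 1e-12)
    for _ in range(iters):
        r = np.maximum(0.0, -y * (A @ x))     # magnitude of violated sign constraints
        grad = -A.T @ (y * r)                 # gradient of the surrogate loss
        x = hard_threshold(x - step * grad, s)
        x /= max(np.linalg.norm(x), 1e-12)
    return x

if __name__ == "__main__":
    # tiny synthetic demo: sparse unit-norm signal observed through sign measurements
    rng = np.random.default_rng(0)
    n, m, s = 256, 512, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x_true /= np.linalg.norm(x_true)
    A = rng.standard_normal((m, n))
    y = np.sign(A @ x_true)
    x_hat = one_bit_pgd(A, y, s)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```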

    Robust decoding from 1-bit compressive sampling with ordinary and regularized least squares

    Quadratic convergence of smoothing Newton's method for 0/1 loss optimization

    The 0/1 loss function is widely recognized as one of the most natural choices for modelling classification errors, with applications ranging from support vector machines to 1-bit compressed sensing. Because of its combinatorial nature, existing research has been dominated by methods based on convex relaxations or smoothing approximations, which often provide approximate solutions of good quality. However, such methods do not optimize the 0/1 loss function directly, so no optimality has been established for the original problem. This paper studies the optimality conditions of 0/1 loss minimization and, for the first time, develops a Newton's method that directly optimizes the 0/1 loss with local quadratic convergence under reasonable conditions. Extensive numerical experiments demonstrate the superior performance one would expect from Newton-type methods.
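    The smoothing-then-Newton idea can be illustrated with a generic surrogate. The sketch below replaces each 0/1 error indicator 1[y_i x_iᵀw ≤ 0] with the smooth approximation σ(−y_i x_iᵀw / μ) and applies damped Newton steps to that surrogate; the sigmoid surrogate, ridge term, warm start, and names such as `smoothed_01_newton` are assumptions made for illustration, not the paper's smoothing function or its algorithm.

```python
import numpy as np

def sigmoid(t):
    """Numerically stable elementwise sigmoid."""
    out = np.empty_like(t)
    pos = t >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-t[pos]))
    e = np.exp(t[~pos])
    out[~pos] = e / (1.0 + e)
    return out

def surrogate(w, X, y, mu):
    """Smoothed 0/1 loss: sum_i sigma(-y_i * x_i^T w / mu)."""
    return np.sum(sigmoid(-(y * (X @ w)) / mu))

def smoothed_01_newton(X, y, mu=0.5, ridge=1e-2, iters=30):
    """Illustrative damped Newton method on the sigmoid-smoothed 0/1 loss."""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]          # least-squares warm start
    for _ in range(iters):
        t = y * (X @ w)                               # classification margins
        s = sigmoid(-t / mu)                          # smoothed per-sample loss
        g = X.T @ (-(y / mu) * s * (1.0 - s))         # gradient of the surrogate
        h = (s * (1.0 - s) * (1.0 - 2.0 * s)) / mu**2 # per-sample Hessian weights
        H = X.T @ (h[:, None] * X) + ridge * np.eye(d)
        p = np.linalg.solve(H, g)
        # backtracking keeps the nonconvex surrogate from blowing up
        step, f0 = 1.0, surrogate(w, X, y, mu)
        while step > 1e-6 and surrogate(w - step * p, X, y, mu) > f0:
            step *= 0.5
        w = w - step * p
    return w

if __name__ == "__main__":
    # tiny synthetic demo: linearly separable-ish labels with mild noise
    rng = np.random.default_rng(0)
    n, d = 400, 5
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))
    w_hat = smoothed_01_newton(X, y)
    print("training 0/1 errors:", int(np.sum(np.sign(X @ w_hat) != y)))
```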