
    Exponential decay of reconstruction error from binary measurements of sparse signals

    Binary measurements arise naturally in a variety of statistical and engineering applications. They may be inherent to the problem, as in determining the relationship between genetics and the presence or absence of a disease, or they may result from extreme quantization. A recent influx of literature has suggested that using prior signal information can greatly improve the ability to reconstruct a signal from binary measurements. This is exemplified by one-bit compressed sensing, which takes the compressed sensing model but assumes that only the sign of each measurement is retained. It has recently been shown that the number of one-bit measurements required for signal estimation mirrors that of unquantized compressed sensing: s-sparse signals in R^n can be estimated (up to normalization) from Ω(s log(n/s)) one-bit measurements. Nevertheless, controlling the precise decay of the reconstruction error remains an open challenge. In this paper, we focus on optimizing the decay of the error as a function of the oversampling factor λ := m/(s log(n/s)), where m is the number of measurements. It is known that the error in reconstructing sparse signals from standard one-bit measurements is bounded below by Ω(λ^{-1}), so without adjusting the measurement procedure this polynomial error decay cannot be improved. However, we show that an adaptive choice of the thresholds used for quantization can lower the error rate to e^{-Ω(λ)}. This improves upon guarantees for other methods of adaptive thresholding, such as those proposed for Sigma-Delta quantization. We develop a general recursive strategy to achieve this exponential decay, along with two specific polynomial-time algorithms that fall into this framework, one based on convex programming and one on hard thresholding. This work is inspired by the one-bit compressed sensing model, in which the engineer controls the measurement procedure; the principle nevertheless extends to signal reconstruction in a variety of binary statistical models, as well as to statistical estimation problems such as logistic regression.
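    As a concrete illustration of the recursive strategy, here is a minimal Python sketch, under assumed simplifications, of the hard-thresholding flavor: each round takes sign measurements with thresholds dithered around the current estimate, forms a correlation estimate of the residual at the current error scale, hard-thresholds it, and halves the working scale, so the error contracts geometrically in the number of rounds. The function names, the per-round measurement count, and the sqrt(2)*R calibration constant are illustrative choices, not the paper's exact algorithm.

```python
# Sketch of exponential error decay via adaptive one-bit thresholds.
# Illustrative simplification, not the paper's exact method.
import numpy as np

def hard_threshold(v, s):
    """Zero all but the s largest-magnitude entries of v."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def recursive_one_bit(x, s, rounds=10, m=1000, seed=0):
    """Each round halves a working error bound R, so the final error is
    roughly R0 * 2^{-rounds}: exponential in the total number of bits."""
    rng = np.random.default_rng(seed)
    n = x.size
    x_hat = np.zeros(n)
    R = np.linalg.norm(x) + 1.0        # assumed bound on ||x - x_hat||
    for _ in range(rounds):
        A = rng.standard_normal((m, n))
        dither = R * rng.standard_normal(m)
        # Adaptive thresholds theta_i = <a_i, x_hat> - dither_i, so each
        # bit equals sign(<a_i, x - x_hat> + dither_i): a one-bit
        # measurement of the residual z = x - x_hat at scale R.
        bits = np.sign(A @ x - (A @ x_hat - dither))
        # Correlation decoder: E[bit_i a_i] = sqrt(2/pi) z / sqrt(|z|^2+R^2);
        # sqrt(2)*R is a rough calibration assuming ||z|| is close to R.
        z_est = np.sqrt(np.pi / 2) * np.sqrt(2) * R * (A.T @ bits) / m
        x_hat = hard_threshold(x_hat + z_est, s)
        R /= 2                          # geometric error contraction
    return x_hat

# Example: a 3-sparse signal in R^200.
x = np.zeros(200)
x[[3, 50, 120]] = [1.0, -0.8, 0.6]
print(np.linalg.norm(recursive_one_bit(x, s=3) - x))
```

    Note how doubling the number of rounds (and hence the oversampling factor λ) roughly squares the accuracy, in contrast to the λ^{-1} decay available from non-adaptive one-bit measurements.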

    Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons

    In this letter, we propose a sparsity-promoting feedback acquisition and reconstruction scheme for sensing, encoding, and subsequent reconstruction of spectrally sparse signals. In the proposed scheme, the spectral components are estimated by a sparsity-promoting, sliding-window algorithm running in a feedback loop. Using the estimated spectral components, a level signal is predicted, and sign measurements of the prediction error are acquired. The sparsity-promoting algorithm then iteratively estimates the spectral components from these sign measurements. Unlike many batch-based Compressive Sensing (CS) algorithms, the proposed algorithm gradually estimates and tracks slow changes in the sparse components via the sliding-window technique. We also consider the scenario in which flipping errors in the sign bits propagate across iterations (due to the feedback loop) during reconstruction, and we propose an iterative error correction algorithm that copes with this error propagation by assuming a binary-sparse occurrence model on the error sequence. Simulation results show the effective performance of the proposed scheme in comparison with existing methods in the literature. A rough sketch of the feedback loop follows below.
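    The Python fragment below streams samples one at a time, predicts each level from a running sparse estimate over a grid of candidate frequencies, and acquires only the sign of the prediction error; the same bit then drives a sign-LMS update, sparsified once per window. The dictionary, step size, and window length are assumptions for illustration, not the letter's exact algorithm.

```python
# Illustrative sketch of predictive level-comparison acquisition.
import numpy as np

def predictive_sign_stream(x, freqs, k, step=0.01, window=128):
    """Acquire sign(x[t] - predicted level) and update a k-sparse
    spectral estimate c_hat inside the feedback loop."""
    c_hat = np.zeros(len(freqs), dtype=complex)
    bits, preds = [], []
    for t, xt in enumerate(x):
        atoms = np.exp(2j * np.pi * freqs * t)   # dictionary at time t
        pred = np.real(atoms @ c_hat)            # predicted level
        b = 1.0 if xt >= pred else -1.0          # the one acquired bit
        bits.append(b)
        preds.append(pred)
        c_hat += step * b * np.conj(atoms)       # sign-LMS correction
        if (t + 1) % window == 0:                # sparsify once per window
            c_hat[np.argsort(np.abs(c_hat))[:-k]] = 0
    return np.array(bits), np.array(preds)

# Example: two on-grid tones; prediction error shrinks as c_hat locks on.
t = np.arange(4000)
freqs = np.arange(64) / 128.0
x = np.cos(2 * np.pi * freqs[10] * t) + 0.5 * np.sin(2 * np.pi * freqs[30] * t)
bits, preds = predictive_sign_stream(x, freqs, k=4)
print("mean |prediction error|, first vs last 500 samples:",
      np.mean(np.abs(x - preds)[:500]), np.mean(np.abs(x - preds)[-500:]))
```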

    One-Bit ExpanderSketch for One-Bit Compressed Sensing

    Is it possible to obliviously construct a set of hyperplanes H such that one can approximate a unit vector x given only the side on which x lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing, and it has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves an almost optimal number of measurements together with sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
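    The measurement model in question is easy to state in code: observe only the side of x relative to each random hyperplane, i.e. sign(<h_i, x>). The short sketch below shows that even naive sign-correlation decoding recovers the direction of a sparse unit vector; it illustrates the problem setup only (and runs in Θ(mn) time), not the paper's measurement-optimal, sublinear-time ExpanderSketch-based decoder.

```python
# One-bit compressed sensing: the measurement model, with a naive decoder.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 1000, 4000, 10
x = np.zeros(n)
x[:k] = rng.standard_normal(k)
x /= np.linalg.norm(x)                  # unit-norm, k-sparse signal

H = rng.standard_normal((m, n))         # random hyperplane normals
y = np.sign(H @ x)                      # one bit per hyperplane: which side?

x_hat = H.T @ y                         # E[y_i h_i] is proportional to x
keep = np.argsort(np.abs(x_hat))[-k:]   # exploit sparsity: keep top k
x_sparse = np.zeros(n)
x_sparse[keep] = x_hat[keep]
x_sparse /= np.linalg.norm(x_sparse)
print("cosine similarity:", x_sparse @ x)   # near 1 for m >> k log(n/k)
```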

    Mean Estimation from Adaptive One-bit Measurements

    We consider the problem of estimating the mean of a normal distribution under the following constraint: the estimator can access only a single bit from each sample drawn from this distribution. We study the squared error risk of this estimation as a function of the number of samples (and hence one-bit measurements) n. We consider an adaptive estimation setting in which the single bit sent at step n is a function of both the new sample and the previous n − 1 acquired bits. For this setting, we show that no estimator can attain an asymptotic mean squared error smaller than π/(2n) + O(n^{-2}) times the variance. In other words, the one-bit restriction increases the number of samples required for a prescribed accuracy of estimation by a factor of at least π/2 compared to the unrestricted case. In addition, we provide an explicit estimator that attains this asymptotic error, showing that, rather surprisingly, only π/2 times more samples are required to attain estimation performance equivalent to the unrestricted case.
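    A classical adaptive scheme of the kind the abstract analyzes can be written in a few lines: each bit is the sign of the new sample minus the current estimate, and a stochastic-approximation update with gain σ·sqrt(π/2)/n is known to attain asymptotic MSE (π/2)·σ²/n, matching the factor-π/2 penalty stated above. The sketch assumes the variance σ² is known; the paper's exact estimator may differ.

```python
# Adaptive one-bit mean estimation via sign-based stochastic approximation.
import numpy as np

def one_bit_mean(samples, sigma):
    """Each transmitted bit depends on the new sample and (through
    mu_hat) on all previously acquired bits -- the adaptive setting."""
    mu_hat = 0.0
    gain = sigma * np.sqrt(np.pi / 2)       # asymptotically optimal step
    for n, x in enumerate(samples, start=1):
        bit = 1.0 if x >= mu_hat else -1.0  # the single bit per sample
        mu_hat += gain * bit / n
    return mu_hat

rng = np.random.default_rng(0)
mu, sigma, n = 3.0, 1.0, 100_000
est = one_bit_mean(rng.normal(mu, sigma, n), sigma)
print(f"estimate = {est:.4f}; n * squared error = {n * (est - mu) ** 2:.2f}")
# Averaged over many trials, n * squared error concentrates near
# (pi/2) * sigma^2, versus sigma^2 for the unrestricted sample mean.
```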