3 research outputs found

    Binary and Multi-Bit Coding for Stable Random Projections

    We develop efficient binary (i.e., 1-bit) and multi-bit coding schemes for estimating the scale parameter of α-stable distributions. The work is motivated by recent work on one-scan 1-bit compressed sensing (sparse signal recovery) using α-stable random projections, which requires estimating the scale parameter at the bit level. Our technique can be naturally applied to data stream computations for estimating the α-th frequency moment. In fact, the method applies to the general scale family of distributions, not only α-stable distributions. Due to the heavy-tailed nature of α-stable distributions, traditional estimators may need many bits to store each measurement in order to ensure sufficient accuracy. Interestingly, our paper demonstrates that using a simple closed-form estimator with merely 1-bit information does not result in a significant loss of accuracy if the parameter is chosen appropriately. For example, when α = 0+, 1, and 2, the coefficients of the optimal estimation variances using full (i.e., infinite-bit) information are 1, 2, and 2, respectively. With the 1-bit scheme and appropriately chosen parameters, the corresponding variance coefficients are 1.544, π²/4, and 3.066, respectively. Theoretical tail bounds are also provided. Using 2 or more bits per measurement reduces the estimation variance and, importantly, stabilizes the estimate so that the variance is not sensitive to the parameters. With look-up tables, the computational cost is minimal.
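    To illustrate the flavor of 1-bit scale estimation, here is a hedged sketch (not the paper's exact estimator or parameter choice) for the Gaussian case α = 2: each measurement is stored as the single bit 1{|x| > C}, and since P(|x| > C) = 2(1 − Φ(C/σ)), the empirical fraction of ones inverts to a closed-form estimate of σ. The threshold C and sample size below are arbitrary choices for the demonstration.

```python
# Hedged sketch of 1-bit scale estimation in the Gaussian case (alpha = 2).
# NOT the paper's exact estimator: a generic closed-form inversion of the
# bit-frequency P(|x| > C) = 2 * (1 - Phi(C / sigma)).
import random
from statistics import NormalDist

def one_bit_scale_estimate(bits, C):
    """Closed-form sigma estimate from bits b_i = 1{|x_i| > C}."""
    p = sum(bits) / len(bits)            # empirical P(|x| > C)
    p = min(max(p, 1e-6), 1 - 1e-6)      # clamp away from 0/1
    return C / NormalDist().inv_cdf(1 - p / 2)

random.seed(0)
sigma_true, C, n = 3.0, 3.0, 200_000    # illustrative values
bits = [abs(random.gauss(0.0, sigma_true)) > C for _ in range(n)]
sigma_hat = one_bit_scale_estimate(bits, C)
print(round(sigma_hat, 2))  # close to the true sigma = 3.0
```

    Each measurement costs a single stored bit, yet the estimate concentrates tightly around the true scale; the choice of C plays the role of the tunable parameter the abstract refers to.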

    One Scan 1-Bit Compressed Sensing

    Based on α-stable random projections with small α, we develop a simple algorithm for compressed sensing (sparse signal recovery) that utilizes only the signs (i.e., 1 bit) of the measurements. Using only 1-bit information from the measurements results in substantial cost reductions in collection, storage, communication, and decoding for compressed sensing. The proposed algorithm is efficient in that the decoding procedure requires only one scan of the coordinates. Our analysis shows precisely that, for a K-sparse signal of length N, 12.3K log N/δ measurements (where δ is the confidence parameter) suffice for recovering the support and the signs of the signal. While the method is very robust against typical measurement noise, we also provide an analysis of the scheme under random flipping of the signs of the measurements. Compared to the well-known work on 1-bit marginal regression (which can also be viewed as a one-scan method), the proposed algorithm requires orders of magnitude fewer measurements. Compared to 1-bit Iterative Hard Thresholding (IHT) (which is not a one-scan algorithm), our method is still significantly more accurate. Furthermore, the proposed method is reasonably robust against random sign flipping, while IHT is known to be very sensitive to this type of noise.
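    To make the "one scan" idea concrete, the following is a minimal sketch of the 1-bit marginal-regression baseline mentioned above (with Gaussian rather than the paper's α-stable projections; all problem sizes are illustrative assumptions): from sign measurements y = sign(Sx), a single pass over the coordinates computes one score per coordinate, and the top-K scores recover the support and signs.

```python
# Hedged sketch of a one-scan decoder: 1-bit marginal regression with
# Gaussian projections (the comparison baseline, NOT the paper's
# alpha-stable scheme). Sizes N, K, m are illustrative.
import random

random.seed(1)
N, K, m = 200, 3, 2000
support = [5, 17, 123]                   # true support of the K-sparse signal
x = [0.0] * N
for j, s in zip(support, [1.0, -1.0, 1.0]):
    x[j] = s

S = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(m)]
y = [1 if sum(S[i][j] * x[j] for j in support) > 0 else -1 for i in range(m)]

# One scan over the coordinates: score_j = sum_i y_i * S_ij.
scores = [sum(y[i] * S[i][j] for i in range(m)) for j in range(N)]
top_k = sorted(range(N), key=lambda j: abs(scores[j]), reverse=True)[:K]
recovered = sorted(top_k)
recovered_signs = [1.0 if scores[j] > 0 else -1.0 for j in recovered]
print(recovered, recovered_signs)
```

    The decoder touches each coordinate exactly once, which is what makes one-scan methods attractive; the abstract's point is that the α-stable variant needs far fewer measurements than this Gaussian baseline.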

    Linear signal recovery from b-bit quantized linear measurements: precise analysis of the trade-off between bit depth and number of measurements

    We consider the problem of recovering a high-dimensional structured signal from independent Gaussian linear measurements, each of which is quantized to b bits. Our interest is in linear approaches to signal recovery, where "linear" means that the non-linearity resulting from quantization is ignored and the observations are treated as if they arose from a linear measurement model. Specifically, the focus is on a generalization of a method for one-bit observations due to Plan and Vershynin [IEEE Trans. Inform. Theory, 59 (2013), 482–494]. At the heart of the present paper is a precise characterization of the optimal trade-off between the number of measurements m and the bit depth per measurement b, given a total budget of B = m·b bits, when the goal is to minimize the ℓ2-error in estimating the signal. It turns out that the choice b = 1 is optimal for estimating the unit vector (direction) corresponding to the signal for any level of additive Gaussian noise before quantization, as well as for a specific model of adversarial noise, while the choice b = 2 is optimal for estimating both the direction and the norm (scale) of the signal. Moreover, Lloyd-Max quantization is shown to be an optimal quantization scheme with respect to the ℓ2-estimation error. Our analysis is corroborated by numerical experiments showing nearly perfect agreement with our theoretical predictions. The paper is complemented by an empirical comparison to alternative methods of signal recovery that take the non-linearity resulting from quantization into account. The results of that comparison point to a regime change depending on the noise level: in the low-noise setting, linear signal recovery falls short of more sophisticated competitors, while it is competitive in the moderate- and high-noise settings.
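    The linear recovery approach can be sketched in a few lines (a hedged illustration under a Gaussian design; the 2-bit thresholds and levels below are the standard Lloyd-Max values for a unit Gaussian, and the problem sizes are arbitrary): quantize each measurement ⟨a_i, x⟩ to b = 2 bits, then estimate the direction of x by the plain average (1/m) Σ q_i a_i, ignoring the quantization non-linearity.

```python
# Hedged sketch: linear signal recovery from 2-bit quantized Gaussian
# measurements, in the spirit of the Plan-Vershynin generalization
# discussed above. Thresholds/levels are the classic 2-bit Lloyd-Max
# quantizer for N(0,1); sizes N, m are illustrative.
import math
import random

def lloyd_max_2bit(v):
    """Map v to the nearest 2-bit Lloyd-Max output level for N(0,1)."""
    if v < -0.9816:
        return -1.510
    if v < 0.0:
        return -0.4528
    if v < 0.9816:
        return 0.4528
    return 1.510

random.seed(2)
N, m = 50, 5000
x = [random.gauss(0.0, 1.0) for _ in range(N)]
norm = math.sqrt(sum(c * c for c in x))
x = [c / norm for c in x]                          # unit-norm signal

A = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(m)]
q = [lloyd_max_2bit(sum(a * c for a, c in zip(row, x))) for row in A]

# Linear estimate: treat the quantized values as if they were linear
# observations and average the back-projections.
xhat = [sum(q[i] * A[i][j] for i in range(m)) / m for j in range(N)]

# The direction is recovered only up to scale; check cosine similarity.
dot = sum(a * b for a, b in zip(x, xhat))
cos = dot / math.sqrt(sum(c * c for c in xhat))
print(round(cos, 3))  # close to 1
```

    The estimator is biased toward a scaled copy of x (the scale being E[Q(g)g] for g ~ N(0,1)), which is why the paper separates estimating the direction from estimating the norm.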