Binary and Multi-Bit Coding for Stable Random Projections
We develop efficient binary (i.e., 1-bit) and multi-bit coding schemes for
estimating the scale parameter of α-stable distributions. The work is
motivated by recent work on one scan 1-bit compressed sensing (sparse
signal recovery) using α-stable random projections, which requires
estimating the scale parameter at the bit level. Our technique can be naturally
applied to data stream computations for estimating the α-th frequency
moment. In fact, the method applies to the general scale family of
distributions, not only α-stable distributions.
Due to the heavy-tailed nature of α-stable distributions, traditional
estimators would potentially need many bits to store each
measurement in order to ensure sufficient accuracy. Interestingly, our paper
demonstrates that using a simple closed-form estimator with merely 1-bit
information does not result in a significant loss of accuracy, provided the
parameters are chosen appropriately. For example, when α = 0+, 1, and 2, the
coefficients of the optimal estimation variances using full (i.e.,
infinite-bit) information are 1, 2, and 2, respectively. With the 1-bit scheme
and appropriately chosen parameters, the corresponding variance coefficients
are 1.544, π²/4, and 3.066, respectively. Theoretical tail bounds are also
provided. Using 2 or more bits per measurement reduces the estimation variance
and, importantly, stabilizes the estimate so that the variance is not sensitive
to the choice of parameters. With look-up tables, the computational cost is minimal.
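The core idea of 1-bit scale estimation can be illustrated with a minimal sketch for the Gaussian case (α = 2): each measurement records only whether its magnitude falls below a threshold t, and the scale is recovered in closed form by inverting P(|x| < t) = 2Φ(t/σ) − 1. This is an assumption-laden toy version, not the paper's general α-stable estimator; the function name and threshold choice are hypothetical.

```python
import random
from statistics import NormalDist

def one_bit_scale_estimate(samples, t):
    """Estimate the Gaussian scale sigma from 1-bit information only:
    each measurement records just the event |x| < t.
    Closed-form inversion of P(|x| < t) = 2*Phi(t/sigma) - 1."""
    p_hat = sum(1 for x in samples if abs(x) < t) / len(samples)
    p_hat = min(max(p_hat, 1e-6), 1 - 1e-6)   # guard against p_hat = 0 or 1
    return t / NormalDist().inv_cdf((1 + p_hat) / 2)

rng = random.Random(0)
sigma_true = 2.0
# Simulated measurements; alpha = 2 (Gaussian) is used here for simplicity.
samples = [rng.gauss(0.0, sigma_true) for _ in range(100_000)]
sigma_hat = one_bit_scale_estimate(samples, t=2.0)  # threshold near sigma works well
```

As the abstract notes, the threshold (the "parameter") matters: choosing t far from the true scale pushes the empirical fraction toward 0 or 1, where the inversion is poorly conditioned and the variance blows up.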
One Scan 1-Bit Compressed Sensing
Based on α-stable random projections with small α, we develop a
simple algorithm for compressed sensing (sparse signal recovery) that utilizes
only the signs (i.e., 1 bit) of the measurements. Using only 1-bit information
from the measurements results in substantial cost reductions in collection,
storage, communication, and decoding for compressed sensing. The proposed
algorithm is efficient in that the decoding procedure requires only one scan of
the coordinates. Our analysis shows precisely that, for a K-sparse signal
of length N, O(K log N/δ) measurements (where δ is the
confidence) are sufficient for recovering the support and the signs of the
signal. While the method is very robust against typical measurement noise, we
also analyze the scheme under random flipping of the signs of
the measurements.
Compared to the well-known work on 1-bit marginal regression (which
can also be viewed as a one-scan method), the proposed algorithm requires
orders of magnitude fewer measurements. Compared to 1-bit Iterative Hard
Thresholding (IHT) (which is not a one-scan algorithm), our method is still
significantly more accurate. Furthermore, the proposed method is reasonably
robust against random sign flipping, while IHT is known to be very sensitive to
this type of noise.
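The one-scan decoding idea can be sketched as follows. For illustration only, this toy uses Gaussian projections and a marginal-regression-style statistic (the paper's method uses α-stable projections with small α and a different estimator): one pass computes a per-coordinate score from the signs, the K largest scores give the support, and their signs give the sign pattern. All names and problem sizes are hypothetical.

```python
import random

def one_scan_decode(A, y, K):
    """One pass over the coordinates: score_i = sum_j y_j * A[j][i].
    The K largest |score_i| estimate the support; sign(score_i) the signs.
    (Marginal-regression-style toy decoder, not the paper's estimator.)"""
    m, n = len(A), len(A[0])
    scores = [sum(y[j] * A[j][i] for j in range(m)) for i in range(n)]
    support = sorted(range(n), key=lambda i: -abs(scores[i]))[:K]
    return {i: (1 if scores[i] > 0 else -1) for i in support}

rng = random.Random(1)
n, m, K = 200, 500, 3
true_support = {5: 1, 70: -1, 123: 1}       # hypothetical K-sparse sign pattern
A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
# 1-bit measurements: only the sign of each projection is kept.
y = [1 if sum(A[j][i] * s for i, s in true_support.items()) > 0 else -1
     for j in range(m)]
decoded = one_scan_decode(A, y, K)
```

Decoding touches each coordinate exactly once, which is what makes the one-scan structure attractive when N is large.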
Linear signal recovery from b-bit-quantized linear measurements: precise analysis of the trade-off between bit depth and number of measurements
We consider the problem of recovering a high-dimensional structured signal
from independent Gaussian linear measurements, each of which is quantized to b
bits. Our interest is in linear approaches to signal recovery, where "linear"
means that the non-linearity resulting from quantization is ignored and the
observations are treated as if they arose from a linear measurement model.
Specifically, the focus is on a generalization of a method for one-bit
observations due to Plan and Vershynin [IEEE Trans. Inform. Theory,
59 (2013), 482-494]. At the heart of the present paper is a precise
characterization of the optimal trade-off between the number of measurements
and the bit depth per measurement, given a total budget of bits, when the goal is to minimize the ℓ2-error in estimating the
signal. It turns out that the choice b = 1 is optimal for estimating the unit
vector (direction) corresponding to the signal for any level of additive
Gaussian noise before quantization as well as for a specific model of
adversarial noise, while the choice b = 2 is optimal for estimating the
direction and the norm (scale) of the signal. Moreover, Lloyd-Max quantization
is shown to be an optimal quantization scheme with respect to the ℓ2-estimation error.
Our analysis is corroborated by numerical experiments showing nearly perfect
agreement with our theoretical predictions. The paper is complemented by an
empirical comparison to alternative methods of signal recovery that take the
non-linearity resulting from quantization into account. The results of that
comparison point to a regime change depending on the noise level: in a
low-noise setting, linear signal recovery falls short of more sophisticated
competitors, while it is competitive in moderate- and high-noise settings.
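The linear approach described above can be sketched in a few lines: quantize the Gaussian measurements (here to b = 1, i.e., signs), then simply back-project as if the observations were linear, x̂ = (1/m) Aᵀq. With sign measurements the norm of the signal is not identifiable, but the direction is recovered, consistent with the b = 1 result above. This is a minimal sketch under assumed problem sizes, not the paper's full estimator for structured signals.

```python
import math
import random

def linear_recover(A, q):
    """Linear estimator: ignore the quantization non-linearity and
    back-project the quantized observations, x_hat = (1/m) * A^T q."""
    m, n = len(A), len(A[0])
    return [sum(q[j] * A[j][i] for j in range(m)) / m for i in range(n)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

rng = random.Random(2)
n, m = 50, 2000                                  # hypothetical sizes
x = [rng.gauss(0.0, 1.0) for _ in range(n)]      # signal to recover
A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
z = [sum(A[j][i] * x[i] for i in range(n)) for j in range(m)]
q = [1.0 if zj > 0 else -1.0 for zj in z]        # b = 1: keep only signs
x_hat = linear_recover(A, q)                     # recovers direction, not scale
```

With b ≥ 2 levels (optimally placed via Lloyd-Max, per the abstract), the same back-projection also carries scale information, which is why b = 2 becomes the optimal bit depth once the norm of the signal must be estimated too.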