Memoryless scalar quantization for random frames
Memoryless scalar quantization (MSQ) is a common technique to quantize frame
coefficients of signals (which are used as a model for generalized linear
samples), making them compatible with our digital technology. The process of
quantization is generally not invertible, and thus one can only recover an
approximation to the original signal from its quantized coefficients. The
non-linear nature of quantization makes the analysis of the corresponding
approximation error challenging, often resulting in the use of a simplifying
assumption called the "white noise hypothesis" (WNH). However, the WNH is known
not to be rigorous and, at least in certain cases, not to be valid.
Given a fixed, deterministic signal, we assume that we use a random frame,
whose analysis matrix has independent isotropic sub-Gaussian rows, to collect
the measurements, which are subsequently quantized via MSQ. For this setting,
the numerically observed decay rate seems to agree with the prediction by the
WNH. We rigorously establish, without invoking the WNH, sharp non-asymptotic
error bounds that explain the observed decay rate. Furthermore, we show that the
reconstruction error does not necessarily diminish as redundancy increases. We
also extend this approach to the compressed sensing setting, obtaining rigorous
error bounds that agree with empirical observations, again without resorting
to the WNH.
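The setting described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: the dimensions `n` and `m`, the step size `delta`, and the helper `msq` are all hypothetical choices, Gaussian rows stand in for general isotropic sub-Gaussian rows, and reconstruction is done via the canonical dual frame (pseudoinverse).

```python
import numpy as np

rng = np.random.default_rng(0)

def msq(y, delta):
    """Memoryless scalar quantization: round each frame coefficient
    independently to the nearest point of a grid with step delta."""
    return delta * np.round(y / delta)

n, m = 20, 200      # signal dimension and number of measurements (redundancy m/n)
delta = 0.1         # quantizer step size (hypothetical choice)

x = rng.standard_normal(n)        # a fixed, deterministic signal (drawn once)
E = rng.standard_normal((m, n))   # random frame: independent Gaussian rows
                                  # (a special case of isotropic sub-Gaussian)

y = E @ x            # frame coefficients (generalized linear samples)
q = msq(y, delta)    # quantized coefficients

# Linear reconstruction with the canonical dual frame (pseudoinverse of E)
x_hat = np.linalg.pinv(E) @ q
err = np.linalg.norm(x - x_hat)
```

Each coefficient is quantized in isolation (hence "memoryless"), so the per-coefficient quantization error is at most `delta / 2`; the quantity of interest in the abstract is how the reconstruction error `err` behaves as the redundancy `m / n` grows.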