Quantized Compressed Sensing for Partial Random Circulant Matrices
We provide the first analysis of a non-trivial quantization scheme for
compressed sensing measurements arising from structured measurement matrices.
Specifically, our analysis studies compressed sensing matrices consisting of
rows selected at random, without replacement, from a circulant matrix generated
by a random subgaussian vector. We quantize the measurements using stable,
possibly one-bit, Sigma-Delta schemes, and use a reconstruction method based on
convex optimization. We show that the part of the reconstruction error due to
quantization decays polynomially in the number of measurements. This is in line
with analogous results on Sigma-Delta quantization associated with random
Gaussian or subgaussian matrices, and significantly better than results
associated with the widely assumed memoryless scalar quantization. Moreover, we
prove that our approach is stable and robust; i.e., the reconstruction error
degrades gracefully in the presence of non-quantization noise and when the
underlying signal is not strictly sparse. The analysis relies on results
concerning subgaussian chaos processes as well as a variation of McDiarmid's
inequality.

Comment: 15 pages
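For intuition, a minimal first-order Sigma-Delta scheme with a one-bit alphabet can be sketched as follows. This is an illustrative sketch under simplifying assumptions (bounded scalar measurements, greedy sign rule), not the paper's exact scheme; the function name is made up:

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta quantization of measurements y to {-1, +1}.

    State recursion: q_i = sign(u_{i-1} + y_i),  u_i = u_{i-1} + y_i - q_i.
    Stability: if |y_i| <= 1 for all i, then |u_i| <= 1 for all i.
    """
    u = 0.0
    q = np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0
        u += yi - q[i]
    return q

rng = np.random.default_rng(0)
y = rng.uniform(-1, 1, size=200)  # bounded measurements
q = sigma_delta_1bit(y)
# The state u_n = sum(y) - sum(q) stays bounded, so running sums of the
# one-bit stream track running sums of the measurements:
err = np.abs(np.cumsum(y) - np.cumsum(q)).max()
print(err)
```

The bounded state is the "stability" the abstract refers to: the quantizer feeds each step's error back into the next step, so the error is noise-shaped rather than accumulated.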
Binary sampling ghost imaging: add random noise to fight quantization caused image quality decline
When the sampling data of ghost imaging are recorded with fewer bits, i.e.,
after quantization, a decline in image quality is observed: the fewer bits
used, the worse the image. Dithering, which adds suitable random noise to the
raw data before quantization, is shown to compensate effectively for this
quality decline, even in the extreme case of binary sampling. A brief
explanation and a parameter optimization of dithering are given.

Comment: 8 pages, 7 figures
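To see why dithering helps at one-bit resolution, consider estimating a value that sits below the quantizer step: without dither the binary samples carry no amplitude information, while uniform dither makes their average an unbiased estimate. A minimal sketch (illustrative only, not the paper's experiment; all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = 0.3          # true signal value, inside the quantizer range (-1, 1)
n = 100_000      # number of binary samples

# Without dither: sign(x) is always +1, so averaging recovers nothing.
hard = np.sign(np.full(n, x)).mean()

# With dither uniform on [-1, 1]: P(sign(x + d) = +1) = (1 + x) / 2,
# so E[sign(x + d)] = x and the sample mean converges to x.
d = rng.uniform(-1, 1, size=n)
soft = np.sign(x + d).mean()

print(hard, soft)
```

The undithered average saturates at 1.0, while the dithered average converges to the true value 0.3, which is the mechanism the abstract exploits for binary-sampling ghost imaging.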
Greedy vector quantization
We investigate the greedy version of the L^p-optimal vector quantization
problem for an R^d-valued random vector X. We show the existence of a sequence
(a_N)_{N >= 1} such that a_N minimizes the L^p-mean quantization error at
level N induced by (a_1, ..., a_{N-1}, a_N). We show that this sequence
produces L^p-rate optimal N-tuples (a_1, ..., a_N), i.e. the L^p-mean
quantization error at level N induced by (a_1, ..., a_N) goes to 0 at rate
N^{-1/d}. Greedy optimal sequences also satisfy, under natural additional
assumptions, the distortion mismatch property: the N-tuples remain rate
optimal with respect to the L^q-norms, p <= q < p + d. Finally, we propose
optimization methods to compute greedy sequences, adapted from the usual
Lloyd's I and Competitive Learning Vector Quantization procedures, in either
their deterministic (implementable when d = 1) or stochastic versions.

Comment: 31 pages, 4 figures, a few typos corrected (now an extended version of
an eponymous paper to appear in Journal of Approximation
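As an illustrative sketch of the greedy step (not the authors' Lloyd or CLVQ procedures), in dimension d = 1 each new point can be found empirically by exhaustive search over a candidate grid; the sample size and grids below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(20_000)        # samples of the R^1-valued random vector
candidates = np.linspace(-4, 4, 401)   # search grid for the next quantizer point

# best[i] = squared distance from X[i] to the current grid (empty grid -> inf)
best = np.full_like(X, np.inf)
grid = []
for N in range(1, 9):
    # Greedy step: a_N minimizes the empirical level-N L^2 distortion
    # given the already chosen points a_1, ..., a_{N-1}.
    errs = [np.minimum(best, (X - c) ** 2).mean() for c in candidates]
    a_N = candidates[int(np.argmin(errs))]
    grid.append(a_N)
    best = np.minimum(best, (X - a_N) ** 2)

print(np.round(sorted(grid), 2))
```

The first greedy point lands near the mean (the one-point L^2 quantizer of a standard normal), and each later point is chosen with the earlier ones frozen, which is exactly what distinguishes the greedy sequence from jointly re-optimized N-tuples.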
Intrinsic stationarity for vector quantization: Foundation of dual quantization
We develop a new approach to vector quantization which guarantees an intrinsic stationarity property that, in contrast to regular quantization, also holds for non-optimal quantization grids. This goal is achieved by replacing the usual nearest-neighbor projection operator of Voronoi quantization with a random splitting operator, which maps the random source to the vertices of a triangle or d-simplex. In the quadratic Euclidean case, it is shown that these triangles or d-simplices make up a Delaunay triangulation of the underlying grid. Furthermore, we prove the existence of an optimal grid for this Delaunay (or dual) quantization procedure. We also provide a stochastic optimization method to compute such optimal grids, here for higher-dimensional uniform and normal distributions. A crucial feature of this new approach is that it automatically leads to a second-order quadrature formula for computing expectations, regardless of the optimality of the underlying grid.
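The intrinsic stationarity E[J | X] = X of the splitting operator can be checked on a toy one-dimensional example, where the "simplices" are just intervals between neighboring grid points. This is an illustrative sketch, not the paper's construction; the function name and grid are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.sort(rng.uniform(0, 1, 6))   # an arbitrary (non-optimal) grid
grid[0], grid[-1] = 0.0, 1.0           # make the grid cover the support

def dual_project(x, grid, rng):
    """Random splitting operator: send each x to one of its two neighboring
    grid points, with barycentric probabilities, so that E[J | x] = x."""
    i = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    left, right = grid[i], grid[i + 1]
    p_right = (x - left) / (right - left)   # barycentric weight of the right vertex
    return np.where(rng.random(x.shape) < p_right, right, left)

X = rng.uniform(0, 1, 200_000)
J = dual_project(X, grid, rng)
# Stationarity holds even though the grid is not optimal:
print(X.mean(), J.mean())
```

Because E[J | X] = X by construction, the quadrature E[f(J)] matches E[f(X)] exactly for affine f on any grid, which is the grid-independent stationarity the abstract contrasts with Voronoi quantization.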