
    Quantized Compressed Sensing for Partial Random Circulant Matrices

    We provide the first analysis of a non-trivial quantization scheme for compressed sensing with structured measurements. Specifically, our analysis covers compressed sensing matrices consisting of rows selected at random, without replacement, from a circulant matrix generated by a random subgaussian vector. We quantize the measurements using stable, possibly one-bit, Sigma-Delta schemes and reconstruct via convex optimization. We show that the part of the reconstruction error due to quantization decays polynomially in the number of measurements. This matches analogous results on Sigma-Delta quantization for random Gaussian or subgaussian matrices, and is significantly better than results for the widely used memoryless scalar quantization. Moreover, we prove that our approach is stable and robust: the reconstruction error degrades gracefully in the presence of non-quantization noise and when the underlying signal is not strictly sparse. The analysis relies on results concerning subgaussian chaos processes as well as a variation of McDiarmid's inequality.
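
    A minimal sketch of the measurement-and-quantization pipeline, assuming NumPy/SciPy. The Rademacher generating vector, the normalization, and the first-order one-bit Sigma-Delta loop are illustrative choices (the paper treats general stable, possibly higher-order, schemes), and the convex-optimization reconstruction step is omitted:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)

n, m, s = 256, 128, 5                      # ambient dim, measurements, sparsity

# Partial random circulant matrix: rows drawn without replacement from a
# circulant matrix generated by a random subgaussian (here Rademacher) vector.
g = rng.choice([-1.0, 1.0], size=n)
rows = rng.choice(n, size=m, replace=False)
A = circulant(g)[rows] / np.sqrt(m)        # illustrative normalization

# s-sparse test signal
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

y = A @ x
y = 0.9 * y / np.max(np.abs(y))            # scale into the quantizer's stable range

def sigma_delta_1bit(y):
    """First-order one-bit Sigma-Delta: q_i = sign(y_i + u_{i-1}),
    u_i = u_{i-1} + y_i - q_i, so the state u tracks the running error."""
    u, q = 0.0, np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = 1.0 if yi + u >= 0 else -1.0
        u += yi - q[i]
    return q

q = sigma_delta_1bit(y)
print("mean per-measurement quantization error:", np.abs(y - q).mean())
```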

    Binary sampling ghost imaging: add random noise to fight quantization caused image quality decline

    When the sampling data of ghost imaging are recorded with fewer bits, i.e., coarsely quantized, a decline in image quality is observed: the fewer bits used, the worse the image. Dithering, which adds suitable random noise to the raw data before quantization, is shown to compensate for this quality decline effectively, even in the extreme case of binary sampling. A brief explanation and a parameter optimization of the dithering are given.
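
    A toy numerical sketch of why dithering helps, assuming NumPy; the uniform dither matched to the quantizer step is an illustrative choice here, not the optimized parameters studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Adding suitable random noise before extreme (binary) quantization makes the
# 1-bit output unbiased, so averaging many samples recovers the raw value.
v = 0.37                                   # raw normalized intensity in [0, 1]
K = 10_000                                 # number of sampling repetitions

hard = (np.full(K, v) > 0.5).astype(float)        # plain binary sampling
dither = rng.uniform(-0.5, 0.5, size=K)           # dither spanning the step
dithered = ((v + dither) > 0.5).astype(float)

print("plain 1-bit average:   ", hard.mean())     # 0.0 -- information lost
print("dithered 1-bit average:", dithered.mean()) # approx 0.37
```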

    Greedy vector quantization

    We investigate the greedy version of the $L^p$-optimal vector quantization problem for an $\mathbb{R}^d$-valued random vector $X \in L^p$. We show the existence of a sequence $(a_N)_{N\ge 1}$ such that $a_N$ minimizes $a \mapsto \big\|\min_{1\le i\le N-1}|X-a_i| \wedge |X-a|\big\|_{L^p}$ (the $L^p$-mean quantization error at level $N$ induced by $(a_1,\ldots,a_{N-1},a)$). We show that this sequence produces $L^p$-rate-optimal $N$-tuples $a^{(N)}=(a_1,\ldots,a_N)$, i.e. the $L^p$-mean quantization error at level $N$ induced by $a^{(N)}$ goes to $0$ at rate $N^{-\frac{1}{d}}$. Greedy optimal sequences also satisfy, under natural additional assumptions, the distortion mismatch property: the $N$-tuples $a^{(N)}$ remain rate optimal with respect to the $L^q$-norms, $p\le q<p+d$. Finally, we propose optimization methods to compute greedy sequences, adapted from the usual Lloyd's I and Competitive Learning Vector Quantization procedures, in either their deterministic (implementable when $d=1$) or stochastic versions.
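
    A 1D empirical sketch of the greedy recursion, using samples in place of the law of $X$ and restricting candidate points to the samples themselves; this is an illustration of the greedy step, not the Lloyd-type or CLVQ procedures proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.standard_normal(2000)              # samples standing in for the law of X
N = 8                                      # target quantization level

# Greedy step: freeze a_1, ..., a_{N-1} and pick a_N minimizing the level-N
# L^2 quantization error || min_i |X - a_i| ^ |X - a| ||_2 over candidates a.
codebook = [X[np.argmin(np.abs(X - X.mean()))]]   # level 1: best single point
dist = np.abs(X - codebook[0])             # distance of each sample to the grid

for _ in range(1, N):
    errs = [np.mean(np.minimum(dist, np.abs(X - a)) ** 2) for a in X]
    a_new = X[int(np.argmin(errs))]
    codebook.append(a_new)
    dist = np.minimum(dist, np.abs(X - a_new))

print("greedy L^2 codebook:", np.round(np.sort(codebook), 3))
```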

    Intrinsic stationarity for vector quantization: Foundation of dual quantization

    We develop a new approach to vector quantization, which guarantees an intrinsic stationarity property that also holds, in contrast to regular quantization, for non-optimal quantization grids. This goal is achieved by replacing the usual nearest-neighbor projection operator of Voronoi quantization with a random splitting operator, which maps the random source to the vertices of a triangle or $d$-simplex. In the quadratic Euclidean case, it is shown that these triangles or $d$-simplices make up a Delaunay triangulation of the underlying grid. Furthermore, we prove the existence of an optimal grid for this Delaunay, or dual, quantization procedure. We also provide a stochastic optimization method to compute such optimal grids, here for higher-dimensional uniform and normal distributions. A crucial feature of this new approach is that it automatically leads to a second-order quadrature formula for computing expectations, regardless of the optimality of the underlying grid.
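
    A 1D sketch of the random splitting operator, where the "simplices" reduce to grid intervals; the barycentric splitting weights are an assumption consistent with the stationarity claim (they make the operator unbiased in mean), and the grid is deliberately non-optimal to show that stationarity holds anyway:

```python
import numpy as np

rng = np.random.default_rng(3)

# A point x in the cell [g_i, g_{i+1}] is sent to g_{i+1} with barycentric
# probability (x - g_i) / (g_{i+1} - g_i) and to g_i otherwise, so that
# E[J(x)] = x for *any* grid: intrinsic stationarity without optimality.
grid = np.array([-2.0, -0.7, 0.1, 0.9, 2.2])   # arbitrary (non-optimal) grid

def dual_split(x, grid, rng):
    """Randomly split x onto the endpoints of its cell (x inside the grid)."""
    i = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    lam = (x - grid[i]) / (grid[i + 1] - grid[i])  # barycentric coordinate
    return np.where(rng.random(x.shape) < lam, grid[i + 1], grid[i])

X = rng.uniform(grid[0], grid[-1], size=200_000)
X_hat = dual_split(X, grid, rng)
print("E[X]     =", X.mean())
print("E[X_hat] =", X_hat.mean())           # matches E[X]: stationarity in mean
```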