
    Generalized residual vector quantization for large scale data

    Vector quantization is an essential tool for tasks involving large scale data, for example large scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes the quantization error. First, we provide a detailed review of a relevant vector quantization method named residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ), which further improves over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large scale benchmark datasets for large scale search, classification and object retrieval, and compare GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.
    Comment: published at the International Conference on Multimedia and Expo 201
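
    For readers unfamiliar with the RVQ baseline that GRVQ generalizes, the following is a minimal sketch of plain multi-stage RVQ encoding and decoding, not the authors' GRVQ; the codebooks argument is an assumed list of per-stage (K, d) codeword arrays, typically trained stage by stage with k-means on the residuals of the previous stage.

        import numpy as np

        def rvq_encode(x, codebooks):
            """Encode x with plain residual vector quantization (RVQ).

            Each stage quantizes the residual left over by the previous
            stages, so the reconstruction is a sum of one codeword per stage.
            """
            residual = np.array(x, dtype=float)
            codes = []
            for C in codebooks:                    # each C: (K, d) array
                k = int(np.argmin(np.linalg.norm(C - residual, axis=1)))
                codes.append(k)                    # nearest codeword this stage
                residual = residual - C[k]         # pass the residual onward
            return codes

        def rvq_decode(codes, codebooks):
            """Reconstruct x as the sum of the selected codewords."""
            return sum(C[k] for k, C in zip(codes, codebooks))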

    Greedy vector quantization

    We investigate the greedy version of the $L^p$-optimal vector quantization problem for an $\mathbb{R}^d$-valued random vector $X \in L^p$. We show the existence of a sequence $(a_N)_{N\ge 1}$ such that $a_N$ minimizes $a \mapsto \big\|\min_{1\le i\le N-1}|X-a_i| \wedge |X-a|\big\|_{L^p}$ (the $L^p$-mean quantization error at level $N$ induced by $(a_1,\ldots,a_{N-1},a)$). We show that this sequence produces $L^p$-rate optimal $N$-tuples $a^{(N)}=(a_1,\ldots,a_N)$ (i.e. the $L^p$-mean quantization error at level $N$ induced by $a^{(N)}$ goes to $0$ at rate $N^{-\frac{1}{d}}$). Greedy optimal sequences also satisfy, under natural additional assumptions, the distortion mismatch property: the $N$-tuples $a^{(N)}$ remain rate optimal with respect to the $L^q$-norms, $p\le q < p+d$. Finally, we propose optimization methods to compute greedy sequences, adapted from the usual Lloyd's I and Competitive Learning Vector Quantization procedures, in either their deterministic (implementable when $d=1$) or stochastic versions.
    Comment: 31 pages, 4 figures, few typos corrected (now an extended version of an eponym paper to appear in Journal of Approximation
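
    The greedy criterion above admits a simple empirical sketch. As a simplifying assumption made here for brevity, candidates are restricted to the sample points themselves (the paper instead adapts Lloyd's I and CLVQ); each new codepoint is the candidate minimizing the empirical $L^p$ distortion of the enlarged tuple.

        import numpy as np

        def greedy_vq(samples, N, p=2):
            """Greedy empirical L^p vector quantization.

            Keeps, for each sample, its distance to the nearest codepoint
            chosen so far; the next codepoint is the candidate minimizing
            the empirical L^p distortion of the enlarged tuple.
            """
            n = len(samples)
            # pairwise distances: pair[i, j] = |samples[i] - samples[j]|
            pair = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
            best = np.full(n, np.inf)     # distance to nearest chosen codepoint
            chosen = []
            for _ in range(N):
                # appending candidate j costs mean_i min(best[i], pair[i, j])^p
                cand_err = (np.minimum(best[:, None], pair) ** p).mean(axis=0)
                j = int(np.argmin(cand_err))
                chosen.append(j)
                best = np.minimum(best, pair[:, j])
            return samples[chosen]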

    Vector quantization

    During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state of the art and current research efforts.
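
    As an illustration of the basic design idea such surveys cover, here is a minimal sketch of the generalized Lloyd (LBG/k-means) algorithm, which alternates the two necessary optimality conditions: nearest-neighbor partition of the training set, then centroid update of each cell.

        import numpy as np

        def lloyd_vq(samples, K, iters=50, seed=0):
            """Generalized Lloyd (LBG) codebook design on a training set."""
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(samples), K, replace=False)
            codebook = samples[idx].astype(float)
            for _ in range(iters):
                # nearest-neighbor condition: assign each vector to its
                # closest codeword
                d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
                assign = d.argmin(axis=1)
                # centroid condition: each codeword becomes the mean of its cell
                for k in range(K):
                    cell = samples[assign == k]
                    if len(cell):                  # keep empty cells unchanged
                        codebook[k] = cell.mean(axis=0)
            return codebook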

    Semilogarithmic Nonuniform Vector Quantization of Two-Dimensional Laplacean Source for Small Variance Dynamics

    In this paper, a high dynamic range nonuniform two-dimensional vector quantization model for a Laplacean source is provided. The semilogarithmic A-law compression characteristic is used as the radial scalar compression characteristic of the two-dimensional vector quantization. The optimal number of concentric quantization domains (amplitude levels) is expressed as a function of the parameter A. An exact distortion analysis with closed-form expressions is provided. It is shown that the proposed model provides high SQNR values over a wide range of variances and exceeds the quality obtained by scalar A-law quantization at the same bit rate, so it can be used in various switching and adaptation implementations to realize high quality signal compression.
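
    As a rough illustration of the radial companding idea, not the paper's exact model, the sketch below applies the standard A-law compressor to the magnitude of a 2-D vector and quantizes uniformly in the compressed domain. The parameters L and M and the fixed number of phase cells per amplitude level are simplifying assumptions; the paper instead derives the optimal number of amplitude levels as a function of A.

        import numpy as np

        def a_law(r, A=87.6):
            """Semilogarithmic A-law compressor for an amplitude r in [0, 1]."""
            if r < 1.0 / A:
                return A * r / (1.0 + np.log(A))
            return (1.0 + np.log(A * r)) / (1.0 + np.log(A))

        def polar_quantize(x, y, L, M, A=87.6):
            """Companded polar quantization of one 2-D vector.

            The magnitude is compressed by the A-law characteristic and
            quantized uniformly into L concentric amplitude levels; the
            phase is quantized uniformly into M angles (fixed M per level
            is a simplification of the model).
            """
            r = min(np.hypot(x, y), 1.0)   # assume amplitude normalized to [0, 1]
            phi = np.arctan2(y, x) % (2 * np.pi)
            level = min(int(a_law(r, A) * L), L - 1)
            angle = int(phi // (2 * np.pi / M)) % M
            return level, angle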