
    Generalized residual vector quantization for large scale data

    Vector quantization is an essential tool for tasks involving large scale data, for example large scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method named \textit{residual vector quantization} (RVQ). Next, we propose \textit{generalized residual vector quantization} (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large scale benchmark datasets for large scale search, classification and object retrieval, and compare GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.
    Comment: published at International Conference on Multimedia and Expo 201
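    For orientation, below is a minimal NumPy sketch of plain residual vector quantization (the RVQ scheme the paper reviews before generalizing it): each stage quantizes the residual left by the previous stages, and the reconstruction is the sum of the selected codewords. Function names and codebook shapes are illustrative, not the authors' implementation.

        import numpy as np

        def rvq_encode(x, codebooks):
            # Each codebook C has shape (K, d): K codewords of dimension d.
            residual = x.copy()
            codes = []
            for C in codebooks:
                k = int(np.argmin(np.linalg.norm(C - residual, axis=1)))
                codes.append(k)              # index of the nearest codeword
                residual = residual - C[k]   # pass what is left to the next stage
            return codes

        def rvq_decode(codes, codebooks):
            # Reconstruction is the sum of one codeword per stage.
            return sum(C[k] for k, C in zip(codes, codebooks))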

    Quantum-Classical Correspondence of Dynamical Observables, Quantization and the Time of Arrival Correspondence Problem

    We raise the problem of constructing quantum observables that have classical counterparts without quantization. Specifically, we seek to define and motivate a solution to the quantum-classical correspondence problem that is independent of quantization, and we discuss the general insufficiency of prescriptive quantization, particularly Weyl quantization. We demonstrate our points by constructing time-of-arrival operators without quantization and recovering their classical counterparts from them.
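    As textbook background for the time-of-arrival example (not the construction proposed in the paper): the classical first arrival time at the origin of a free particle, and the symmetric operator ordering that Weyl quantization assigns to it because position and inverse momentum do not commute.

        % classical free-particle time of arrival at q = 0
        T(q, p) = -\frac{m q}{p}
        % Weyl quantization symmetrizes the non-commuting factors \hat{q} and \hat{p}^{-1}
        \hat{T}_{W} = -\frac{m}{2}\left(\hat{q}\,\hat{p}^{-1} + \hat{p}^{-1}\hat{q}\right)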

    Adaptive Quantization for Deep Neural Network

    In recent years, Deep Neural Networks (DNNs) have been rapidly developed for various applications, with increasingly complex architectures. The performance gain of these DNNs generally comes with high computational costs and large memory consumption, which may not be affordable for mobile platforms. Deep model quantization can be used to reduce the computation and memory costs of DNNs and to deploy complex DNNs on mobile devices. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding the optimal quantization bit-width for each layer. This is the first work that theoretically analyses the relationship between parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, and achieves a 20-40% higher compression rate than equal bit-width quantization at the same model prediction accuracy.
    Comment: 9 pages main paper + 5 pages supplementary, 8 figures, conferenc
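    As a rough illustration of per-layer bit-width selection (not the paper's actual measurement or optimization), the sketch below uniformly quantizes a layer's weights at candidate bit-widths and picks the smallest one whose weight quantization error stays under a tolerance; the tolerance stands in for the layer-wise accuracy-impact estimate the paper derives.

        import numpy as np

        def quantize_uniform(w, bits):
            # Symmetric uniform quantization of a weight tensor to `bits` bits.
            scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
            return np.round(w / scale) * scale

        def pick_bitwidth(w, tolerance, choices=(2, 4, 6, 8)):
            # Smallest bit-width whose mean squared weight quantization error
            # falls below a (hypothetical) per-layer tolerance.
            for bits in sorted(choices):
                err = float(np.mean((w - quantize_uniform(w, bits)) ** 2))
                if err <= tolerance:
                    return bits
            return max(choices)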

    An overview of the quantization for mixed distributions

    The basic goal of quantization for probability distributions is to reduce the number of values describing a probability distribution, which is typically uncountable, to some finite set, and thus to approximate a continuous probability distribution by a discrete one. Mixed distributions are an exciting new area for optimal quantization. In this paper, we determine the optimal sets of n-means, the n-th quantization error, and the quantization dimensions of different mixed distributions. In addition, we discuss whether the quantization coefficients for the mixed distributions exist. The results in this paper give motivation and insight into more general problems in the quantization of mixed distributions.
    Comment: arXiv admin note: text overlap with arXiv:1701.0416
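    For reference, the standard definitions behind these quantities (stated for a general Borel probability measure P on R^d, not specific to the mixed distributions studied in the paper):

        % n-th quantization error of P; a set \alpha attaining the infimum is an optimal set of n-means
        V_n(P) = \inf\Big\{ \int \min_{a \in \alpha} \lVert x - a \rVert^{2} \, dP(x)
                 \;:\; \alpha \subset \mathbb{R}^{d},\ \operatorname{card}(\alpha) \le n \Big\}
        % quantization dimension (when the limit exists)
        D(P) = \lim_{n \to \infty} \frac{2 \log n}{-\log V_n(P)}
        % quantization coefficient (when it exists)
        \lim_{n \to \infty} n^{2/D(P)} \, V_n(P)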