Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
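The embedded-description idea can be illustrated with a toy scalar example (not the paper's vector quantizer design): successive binary subdivision of an interval yields a bit string in which every prefix decodes to a progressively finer reproduction. All names and parameters below are illustrative.

```python
import numpy as np

def encode_embedded(x, lo=0.0, hi=1.0, n_bits=8):
    """Encode x in [lo, hi) as an embedded bit string by binary subdivision.
    Any prefix of the output decodes to a coarser reproduction of x."""
    bits = []
    for _ in range(n_bits):
        mid = 0.5 * (lo + hi)
        if x >= mid:
            bits.append(1)
            lo = mid          # keep the upper half-interval
        else:
            bits.append(0)
            hi = mid          # keep the lower half-interval
    return bits

def decode_prefix(bits, lo=0.0, hi=1.0):
    """Decode any prefix of the embedded bit string to a reproduction:
    the midpoint of the interval the prefix pins down."""
    for b in bits:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if b else (lo, mid)
    return 0.5 * (lo + hi)

bits = encode_embedded(0.7, n_bits=8)
coarse = decode_prefix(bits[:2])   # low-resolution reproduction (2 bits)
fine = decode_prefix(bits)         # higher-resolution reproduction (8 bits)
```

Decoding more of the same string only refines the answer: after k bits the reproduction lies within 2^-(k+1) of the source value.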
Analysis-by-Synthesis-based Quantization of Compressed Sensing Measurements
We consider a resource-constrained scenario in which a compressed sensing (CS)
based sensor takes a small number of measurements that are quantized at a low
rate before transmission or storage. For this scenario, we develop a new
quantizer design that aims to achieve high-quality reconstruction of a sparse
source signal based on an analysis-by-synthesis framework. Through
simulations, we compare the performance of the proposed quantization algorithm
with existing quantization methods. Comment: 5 pages, Published in ICASSP 201
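The analysis-by-synthesis principle can be sketched in a few lines: instead of picking the codeword closest to the measurements, the encoder runs the decoder for every candidate codeword and keeps the one whose synthesized reconstruction is best. The setup below is a toy sketch, not the paper's design: the codebook is made up, and a least-squares pseudo-inverse stands in for a real sparse recovery decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
A = rng.normal(size=(m, n))          # CS measurement matrix (hypothetical)
x = np.zeros(n)
x[1] = 1.5                           # sparse source signal
y = A @ x                            # low number of measurements

# Toy codebook of candidate quantized measurement vectors (hypothetical)
codebook = y[None, :] + 0.3 * rng.normal(size=(16, m))

def synthesize(q):
    # Stand-in decoder: least-squares reconstruction from quantized
    # measurements (a real CS decoder would run a sparse recovery algorithm)
    return np.linalg.pinv(A) @ q

# Conventional quantizer: pick the codeword closest in the measurement domain
i_meas = int(np.argmin(np.sum((codebook - y) ** 2, axis=1)))

# Analysis-by-synthesis: synthesize the reconstruction for every candidate
# and pick the codeword whose decoded signal is closest to the source
errs = [float(np.sum((synthesize(q) - x) ** 2)) for q in codebook]
i_abs = int(np.argmin(errs))
```

By construction the analysis-by-synthesis choice can never do worse than the measurement-domain choice in signal-domain distortion, which is the intuition behind the approach.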
Scalable Image Retrieval by Sparse Product Quantization
Fast Approximate Nearest Neighbor (ANN) search for high-dimensional feature
indexing and retrieval is the crux of large-scale image retrieval. A recent
promising technique is Product Quantization, which indexes high-dimensional
image features by decomposing the feature space into a Cartesian product of
low-dimensional subspaces and quantizing each of them separately. Despite the
promising results reported, its quantization approach follows the typical
hard assignment of traditional quantization methods, which may result in
large quantization errors and thus inferior search performance. Unlike
existing approaches, in this paper we propose a novel approach called Sparse
Product Quantization (SPQ) that encodes high-dimensional feature vectors into
sparse representations. We optimize the sparse representations of the feature
vectors by minimizing their quantization errors, making the resulting
representations essentially close to the original data in practice.
Experiments show that the proposed SPQ technique not only compresses data but
also serves as an effective encoding technique. We obtain state-of-the-art
results for ANN search on four public image datasets, and the promising
results on content-based image retrieval further validate the efficacy of our
proposed method. Comment: 12 page
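For reference, plain product quantization, the hard-assignment baseline that SPQ improves on, can be sketched as follows: split each vector into subvectors, run k-means per subspace, and encode a vector as one centroid index per subspace. SPQ's sparse-assignment optimization is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

def train_pq(X, n_sub=2, k=4, n_iter=10, seed=0):
    """Train one k-means codebook per subspace (plain PQ, hard assignment)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1] // n_sub
    codebooks = []
    for s in range(n_sub):
        Xs = X[:, s * d:(s + 1) * d]
        C = Xs[rng.choice(len(Xs), k, replace=False)].copy()
        for _ in range(n_iter):
            # Hard assignment: each subvector maps to its nearest centroid
            assign = np.argmin(((Xs[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(assign == j):
                    C[j] = Xs[assign == j].mean(0)
        codebooks.append(C)
    return codebooks

def encode_pq(x, codebooks):
    """Encode a vector as one centroid index per subspace."""
    d = len(x) // len(codebooks)
    return [int(np.argmin(((C - x[s * d:(s + 1) * d]) ** 2).sum(1)))
            for s, C in enumerate(codebooks)]

def decode_pq(codes, codebooks):
    """Reconstruct by concatenating the selected centroids."""
    return np.concatenate([C[c] for C, c in zip(codebooks, codes)])

X = np.random.default_rng(1).normal(size=(200, 8))
codebooks = train_pq(X, n_sub=2, k=4)
codes = encode_pq(X[0], codebooks)
x_hat = decode_pq(codes, codebooks)
```

With n_sub subspaces of k centroids each, the effective codebook has k^n_sub cells while only n_sub * k centroids are stored, which is what makes PQ scale; the hard argmin assignment is exactly the step SPQ relaxes into a sparse combination.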
Multi-User Diversity vs. Accurate Channel State Information in MIMO Downlink Channels
In a multiple transmit antenna, single antenna per receiver downlink channel
with limited channel state feedback, we consider the following question: given
a constraint on the total system-wide feedback load, is it preferable to get
low-rate/coarse channel feedback from a large number of receivers or
high-rate/high-quality feedback from a smaller number of receivers? Acquiring
feedback from many receivers allows multi-user diversity to be exploited, while
high-rate feedback allows for very precise selection of beamforming directions.
We show that there is a strong preference for obtaining high-quality feedback,
and that obtaining near-perfect channel information from as many receivers as
possible provides a significantly larger sum rate than collecting a few
feedback bits from a large number of users. Comment: Submitted to IEEE Transactions on Communications, July 200
Generalized residual vector quantization for large scale data
Vector quantization is an essential tool for tasks involving large scale
data, for example, large scale similarity search, which is crucial for
content-based information retrieval and analysis. In this paper, we propose a
novel vector quantization framework that iteratively minimizes quantization
error. First, we provide a detailed review on a relevant vector quantization
method named \textit{residual vector quantization} (RVQ). Next, we propose
\textit{generalized residual vector quantization} (GRVQ) to further improve
over RVQ. Many existing vector quantization methods can be viewed as special
cases of our proposed framework. We evaluate GRVQ on several large scale
benchmark datasets for large scale search, classification and object
retrieval, and compare it with existing methods in detail. Extensive
experiments demonstrate that our GRVQ framework substantially outperforms
existing methods in terms of quantization accuracy and computational
efficiency. Comment: published at the International Conference on Multimedia and Expo 201
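Baseline residual vector quantization, which GRVQ generalizes, can be sketched as follows: each stage quantizes the residual left by the previous stages, and the decoder sums one codeword per stage. This is a plain-RVQ sketch with illustrative parameters; GRVQ's iterative re-optimization of earlier-stage codebooks is not shown.

```python
import numpy as np

def train_rvq(X, n_stages=2, k=8, n_iter=10, seed=0):
    """Train stage codebooks on successive residuals (plain RVQ)."""
    rng = np.random.default_rng(seed)
    R = X.copy()
    codebooks = []
    for _ in range(n_stages):
        C = R[rng.choice(len(R), k, replace=False)].copy()
        for _ in range(n_iter):
            assign = np.argmin(((R[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(assign == j):
                    C[j] = R[assign == j].mean(0)
        assign = np.argmin(((R[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        R = R - C[assign]          # pass the residual to the next stage
        codebooks.append(C)
    return codebooks

def encode_rvq(x, codebooks):
    """Greedily pick one codeword per stage on the running residual."""
    codes, r = [], x.copy()
    for C in codebooks:
        j = int(np.argmin(((C - r) ** 2).sum(1)))
        codes.append(j)
        r = r - C[j]
    return codes

def decode_rvq(codes, codebooks):
    """Reconstruct as the sum of one codeword per stage."""
    return sum(C[j] for C, j in zip(codebooks, codes))

X = np.random.default_rng(2).normal(size=(300, 4))
codebooks = train_rvq(X, n_stages=2, k=8)
codes = encode_rvq(X[0], codebooks)
x_hat = decode_rvq(codes, codebooks)
```

Each stage here is trained once and then frozen; GRVQ's generalization is to keep revisiting and re-optimizing the stage codebooks so that the overall quantization error keeps decreasing.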