Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
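To make the embedded-description idea concrete, the sketch below implements a toy two-stage (residual) vector quantizer in Python/NumPy: decoding only the first index gives a coarse reproduction, and decoding the second index refines it. The codebooks, the plain k-means training, and all names (`kmeans`, `encode`, `decode`) are illustrative assumptions, not the design algorithms introduced in the paper.

```python
# Toy two-stage residual vector quantizer illustrating an embedded description:
# decoding only the coarse index gives a low-resolution reproduction, decoding
# both indices refines it. Not the paper's design algorithm, just a sketch.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=20):
    """Plain k-means, used here only to build illustrative codebooks."""
    centers = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.linalg.norm(data[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            pts = data[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

# Training data: blocks of dimension 2 from a Gaussian source.
train = rng.normal(size=(4000, 2))
coarse = kmeans(train, 4)                                  # stage-1 codebook (2 bits)
assign = np.linalg.norm(train[:, None] - coarse[None], axis=2).argmin(axis=1)
fine = kmeans(train - coarse[assign], 4)                   # stage-2 codebook (2 more bits)

def encode(x):
    i = np.linalg.norm(x - coarse, axis=1).argmin()
    j = np.linalg.norm((x - coarse[i]) - fine, axis=1).argmin()
    return i, j                                            # embedded description: (i), then (i, j)

def decode(i, j=None):
    return coarse[i] if j is None else coarse[i] + fine[j]

x = rng.normal(size=2)
i, j = encode(x)
print("low resolution :", decode(i),   "error", np.linalg.norm(x - decode(i)))
print("high resolution:", decode(i, j), "error", np.linalg.norm(x - decode(i, j)))
```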
Bolt: Accelerated Data Mining with Fast Vector Compression
Vectors of data are at the heart of machine learning and data mining.
Recently, vector quantization methods have shown great promise in reducing both
the time and space costs of operating on vectors. We introduce a vector
quantization algorithm that can compress vectors over 12x faster than existing
techniques while also accelerating approximate vector operations such as
distance and dot product computations by up to 10x. Because it can encode over
2GB of vectors per second, it makes vector quantization cheap enough to employ
in many more circumstances. For example, using our technique to compute
approximate dot products in a nested loop can multiply matrices faster than a
state-of-the-art BLAS implementation, even when our algorithm must first
compress the matrices.
In addition to showing the above speedups, we demonstrate that our approach
can accelerate nearest neighbor search and maximum inner product search by over
100x compared to floating point operations and up to 10x compared to other
vector quantization methods. Our approximate Euclidean distance and dot product
computations are not only faster than those of related algorithms with slower
encodings, but also faster than Hamming distance computations, which have
direct hardware support on the tested platforms. We also assess the errors of
our algorithm's approximate distances and dot products, and find that it is
competitive with existing, slower vector quantization algorithms.
Comment: Research track paper at KDD 201
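The lookup-table mechanism behind these approximate distance and dot product computations can be illustrated with a generic product-quantization style sketch, shown below. It is not Bolt's encoding (Bolt uses smaller codebooks and quantized 8-bit tables to exploit vectorized byte shuffles); the dimensions, codebook sizes, and names (`encode`, `dot_lut`, `approx_dot`) are assumptions chosen only to show the idea of replacing D multiplies with M table lookups.

```python
# Product-quantization (PQ) style approximate dot products via lookup tables --
# the family of methods Bolt accelerates. Codebooks here are random; in practice
# they would be learned (e.g., with k-means per subspace).
import numpy as np

rng = np.random.default_rng(1)
D, M, K = 32, 8, 16            # vector dim, number of subspaces, centroids per subspace
sub = D // M
codebooks = rng.normal(size=(M, K, sub))   # hypothetical per-subspace codebooks

def encode(x):
    """Assign each subvector to its nearest centroid; returns M small codes."""
    codes = np.empty(M, dtype=np.uint8)
    for m in range(M):
        xm = x[m * sub:(m + 1) * sub]
        codes[m] = np.linalg.norm(codebooks[m] - xm, axis=1).argmin()
    return codes

def dot_lut(q):
    """Precompute per-subspace dot products of the query with every centroid."""
    return np.stack([codebooks[m] @ q[m * sub:(m + 1) * sub] for m in range(M)])

def approx_dot(codes, lut):
    """Approximate <x, q> with M table lookups instead of D multiplies."""
    return lut[np.arange(M), codes].sum()

x, q = rng.normal(size=D), rng.normal(size=D)
codes, lut = encode(x), dot_lut(q)
print("exact :", float(x @ q))
print("approx:", float(approx_dot(codes, lut)))
```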
CDDT: Fast Approximate 2D Ray Casting for Accelerated Localization
Localization is an essential component for autonomous robots. A
well-established localization approach combines ray casting with a particle
filter, leading to a computationally expensive algorithm that is difficult to
run on resource-constrained mobile robots. We present a novel data structure
called the Compressed Directional Distance Transform for accelerating ray
casting in two dimensional occupancy grid maps. Our approach allows online map
updates, and near constant time ray casting performance for a fixed size map,
in contrast with other methods which exhibit poor worst case performance. Our
experimental results show that the proposed algorithm approximates the
performance characteristics of reading from a three dimensional lookup table of
ray cast solutions while requiring two orders of magnitude less memory and
precomputation. This results in a particle filter algorithm which can maintain
2500 particles with 61 ray casts per particle at 40Hz, using a single CPU
thread onboard a mobile robot.
Comment: 8 pages, 14 figures, ICRA version
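For context, the sketch below shows the query that such a data structure answers: given a 2D occupancy grid, the distance from a pose along a heading to the first occupied cell. This is the straightforward ray-marching baseline, not the Compressed Directional Distance Transform itself; the function name, step size, and synthetic map are assumptions for illustration.

```python
# Baseline ray marching on a 2-D occupancy grid -- the operation CDDT
# accelerates. Marches in small steps until an occupied cell or the map
# boundary is reached. Not the CDDT data structure, just the query it answers.
import numpy as np

def ray_cast(grid, x, y, theta, max_range=500.0, step=0.5):
    dx, dy = np.cos(theta) * step, np.sin(theta) * step
    dist = 0.0
    while dist < max_range:
        xi, yi = int(x), int(y)
        if not (0 <= xi < grid.shape[1] and 0 <= yi < grid.shape[0]):
            return dist                      # ray left the map
        if grid[yi, xi]:
            return dist                      # hit an occupied cell
        x, y, dist = x + dx, y + dy, dist + step
    return max_range

# Tiny synthetic map: a wall of occupied cells at column 40.
grid = np.zeros((100, 100), dtype=bool)
grid[:, 40] = True
print(ray_cast(grid, 10.0, 50.0, 0.0))       # ~30 cells to the wall
```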