5,151 research outputs found
Compressing High-Dimensional Data Spaces Using Non-Differential Augmented Vector Quantization
query processing times and space requirements. Database compression has been
shown to alleviate the I/O bottleneck, reduce disk space, improve disk access speed,
speed up queries, reduce overall retrieval time, and increase the effective I/O bandwidth.
However, random access to individual tuples in a compressed database is very difficult to
achieve with most available compression techniques.
We propose a lossless compression technique called non-differential augmented vector
quantization, a close variant of the novel augmented vector quantization. The technique is
applicable to a collection of tuples and especially effective for tuples with many low to
medium cardinality fields. In addition, the technique supports standard database
operations, permits very fast random access and atomic decompression of tuples in large
collections. The technique maps a database relation into a static bitmap index cached
access structure. Consequently, we were able to achieve substantial savings in space by
storing each database tuple as a bit value in the computer memory.
Important distinguishing characteristics of our technique are that (a) individual tuples can be
compressed and decompressed, rather than a full page or an entire relation at a time, and (b) the
information needed for tuple compression and decompression can reside in memory or,
at worst, in a single page. Promising application domains include decision support systems,
statistical databases, and life databases with low-cardinality fields and possibly no text
fields.
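As a rough illustration of the underlying idea (a hypothetical sketch, not the authors' exact algorithm), a tuple over low-cardinality fields can be represented as a single packed integer built from per-field dictionary codes, which makes per-tuple random access and atomic decompression cheap:

```python
# Illustrative sketch (hypothetical; not the paper's exact scheme):
# each low-cardinality column gets a value->code dictionary, and a
# tuple is stored as one integer packing the per-field codes.

def build_dicts(tuples):
    """One value->code dictionary per column."""
    cols = list(zip(*tuples))
    return [{v: i for i, v in enumerate(sorted(set(col)))} for col in cols]

def bits_needed(n):
    """Bits required to store codes 0 .. n-1."""
    return max(1, (n - 1).bit_length())

def compress(t, dicts):
    """Pack a tuple's dictionary codes into a single integer."""
    code = 0
    for value, d in zip(t, dicts):
        code = (code << bits_needed(len(d))) | d[value]
    return code

def decompress(code, dicts):
    """Atomic per-tuple decompression: unpack fields right to left."""
    rev = [{i: v for v, i in d.items()} for d in dicts]
    out = []
    for d, r in zip(reversed(dicts), reversed(rev)):
        w = bits_needed(len(d))
        out.append(r[code & ((1 << w) - 1)])
        code >>= w
    return tuple(reversed(out))

rows = [("red", "S"), ("blue", "M"), ("red", "M")]
dicts = build_dicts(rows)
assert all(decompress(compress(r, dicts), dicts) == r for r in rows)
```

Because the dictionaries are small for low- to medium-cardinality fields, they can stay resident in memory, matching the abstract's claim that decompression metadata fits in memory or a single page.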
Decoding billions of integers per second through vectorization
In many important applications -- such as search engines and relational
database systems -- data is stored in the form of arrays of integers. Encoding
and, most importantly, decoding of these arrays consumes considerable CPU time.
Therefore, substantial effort has been made to reduce costs associated with
compression and decompression. In particular, researchers have exploited the
superscalar nature of modern processors and SIMD instructions. In this work, we
introduce a novel vectorized scheme called SIMD-BP128 that improves over
previously proposed vectorized approaches. It is nearly twice as fast as the
previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the
same time, SIMD-BP128 saves up to 2 bits per integer. For even better
compression, we propose another new vectorized scheme (SIMD-FastPFOR) that has
a compression ratio within 10% of a state-of-the-art scheme (Simple-8b) while
being two times faster during decoding.
Comment: For software, see https://github.com/lemire/FastPFor; for data, see
http://boytsov.info/datasets/clueweb09gap
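The core idea behind binary-packing schemes such as SIMD-BP128 can be shown without SIMD: delta-code a sorted sequence of integers, then store each block of gaps using one uniform bit width. A minimal scalar sketch (illustrative names, not the library's API):

```python
# Scalar sketch of delta coding + binary packing, the non-vectorized
# core of schemes like SIMD-BP128 (function names are illustrative).

def delta_encode(xs):
    """Sorted ids -> gaps; gaps are small, so they pack into few bits."""
    return [xs[0]] + [b - a for a, b in zip(xs, xs[1:])]

def pack(gaps):
    """Store every gap in the block with one uniform bit width."""
    width = max(1, max(g.bit_length() for g in gaps))
    buf = 0
    for i, g in enumerate(gaps):
        buf |= g << (i * width)
    return width, len(gaps), buf

def unpack(width, n, buf):
    """Extract n fixed-width values from the packed buffer."""
    mask = (1 << width) - 1
    return [(buf >> (i * width)) & mask for i in range(n)]

def delta_decode(gaps):
    """Prefix sums turn gaps back into the original sorted ids."""
    out, acc = [], 0
    for g in gaps:
        acc += g
        out.append(acc)
    return out

ids = [3, 5, 6, 10, 11, 30]
w, n, buf = pack(delta_encode(ids))
assert delta_decode(unpack(w, n, buf)) == ids
```

The vectorized schemes in the paper apply this pattern to blocks of 128 integers with SIMD loads and shifts, which is where the decoding-speed advantage over scalar code comes from.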