Compressing High-Dimensional Data Spaces Using Non-Differential Augmented Vector Quantization
query processing times and space requirements. Database compression has been
shown to alleviate the I/O bottleneck, reduce disk space, improve disk access speed,
speed up queries, reduce overall retrieval time and increase the effective I/O bandwidth.
However, random access to individual tuples in a compressed database is very difficult to
achieve with most available compression techniques.
We propose a lossless compression technique called non-differential augmented vector
quantization, a close variant of the novel augmented vector quantization. The technique is
applicable to a collection of tuples and especially effective for tuples with many low to
medium cardinality fields. In addition, the technique supports standard database
operations, permits very fast random access and atomic decompression of tuples in large
collections. The technique maps a database relation into a static bitmap index cached
access structure. Consequently, we were able to achieve substantial savings in space by
storing each database tuple as a bit value in the computer memory.
Important distinguishing characteristics of our technique are that (a) individual tuples can be
compressed and decompressed, rather than a full page or an entire relation at a time, and (b) the
information needed for tuple compression and decompression can reside in memory or,
at worst, in a single page. Promising application domains include decision support systems,
statistical databases and live databases with low-cardinality fields and possibly no text
fields.
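To make the field-wise coding and atomic, tuple-at-a-time decompression concrete, here is a minimal Python sketch. It is an illustration under assumed simple per-field dictionaries; the hypothetical TupleCodec class is not the paper's augmented vector quantization scheme, and the mapping to a static bitmap index is not reproduced.

```python
# Illustrative sketch (not the paper's algorithm): per-field codebooks give
# each low-cardinality value a small integer code, so any single tuple can be
# compressed or decompressed on its own -- the random-access property the
# abstract emphasises. The codebooks are small enough to stay in memory.

class TupleCodec:
    def __init__(self, relation):
        self.codebooks = []   # value -> code, one dict per field
        self.values = []      # code -> value, one list per field
        for column in zip(*relation):
            vals = sorted(set(column))
            self.codebooks.append({v: i for i, v in enumerate(vals)})
            self.values.append(vals)

    def compress(self, row):
        return tuple(cb[v] for cb, v in zip(self.codebooks, row))

    def decompress(self, codes):
        # Atomic: decodes one tuple without touching any other tuple.
        return tuple(vals[c] for vals, c in zip(self.values, codes))

rows = [("red", "S"), ("blue", "M"), ("red", "M")]
codec = TupleCodec(rows)
packed = [codec.compress(r) for r in rows]
assert codec.decompress(packed[2]) == ("red", "M")
```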
Attribute Value Reordering For Efficient Hybrid OLAP
The normalization of a data cube is the ordering of the attribute values. For
large multidimensional arrays where dense and sparse chunks are stored
differently, proper normalization can lead to improved storage efficiency. We
show that it is NP-hard to compute an optimal normalization even for 1x3
chunks, although we find an exact algorithm for 1x2 chunks. When dimensions are
nearly statistically independent, we show that dimension-wise attribute
frequency sorting is an optimal normalization and takes time O(d n log(n)) for
data cubes of size n^d. When dimensions are not independent, we propose and
evaluate several heuristics. The hybrid OLAP (HOLAP) storage mechanism is
already 19%-30% more efficient than ROLAP, but normalization can improve it
further by 9%-13% for a total gain of 29%-44% over ROLAP.
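As a rough illustration of the frequency-sorting normalization, the Python sketch below relabels each dimension's attribute values by decreasing frequency. The coordinate-tuple representation of cells and the tie-breaking rule are assumptions of this sketch, not the paper's implementation.

```python
# Sketch of dimension-wise attribute frequency sorting: in each of the d
# dimensions, the most frequent attribute value is relabelled 0, the next 1,
# and so on, clustering the dense region of the cube into one corner.
from collections import Counter

def frequency_sort(cells):
    d = len(cells[0])
    remaps = []
    for dim in range(d):
        freq = Counter(cell[dim] for cell in cells)
        order = sorted(freq, key=freq.get, reverse=True)  # O(n log n) per dimension
        remaps.append({v: i for i, v in enumerate(order)})
    return [tuple(remaps[dim][c[dim]] for dim in range(d)) for c in cells]

# Allocated (non-empty) cells of a sparse 2-d cube, as coordinate tuples:
cells = [(3, 1), (3, 2), (3, 0), (0, 1), (1, 1)]
print(frequency_sort(cells))  # frequent values move toward coordinate 0
```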
Histogram-Aware Sorting for Enhanced Word-Aligned Compression in Bitmap Indexes
Bitmap indexes must be compressed to reduce input/output costs and minimize
CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use
techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid
(WAH) compression. These techniques are sensitive to the order of the rows: a
simple lexicographical sort can divide the index size by 9 and make indexes
several times faster. We investigate reordering heuristics based on computed
attribute-value histograms. Simply permuting the columns of the table based on
these histograms can increase the sorting efficiency by 40%.
Comment: To appear in proceedings of DOLAP 200
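The effect row order has on run-length-encoded bitmaps can be seen in a few lines of Python. The sketch below is not the WAH codec itself; with a made-up toy table, it simply counts bit runs in one attribute's bitmap before and after sorting the rows, which is the quantity RLE-based schemes exploit.

```python
# Fewer, longer runs after sorting is what makes RLE-compressed (e.g. WAH)
# bitmap indexes smaller and their logical operations faster.

def count_runs(bits):
    return 1 + sum(a != b for a, b in zip(bits, bits[1:]))

rows = [("b", 1), ("a", 0), ("b", 0), ("a", 1), ("b", 1), ("a", 0)]
bitmap_unsorted = [1 if r[0] == "a" else 0 for r in rows]
bitmap_sorted = [1 if r[0] == "a" else 0 for r in sorted(rows)]
print(count_runs(bitmap_unsorted), count_runs(bitmap_sorted))  # 6 vs 2 runs
```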
Attribute Value Reordering for Efficient Hybrid OLAP
The normalization of a data cube is the process of choosing an ordering for the attribute values, and the chosen ordering will affect the physical storage of the cube's data. For large multidimensional arrays, proper normalization can lead to more efficient storage in hybrid OLAP contexts that store dense and sparse chunks differently. We show that it is NP-hard to compute an optimal normalization even for 1x3 chunks, although we find an exact algorithm for 1x2 chunks. When attributes are nearly statistically independent, we show that an optimal normalization is given by dimension-wise attribute frequency sorting, which can be done in time O(d n log(n)) for data cubes of size n^d. When attributes are not independent, we propose and evaluate a number of heuristics.
Our optimized hybrid OLAP storage mechanism was observed to be 44% more storage-efficient than ROLAP, and the gains due to normalization alone accounted for 45% of this increase in efficiency.
Universal Indexes for Highly Repetitive Document Collections
Indexing highly repetitive collections has become a relevant problem with the
emergence of large repositories of versioned documents, among other
applications. These collections may reach huge sizes, but are formed mostly of
documents that are near-copies of others. Traditional techniques for indexing
these collections fail to properly exploit their regularities in order to
reduce space.
We introduce new techniques for compressing inverted indexes that exploit
this near-copy regularity. They are based on run-length, Lempel-Ziv, or grammar
compression of the differential inverted lists, instead of the usual practice
of gap-encoding them. We show that, in this highly repetitive setting, our
compression methods significantly reduce the space obtained with classical
techniques, at the price of moderate slowdowns. Moreover, our best methods are
universal, that is, they do not need to know the versioning structure of the
collection, nor that a clear versioning structure even exists.
We also introduce compressed self-indexes in the comparison. These are
designed for general strings (not only natural language texts) and represent
the text collection plus the index structure (not an inverted index) in
integrated form. We show that these techniques can compress much further, using
a small fraction of the space required by our new inverted indexes. Yet, they
are orders of magnitude slower.
Comment: This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions H2020-MSCA-RISE-2015 BIRDS GA No. 69094
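To illustrate why run-length compressing the differential (gap) form of an inverted list pays off on near-copy collections, consider the Python sketch below. The postings are made up, and only the run-length variant is shown; the paper's Lempel-Ziv and grammar-based methods are not reproduced.

```python
# On versioned collections, a term present in consecutive versions yields long
# runs of identical gaps (often gap == 1) in the differential inverted list.
# RLE captures such runs, while the usual per-gap codes pay for every entry.
from itertools import groupby

def gaps(postings):
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

def rle(seq):
    return [(g, len(list(run))) for g, run in groupby(seq)]

# A term occurring in documents 4..11 (eight consecutive versions):
postings = list(range(4, 12))
print(gaps(postings))       # [4, 1, 1, 1, 1, 1, 1, 1]
print(rle(gaps(postings)))  # [(4, 1), (1, 7)] -- two pairs instead of 8 gaps
```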
Lossless Astronomical Image Compression and the Effects of Noise
We compare a variety of lossless image compression methods on a large sample
of astronomical images and show how the compression ratios and speeds of the
algorithms are affected by the amount of noise in the images. In the ideal case
where the image pixel values have a random Gaussian distribution, the
equivalent number of uncompressible noise bits per pixel is given by Nbits =
log2(sigma * sqrt(12)) and the lossless compression ratio is given by R =
BITPIX / (Nbits + K), where BITPIX is the bit length of the pixel values and K is
a measure of the efficiency of the compression algorithm.
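A quick worked example of these formulas in Python, using an assumed 16-bit integer image with Gaussian background noise of sigma = 10 counts:

```python
# Noise model from the abstract: Nbits = log2(sigma * sqrt(12)) bits/pixel of
# incompressible information, and R = BITPIX / (Nbits + K) for an algorithm
# whose inefficiency is measured by K (K = 0 is ideal).
import math

def noise_bits(sigma):
    return math.log2(sigma * math.sqrt(12))

def compression_ratio(bitpix, sigma, k=0.0):
    return bitpix / (noise_bits(sigma) + k)

print(round(noise_bits(10), 2))             # ~5.11 bits/pixel
print(round(compression_ratio(16, 10), 2))  # ideal algorithm: R ~ 3.13
```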
We perform image compression tests on a large sample of integer astronomical
CCD images using the GZIP compression program and using a newer FITS
tiled-image compression method that currently supports 4 compression
algorithms: Rice, Hcompress, PLIO, and GZIP. Overall, the Rice compression
algorithm strikes the best balance of compression and computational efficiency;
it is 2--3 times faster and produces about 1.4 times greater compression than
GZIP. The Rice algorithm produces 75%--90% (depending on the amount of noise in
the image) as much compression as an ideal algorithm with K = 0.
The image compression and uncompression utility programs used in this study
(called fpack and funpack) are publicly available from the HEASARC web site. A
simple command-line interface may be used to compress or uncompress any FITS
image file.
Comment: 20 pages, 9 figures, to be published in PAS