Fixed-PSNR Lossy Compression for Scientific Data
Error-controlled lossy compression has been studied for years because of the extremely large volumes of data being produced by today's scientific simulations. None of the existing lossy compressors, however, allows users to fix the peak signal-to-noise ratio (PSNR) during compression, although PSNR has been considered one of the most significant indicators for assessing compression quality. In this paper, we propose a novel technique providing a fixed-PSNR
lossy compression for scientific data sets. We implement our proposed method
based on the SZ lossy compression framework and release the code as an
open-source toolkit. We evaluate our fixed-PSNR compressor on three real-world
high-performance computing data sets. Experiments show that our solution controls PSNR with high accuracy, with an average deviation of 0.1 to 5.0 dB on the tested data sets.
Comment: 5 pages, 2 figures, 2 tables, accepted by IEEE Cluster'18. arXiv admin note: text overlap with arXiv:1806.0890
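The core step implied by the abstract is translating a user-specified PSNR target into an absolute error bound that an error-bounded compressor such as SZ can enforce. The sketch below is a hypothetical illustration of one such mapping, assuming the pointwise compression error is roughly uniform within the bound (so RMSE ≈ e/√3); the paper's actual derivation and error model may differ, and the function names are made up for illustration.

```python
import numpy as np

def psnr_to_abs_error_bound(data: np.ndarray, target_psnr_db: float) -> float:
    """Map a target PSNR to an absolute error bound, assuming the pointwise
    compression error is roughly uniform in [-e, e] so that RMSE ~ e/sqrt(3).
    Illustrative model only, not the paper's exact derivation."""
    value_range = float(data.max() - data.min())
    # PSNR = 20*log10(range / RMSE)  =>  RMSE = range * 10**(-PSNR/20)
    target_rmse = value_range * 10.0 ** (-target_psnr_db / 20.0)
    return np.sqrt(3.0) * target_rmse

def psnr_db(original: np.ndarray, decompressed: np.ndarray) -> float:
    """PSNR in dB relative to the original data's value range."""
    value_range = float(original.max() - original.min())
    rmse = np.sqrt(np.mean((original - decompressed) ** 2))
    return 20.0 * np.log10(value_range / rmse)
```

The resulting bound could then be passed to an absolute-error compression mode and, if the achieved PSNR drifts from the target, corrected before recompressing.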
Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization
Today's HPC applications are producing extremely large amounts of data, such
that data storage and analysis are becoming more challenging for scientific
research. In this work, we design a new error-controlled lossy compression
algorithm for large-scale scientific data. Our key contribution is
significantly improving the prediction hitting rate (or prediction accuracy)
for each data point based on its nearby data values along multiple dimensions.
We derive a series of multilayer prediction formulas and their unified formula
in the context of data compression. One serious challenge is that the data
prediction has to be performed based on the preceding decompressed values
during compression in order to guarantee the error bounds, which in turn may degrade the prediction accuracy. We explore the best layer for the
prediction by considering the impact of compression errors on the prediction
accuracy. Moreover, we propose an adaptive error-controlled quantization
encoder, which can further improve the prediction hitting rate considerably.
The data size can be reduced significantly by the subsequent variable-length encoding because of the uneven distribution of quantization codes produced by our encoder. We evaluate the new compressor on production scientific data sets and
compare it with many other state-of-the-art compressors: GZIP, FPZIP, ZFP,
SZ-1.1, and ISABELA. Experiments show that our compressor is the best in class,
especially with regard to compression factors (or bit-rates) and compression
errors (including RMSE, NRMSE, and PSNR). Our solution outperforms the second-best solution by more than a 2x increase in the compression factor and a 3.8x reduction in the normalized root mean squared error on average, with reasonable error bounds and user-desired bit-rates.
Comment: Accepted by IPDPS'17, 11 pages, 10 figures, double column
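To make the interplay between prediction and error-controlled quantization concrete, here is a toy one-dimensional sketch of the general SZ-style scheme the abstract builds on: each value is predicted from previously decompressed values, and the residual is mapped to an integer code of width 2×(error bound). It is not the paper's multilayer, multidimensional predictor or its adaptive encoder; all names and the fallback handling are illustrative assumptions.

```python
import numpy as np

def compress_1d(data, eb, num_bins=65536):
    """Toy 1-D prediction + error-controlled quantization (SZ-style sketch).
    Each value is predicted from the previously *decompressed* value so the
    error bound `eb` is guaranteed; residuals map to integer codes of width
    2*eb.  Values whose code falls outside the code range are stored exactly."""
    half = num_bins // 2
    codes = np.empty(len(data), dtype=np.int64)
    unpredictable = {}              # index -> exact value (fallback storage)
    prev_dec = 0.0                  # last decompressed value
    for i, x in enumerate(data):
        pred = prev_dec                               # one-layer prediction
        code = int(round((x - pred) / (2.0 * eb)))
        if abs(code) >= half:                         # prediction "miss"
            unpredictable[i] = float(x)
            codes[i] = half                           # sentinel code
            prev_dec = float(x)
        else:                                         # prediction "hit"
            codes[i] = code
            prev_dec = pred + code * 2.0 * eb         # decompressed value
    return codes, unpredictable

def decompress_1d(codes, unpredictable, eb, num_bins=65536):
    half = num_bins // 2
    out = np.empty(len(codes))
    prev_dec = 0.0
    for i, code in enumerate(codes):
        prev_dec = unpredictable[i] if code == half else prev_dec + code * 2.0 * eb
        out[i] = prev_dec
    return out
```

In the real compressor the integer codes are then entropy-coded with variable-length encoding, which is where the skewed code distribution mentioned in the abstract pays off.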
Optimizing Lossy Compression Rate-Distortion from Automatic Online Selection between SZ and ZFP
With ever-increasing volumes of scientific data produced by HPC applications, significantly reducing data size is critical because of the limited capacity of storage space and potential I/O or network bottlenecks when writing, reading, or transferring data. SZ and ZFP are the two leading lossy compressors
available to compress scientific data sets. However, their performance is not
consistent across different data sets and across different fields of some data
sets: for some fields SZ provides better compression performance, while other
fields are better compressed with ZFP. This situation raises the need for an
automatic online (during compression) selection between SZ and ZFP, with a
minimal overhead. In this paper, the automatic selection optimizes the
rate-distortion, an important statistical quality metric based on the
signal-to-noise ratio. To optimize for rate-distortion, we investigate the
principles of SZ and ZFP. We then propose an efficient online, low-overhead
selection algorithm that accurately predicts the compression quality of the two compressors in the early processing stages and selects the best-fit compressor for
each data field. We implement the selection algorithm into an open-source
library, and we evaluate the effectiveness of our proposed solution against
plain SZ and ZFP in a parallel environment with 1,024 cores. Evaluation results
on three data sets representing about 100 fields show that our selection
algorithm improves the compression ratio by up to 70% at the same level of data distortion, thanks to very accurate selection (around 99%) of the best-fit compressor, with little overhead (less than 7% in our experiments).
Comment: 14 pages, 9 figures, first revision
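The selection logic can be pictured as scoring each candidate compressor's estimated rate-distortion per field and keeping the winner. The sketch below assumes hypothetical estimator callables that return an estimated bit-rate and PSNR without running full compression, which is roughly what an early-stage prediction would provide; the actual estimation models and scoring metric in the paper are not reproduced here.

```python
from typing import Callable, Dict, Tuple
import numpy as np

# Hypothetical estimator: given a field and an error bound, return an
# (estimated bits-per-value, estimated PSNR in dB) pair *without* running
# the full compressor -- standing in for the early-stage prediction the
# paper derives from the internals of SZ and ZFP.
Estimator = Callable[[np.ndarray, float], Tuple[float, float]]

def select_compressor(field: np.ndarray,
                      error_bound: float,
                      estimators: Dict[str, Estimator]) -> str:
    """Pick the compressor with the best estimated rate-distortion for this
    field, scored here as predicted PSNR per predicted bit (one plausible
    scoring rule; the paper's exact metric may differ)."""
    best_name, best_score = None, float("-inf")
    for name, estimate in estimators.items():
        bits_per_value, psnr_db = estimate(field, error_bound)
        score = psnr_db / bits_per_value
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

A caller would supply one estimator per compressor, e.g. {"SZ": estimate_sz, "ZFP": estimate_zfp}, where both estimators are lightweight field-level models rather than calls into the real libraries.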
Deep Hierarchical Super-Resolution for Scientific Data Reduction and Visualization
We present an approach for hierarchical super-resolution (SR) using neural networks on an octree data representation. We train a hierarchy of neural networks, each capable of 2x upscaling in each spatial dimension between two levels of detail, and use these networks in tandem to facilitate large-scale-factor SR, scaling with the number of trained networks.
We utilize these networks in a hierarchical SR algorithm that upscales multiresolution data to a uniform high resolution without introducing seam artifacts on octree node boundaries. We evaluate the application of this algorithm in a data reduction framework by dynamically downscaling input data to an octree-based data structure that represents the multiresolution data before compressing it for additional storage reduction. We demonstrate that our approach avoids seam artifacts common to multiresolution data formats, and show how neural-network SR-assisted data reduction can preserve global features better than compressors alone at the same compression ratios.
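As a rough picture of the control flow (not the learned models themselves), a coarse octree node would be passed through one 2x upscaler per level of detail separating it from the target resolution. The stub below substitutes nearest-neighbour upsampling for the trained networks and omits the seam-artifact handling the paper focuses on; it only shows how a hierarchy of 2x networks composes into a large scale factor.

```python
import numpy as np

def upscale_2x_stub(block: np.ndarray) -> np.ndarray:
    """Stand-in for one trained 2x SR network between two levels of detail:
    nearest-neighbour upsampling along every spatial axis."""
    for axis in range(block.ndim):
        block = np.repeat(block, 2, axis=axis)
    return block

def upscale_to_uniform(block: np.ndarray, levels_below_full: int,
                       upscalers=None) -> np.ndarray:
    """Compose the per-level upscalers so that a coarse octree node reaches
    the finest resolution; `levels_below_full` is the number of 2x steps
    separating this node from the target level of detail."""
    upscalers = upscalers or [upscale_2x_stub] * levels_below_full
    for step in range(levels_below_full):
        block = upscalers[step](block)
    return block
```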
CEAZ: Accelerating Parallel I/O via Hardware-Algorithm Co-Design of Efficient and Adaptive Lossy Compression
As supercomputers continue to grow to exascale, the amount of data that needs
to be saved or transmitted is exploding. To this end, many previous works have
studied using error-bounded lossy compressors to reduce the data size and
improve the I/O performance. However, little work has been done on effectively offloading lossy compression onto FPGA-based SmartNICs to reduce the compression overhead. In this paper, we propose a hardware-algorithm co-design of an efficient and adaptive lossy compressor for scientific data on FPGAs (called CEAZ) to accelerate parallel I/O. Our contribution is fourfold: (1) We propose
an efficient Huffman coding approach that can adaptively update Huffman
codewords online based on codewords generated offline (from a variety of
representative scientific datasets). (2) We derive a theoretical analysis to
support a precise control of compression ratio under an error-bounded
compression mode, enabling accurate offline Huffman codewords generation. This
also helps us create a fixed-ratio compression mode for consistent throughput.
(3) We develop an efficient compression pipeline by adapting cuSZ's dual-quantization algorithm to our hardware use case. (4) We evaluate CEAZ on
five real-world datasets with both a single FPGA board and 128 nodes of the Bridges-2 supercomputer. Experiments show that CEAZ outperforms the second-best FPGA-based lossy compressor by 2X in throughput and 9.6X in compression ratio. It also improves MPI_File_write and MPI_Gather throughputs by up to 25.8X and 24.8X, respectively.
Comment: 14 pages, 17 figures, 8 tables
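For readers unfamiliar with the dual-quantization step borrowed from cuSZ, here is a one-dimensional software sketch of the idea: values are first quantized independently (no loop-carried dependency, which is what makes the step amenable to hardware pipelining), and prediction then operates on the integer codes. The FPGA pipeline, the adaptive Huffman stage, and the fixed-ratio mode are not shown, and the function names are illustrative.

```python
import numpy as np

def dual_quantize_1d(data: np.ndarray, eb: float) -> np.ndarray:
    """Sketch of cuSZ-style dual-quantization in 1-D.  Stage 1 quantizes
    every value independently (no loop-carried dependency); stage 2 takes
    Lorenzo-style deltas of the resulting integer codes."""
    prequant = np.round(data / (2.0 * eb)).astype(np.int64)  # stage 1
    return np.diff(prequant, prepend=0)                      # stage 2

def dual_dequantize_1d(deltas: np.ndarray, eb: float) -> np.ndarray:
    """Invert both stages; the reconstruction error stays within eb."""
    prequant = np.cumsum(deltas)
    return prequant * 2.0 * eb
```

For smooth fields the delta codes cluster tightly around zero, which is the skew that the (adaptive) Huffman coding stage then exploits.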