
    Lossless Compression of Predicted Floating-Point Geometry

    The size of geometric data sets in scientific and industrial applications is constantly increasing. Storing surface or volume meshes in standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Scientists and engineers often refrain from using mesh compression because currently available schemes modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid to enable efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe a method for compressing floating-point coordinates with predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are broken up into sign, exponent, and mantissa, and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. We report compression results using the popular parallelogram predictor, but our approach will work with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.
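
    A minimal sketch of the two ingredients the abstract names, the parallelogram predictor and the sign/exponent/mantissa split, written in Python with illustrative names; the context-based arithmetic coder itself is elided, and this is not the authors' implementation:

```python
import struct

def decompose(x: float) -> tuple[int, int, int]:
    """Split an IEEE-754 single-precision value into its sign bit,
    8-bit exponent, and 23-bit mantissa fields."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def parallelogram_predict(a, b, c):
    """Classic parallelogram rule: predict the vertex completing the
    parallelogram across edge (b, c), opposite vertex a."""
    return tuple(bi + ci - ai for ai, bi, ci in zip(a, b, c))

# Illustrative only: an encoder in this style decomposes the predicted
# and the actual coordinate, then arithmetic-codes the per-field
# corrections, switching contexts on the exponent of the prediction.
pred = parallelogram_predict((0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (0.4, 1.0, 0.0))
sign, exponent, mantissa = decompose(pred[0])
```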

    Haptic Data Transmission Based on the Prediction and Compression


    Lossless SIMD Compression of LiDAR Range and Attribute Scan Sequences

    As LiDAR sensors have become ubiquitous, the need for an efficient LiDAR data compression algorithm has increased. Modern LiDARs produce gigabytes of scan data per hour and are often used in applications with limited compute, bandwidth, and storage resources. We present a fast, lossless compression algorithm for LiDAR range and attribute scan sequences including multiple-return range, signal, reflectivity, and ambient infrared. Our algorithm -- dubbed "Jiffy" -- achieves substantial compression by exploiting spatiotemporal redundancy and sparsity. Speed is accomplished by maximizing use of single-instruction-multiple-data (SIMD) instructions. In autonomous driving, infrastructure monitoring, drone inspection, and handheld mapping benchmarks, the Jiffy algorithm consistently outcompresses competing lossless codecs while operating at speeds in excess of 65M points/sec on a single core. In a typical autonomous vehicle use case, single-threaded Jiffy achieves 6x compression of centimeter-precision range scans at 500+ scans per second. To ensure reproducibility and enable adoption, the software is freely available as an open-source library.
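
    Jiffy's actual codec is in the cited open-source library; the toy sketch below, with illustrative names and NumPy vectorization standing in for explicit SIMD, shows only the general idea of temporal prediction: residuals against the previous scan are small and sparse where the scene is static, so a generic entropy coder compresses them well.

```python
import zlib
import numpy as np

def encode_scan(curr: np.ndarray, prev: np.ndarray | None) -> bytes:
    """Code the current range image as a residual against the previous
    scan (temporal prediction), then entropy-code the residual."""
    c = curr.astype(np.int32)
    residual = c if prev is None else c - prev.astype(np.int32)
    return zlib.compress(residual.tobytes(), level=6)

def decode_scan(blob: bytes, prev: np.ndarray | None, shape) -> np.ndarray:
    residual = np.frombuffer(zlib.decompress(blob), np.int32).reshape(shape)
    return residual if prev is None else prev.astype(np.int32) + residual
```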

    Lightweight super resolution network for point cloud geometry compression

    This paper presents an approach for compressing point cloud geometry by leveraging a lightweight super-resolution network. The proposed method involves decomposing a point cloud into a base point cloud and the interpolation patterns for reconstructing the original point cloud. While the base point cloud can be efficiently compressed with any lossless codec, such as Geometry-based Point Cloud Compression, a distinct strategy is employed for handling the interpolation patterns. Rather than compressing the interpolation patterns directly, a lightweight super-resolution network learns this information through overfitting. The network parameters are then transmitted to assist point cloud reconstruction at the decoder side. Notably, our approach differs from lookup-table-based methods in that it obtains more accurate interpolation patterns by accessing a broader range of neighboring voxels at an acceptable computational cost. Experiments on the MPEG Cat1 (Solid) and Cat2 datasets demonstrate the remarkable compression performance achieved by our method.
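
    As a rough sketch of the decomposition, not the paper's network and with illustrative names throughout: a voxelized cloud can be split into a coarse base cloud plus per-voxel child-occupancy patterns, and it is patterns of this kind that the overfitted super-resolution network reconstructs at the decoder.

```python
import numpy as np

def to_base_cloud(points: np.ndarray, scale: int = 2) -> np.ndarray:
    """Coarsen integer voxel coordinates; the result plays the role of
    the base cloud that a lossless codec such as G-PCC would carry."""
    return np.unique(points // scale, axis=0)

def occupancy_patterns(points: np.ndarray, scale: int = 2) -> dict:
    """For each coarse voxel, record which of its scale**3 children are
    occupied (an illustrative stand-in for the interpolation patterns)."""
    patterns: dict = {}
    for p in points:
        patterns.setdefault(tuple(p // scale), set()).add(tuple(p % scale))
    return patterns
```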

    TopoSZ: Preserving Topology in Error-Bounded Lossy Compression

    Existing error-bounded lossy compression techniques control the pointwise error during compression to guarantee the integrity of the decompressed data. However, they typically do not explicitly preserve the topological features in the data. When performing post hoc analysis of decompressed data with topological methods, it is desirable to preserve topology during compression so as to obtain topologically consistent and correct scientific insights. In this paper, we introduce TopoSZ, an error-bounded lossy compression method that preserves the topological features in 2D and 3D scalar fields. Specifically, we aim to preserve the types and locations of local extrema as well as the level set relations among critical points captured by contour trees in the decompressed data. The main idea is to derive topological constraints from a contour-tree-induced segmentation of the data domain and to incorporate these constraints into a customized error-controlled quantization strategy based on the classic SZ compressor. Our method allows users to control both the pointwise error and the loss of topological features during compression, via a global error bound and a persistence threshold.
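
    For context, a minimal sketch of the error-controlled linear quantization that SZ-family compressors build on; TopoSZ customizes this step to respect the topological constraints, and the names below are illustrative:

```python
import numpy as np

def quantize(data: np.ndarray, pred: np.ndarray, eps: float) -> np.ndarray:
    """Map each prediction residual to an integer bin of width 2*eps,
    which guarantees |dequantize(...) - data| <= eps pointwise."""
    return np.round((data - pred) / (2.0 * eps)).astype(np.int64)

def dequantize(codes: np.ndarray, pred: np.ndarray, eps: float) -> np.ndarray:
    return pred + codes * (2.0 * eps)
```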

    Lossy Compression and Its Application on Large Scale Scientific Datasets

    High Performance Computing (HPC) applications are constantly expanding in data size and computational complexity, and it is becoming necessary to consider fault tolerance and system recovery to reduce computation and resource costs in HPC systems. The computation of modern large-scale HPC applications faces bottlenecks due to computational complexity, increased runtime, and large data storage requirements; these issues cannot be ignored in the current supercomputing era. Data compression is one of the effective ways to address the data storage issue. Among compression techniques, lossy compression is much more feasible and efficient than traditional lossless compression given the low I/O bandwidth of large applications. The goal of this work is to find the optimal lossy compression configuration that achieves minimal user-controlled error with maximum compression ratio. For this purpose, two large-scale applications have been experimented with using various parameters of the well-known compression method SZ. The first is NWChem, a quantum chemistry based HPC application. The second is vascular blood flow simulation data generated by HemeLB, a parallel lattice Boltzmann code for fluid flow simulations with complex geometries. The SZ compressor is integrated into the applications' code to test correctness and scalability and to give a comparative picture of the performance change. Lastly, statistical methods are tested to pre-determine the data distortion for different error bounds.
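
    SZ's bindings vary across versions, so no specific compressor API is assumed here; the sketch below, with illustrative names, captures only the evaluation step implied above: for each candidate error bound, verify that the pointwise error stays within the bound and record the achieved compression ratio.

```python
import numpy as np

def evaluate(original: np.ndarray, decompressed: np.ndarray,
             compressed_size: int, eps: float) -> dict:
    """Compression ratio versus pointwise absolute error -- the trade-off
    explored when sweeping SZ's error-bound parameter."""
    max_err = float(np.max(np.abs(original - decompressed)))
    return {
        "compression_ratio": original.nbytes / compressed_size,
        "max_abs_error": max_err,
        "within_bound": max_err <= eps,
    }
```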