
    Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization

    Today's HPC applications are producing extremely large amounts of data, such that data storage and analysis are becoming more challenging for scientific research. In this work, we design a new error-controlled lossy compression algorithm for large-scale scientific data. Our key contribution is significantly improving the prediction hitting rate (or prediction accuracy) for each data point based on its nearby data values along multiple dimensions. We derive a series of multilayer prediction formulas and their unified formula in the context of data compression. One serious challenge is that the data prediction has to be performed on the preceding decompressed values during compression in order to guarantee the error bounds, which in turn may degrade the prediction accuracy. We explore the best layer for the prediction by considering the impact of compression errors on the prediction accuracy. Moreover, we propose an adaptive error-controlled quantization encoder, which can further improve the prediction hitting rate considerably. The data size can be reduced significantly by subsequent variable-length encoding because of the uneven distribution produced by our quantization encoder. We evaluate the new compressor on production scientific data sets and compare it with other state-of-the-art compressors: GZIP, FPZIP, ZFP, SZ-1.1, and ISABELA. Experiments show that our compressor is the best in class, especially with regard to compression factors (or bit-rates) and compression errors (including RMSE, NRMSE, and PSNR). On average, our solution achieves more than a 2x higher compression factor and a 3.8x lower normalized root mean squared error than the second-best solution, with reasonable error bounds and user-desired bit-rates.
    Comment: Accepted by IPDPS'17, 11 pages, 10 figures, double column
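    To make the pipeline concrete, here is a minimal 1D sketch of the general scheme the abstract describes: each point is predicted from the preceding decompressed values (so the error bound provably holds), and the prediction error is quantized into bins of width twice the error bound. All names are illustrative, and the paper's multilayer, multidimensional predictor is far more elaborate than the one-step predictor used here.

```python
import numpy as np

def compress_1d(data, eb, num_bins=256):
    """Toy error-bounded compressor: predict each point from the
    preceding *decompressed* value, then quantize the prediction
    error into bins of width 2*eb so the bound holds after decoding."""
    half = num_bins // 2                     # codes in (-half, half); half = miss
    codes = np.empty(len(data), dtype=np.int64)
    misses = {}                              # index -> exact value (fallback)
    prev_dec = 0.0                           # last decompressed value
    for i, x in enumerate(data):
        pred = prev_dec                      # single-layer predictor
        code = int(round((x - pred) / (2 * eb)))
        if abs(code) < half:                 # prediction "hit"
            codes[i] = code
            prev_dec = pred + code * 2 * eb  # mirror the decoder exactly
        else:                                # miss: store the raw value
            codes[i] = half
            misses[i] = x
            prev_dec = x
    return codes, misses

def decompress_1d(codes, misses, eb, num_bins=256):
    half = num_bins // 2
    out = np.empty(len(codes))
    prev = 0.0
    for i, c in enumerate(codes):
        prev = misses[i] if c == half else prev + c * 2 * eb
        out[i] = prev
    return out
```

    Because most codes cluster around zero for smooth data, a subsequent variable-length (entropy) coder compresses the code stream well, which is the uneven-distribution effect the abstract mentions.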

    Intelligent sampling for the measurement of structured surfaces

    Uniform sampling in metrology has known drawbacks, such as coherent spectral aliasing and inefficiency in terms of measuring time and data storage. The need for intelligent sampling strategies has been highlighted in recent years, particularly where the measurement of structured surfaces is concerned. Most of the present research on intelligent sampling has focused on dimensional metrology using coordinate-measuring machines, with little reported in the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated by numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling, and several adaptive methods that originate from computer graphics, coordinate metrology, and previous research by the authors. By combining the use of advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied through case studies. The advantages, stability, and feasibility of these techniques for practical measurements are discussed.
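    As a point of reference for two of the named strategies, the sketch below (illustrative code, not the authors' implementation) generates a jittered uniform pattern and a 2D Halton low-discrepancy pattern on the unit square; both avoid the coherent aliasing of a strictly regular grid.

```python
import numpy as np

def jittered_uniform(n_x, n_y, rng=None):
    """One random point per cell of an n_x-by-n_y grid: even coverage
    without the periodic structure that causes coherent aliasing."""
    if rng is None:
        rng = np.random.default_rng()
    ix, iy = np.meshgrid(np.arange(n_x), np.arange(n_y), indexing="ij")
    xs = (ix + rng.random((n_x, n_y))) / n_x
    ys = (iy + rng.random((n_x, n_y))) / n_y
    return np.column_stack([xs.ravel(), ys.ravel()])

def halton(n, base):
    """1D Halton low-discrepancy sequence (radical inverse in `base`)."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq

# 2D low-discrepancy pattern from coprime bases 2 and 3
pts = np.column_stack([halton(500, 2), halton(500, 3)])
```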

    Wavelet-based Adaptive Techniques Applied to Turbulent Hypersonic Scramjet Intake Flows

    The simulation of hypersonic flows is computationally demanding due to large gradients of the flow variables caused by strong shock waves and thick boundary or shear layers. Resolving those gradients requires extremely small cells in the respective regions. Taking turbulence into account intensifies the variation in scales even more. Furthermore, hypersonic flows have been shown to be extremely grid sensitive. For the simulation of three-dimensional configurations of engineering applications, this results in a huge number of cells and prohibitive computational time. Modern adaptive techniques can therefore provide a gain with respect to computational cost and accuracy, allowing the generation of locally highly resolved flow regions where they are needed while retaining an otherwise smooth distribution. An h-adaptive technique based on wavelets is employed for the solution of hypersonic flows. The compressible Reynolds-averaged Navier-Stokes equations are solved using a differential Reynolds stress turbulence model, well suited to predict shock-wave/boundary-layer interactions in high-enthalpy flows. Two test cases are considered: a compression corner and a scramjet intake. The compression corner is a classical test case in hypersonic flow investigations because it poses a shock-wave/turbulent-boundary-layer interaction problem. The adaptive procedure is applied to a two-dimensional configuration as validation. The scramjet intake is first computed in two dimensions; subsequently, a three-dimensional geometry is considered. Both test cases are validated with experimental data and compared to non-adaptive computations. The results show that using an adaptive technique for hypersonic turbulent flows at high-enthalpy conditions can strongly improve performance in terms of memory and CPU time while maintaining the required accuracy of the results.
    Comment: 26 pages, 29 figures, submitted to AIAA Journal
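    The core of any wavelet-based h-adaptive method is a detail-coefficient sensor: cells whose wavelet details exceed a tolerance are refined, and the rest stay coarse. The following 1D Haar sketch (assumed names, even-length input) illustrates the flagging step only; the paper applies the idea to multidimensional RANS grids.

```python
import numpy as np

def flag_for_refinement(u, eps):
    """1D Haar refinement sensor: the detail coefficient of each cell
    pair measures how poorly the coarse average represents the fine
    values; large details mark shock/shear regions needing small cells.
    Assumes len(u) is even."""
    u = np.asarray(u, dtype=float)
    coarse = 0.5 * (u[0::2] + u[1::2])   # parent-cell averages
    detail = 0.5 * (u[0::2] - u[1::2])   # Haar detail coefficients
    return coarse, np.abs(detail) > eps  # True where refinement is needed

# Applying the sensor recursively to `coarse` yields flags per level,
# i.e. the multiresolution structure an h-adaptive solver works on.
```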

    Scalable wavelet-based coding of irregular meshes with interactive region-of-interest support

    This paper proposes a novel functionality in wavelet-based irregular mesh coding: interactive region-of-interest (ROI) support. The proposed approach enables the user to define arbitrary ROIs at the decoder side and to prioritize and decode these regions at arbitrarily high granularity levels. In this context, a novel adaptive wavelet transform for irregular meshes is proposed, which enables 1) varying the resolution across the surface at arbitrarily fine granularity levels and 2) dynamic tiling, which adapts the tile sizes to the local sampling densities at each resolution level. The proposed tiling approach enables a rate-distortion-optimal distribution of rate across spatial regions. When limiting the highest-resolution ROI to the visible regions, the fine granularity of the proposed adaptive wavelet transform reduces the required amount of graphics memory by up to 50%. Furthermore, the required graphics memory for an arbitrarily small ROI becomes negligible compared to rendering without ROI support, independent of any tiling decisions. Random access is provided by a novel dynamic tiling approach, which proves to be particularly beneficial for large models of 10^6 to 10^7 vertices. The experiments show that the dynamic tiling introduces a limited lossless rate penalty compared to an equivalent codec without ROI support. Additionally, rate savings of up to 85% are observed while decoding ROIs of tens of thousands of vertices.
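    One way to picture the ROI prioritization is as a scheduler over (tile, resolution level) decode steps, in which tiles intersecting the user-selected ROI reach the finest wavelet level before any background tile is refined. The sketch below is purely illustrative; the priority rule and all names are assumptions, not the paper's codec.

```python
import heapq

def roi_decode_order(tile_ids, roi_ids, levels):
    """Yield (tile, level) decode steps, finishing all resolution
    levels of ROI tiles before refining any background tile."""
    heap = []
    for t in tile_ids:                        # tile ids assumed comparable
        base = 0 if t in roi_ids else levels  # ROI tiles rank first
        for lvl in range(levels):
            heapq.heappush(heap, (base + lvl, lvl, t))
    while heap:
        _, lvl, t = heapq.heappop(heap)
        yield t, lvl

# Example: tiles 2 and 5 form the ROI of a 6-tile model, 3 levels deep.
for tile, level in roi_decode_order(range(6), {2, 5}, 3):
    pass  # decode wavelet band `level` of `tile` here
```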

    Finite element analysis of forward extrusion of 1010 steel

    The reliability of FE simulation of metal forming processes depends critically on the proper definition of material properties, the friction boundary conditions, and the details of the FE approach. To address these issues, the room-temperature strain hardening behaviour of 1010 steel was established by performing a uniaxial compression test up to a true strain of 1.5. Friction was evaluated using a ring test, with the two faces of the ring coated with a phosphate conversion layer and soap; the experimental friction results were matched with FE-established reference curves. The experimentally obtained material and friction input data were used in an FE simulation, employing Arbitrary Lagrangian Eulerian adaptive meshing, to provide valuable insight into the process of forward extrusion of an industrial component.
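    The ring-test calibration step the abstract mentions amounts to comparing a measured (height reduction, inner-diameter change) point against a family of FE-generated reference curves, one per friction factor, and picking the closest. A hedged sketch, with illustrative names and data layout:

```python
import numpy as np

def estimate_friction_factor(height_red, id_change, calib_curves):
    """Pick the friction factor m whose FE reference curve
    (inner-diameter change vs. height reduction) passes closest to
    the measured point. `calib_curves` maps m -> (h, d) arrays from
    FE runs, with h in increasing order (required by np.interp)."""
    best_m, best_err = None, np.inf
    for m, (h, d) in calib_curves.items():
        err = abs(np.interp(height_red, h, d) - id_change)
        if err < best_err:
            best_m, best_err = m, err
    return best_m
```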

    Colour volumetric compression for realistic view synthesis applications
