3,504 research outputs found

    Optimizing Lossy Compression Rate-Distortion from Automatic Online Selection between SZ and ZFP

    Full text link
    With ever-increasing volumes of scientific data produced by HPC applications, significantly reducing data size is critical because of the limited capacity of storage space and potential bottlenecks on I/O or networks when writing, reading, or transferring data. SZ and ZFP are the two leading lossy compressors available for scientific data sets. However, their performance is not consistent across different data sets, or even across different fields of the same data set: some fields are better compressed by SZ, while others are better compressed by ZFP. This situation calls for an automatic online (during compression) selection between SZ and ZFP with minimal overhead. In this paper, the automatic selection optimizes the rate-distortion, an important statistical quality metric based on the signal-to-noise ratio. To optimize for rate-distortion, we investigate the principles of SZ and ZFP. We then propose an efficient online, low-overhead selection algorithm that accurately predicts the compression quality of the two compressors in early processing stages and selects the best-fit compressor for each data field. We implement the selection algorithm in an open-source library and evaluate the effectiveness of our proposed solution against plain SZ and ZFP in a parallel environment with 1,024 cores. Evaluation results on three data sets comprising about 100 fields show that our selection algorithm improves the compression ratio by up to 70% at the same level of data distortion, thanks to very accurate selection (around 99%) of the best-fit compressor, with little overhead (less than 7% in the experiments). Comment: 14 pages, 9 figures, first revision
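    The selection idea can be sketched compactly. Below is a minimal, hypothetical Python illustration: it scores each candidate compressor on a small sample of the field using a PSNR-per-bit heuristic as a stand-in for the paper's rate-distortion predictor, and the quantize-then-deflate `make_quantizer` placeholders are toys, not the actual SZ or ZFP APIs.

```python
import zlib
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB: the distortion half of rate-distortion."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")
    value_range = float(original.max() - original.min())
    return 20 * np.log10(value_range) - 10 * np.log10(mse)

def select_compressor(field, compressors, sample_fraction=0.05):
    """Pick the compressor with the best quality-per-bit on a small sample.

    `compressors` maps a name to a function returning
    (compressed_size_in_bytes, reconstructed_sample). Scoring only a sample
    keeps the selection overhead low, in the spirit of the paper's
    early-stage prediction (the score itself is a heuristic, not theirs).
    """
    sample = field.ravel()[::max(1, round(1 / sample_fraction))].copy()
    best_name, best_score = None, -np.inf
    for name, compress in compressors.items():
        size, recon = compress(sample)
        bits_per_value = 8.0 * size / sample.size
        score = psnr(sample, recon) / bits_per_value
        if score > best_score:
            best_name, best_score = name, score
    return best_name

def make_quantizer(abs_err):
    """Toy stand-in for SZ/ZFP: uniform quantization plus DEFLATE."""
    def compress(data):
        q = np.round(data / (2 * abs_err)).astype(np.int32)
        return len(zlib.compress(q.tobytes())), q * (2 * abs_err)
    return compress

field = np.random.default_rng(0).random((64, 64, 64), dtype=np.float32)
print(select_compressor(field, {"coarse": make_quantizer(1e-2),
                                "fine": make_quantizer(1e-4)}))
```

    Sampling only a fraction of each field keeps the per-field selection cost small, which is what makes an online (during compression) decision affordable.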

    ROI coding of volumetric medical images with application to visualisation

    Get PDF

    Compression and registration of 3D point clouds using GMMs

    Get PDF
    3D data sensors provide an enormous amount of information. It is necessary to develop efficient methods to manage this information under given time, bandwidth, or storage space requirements. In this work, we propose a 3D compression and decompression method that also allows the compressed data to be used in a registration process. First, points are selected and grouped using a 3D model based on planar surfaces. Next, we use a fast variant of Gaussian Mixture Models and an Expectation-Maximization algorithm to replace the points grouped in the previous step with a set of Gaussian distributions. These learned models can be used as features to find matches between two consecutive poses and to apply 3D pose registration using RANSAC. Finally, the 3D map can be obtained by decompressing the models. This work has been supported by the Spanish Government grant TIN2016-76515-R, with FEDER funds.
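    The compress-by-mixture step can be illustrated with scikit-learn's standard EM-fitted GaussianMixture standing in for the paper's fast GMM variant; the near-planar cluster, component count, and sampling-based decompression below are assumptions for the sketch, and the RANSAC registration step is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def compress_cluster(points, n_components=4, seed=0):
    """Replace a cluster of 3D points with a small Gaussian mixture.

    Only the mixture parameters are kept, so N*3 coordinates shrink to
    n_components weights, means, and covariances.
    """
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(points)

def decompress_cluster(gmm, n_points):
    """Approximate the original cluster by sampling the learned mixture."""
    samples, _ = gmm.sample(n_points)
    return samples

# Hypothetical near-planar cluster: points close to the z = 0.1x + 0.2y plane.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(5000, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + rng.normal(scale=0.01, size=5000)
cluster = np.column_stack([xy, z])

gmm = compress_cluster(cluster)
recon = decompress_cluster(gmm, len(cluster))
stored = gmm.weights_.size + gmm.means_.size + gmm.covariances_.size
print(f"raw values: {cluster.size}, stored mixture values: {stored}")
```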

    Medical imaging analysis with artificial neural networks

    Get PDF
    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to resolve a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be expanded further to resolve problems relevant to medical imaging. The concluding section includes a comparison across many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.

    Hierarchical Autoencoder-based Lossy Compression for Large-scale High-resolution Scientific Data

    Full text link
    Lossy compression has become an important technique to reduce data size in many domains. This type of compression is especially valuable for large-scale scientific data, whose size ranges up to several petabytes. Although autoencoder-based models have been successfully used to compress images and videos, such neural networks have not gained wide attention in the scientific data domain. Our work presents a neural network that not only significantly compresses large-scale scientific data but also maintains high reconstruction quality. The proposed model is tested with publicly available scientific benchmark data and applied to a large-scale, high-resolution climate modeling data set. Our model achieves a compression ratio of 140 on several benchmark data sets without compromising reconstruction quality. Simulation data from the High-Resolution Community Earth System Model (CESM) Version 1.3 spanning 500 years is also compressed at a ratio of 200 while the reconstruction error remains negligible for scientific analysis. Comment: 15 pages, 15 figures
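    The underlying principle can be reduced to a toy example: an autoencoder whose bottleneck width fixes the pre-entropy-coding compression ratio of flattened data blocks. The PyTorch sketch below, with assumed block and latent sizes and random stand-in data, is not the paper's hierarchical architecture.

```python
import torch
import torch.nn as nn

class BlockAutoencoder(nn.Module):
    """Toy autoencoder for flattened 8x8x8 blocks of a scientific field.

    The bottleneck width sets the pre-entropy-coding compression ratio:
    512 inputs -> 8 latent values is a 64x reduction.
    """
    def __init__(self, block_values=512, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(block_values, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, block_values))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Random stand-in data; real use would stream blocks of simulation output.
model = BlockAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
blocks = torch.rand(256, 512)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(blocks), blocks)
    loss.backward()
    opt.step()
print("reconstruction MSE:", loss.item())
```

    Storing the latent vectors (plus the decoder weights, amortized over the whole data set) is what yields the compression; the reconstruction error is what the paper's quality evaluation bounds.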

    3D Reconstruction of Small Solar System Bodies using Rendered and Compressed Images

    Get PDF
    Synthetic image generation and reconstruction of Small Solar System Bodies, and the influence of compression on them, is becoming an important study topic because of the advent of small spacecraft in deep space missions. Most of these missions are fly-by scenarios, for example the Comet Interceptor mission. Because of the limited data budgets of small satellite missions, maximising scientific return requires investigating the effects of lossy compression. A preliminary simulation pipeline had been developed that uses physics-based rendering in combination with procedural terrain generation to overcome the limitations of currently used methods for image rendering, such as the Hapke model. The rendered Small Solar System Body images are combined with a star background and photometrically calibrated to represent realistic imagery. Subsequently, a Structure-from-Motion pipeline reconstructs three-dimensional models from the rendered images. In this work, the preliminary simulation pipeline was developed further into the Space Imaging Simulator for Proximity Operations software package, and a compression package was added. The compression package was used to investigate the effects of lossy compression on the reconstructed models and the data reduction achievable with lossy relative to lossless compression. Several scenarios with fly-by distances ranging from 50 km to 400 km and body sizes of 1 km and 10 km were simulated and compressed losslessly with PNG and at several quality levels of lossy compression with JPEG 2000. It was found that low compression ratios introduce artefacts resembling random noise, while high compression ratios remove surface features. The random-noise artefacts introduced by low compression ratios frequently increased the number of vertices and faces of the reconstructed three-dimensional model.
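    The lossless-versus-lossy size comparison at the core of the study can be sketched as follows, assuming Pillow built with OpenJPEG for JPEG 2000 encoding and a smooth synthetic gradient standing in for a rendered fly-by frame; the target rates are illustrative, not the paper's settings.

```python
import io
import numpy as np
from PIL import Image  # JPEG 2000 support requires Pillow built with OpenJPEG

def encoded_size(image, fmt, **save_kwargs):
    """Encode in memory and return the byte count."""
    buf = io.BytesIO()
    image.save(buf, format=fmt, **save_kwargs)
    return buf.getbuffer().nbytes

# Smooth synthetic pattern as a stand-in for a rendered fly-by frame.
x = np.linspace(0, 1, 512)
pattern = np.outer(np.sin(8 * np.pi * x), np.cos(8 * np.pi * x))
frame = Image.fromarray(((pattern + 1) / 2 * 255).astype(np.uint8))

raw = 512 * 512                         # one byte per pixel, uncompressed
png = encoded_size(frame, "PNG")        # lossless baseline
for rate in (10, 40, 160):              # illustrative target rates
    j2k = encoded_size(frame, "JPEG2000", quality_mode="rates",
                       quality_layers=[rate], irreversible=True)
    print(f"PNG {raw / png:.1f}x  vs  JPEG2000@{rate}: {raw / j2k:.1f}x")
```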

    Volumetric Medical Images Visualization on Mobile Devices

    Get PDF
    Volumetric medical image visualization is an important tool in the diagnosis and treatment of diseases. Throughout history, one of the most difficult tasks for medical specialists has been the accurate location of broken bones and of damaged tissues during chemotherapy treatment, among other applications such as techniques used in neurological studies. These situations underline the need for visualization in medicine. New technologies, the improvement and development of new hardware and software, and the updating of existing graphics applications have resulted in specialized systems for medical visualization. However, the use of these techniques on mobile devices has been limited by their low performance. In our work, we propose a client-server scheme in which the model is compressed on the server side and reconstructed on the final thin-client device. The technique restricts the data to the natural density values that yield good bone visualization in medical models, setting the rest of the data to zero. Our proposal applies a three-dimensional Haar wavelet function locally inside unit blocks of 16x16x16, similar to the Wavelet Based 3D Compression Scheme for Interactive Visualization of Very Large Volume Data approach. We also implement a quantization algorithm that handles the error coefficients according to their frequency distributions. Finally, we evaluate the volume visualization on current mobile devices and present the specifications for the implementation of our technique on the Nokia N900 mobile phone.
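    The block transform can be sketched as a single-level separable Haar step on a 16x16x16 block followed by uniform quantization; the density-windowing step, the frequency-aware quantization, and the inverse transform of the paper are omitted, and the smooth test block is a stand-in for windowed CT data.

```python
import numpy as np

def haar_1d(a, axis):
    """One unnormalized Haar analysis level along `axis`: averages, then details."""
    a = np.moveaxis(a, axis, 0)
    avg = (a[0::2] + a[1::2]) / 2
    det = (a[0::2] - a[1::2]) / 2
    return np.moveaxis(np.concatenate([avg, det]), 0, axis)

def haar_3d(block):
    """Separable 3D Haar step, applied locally to one 16x16x16 block."""
    for axis in range(3):
        block = haar_1d(block, axis)
    return block

def quantize(coeffs, step=0.05):
    """Uniform quantization; small detail coefficients map to zero,
    which is what a subsequent entropy coder exploits."""
    return np.round(coeffs / step).astype(np.int16)

# Smooth stand-in block; real input would be density-windowed CT data.
x, y, z = np.meshgrid(*[np.linspace(0, 1, 16)] * 3, indexing="ij")
block = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) * z
q = quantize(haar_3d(block))
print("zero coefficients:", np.count_nonzero(q == 0), "of", q.size)
```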