
    Optimization of a new digital image compression algorithm based on nonlinear dynamical systems

    In this paper we discuss the formulation, research and development of an optimization process for a new compression algorithm known as DYNAMAC, which has its basis in nonlinear systems theory. We establish that by increasing the measure of randomness of the signal, the peak signal-to-noise ratio, and in turn the quality of compression, can be improved to a great extent. Through exhaustive testing, this measure, entropy, is linked to the peak signal-to-noise ratio (PSNR, a measure of quality), and we establish that entropy alone can drive the compression process towards optimization. We also introduce an Adaptive Huffman algorithm to improve the compression ratio of the current algorithm without incurring any losses during transmission (Huffman coding being a lossless scheme). A rough illustration of the two quantities involved is sketched below.
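    The abstract ties the entropy of the signal to the PSNR achieved after compression. As a rough illustration only (this is not the DYNAMAC algorithm, which the abstract does not detail), the Python sketch below computes the Shannon entropy of an 8-bit image and the PSNR between an original and a reconstructed image; the synthetic arrays are placeholders for real data.

```python
import numpy as np

def shannon_entropy(image):
    """Shannon entropy (bits/pixel) of an 8-bit greyscale image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                                   # ignore empty histogram bins
    return -np.sum(p * np.log2(p))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stand-ins for a real image and its lossy reconstruction.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-2, 3, size=original.shape)
reconstructed = np.clip(original.astype(np.int64) + noise, 0, 255).astype(np.uint8)

print(f"entropy = {shannon_entropy(original):.2f} bits/pixel, "
      f"PSNR = {psnr(original, reconstructed):.1f} dB")
```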

    Electron Cloud Mitigation by Fast Bunch Compression in the CERN PS

    A fast transverse instability was observed with nominal LHC beams in the CERN Proton Synchrotron (PS) in 2006. The instability develops within less than 1 ms, starting when the bunch length decreases below a threshold of 11.5 ns during the RF procedure to shorten the bunches immediately prior to extraction. An alternative longitudinal beam manipulation, double bunch rotation, has been proposed to compress the bunches from 14 ns to the 4 ns required at extraction within 0.9 ms, saving some 4.5 ms with respect to the present compression scheme. The resultant bunch length is found to be equivalent for both schemes. In addition, electron cloud and vacuum measurements confirm that the development of an electron cloud and the onset of an associated fast pressure rise are delayed with the new compression scheme. Beam dynamics simulations and measurements of the double bunch rotation are presented, as well as evidence for its beneficial effect from the electron cloud standpoint; a toy illustration of the underlying phase-space rotation follows.
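    The bunch rotation exploits the fact that, to first order, particles in the longitudinal phase space of an RF bucket rotate at the synchrotron frequency, so a bunch that is long in time and narrow in energy becomes short in time after a quarter of a synchrotron period. The Python sketch below is only a toy illustration of this linearised rotation, not a model of the PS RF gymnastics; the 3.6 ms synchrotron period and the energy-spread scaling are assumptions chosen to reproduce the quoted compression from 14 ns to roughly 4 ns in 0.9 ms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy longitudinal phase space with illustrative numbers only: the bunch starts
# with a 4-sigma length of ~14 ns; the energy spread is expressed as the ~4 ns
# time spread it maps onto after a quarter rotation.
n = 20_000
t = rng.normal(0.0, 14.0 / 4.0, n)   # time offsets (ns)
e = rng.normal(0.0, 4.0 / 4.0, n)    # energy offsets in equivalent ns

# Linearised synchrotron motion is a rotation in (t, e); an assumed synchrotron
# period of 3.6 ms puts the quarter turn (shortest bunch) at 0.9 ms.
ws = 2.0 * np.pi / 3.6e-3            # angular synchrotron frequency (rad/s)

for step_ms in (0.0, 0.3, 0.6, 0.9):
    theta = ws * step_ms * 1e-3
    t_rot = t * np.cos(theta) - e * np.sin(theta)
    print(f"after {step_ms:.1f} ms: 4-sigma bunch length ~ {4 * t_rot.std():.1f} ns")
```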

    Compression Methods for Structured Floating-Point Data and their Application in Climate Research

    The use of new technologies, such as GPU boosters, has led to a dramatic increase in the computing power of High-Performance Computing (HPC) centres. This development, coupled with new climate models that can better utilise this computing power thanks to software development and internal design, has moved the bottleneck from solving the differential equations describing Earth's atmospheric interactions to actually storing the variables. The current approach to solving the storage problem is inadequate: either the number of variables to be stored is limited or the temporal resolution of the output is reduced. If it is subsequently determined that another variable is required which has not been saved, the simulation must be run again. This thesis deals with the development of novel compression algorithms for structured floating-point data such as climate data so that they can be stored in full resolution. Compression is performed by decorrelation and subsequent coding of the data. The decorrelation step eliminates redundant information in the data. During coding, the actual compression takes place and the data is written to disk. A lossy compression algorithm additionally has an approximation step to unify the data for better coding. The approximation step reduces the complexity of the data for the subsequent coding, e.g. by using quantisation. This work makes a new scientific contribution to each of the three steps described above. This thesis presents a novel lossy compression method for time-series data using an Auto Regressive Integrated Moving Average (ARIMA) model to decorrelate the data. In addition, the concept of information spaces and contexts is presented to use information across dimensions for decorrelation. Furthermore, a new coding scheme is described which reduces the weaknesses of the eXclusive-OR (XOR) difference calculation and achieves a better compression factor than current lossless compression methods for floating-point numbers. Finally, a modular framework is introduced that allows the creation of user-defined compression algorithms. The experiments presented in this thesis show that it is possible to increase the information content of lossily compressed time-series data by applying an adaptive compression technique which preserves selected data with higher precision. An analysis of lossless compression for these time series showed no success. However, the lossy ARIMA compression model proposed here is able to capture all relevant information: the reconstructed data reproduce the time series to such an extent that statistically relevant information for the description of climate dynamics is preserved. Experiments indicate that there is a significant dependence of the compression factor on the selected traversal sequence and the underlying data model. The influence of these structural dependencies on prediction-based compression methods is investigated in this thesis. For this purpose, the concept of Information Spaces (IS) is introduced. IS improves the predictions of the individual predictors by nearly 10% on average; perhaps more importantly, the standard deviation of the compression results is on average 20% lower, so using IS provides better predictions and more consistent compression results. Furthermore, it is shown that shifting the prediction and the true value leads to a better compression factor with minimal additional computational cost.
    This allows the use of more resource-efficient prediction algorithms to achieve the same or a better compression factor, or a higher throughput during compression or decompression. The coding scheme proposed here achieves a better compression factor than current state-of-the-art methods. Finally, this thesis presents a modular framework for the development of compression algorithms. The framework supports the creation of user-defined predictors and offers functionalities such as the execution of benchmarks, the random subdivision of n-dimensional data, the quality evaluation of predictors, the creation of ensemble predictors and the execution of validity tests for sequential and parallel compression algorithms. This research was initiated because of the needs of climate science, but the application of its contributions is not limited to it. The results of this thesis are of major benefit for developing and improving any compression algorithm for structured floating-point data. A generic sketch of the prediction-plus-XOR idea referred to above is given below.
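    The coding scheme discussed above builds on prediction followed by an XOR of the predicted and true IEEE-754 bit patterns. The following Python sketch is not the coder proposed in the thesis; it only illustrates the generic idea with a last-value predictor and a naive byte-oriented encoding of the XOR residual (one length byte, then the residual with its leading zero bytes stripped). The sample series is made up.

```python
import struct

def f2b(x: float) -> int:
    """IEEE-754 double as a 64-bit unsigned integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def compress(values):
    """Last-value prediction + XOR residual, coded as length byte + payload."""
    out = bytearray()
    prev = 0.0
    for x in values:
        resid = f2b(x) ^ f2b(prev)                     # XOR of predicted and true bits
        payload = resid.to_bytes(8, "big").lstrip(b"\x00")
        out.append(len(payload))                        # 0..8 bytes follow
        out += payload
        prev = x                                        # predictor: previous value
    return bytes(out)

def decompress(blob, count):
    values, pos, prev = [], 0, 0.0
    for _ in range(count):
        n = blob[pos]; pos += 1
        resid = int.from_bytes(blob[pos:pos + n], "big"); pos += n
        bits = resid ^ f2b(prev)
        x = struct.unpack("<d", struct.pack("<Q", bits))[0]
        values.append(x); prev = x
    return values

series = [21.5, 21.5, 21.75, 21.75, 22.0, 22.0]         # made-up smooth time series
blob = compress(series)
assert decompress(blob, len(series)) == series          # bit-exact (lossless) round trip
print(f"{len(series) * 8} raw bytes -> {len(blob)} coded bytes")
```

    Close or repeated values share their high-order bits, so the XOR residual has many leading zero bytes; a real coder would also exploit trailing zeros and variable-length bit codes, which is the weakness of the plain XOR difference that the thesis addresses.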

    FPC: A New Approach to Firewall Policies Compression

    Firewalls are crucial elements that enhance network security by examining the field values of every packet and deciding whether to accept or discard it according to the firewall policies. With the development of networks, the number of rules in firewalls has rapidly increased, consequently degrading network performance. In addition, because most real-life firewalls are plagued with policy conflicts, malicious traffic can be allowed or legitimate traffic can be blocked. Given the complexity of firewall policies, it is therefore very important to reduce the number of rules in a firewall while keeping the rule semantics unchanged and the resulting firewall rules conflict-free. In this study, we make three major contributions. First, we present a new approach in which a geometric model, the multidimensional rectilinear polygon, is constructed for the firewall rules compression problem. Second, we propose a new scheme, Firewall Policies Compression (FPC), to compress the multidimensional firewall rules based on this geometric model. Third, we conducted extensive experiments to evaluate the performance of the proposed method. The experimental results demonstrate that the FPC method outperforms existing approaches in terms of compression ratio and efficiency while maintaining conflict-free firewall rules. A simplified illustration of rule compression is sketched below.
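    FPC itself operates on a multidimensional rectilinear-polygon model, which is not reproduced here. As a much simpler, one-dimensional illustration of why semantics-preserving rule compression is possible, the Python sketch below merges adjacent, non-overlapping port ranges that share the same action; the rule set is hypothetical, and the hard part that FPC addresses (multiple fields, overlaps, conflicts, first-match order) is deliberately left out.

```python
# Rules are (low, high, action) ranges over a single field, e.g. destination
# port. Assumed non-overlapping, so merging same-action neighbours cannot
# change which packets are accepted or discarded.
def merge_ranges(rules):
    """Merge adjacent or overlapping ranges that share the same action."""
    merged = []
    for low, high, action in sorted(rules):
        if merged and merged[-1][2] == action and low <= merged[-1][1] + 1:
            prev_low, prev_high, _ = merged.pop()
            merged.append((prev_low, max(prev_high, high), action))
        else:
            merged.append((low, high, action))
    return merged

# Hypothetical port-based rules: four rules collapse to two, same semantics.
rules = [(80, 80, "accept"), (81, 90, "accept"), (91, 100, "accept"), (443, 443, "accept")]
print(merge_ranges(rules))   # [(80, 100, 'accept'), (443, 443, 'accept')]
```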

    3D video compression based on high efficiency video coding

    With the advent of autostereoscopic displays, questions arise as to how to efficiently compress the video information needed by such displays. Additionally, for gradual market acceptance of this new technology, it is valuable to have a solution offering forward compatibility with stereo 3D video as it is used nowadays. In this paper, a multiview compression scheme making use of the efficient single-view coding tools of High Efficiency Video Coding (HEVC) is provided. Although efficient single-view compression can be obtained with HEVC, a multiview adaptation of this standard, currently under development, is proposed, offering additional coding gains. On average, for the texture information, the total bitrate can be reduced by 37.2% compared to simulcast HEVC. For depth map compression, gains largely depend on the quality of the captured content. Additionally, a forward compatible solution is proposed, offering the possibility of a gradual upgrade from H.264/AVC-based stereoscopic 3D systems to an HEVC-based autostereoscopic environment. With the proposed system, significant rate savings compared to Multiview Video Coding (MVC) are demonstrated.

    Robust Watermarking through Dual Band IWT and Chinese Remainder Theorem

    The Chinese Remainder Theorem (CRT) has been widely used in the development of watermarking methods. CRT-based embedding produces good image quality but has low robustness against compression and filtering. This paper proposes a new watermarking scheme based on a dual-band Integer Wavelet Transform (IWT) to improve robustness while preserving image quality. The high-frequency sub-band is used to index the embedding locations in the low-frequency sub-band. In the robustness tests, the CRT method achieved average NC values of 0.7129, 0.4846, and 0.6768, whereas the proposed method achieved higher NC values of 0.7902, 0.7473, and 0.8163 under Gaussian filtering, JPEG compression, and JPEG2000 compression, respectively. In terms of image quality, the CRT method and the proposed method achieved similar average SSIM values of 0.9979 and 0.9960, respectively. The results show that the proposed method improves robustness while maintaining image quality. The sub-band split and the NC measure are illustrated below.
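    The scheme operates on the low- and high-frequency sub-bands of an integer wavelet transform and reports robustness as NC values. The Python sketch below is not the proposed method; it only shows a one-level integer Haar (S-transform) lifting step, which is exactly invertible and therefore gives a lossless low/high sub-band split, together with one common definition of the normalised correlation. The pixel row and watermark bits are made up.

```python
import numpy as np

def haar_iwt_1d(x):
    """One level of the integer Haar (S) transform: low band s, high band d.
    Integer lifting is exactly invertible, so no rounding loss is introduced."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b                      # high-frequency (detail) band
    s = b + (d // 2)               # low-frequency (approximation) band
    return s, d

def haar_iwt_1d_inverse(s, d):
    b = s - (d // 2)
    a = b + d
    out = np.empty(s.size + d.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

def normalized_correlation(w_ref, w_ext):
    """One common NC definition between reference and extracted watermark bits."""
    w_ref, w_ext = np.asarray(w_ref, float), np.asarray(w_ext, float)
    return float(np.sum(w_ref * w_ext) / np.sqrt(np.sum(w_ref ** 2) * np.sum(w_ext ** 2)))

row = np.array([52, 55, 61, 59, 79, 61, 76, 61], dtype=np.int64)   # made-up pixel row
s, d = haar_iwt_1d(row)
assert np.array_equal(haar_iwt_1d_inverse(s, d), row)               # lossless round trip
print("low band:", s, "high band:", d)
print("NC:", normalized_correlation([1, 1, 0, 1], [1, 1, 0, 0]))
```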