    Emerging standards for still image compression: A software implementation and simulation study

    A software implementation of an emerging standard for the lossy compression of continuous-tone still images is described. The program can be used to compress planetary images and other 2-D instrument data. It provides high-compression image coding that preserves image fidelity at compression rates competitive with or superior to most known techniques. The implementation confirms the usefulness of such data compression and allows its performance to be compared with other schemes used in deep-space missions and for database storage.
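    The abstract ships no code; as a hedged, minimal illustration of what lossy continuous-tone image coding does (the emerging standard described is presumably the one that became JPEG), the Python sketch below compresses a grayscale image at several quality settings with the Pillow library and reports the resulting compression ratios. The file name "planet.png" is a placeholder, not a file from the study.

        # Minimal sketch, not the paper's software: lossy JPEG compression of a
        # continuous-tone grayscale image at several quality settings.
        from io import BytesIO

        from PIL import Image

        img = Image.open("planet.png").convert("L")  # placeholder file; 8-bit grayscale
        raw_bytes = img.width * img.height           # uncompressed size, 1 byte/pixel

        for quality in (90, 75, 50, 25):
            buf = BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            print(f"quality={quality:3d}  {buf.tell():7d} B  ratio={raw_bytes / buf.tell():.1f}:1")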

    A high-speed distortionless predictive image-compression scheme

    A high-speed, distortionless (lossless) predictive image-compression scheme is introduced, based on modeling the output of differential pulse-code modulation (DPCM) combined with efficient source-code design. Experimental results show that the scheme achieves compression very close to the difference entropy of the source.
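    As a minimal sketch of the DPCM idea this scheme builds on (not the paper's coder), the Python below predicts each sample from its left neighbour and compares the empirical entropy of the prediction residuals with that of the raw signal; an actual coder would then entropy-code the residuals, approaching the difference entropy cited above. The toy "image row" is fabricated for illustration.

        # Sketch: previous-sample (DPCM) prediction shrinks the entropy that a
        # lossless source code must pay for. Assumes only numpy.
        import numpy as np

        def entropy_bits(values):
            """Empirical zeroth-order entropy in bits per sample."""
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(0)
        t = np.linspace(0, 20, 4096)
        # Smooth, noisy toy scanline standing in for real image data.
        row = np.clip(128 + 80 * np.sin(t) + rng.normal(0, 2, t.size), 0, 255).astype(np.int16)
        residuals = np.diff(row)  # previous-pixel predictor

        print(f"raw signal : {entropy_bits(row):.2f} bits/sample")
        print(f"residuals  : {entropy_bits(residuals):.2f} bits/sample")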

    Estimating the Algorithmic Complexity of Stock Markets

    Randomness and regularities in finance are usually treated in probabilistic terms. In this paper, we develop a completely different, non-probabilistic approach based on the algorithmic information theory initially developed by Kolmogorov (1965). We present some elements of this theory and show why it is particularly relevant to finance, and potentially to other subfields of economics as well. We develop a generic method to estimate the Kolmogorov complexity of numeric series, based on an iterative "regularity erasing procedure" that applies lossless compression algorithms to financial data. Examples are provided with both simulated and real-world financial time series. The contributions of this article are twofold. The first is methodological: we show that some structural regularities, invisible to classical statistical tests, can be detected by this algorithmic method. The second consists of illustrations on the daily Dow Jones Index suggesting that, beyond several well-known regularities, hidden structure in this index may remain to be identified.
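    As a hedged sketch of the general compression-based approach (not the authors' exact "regularity erasing procedure"), the Python below quantizes a numeric series, compresses it with a lossless algorithm, and compares the result against a shuffled copy: a markedly smaller compressed size for the original signals structural regularity that a purely distributional test would miss, since shuffling preserves the distribution exactly. The simulated "price" path is fabricated for illustration.

        # Sketch: lossless compression as a proxy for Kolmogorov complexity.
        import zlib

        import numpy as np

        def compressed_size(series, levels=256):
            """Quantize to `levels` symbols, return the zlib-compressed byte count."""
            lo, hi = series.min(), series.max()
            symbols = ((series - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
            return len(zlib.compress(symbols.tobytes(), 9))

        rng = np.random.default_rng(1)
        n = 10_000
        series = np.cumsum(rng.normal(0, 1, n)) + 5 * np.sin(np.arange(n) / 50)  # toy "price" path
        print("original :", compressed_size(series), "bytes")
        print("shuffled :", compressed_size(rng.permutation(series)), "bytes")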

    Data Streams from the Low Frequency Instrument On-Board the Planck Satellite: Statistical Analysis and Compression Efficiency

    The expected data rate produced by the Low Frequency Instrument (LFI) planned to fly on the ESA Planck mission in 2007 is more than a factor of 8 larger than the bandwidth allowed by the spacecraft transmission system for downloading the LFI data. We discuss the application of lossless compression to Planck/LFI data streams in order to reduce the overall data flow. We perform both theoretical analysis and experimental tests on realistically simulated data streams in order to establish the statistical properties of the signal and the maximal compression rate achievable by several lossless compression algorithms. We study the influence of the signal composition and of the acquisition parameters on the compression rate Cr and develop a semiempirical formalism to account for it. The best-performing compressor tested so far is arithmetic compression of order 1, designed to optimize the compression of white-noise-like signals, which yields an overall compression rate Cr = 2.65 +/- 0.02. We find that this result is not improved by other lossless compressors, since the signal is almost entirely white-noise dominated. Lossless compression algorithms alone will not solve the bandwidth problem and need to be combined with other techniques.
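    As a minimal, illustrative stand-in for the kind of test described (zlib here rather than the paper's order-1 arithmetic coder, and simulated Gaussian noise rather than realistic LFI streams), the Python below quantizes white noise to 16-bit samples, compresses it losslessly, and reports the compression rate Cr = raw size / compressed size.

        # Sketch: measuring Cr on a noise-dominated, quantized data stream.
        import zlib

        import numpy as np

        rng = np.random.default_rng(2)
        samples = np.round(rng.normal(0, 4, 100_000)).astype(np.int16)  # toy ADU noise
        raw = samples.tobytes()
        print(f"Cr = {len(raw) / len(zlib.compress(raw, 9)):.2f}")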

    JPEG steganography: A performance evaluation of quantization tables

    The two most important aspects of any image-based steganographic system are the imperceptibility and the capacity of the stego image. This paper evaluates the performance and efficiency of using optimized quantization tables instead of the default JPEG tables within JPEG steganography. We found that using optimized tables significantly improves the quality of stego images. Moreover, we used this optimization strategy to generate a 16x16 quantization table to be used instead of the default one. The quality of stego images was greatly improved when these optimized tables were used. This led us to propose a new hybrid steganographic method to increase the embedding capacity, based on both 2-LSB embedding and the Jpeg-Jsteg method. In this method, for each 16x16 quantized DCT block, the two least significant bits (2-LSBs) of each middle-frequency coefficient are modified to embed two secret bits. Additionally, the Jpeg-Jsteg embedding technique is used for the low-frequency DCT coefficients, without modifying the DC coefficient. Our experimental results show that the proposed approach provides a higher information-hiding capacity than the other methods tested. Furthermore, the quality of the produced stego images is better than that of methods using the default tables.
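    As a hedged sketch of the 2-LSB step described above (one piece of the proposed hybrid, not the authors' full system), the Python below overwrites the two least significant bits of each middle-frequency coefficient in one quantized 16x16 DCT block. The band boundaries (5 <= u+v <= 10) and the toy block are illustrative assumptions; (0,0), the DC coefficient, falls outside the band and is never touched.

        # Sketch: embed two secret bits per assumed middle-frequency coefficient.
        import numpy as np

        def embed_2lsb(block, bits):
            """Replace the 2 LSBs of middle-frequency coefficients with payload bits."""
            out = block.copy()
            it = iter(bits)
            for u in range(out.shape[0]):
                for v in range(out.shape[1]):
                    if 5 <= u + v <= 10:  # assumed middle-frequency band
                        try:
                            two = next(it) << 1 | next(it)  # next two payload bits
                        except StopIteration:  # payload exhausted (may drop an odd final bit)
                            return out
                        out[u, v] = (out[u, v] & ~3) | two  # clear, then set the 2 LSBs
            return out

        rng = np.random.default_rng(3)
        block = rng.integers(-8, 8, (16, 16))  # toy quantized DCT block
        stego = embed_2lsb(block, rng.integers(0, 2, 64).tolist())
        print("coefficients changed:", int((stego != block).sum()))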