
    Error Modeling for Hierarchical Lossless Image Compression

    We present a new method for error modeling applicable to the MLP algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. We also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general symmetric exponential distribution further improves compression. We describe a new compression measurement called compression gain, and we give experimental results showing that the MLP method using the variability index technique for error modeling gives significantly more compression gain than other methods in the literature.
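
    A minimal sketch of the distribution-fitting comparison behind this result (the variability index itself is not reproduced here; the function names, the candidate shape parameters, and the synthetic residuals are all illustrative): fit a Laplace model and a more general symmetric exponential model to prediction residuals by maximum likelihood and compare their negative log-likelihoods.

```python
import numpy as np
from scipy.special import gammaln

def laplace_nll(errors):
    """Negative log-likelihood of residuals under a zero-mean Laplace model."""
    b = np.mean(np.abs(errors))          # ML estimate of the Laplace scale
    return len(errors) * np.log(2 * b) + np.sum(np.abs(errors)) / b

def sym_exp_nll(errors, c):
    """NLL under the symmetric exponential family p(e) ~ exp(-|e/a|^c);
    c = 1 recovers the Laplace model, c = 2 the Gaussian."""
    n = len(errors)
    a = (c * np.mean(np.abs(errors) ** c)) ** (1.0 / c)  # ML scale for fixed c
    log_norm = np.log(2 * a) + gammaln(1 + 1.0 / c)      # normalizing constant
    return n * log_norm + np.sum(np.abs(errors / a) ** c)

# A clearly lower NLL for some c != 1 indicates non-Laplacian residuals.
errors = np.random.default_rng(0).standard_t(df=4, size=10_000)
print(laplace_nll(errors),
      min(sym_exp_nll(errors, c) for c in (0.5, 1.0, 1.5, 2.0)))
```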

    Fast Progressive Lossless Image Compression

    We present a method for progressive lossless compression of still grayscale images that combines the speed of our earlier FELICS method with the progressivity of our earlier MLP method. We use MLP's pyramid-based pixel sequence, and image and error modeling and coding based on that of FELICS. In addition, we introduce a new prefix code with some advantages over the previously used Golomb and Rice codes. Our new progressive method gives compression ratios and speeds similar to those of non-progressive FELICS and those of JPEG lossless mode, also a non-progressive method. The image model in Progressive FELICS is based on a simple function of four nearby pixels. We select two of the four nearest known pixels, using the two with the middle (non-extreme) values. Then we code the pixel's intensity relative to the selected pixels, using single bits, adjusted binary codes, and simple prefix codes like Golomb codes, Rice codes, or the new family of prefix codes introduced here. We estimate the coding parameter adaptively for each context, the context being the absolute value of the difference of the predicting pixels; we adjust the adaptation statistics at the beginning of each level in the progressive pixel sequence.
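
    The modeling step described above admits a rough sketch (a simplification only: the exact bit layouts, adjusted binary codes, and per-context parameter adaptation are elided):

```python
def model_pixel(p, neighbors):
    """Sketch of the Progressive FELICS image model: of the four nearest
    known pixels, keep the two with the middle (non-extreme) values and
    code the new pixel relative to the interval [lo, hi] they span."""
    lo, hi = sorted(neighbors)[1:3]   # drop the minimum and the maximum
    context = hi - lo                 # coding parameter adapts per context
    if lo <= p <= hi:
        # one bit says "in range"; the offset gets an adjusted binary code
        return ('in', p - lo, context)
    elif p < lo:
        # one bit says "out of range", one says "below"; the distance gets
        # a Golomb/Rice (or the paper's new prefix) code
        return ('below', lo - p - 1, context)
    else:
        return ('above', p - hi - 1, context)

print(model_pixel(97, [100, 95, 110, 90]))   # -> ('in', 2, 5)
```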

    Parallel Lossless Image Compression Using Huffman and Arithmetic Coding

    We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
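
    The observation that makes variable-length coding parallelizable can be sketched as follows (illustrative names; the real algorithm works on bit streams and processors, and the loop below merely stands in for independent parallel writes): a prefix sum over the code lengths yields each pixel's output offset, after which every code can be placed independently.

```python
from itertools import accumulate

def concat_codes(codes):
    """Place variable-length codes ('0'/'1' strings) by first computing an
    exclusive prefix sum of their lengths; the per-code writes are then
    independent of one another, so they could run in parallel."""
    lengths = [len(c) for c in codes]
    offsets = [0, *accumulate(lengths)][:-1]   # exclusive prefix sum
    out = bytearray(b'0' * sum(lengths))
    for code, off in zip(codes, offsets):      # each iteration is independent
        out[off:off + len(code)] = code.encode()
    return out.decode()

print(concat_codes(['10', '0', '1101']))       # -> '1001101'
```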

    Design and Analysis of Fast Text Compression Based on Quasi-Arithmetic Coding

    We give a detailed algorithm for fast text compression. Our algorithm, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. We provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. Our Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method, while giving comparable compression.
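
    Rice coding, one half of the coding combination named above, is simple enough to sketch here (the quasi-arithmetic code tables are not; this is a generic Rice coder, not the paper's implementation):

```python
def rice_encode(n, k):
    """Rice code with parameter k: the quotient n >> k in unary
    (that many ones plus a terminating zero), then the remainder
    of n modulo 2**k in exactly k binary bits."""
    bits = '1' * (n >> k) + '0'
    if k:
        bits += format(n & ((1 << k) - 1), f'0{k}b')
    return bits

print(rice_encode(9, 2))   # quotient 2, remainder 1 -> '110' + '01' = '11001'
```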

    Fast and Efficient Lossless Image Compression

    We present a new method for lossless image compression that gives compression comparable to JPEG lossless mode with about five times the speed. Our method, called FELICS, is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding we use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter we present and analyze a provably good method for estimating the single coding parameter.
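
    The parameter-estimation idea mentioned at the end admits a compact sketch (a simplification; the paper's analysis and exact bookkeeping are not reproduced): for each context, track the total code length that every candidate Rice parameter k would have produced so far, and code each new value with the currently cheapest k.

```python
class RiceParamEstimator:
    """Per-context selection of the single Rice coding parameter k, in the
    spirit of FELICS: maintain cumulative what-if code lengths per k."""
    def __init__(self, max_k=8):
        self.cost = [0] * max_k            # total bits had we always used k

    def best_k(self):
        return min(range(len(self.cost)), key=self.cost.__getitem__)

    def update(self, n):
        for k in range(len(self.cost)):
            self.cost[k] += (n >> k) + 1 + k   # Rice code length of n

est = RiceParamEstimator()
for value in [3, 12, 7, 30, 2]:
    k = est.best_k()     # code `value` with the best k seen so far ...
    est.update(value)    # ... then fold it into the statistics
```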

    Arithmetic Coding for Data Compression

    Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.
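
    A toy floating-point encoder shows the basic mechanism (real implementations, including the table-lookup variant described above, use fixed-precision integer arithmetic with renormalization; the probability table here is illustrative):

```python
def arith_encode(symbols, intervals):
    """Each symbol narrows [low, high) in proportion to its model
    probability; any number inside the final interval identifies the
    whole sequence, costing about -log2 p(s) bits per symbol."""
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        c_lo, c_hi = intervals[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2   # any value in [low, high) would do

# 'a' owns [0, 0.6) of the unit interval, 'b' owns [0.6, 1.0)
print(arith_encode('aab', {'a': (0.0, 0.6), 'b': (0.6, 1.0)}))
```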

    Analysis of Arithmetic Coding for Data Compression

    Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is introducing the concept of weighted entropy and using it to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
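
    As background for the quantity being analyzed (the weighted-entropy formalism itself is not restated here; this is only the standard code-length identity it refines), arithmetic coding's output length tracks the model probabilities in force as each symbol is coded:

```latex
% Total code length, up to the small implementation terms the paper
% proves negligible: symbol s_t costs its self-information under the
% model probabilities p_t current when it is coded.
L \;\approx\; \sum_{t=1}^{n} -\log_2 p_t(s_t)
% Periodic scaling alters the p_t (a recency effect), which is the
% change in code length that the paper's weighted entropy captures.
```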

    Hierarchical Bin Buffering: Online Local Moments for Dynamic External Memory Arrays

    Local moments are used for local regression, to compute statistical measures such as sums, averages, and standard deviations, and to approximate probability distributions. We consider the case where the data source is a very large I/O array of size n and we want to compute the first N local moments, for some constant N. Without precomputation, this requires O(n) time. We develop a sequence of algorithms of increasing sophistication that use precomputation and additional buffer space to speed up queries. The simpler algorithms partition the I/O array into consecutive ranges called bins, and they are applicable not only to local-moment queries, but also to algebraic queries (MAX, AVERAGE, SUM, etc.). With N buffers of size sqrt(n), the time complexity drops to O(sqrt(n)). A more sophisticated approach uses hierarchical buffering and has logarithmic time complexity, O(b log_b n), when using N hierarchical buffers of size n/b. Using Overlapped Bin Buffering, we show that only a single buffer is needed, as with wavelet-based algorithms, but using much less storage. Applications exist in multidimensional and statistical databases over massive data sets, interactive image processing, and visualization.
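
    A minimal sketch of the simpler, non-hierarchical scheme for the SUM case (names and details are illustrative; the paper's algorithms also cover higher local moments and the hierarchical O(b log_b n) variant):

```python
import math

class BinBuffer:
    """One precomputed total per bin of size ~sqrt(n): a range sum touches
    at most two partial bins element-by-element and the bins in between
    via their buffered totals, for O(sqrt(n)) queries and O(1) updates."""
    def __init__(self, data):
        self.data = list(data)
        self.b = max(1, math.isqrt(len(self.data)))
        self.bins = [sum(self.data[i:i + self.b])
                     for i in range(0, len(self.data), self.b)]

    def range_sum(self, lo, hi):                   # sum of data[lo:hi]
        total, i = 0, lo
        while i < hi:
            if i % self.b == 0 and i + self.b <= hi:
                total += self.bins[i // self.b]    # whole bin: one lookup
                i += self.b
            else:
                total += self.data[i]              # partial bin: per element
                i += 1
        return total

    def update(self, i, value):                    # keep the buffer in sync
        self.bins[i // self.b] += value - self.data[i]
        self.data[i] = value

buf = BinBuffer(range(100))
print(buf.range_sum(5, 95))    # == sum(range(5, 95)) == 4455
```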

    The SBC-Tree: An Index for Run-Length Compressed Sequences

    Run-Length-Encoding (RLE) is a data compression technique that is used in various applications, e.g., biological sequence databases, multimedia, and facsimile transmission. One of the main challenges is how to operate, e.g., indexing, searching, and retrieval, on the compressed data without decompressing it. In this paper, we present the String B-tree for Compressed sequences, termed the SBC-tree, for indexing and searching RLE-compressed sequences of arbitrary length. The SBC-tree is a two-level index structure based on the well-known String B-tree and a 3-sided range query structure. The SBC-tree supports substring as well as prefix matching, and range search operations over RLE-compressed sequences. The SBC-tree has an optimal external-memory space complexity of O(N/B) pages, where N is the total length of the compressed sequences and B is the disk page size. The insertion and deletion of all suffixes of a compressed sequence of length m takes O(m log_B(N + m)) I/O operations. Substring matching, prefix matching, and range search execute in an optimal O(log_B N + (|p| + T)/B) I/O operations, where |p| is the length of the compressed query pattern and T is the query output size. We present also two variants of the SBC-tree: the SBC-tree that is based on an R-tree instead of the 3-sided structure, and the one-level SBC-tree that does not use a two-dimensional index. These variants do not have provable worst-case theoretical bounds for search operations, but perform well in practice. The SBC-tree index is realized inside PostgreSQL in the context of a biological protein database application. Performance results illustrate that using the SBC-tree to index RLE-compressed sequences achieves up to an order of magnitude reduction in storage, up to 30% reduction in I/Os for the insertion operations, and retains the optimal search performance achieved by the String B-tree over the uncompressed sequences.
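
    For concreteness, the input representation being indexed (the SBC-tree itself, a String B-tree layered over a 3-sided range structure, is far too large to sketch here):

```python
from itertools import groupby

def rle_encode(seq):
    """Run-Length Encoding: each maximal run of a repeated symbol
    becomes one (symbol, run-length) pair."""
    return [(sym, sum(1 for _ in run)) for sym, run in groupby(seq)]

def rle_decode(pairs):
    return ''.join(sym * n for sym, n in pairs)

pairs = rle_encode('AAAABBBCCD')   # -> [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(pairs) == 'AAAABBBCCD'
```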

    A simple optimal randomized algorithm for sorting on the PDM

    The Parallel Disks Model (PDM) has been proposed to alleviate the I/O bottleneck that arises in the processing of massive data sets. Sorting has been extensively studied on the PDM due to the fundamental nature of the problem. Several randomized algorithms are known for sorting. Most of the prior algorithms suffer from undue complications in memory layouts, implementation, or lack of tight analysis. In this paper we present a simple randomized algorithm that sorts in optimal time with high probability and has all the desirable features for practical implementation.