5 research outputs found

    An N-Square Approach for Reduced Complexity Non-Binary Encoding

    There is always a need for the compression of data to facilitate its easy transmission and storage. Several lossy and lossless techniques have been developed in the past few decades. Lossless techniques allow compression without any loss of information. In this paper, we propose a new algorithm for lossless compression. Our experimental results show that the proposed algorithm performs compression in fewer iterations than the existing Non-Binary Huffman coding without affecting the average number of digits required to represent the symbols, thereby reducing the complexity involved during the compression process.
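
    The baseline the abstract compares against is standard non-binary (D-ary) Huffman coding. Below is a minimal sketch of that baseline, assuming an illustrative five-symbol source and D = 3; the proposed N-Square algorithm itself is not described in the abstract and is not reproduced here.

```python
# Hedged sketch of standard D-ary (non-binary) Huffman coding, the baseline the
# abstract compares against. The paper's N-Square refinement is not reproduced;
# the alphabet, probabilities, and D = 3 below are illustrative assumptions.
import heapq
import itertools

def dary_huffman(probs, D=3):
    """Return {symbol: D-ary code string} via standard D-ary Huffman merging."""
    counter = itertools.count()            # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    # Pad with zero-probability dummies so every merge combines exactly D nodes.
    while (len(heap) - 1) % (D - 1) != 0:
        heap.append((0.0, next(counter), {}))
    heapq.heapify(heap)
    while len(heap) > 1:
        merged, total = {}, 0.0
        for digit in range(D):             # merge the D least probable nodes
            p, _, codes = heapq.heappop(heap)
            total += p
            for sym, suffix in codes.items():
                merged[sym] = str(digit) + suffix   # prepend the branch digit
        heapq.heappush(heap, (total, next(counter), merged))
    return heap[0][2]

probs = {"a": 0.4, "b": 0.3, "c": 0.15, "d": 0.1, "e": 0.05}
codes = dary_huffman(probs, D=3)
avg_digits = sum(p * len(codes[s]) for s, p in probs.items())
print(codes, round(avg_digits, 3))         # average digits per symbol = 1.3 here
```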

    N-Square Approach For Lossless Image Compression And Decompression

    Several lossy and lossless coding techniques have been developed over the last two decades. Although very high compression can be achieved with lossy techniques, they cannot recover the original image, whereas lossless compression reconstructs it exactly. Lossless techniques are required in applications such as medical imaging, where any loss of information is unacceptable. The objective of image compression is to represent an image with as few bits as possible while preserving the quality required for the given application. In this paper we introduce a new lossless encoding and decoding technique that further reduces the entropy, and thereby the average number of bits, by combining Non-Binary Huffman coding with the N-Square approach. To speed up the search for a codeword in an N-Square tree, we exploit the properties of the encoded image pixels and propose a memory-efficient data structure to represent the decoding N-Square tree. Our extensive experimental results demonstrate that the proposed scheme is very competitive and that it addresses the limitation on the D value in the existing system through the proposed N-Square pattern. The new algorithm provides an effective means for lossless image compression and decompression.
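
    The abstract does not describe the internal layout of the N-Square decoding tree, so the sketch below only illustrates the conventional operation it is meant to accelerate: decoding a D-ary prefix code by walking a trie digit by digit. The nested-dictionary trie stands in for whatever memory-efficient structure the paper proposes, and the example codes are consistent with the D-ary Huffman sketch above.

```python
# Illustration only: the abstract does not specify the N-Square tree layout,
# so this shows plain trie-based decoding of a D-ary prefix code. A nested
# dict stands in for the paper's memory-efficient structure.
def build_trie(codes):
    """codes: {symbol: digit string}; returns a nested-dict decoding trie."""
    root = {}
    for sym, code in codes.items():
        node = root
        for digit in code[:-1]:
            node = node.setdefault(digit, {})
        node[code[-1]] = sym               # a leaf stores the decoded symbol
    return root

def decode(digits, trie):
    """Walk the trie digit by digit, emitting a symbol at each leaf reached."""
    out, node = [], trie
    for d in digits:
        node = node[d]
        if not isinstance(node, dict):     # leaf: a full codeword was consumed
            out.append(node)
            node = trie
    return out

# Codes consistent with the D-ary Huffman sketch above (D = 3).
codes = {"a": "2", "b": "0", "c": "12", "d": "11", "e": "10"}
print(decode("2120112", build_trie(codes)))   # -> ['a', 'c', 'b', 'd', 'a']
```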

    Entropy and Certainty in Lossless Data Compression

    Data compression is the art of using encoding techniques to represent data symbols in less storage space than the original data representation. The encoding process builds a relationship between the entropy of the data and the certainty of the system. The theoretical limits of this relationship are defined by the theory of entropy in information proposed by Claude Shannon. Lossless data compression is uniquely tied to entropy theory, as the data and the system have a static definition. The static nature of the two requires a mechanism to reduce the entropy without the ability to alter either of these key components. This dissertation develops the Map of Certainty and Entropy (MaCE) in order to illustrate the entropy and certainty contained within an information system, and uses this concept to generate the proposed methods for prefix-free, lossless compression of static data. The first method, the Select Level Method (SLM), increases the efficiency of creating Shannon-Fano-Elias codes in terms of CPU cycles. SLM is developed using a sideways view of the compression environment provided by MaCE. This view is also used for the second contribution, Sort Linear Method Nivellate (SLMN), which uses the concepts of SLM with the addition of midpoints and a fitting function to increase the compression efficiency of SLM to entropy values L(x) < H(x) + 1. Finally, the third contribution, Jacobs, Ali, Kolibal Encoding (JAKE), extends SLM and SLMN to bases larger than binary to increase the compression even further while maintaining the same relative computational efficiency.
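
    For reference against the bound L(x) < H(x) + 1, the sketch below shows the classical Shannon-Fano-Elias construction that SLM and SLMN start from, which only guarantees H(x) <= L(x) < H(x) + 2. MaCE, SLM, SLMN, and JAKE themselves are not reproduced here; the source probabilities are illustrative.

```python
# Hedged sketch of the classical Shannon-Fano-Elias construction that SLM and
# SLMN build on; the dissertation's MaCE-based methods are not reproduced here.
# The probabilities below are illustrative.
import math

def sfe_codes(probs):
    """Standard Shannon-Fano-Elias codes: {symbol: bit string}."""
    codes, cumulative = {}, 0.0
    for sym, p in probs.items():
        fbar = cumulative + p / 2                    # midpoint of the symbol's interval
        length = math.ceil(math.log2(1 / p)) + 1     # classical SFE codeword length
        bits, frac = "", fbar
        for _ in range(length):                      # truncated binary expansion of fbar
            frac *= 2
            bit = int(frac)
            bits += str(bit)
            frac -= bit
        codes[sym] = bits
        cumulative += p
    return codes

probs = {"a": 0.25, "b": 0.5, "c": 0.125, "d": 0.125}
codes = sfe_codes(probs)
H = -sum(p * math.log2(p) for p in probs.values())          # source entropy
L = sum(p * len(codes[s]) for s, p in probs.items())        # expected code length
print(codes, round(H, 3), round(L, 3))   # classical guarantee: H <= L < H + 2
```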