
    Data Compression For Multimedia Computing

    This is a library-based study on data compression for multimedia computing. Multimedia information needs a large storage capacity because it contains vast amounts of data. This would put multimedia information out of reach of most computer users, as their PCs would not be able to store the enormous amount of data such programs accumulate. However, it is not necessary to keep these data in their original form, as there are techniques that can compress multimedia data to a more manageable size. The main objective of this study is therefore to provide information on the available compression techniques that would give PC users the opportunity to use such programs. The review of related literature reveals that there are two basic compression approaches: lossless and lossy. Under the lossless approach, Huffman Coding, Arithmetic Coding and Lempel-Ziv-Welch Coding are discussed. Under the lossy approach, the Predictive, Frequency-Oriented and Importance-Oriented techniques are discussed. Besides these two main approaches, hybrid techniques such as JPEG, MPEG and Px64 are also discussed. To tie the discussion of compression to storage media, a description of popular storage media such as magnetic disk storage and optical disc storage is also included. Although the data are from secondary sources, the writer uses a formula derived from Howard and Vitter (1992) to measure compression efficiency. Based on the data collection and analysis, it is found that different types of data (text, audio, video, etc.) should be compressed using different techniques in order to obtain the ideal compression ratio and quality. Although the writer believes that the secondary data obtained are sufficient to show the best compression techniques for the different types of multimedia data, he also believes that a real experiment using real data, software applications and hardware would give better and more precise results.
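
    As a concrete illustration of one lossless technique the study surveys, the sketch below (not code from the study itself) builds a Huffman code for a short byte string and reports a simple compression ratio. The ratio shown uses the common original-bits/compressed-bits convention, which is not necessarily the Howard and Vitter (1992) measure applied in the study.

```python
# Hypothetical sketch: Huffman coding of a byte string plus a simple
# compression-ratio figure; illustrative only, not the study's own code.
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Return a symbol -> bitstring map built from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, {symbol: code_so_far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # merge the two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    sample = b"multimedia data compression for multimedia computing"
    codes = huffman_code(sample)
    compressed_bits = sum(len(codes[s]) for s in sample)
    original_bits = 8 * len(sample)
    # original bits / compressed bits, ignoring the cost of sending the code table
    print(f"compression ratio: {original_bits / compressed_bits:.2f}:1")
```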

    Lossless compression of RGB color images

    Although much work has been done toward developing lossless algorithms for compressing image data, most techniques reported have been for two-tone or gray-scale images. It is generally accepted that a color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account the substantial correlations that are present between the color planes. Although several lossy compression schemes that exploit such correlations have been reported in the literature, we are not aware of any such techniques for lossless compression. Because of the difference in goals, the best ways of exploiting redundancies for lossy and lossless compression can be, and usually are, very different. We propose and investigate a few lossless compression schemes for RGB color images. Both prediction schemes and error modeling schemes are presented that exploit inter-plane correlations. Implementation results on a test set of images yield significant improvements.
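
    As a hedged sketch of the general idea, and not the schemes proposed in the paper, the following code predicts the red and blue planes from the co-located green samples, and the green plane from its left neighbour, so that only residuals remain to be entropy coded; the round trip shows the transform is lossless.

```python
# Illustrative inter-plane prediction for an RGB image (not the paper's scheme).
# Green is predicted spatially; red and blue are predicted from green, so only
# residuals need to be entropy coded. The transform is exactly invertible.
import numpy as np

def rgb_residuals(img: np.ndarray) -> np.ndarray:
    """img: (H, W, 3) uint8 array. Returns int16 residuals of the same shape."""
    img = img.astype(np.int16)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    g_pred = np.zeros_like(g)
    g_pred[:, 1:] = g[:, :-1]              # spatial prediction: left neighbour
    return np.stack([r - g, g - g_pred, b - g], axis=-1)

def rgb_reconstruct(res: np.ndarray) -> np.ndarray:
    """Invert rgb_residuals; demonstrates that the transform is lossless."""
    g = np.zeros(res.shape[:2], dtype=np.int16)
    for col in range(res.shape[1]):        # undo the column-wise prediction
        left = g[:, col - 1] if col > 0 else 0
        g[:, col] = res[:, col, 1] + left
    r = res[..., 0] + g
    b = res[..., 2] + g
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
    assert np.array_equal(rgb_reconstruct(rgb_residuals(img)), img)
```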

    Band Ordering in Lossless Compression of Multispectral Images

    In this paper, we consider a model of lossless image compression in which each band of a multispectral image is coded using a prediction function involving values from a previously coded band of the compression, and examine how the ordering of the bands affects the achievable compression. We present an efficient algorithm for computing the optimal band ordering for a multispectral image. This algorithm has time complexity O(n²) for an n-band image, while the naive algorithm takes time Ω(n!). A slight variant of the optimal ordering problem that is motivated by some practical concerns is shown to be NP-hard, and hence computationally infeasible, in all cases except for the most trivial possibility. In addition, we report on our experimental findings using the algorithms designed in this paper applied to real multispectral satellite data. The results show that the techniques described here hold great promise for application to real-world compression needs.
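
    The abstract does not give the algorithm itself, but one natural way to see how an O(n²) solution can arise is sketched below: assuming each band may be predicted from any single previously coded band and that the pairwise coding cost is symmetric, the best dependency structure is a minimum spanning tree over the bands, which a Prim-style greedy finds in O(n²) time. This is an illustrative reformulation, not necessarily the paper's algorithm, and the cost matrix here is example data rather than measured coding costs.

```python
# Hedged sketch, not the paper's algorithm: under symmetric pairwise coding
# costs, the cheapest "predict each band from one already-coded band" structure
# is a minimum spanning tree, built here with a Prim-style greedy in O(n^2).
def optimal_band_tree(cost):
    """cost: n x n symmetric matrix. Returns list of (parent, child) edges."""
    n = len(cost)
    in_tree = [False] * n
    best_cost = [float("inf")] * n          # cheapest link into the tree so far
    best_parent = [-1] * n
    in_tree[0] = True                       # band 0 is coded without prediction
    for j in range(1, n):
        best_cost[j], best_parent[j] = cost[0][j], 0
    edges = []
    for _ in range(n - 1):
        # pick the cheapest band not yet in the tree
        j = min((k for k in range(n) if not in_tree[k]), key=lambda k: best_cost[k])
        in_tree[j] = True
        edges.append((best_parent[j], j))
        for k in range(n):                  # relax the remaining bands
            if not in_tree[k] and cost[j][k] < best_cost[k]:
                best_cost[k], best_parent[k] = cost[j][k], j
    return edges

if __name__ == "__main__":
    cost = [[0, 4, 9, 7],
            [4, 0, 2, 8],
            [9, 2, 0, 3],
            [7, 8, 3, 0]]
    print(optimal_band_tree(cost))          # e.g. [(0, 1), (1, 2), (2, 3)]
```

    In practice the root band (the one coded without prediction) would also be chosen by comparing standalone coding costs, for instance by adding a virtual root connected to every band.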

    LOCMIC: LOw Complexity Multi-resolution Image Compression

    Image compression is a well-established and extensively researched field. The huge interest in it has been aroused by the rapid enhancements introduced in imaging techniques and the various applications that use high-resolution images (e.g. medical, astronomical and Internet applications). Image compression algorithms should not only give state-of-the-art performance, but also provide other features and functionalities such as progressive transmission. Often, a rough approximation (thumbnail) of an image is sufficient for the user to decide whether to continue the image transmission or to abort, which helps to reduce time and bandwidth. This in turn has necessitated the development of multi-resolution image compression schemes. Existing multi-resolution schemes (e.g., the Multi-Level Progressive method) have shown high computational efficiency, but generally lack compression performance. In this thesis, a LOw Complexity Multi-resolution Image Compression (LOCMIC) scheme based on the Hierarchical INTerpolation (HINT) framework is presented. Moreover, a novel integration of the Just Noticeable Distortion (JND) model for perceptual coding with the HINT framework is proposed to achieve a visually lossless multi-resolution scheme. In addition, various prediction formulas, a context-based prediction correction model and a multi-level Golomb parameter adaptation approach have been investigated. Both the lossless and the visually lossless variants of LOCMIC improve compression performance. The lossless LOCMIC achieves a bit rate about 3% lower than LOCO-I, about 1% lower than JPEG2000, 3% lower than SPIHT, and 2% lower than CALIC. The perceptual LOCMIC achieves a bit rate about 4.7% lower than near-lossless JPEG-LS (at NEAR=2). Moreover, the decorrelation efficiency of LOCMIC, measured in terms of entropy, shows an improvement of 2.8% and 4.5% over MED and conventional HINT, respectively.
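
    The abstract only names the multi-level Golomb parameter adaptation, so the following is a generic sketch of the underlying mechanism rather than the LOCMIC implementation: Golomb-Rice coding of prediction residuals with the parameter k chosen from running magnitude sums, in the style of JPEG-LS/LOCO-I.

```python
# Hedged illustration, not the thesis code: adaptive Golomb-Rice coding of
# prediction residuals, with k picked from running sums of mapped magnitudes.
def rice_encode(value: int, k: int) -> str:
    """Encode a non-negative integer: unary quotient, then k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def map_residual(e: int) -> int:
    """Fold signed residuals to non-negative integers: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return 2 * e if e >= 0 else -2 * e - 1

class AdaptiveRiceCoder:
    def __init__(self):
        self.n = 1          # residuals seen in this context
        self.a = 4          # accumulated magnitude of mapped residuals

    def encode(self, e: int) -> str:
        v = map_residual(e)
        k = 0
        while (self.n << k) < self.a:       # smallest k with N * 2^k >= A
            k += 1
        bits = rice_encode(v, k)
        self.n += 1
        self.a += v
        return bits

if __name__ == "__main__":
    coder = AdaptiveRiceCoder()
    residuals = [0, 1, -1, 2, 0, -3, 5, 0, 1, -1]
    stream = "".join(coder.encode(e) for e in residuals)
    print(len(stream), "bits for", len(residuals), "residuals")
```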

    Adaptive edge-based prediction for lossless image compression

    Many lossless image compression methods have been suggested, with established results that are hard to surpass. However, there are some aspects that can be considered to improve the performance further. This research focuses on the two-phase prediction-encoding method, studying each phase separately and suggesting new techniques. In the prediction module, the proposed Edge-Based Predictor (EBP) and Least-Squares Edge-Based Predictor (LS-EBP) emphasize image edges and make predictions accordingly. EBP is a gradient-based nonlinear adaptive predictor. It switches between prediction rules based on a few threshold parameters that are determined automatically by a pre-analysis procedure, which makes a first pass over the image. LS-EBP also uses these parameters, but optimizes the prediction for each edge location assigned by the pre-analysis, thus applying the least-squares approach only at the edge points. For the encoding module, a novel Burrows-Wheeler Transform (BWT) inspired method is suggested, which performs better than applying the BWT directly to the images. We also present a context-based adaptive error modeling and encoding scheme. When coupled with the above-mentioned prediction schemes, the result is the best-known compression performance in the genre of compression schemes with the same time and space complexity.
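
    The EBP and LS-EBP details (pre-analysed thresholds, least-squares refinement at edge points) are not given in the abstract, so the sketch below uses the well-known median edge detector of LOCO-I/JPEG-LS as a stand-in to show the flavour of gradient-switching, edge-aware prediction.

```python
# Hedged sketch: a gradient-switching predictor in the spirit of edge-based
# prediction, using the MED predictor of LOCO-I/JPEG-LS as a stand-in. It is
# not the dissertation's EBP/LS-EBP.
import numpy as np

def med_predict(img: np.ndarray) -> np.ndarray:
    """Predict each pixel from its west (a), north (b) and north-west (c)
    neighbours, switching rules when a horizontal or vertical edge is likely."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                 # west
            b = img[y - 1, x] if y > 0 else 0                 # north
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0   # north-west
            if c >= max(a, b):
                pred[y, x] = min(a, b)       # likely edge above or to the left
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c       # smooth region: planar fit
    return pred

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(16, 16))
    residuals = img.astype(np.int32) - med_predict(img)
    print("mean absolute residual:", np.abs(residuals).mean())
```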

    Digital rights management (DRM) - watermark encoding scheme for JPEG images

    The aim of this dissertation is to develop a new algorithm to embed a watermark in JPEG-compressed images, using encoding methods. This encompasses the embedding of proprietary information, such as identity and authentication bitstrings, into the compressed material. The watermark encoding scheme combines entropy coding with homophonic coding in order to embed a watermark in a JPEG image; arithmetic coding was used as the entropy encoder for this scheme. It is often desired to obtain a robust digital watermarking method that does not distort the digital image, even if this implies that the image is slightly expanded in size before final compression. In this dissertation an algorithm that combines homophonic and arithmetic coding for JPEG images was developed and implemented in software. A detailed analysis of this algorithm is given, together with the compression (in number of bits) obtained when using the newly developed algorithm (homophonic and arithmetic coding). This research shows that homophonic coding can be used to embed a watermark in a JPEG image by using the watermark information for the selection of the homophones. The proposed algorithm can thus be viewed as a ‘key-less’ encryption technique, where an external bitstring is used as a ‘key’ and is embedded intrinsically into the message stream. The algorithm creates JPEG images with minimal distortion, achieving Peak Signal-to-Noise Ratios (PSNR) above 35 dB. The resulting increase in the entropy of the file is within the expected 2 bits per symbol. This research endeavor consequently provides a unique watermarking technique for images compressed using the JPEG standard. Dissertation (MEng)--University of Pretoria, 2008. Electrical, Electronic and Computer Engineering.
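
    To illustrate the core idea of selecting homophones with watermark bits (using an assumed toy codeword table, not the dissertation's JPEG-integrated scheme), the sketch below embeds a bitstring by choosing between interchangeable codewords and recovers both the symbols and the watermark on decoding.

```python
# Conceptual sketch with an assumed homophone table: some symbols have more
# than one valid prefix-free codeword, and the next watermark bit selects which
# one is emitted. A decoder without the table recovers only the symbols; one
# with the table also recovers the embedded bits.
HOMOPHONES = {
    "a": ["00", "01"],  # two homophones: the choice hides one watermark bit
    "b": ["10"],
    "c": ["110"],
    "d": ["111"],
}
DECODE = {cw: (sym, i) for sym, cws in HOMOPHONES.items() for i, cw in enumerate(cws)}

def embed(symbols: str, watermark: str) -> str:
    bits, w = [], iter(watermark)
    for s in symbols:
        cws = HOMOPHONES[s]
        choice = int(next(w, "0")) if len(cws) > 1 else 0   # consume a bit if possible
        bits.append(cws[choice])
    return "".join(bits)

def extract(stream: str):
    symbols, watermark, buf = [], [], ""
    for bit in stream:
        buf += bit
        if buf in DECODE:                  # prefix-free, so the match is unambiguous
            sym, idx = DECODE[buf]
            symbols.append(sym)
            if len(HOMOPHONES[sym]) > 1:
                watermark.append(str(idx))
            buf = ""
    return "".join(symbols), "".join(watermark)

if __name__ == "__main__":
    stream = embed("abacad", "101")
    print(extract(stream))   # ('abacad', '101')
```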

    1994 Science Information Management and Data Compression Workshop

    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.