
    Applications of wavelet-based compression to multidimensional Earth science data

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital Earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
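
    As a rough illustration of the WVQ idea (the paper's optimized parameter assignment is not reproduced), the following sketch applies a 2-D DWT and vector-quantizes a subband with a k-means codebook. The choice of pywt and scikit-learn, the db2 wavelet, the 2x2 block size, and the 64-entry codebook are all illustrative assumptions.

```python
# Illustrative WVQ-style sketch: 2-D DWT followed by per-subband
# vector quantization.  All parameters are assumptions, not the
# paper's optimized configuration.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wvq_subband(subband, block=(2, 2), n_codewords=64):
    """Vector-quantize one wavelet subband with a k-means codebook."""
    h = subband.shape[0] - subband.shape[0] % block[0]
    w = subband.shape[1] - subband.shape[1] % block[1]
    # Tile the subband into small blocks, one vector per block.
    vecs = (subband[:h, :w]
            .reshape(h // block[0], block[0], w // block[1], block[1])
            .transpose(0, 2, 1, 3)
            .reshape(-1, block[0] * block[1]))
    km = KMeans(n_clusters=n_codewords, n_init=4).fit(vecs)
    indices = km.labels_                      # what a coder would store
    recon = km.cluster_centers_[indices]      # decoder-side reconstruction
    mse = np.mean((recon - vecs) ** 2)
    return indices, km.cluster_centers_, mse

field = np.random.rand(128, 128)              # stand-in for one model field
coeffs = pywt.wavedec2(field, 'db2', level=2)
for name, sb in [('approx', coeffs[0]), ('detail', coeffs[1][0])]:
    indices, codebook, mse = wvq_subband(sb)
    print(name, 'MSE:', mse)
```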

    Efficient Image Coding and Transmission in Deep Space Communication

    The usefulness of modern digital communication comes from ensuring that data from a source arrives at its destination quickly and correctly. To meet these demands, communication protocols employ data compression and error detection/correction to ensure compactness and accuracy of the data; critical scientific data in particular requires lossless compression. For example, in deep space communication, information received by ground stations on Earth from satellites comes in huge volumes, captured with high precision and resolution by space mission instruments such as the Hubble Space Telescope (HST). On-board implementation of communication protocols poses numerous constraints and high-performance demands, given the criticality of the data and the high cost of a space mission. The objectives of this study are to determine which data compression technique yields (a) the minimum data volume, (b) the most error resilience, and (c) the lowest usage of hardware resources and power. In this study, a Field Programmable Gate Array (FPGA) serves as the main component for building the circuitry for each source coding technique. Furthermore, errors are induced based on studies of reported error rates in deep space communication channels to test for error resilience, and the calculated resource utilization of the source encoder determines the power and computational usage. Based on the analysis of the error resilience and the characteristics of the errors, requirements for the channel coding are formulated.
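
    A minimal sketch of the error-injection methodology described above: random bit flips at an assumed channel bit-error rate are applied to a compressed stream, whose decoder then either fails or silently corrupts the data. The zlib codec, the frame contents, and the (exaggerated) BER are assumptions for demonstration, not the study's setup.

```python
# Inject random bit flips into a compressed stream at a given channel
# bit-error rate (BER) and observe the decoder's behaviour.  The BER
# here is exaggerated for demonstration purposes.
import random
import zlib

def inject_bit_errors(stream: bytes, ber: float, seed: int = 0) -> bytes:
    """Flip each bit of `stream` independently with probability `ber`."""
    rng = random.Random(seed)
    corrupted = bytearray(stream)
    for pos in range(len(corrupted) * 8):
        if rng.random() < ber:
            corrupted[pos // 8] ^= 1 << (pos % 8)
    return bytes(corrupted)

frame = b"instrument telemetry frame " * 1000   # stand-in source data
compressed = zlib.compress(frame)
noisy = inject_bit_errors(compressed, ber=1e-2)

try:
    ok = zlib.decompress(noisy) == frame
    print("decoded correctly" if ok else "decoded with silent corruption")
except zlib.error:
    print("stream corrupted beyond recovery")   # no error resilience
```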

    Lossless Compression Performance of a Simple Counter-Based Entropy Coder

    This paper describes the performance of a simple counter-based entropy coder compared to other entropy coders, especially the Huffman coder. Lossless data compressors, such as the Huffman coder and the arithmetic coder, are designed to perform well over a wide range of data entropy. As a result, these coders require significant computational resources that can become the bottleneck of a compression implementation. In contrast, counter-based coders are designed to be optimal over a limited entropy range only. This paper shows that the encoding and decoding processes of a counter-based coder can be simple and fast, making it very suitable for hardware and software implementations. It also reports that the performance of the designed coder is comparable to that of the much more complex Huffman coder.
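
    The abstract does not spell out the coder itself, but the simplest member of the counter-based family, a unary (fundamental-sequence) code, shows why encoding and decoding reduce to counting. The sketch below illustrates the family, not the paper's exact design.

```python
# Unary (fundamental-sequence) coding: the simplest counter-based
# entropy code.  Encoding and decoding are pure counting operations,
# which is why such coders are cheap in hardware and software.
def unary_encode(values):
    """Encode nonnegative integers: n -> n zeros followed by a one."""
    bits = []
    for n in values:
        bits.extend([0] * n)
        bits.append(1)
    return bits

def unary_decode(bits):
    """Decode by running a counter until the terminating one bit."""
    values, counter = [], 0
    for b in bits:
        if b:                 # terminator reached: emit the count
            values.append(counter)
            counter = 0
        else:                 # still counting zeros
            counter += 1
    return values

data = [0, 2, 1, 0, 3]
assert unary_decode(unary_encode(data)) == data
```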

    A digital signature and watermarking based authentication system for JPEG2000 images

    In this thesis, a digital signature based authentication system is introduced that can protect JPEG2000 images in different flavors, including fragile and semi-fragile authentication. Fragile authentication protects the image at the code-stream level, while semi-fragile authentication protects the image at the content level. Semi-fragile authentication can be further classified into lossy and lossless authentication; with lossless authentication, the original image can be recovered after verification. Lossless authentication and the new image compression standard, JPEG2000, are the main topics discussed in this thesis.
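
    A minimal sketch of the fragile (code-stream level) flavor, assuming a generic sign-and-verify flow: the raw JPEG2000 code stream is signed, so any bit change breaks verification. The third-party cryptography package, the Ed25519 key, and the file name image.jp2 are illustrative assumptions; the thesis's watermarking and semi-fragile content-level schemes are not reproduced here.

```python
# Fragile, code-stream-level authentication sketch: sign the raw
# JPEG2000 code stream so that any bit change fails verification.
# The `cryptography` package and Ed25519 are illustrative choices.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()

codestream = open("image.jp2", "rb").read()   # hypothetical input file
signature = private_key.sign(codestream)      # stored/sent alongside the image

# Verifier side: any modification of the code stream fails here.
try:
    private_key.public_key().verify(signature, codestream)
    print("image authentic")
except InvalidSignature:
    print("image tampered")
```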

    Rate scalable image compression in the wavelet domain

    This thesis explores image compression in the wavelet transform domain, considering progressive compression based on bit-plane coding. The first part of the thesis investigates the scalar quantisation technique for multidimensional images such as colour and multispectral images. Embedded coders such as SPIHT and SPECK are known to be very simple and efficient algorithms for compression in the wavelet domain. However, these algorithms require the use of lists to keep track of partitioning processes, and such lists involve a high memory requirement during the encoding process. A listless approach has been proposed for multispectral image compression in order to reduce the working memory required. The earlier listless coders are extended into a three-dimensional coder so that redundancy in the spectral domain can be exploited. The listless implementation requires a fixed memory of 4 bits per pixel to represent the state of each transformed coefficient, and the state is updated during coding based on tests of significance. Spectral redundancies are exploited to improve the performance of the coder by modifying its scanning rules and the initial marker/state. For colour images, this is done by conducting a joint significance test for the chrominance planes; in this way, the similarities between the chrominance planes can be exploited during the coding process. Fixed-memory listless methods that exploit spectral redundancies enable efficient coding while maintaining rate scalability and progressive transmission. The second part of the thesis addresses image compression using directional filters in the wavelet domain. A directional filter is expected to improve the retention of edge and curve information during compression. Current implementations of hybrid wavelet and directional (HWD) filters improve the contour representation of compressed images, but suffer from the pseudo-Gibbs phenomenon in the smooth regions of the images. A different approach to directional filters in the wavelet transform is proposed to remove such artifacts while maintaining the ability to preserve contours and texture. Implementation with grayscale images shows improvements in terms of distortion rates and structural similarity, especially in images with contours. The proposed transform manages to preserve the directional capability without pseudo-Gibbs artifacts and at the same time reduces the complexity of the wavelet transform with directional filter. Further investigation with colour images shows the transform is able to preserve texture and curves.
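
    A sketch of the bit-plane significance testing underlying embedded coders like SPIHT/SPECK and the listless variants above, assuming pywt for the transform: a coefficient becomes significant at the bit plane where its magnitude first reaches 2^n, and a small fixed-size state array, rather than dynamic lists, records that status. All parameters are illustrative.

```python
# Bit-plane significance testing with a fixed-memory state array,
# the core operation of listless embedded wavelet coders.  The image,
# wavelet, and level count are illustrative assumptions.
import numpy as np
import pywt

image = np.random.rand(64, 64) * 255
arr, slices = pywt.coeffs_to_array(pywt.wavedec2(image, 'db2', level=3))

state = np.zeros(arr.shape, dtype=np.uint8)     # fixed memory per coefficient
n_max = int(np.floor(np.log2(np.abs(arr).max())))

for n in range(n_max, -1, -1):                  # most significant plane first
    newly_significant = (np.abs(arr) >= 2**n) & (state == 0)
    state[newly_significant] = 1                # mark: significance now known
    # A real coder would emit the significance map and refinement bits
    # for this plane; truncating the stream here gives rate scalability.
    print(f"plane {n}: {newly_significant.sum()} coefficients become significant")
```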

    A Hardware Architecture of a Counter-Based Entropy Coder

    This paper describes a hardware architectural design of a real-time counter-based entropy coder at a register transfer level (RTL) computing model. The architecture is based on a lossless compression algorithm called Rice coding, which is optimal only over a limited range of entropies (in bits per sample); the architecture incorporates a word-splitting scheme to extend the entropy coverage to a wider range. We have designed a data structure in the form of independent code blocks, allowing a more robust compressed bitstream. The design focuses on an RTL computing model and architecture, utilizing 8-bit buffers, adders, registers, loader-shifters, select-logics, down-counters, up-counters, and multiplexers. We have validated the architecture (both the encoder and the decoder) in a coprocessor for 8 bits/sample data on a Xilinx XC4005 FPGA, utilizing 61% of F&G-CLBs, 34% of H-CLBs, 32% of FF-CLBs, and 68% of IO resources. On this FPGA implementation, the encoder and decoder achieve throughputs of 1.74 Mbit/s and 2.91 Mbit/s, respectively. The architecture allows pipelining, resulting in a potential maximum encoding throughput of 200 Mbit/s on typical real-time TTL implementations. In addition, it uses a minimum number of register elements. As a result, this architecture can lead to low-cost, low-energy-consumption, reduced-silicon-area realizations.
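
    A software sketch of the word-splitting idea described above, the classic Rice split-sample option: each sample's k low-order bits are emitted verbatim while the high-order part is unary coded by a counter. The parameter k = 3 is an illustrative assumption; the RTL data path itself is not modeled.

```python
# Rice coding with word splitting: k raw LSBs per sample plus a
# unary-coded high part.  k = 3 is an illustrative choice.
def rice_encode(values, k=3):
    bits = []
    for n in values:
        high, low = n >> k, n & ((1 << k) - 1)
        bits.extend([0] * high + [1])                        # unary (counter) part
        bits.extend((low >> i) & 1 for i in reversed(range(k)))  # k raw LSBs
    return bits

def rice_decode(bits, k=3):
    values, i = [], 0
    while i < len(bits):
        high = 0
        while bits[i] == 0:                                  # count the unary run
            high += 1
            i += 1
        i += 1                                               # skip the terminator
        low = 0
        for _ in range(k):
            low = (low << 1) | bits[i]
            i += 1
        values.append((high << k) | low)
    return values

samples = [5, 0, 12, 200, 33]
assert rice_decode(rice_encode(samples)) == samples
```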

    Distributed single source coding with side information


    Image compression using discrete cosine transform and wavelet transform and performance comparison

    Image compression deals with reducing the size of an image and is performed with the help of transforms. In this project we have taken an input image, applied wavelet techniques for image compression, and compared the results with the popular DCT-based image compression. The wavelet transform (WT) provided better results as far as properties such as RMS error, image intensity, and execution time are concerned. Nowadays, wavelet-theory-based techniques have emerged in many signal and image processing applications, including speech, image processing, and computer vision. In particular, the wavelet transform is of interest for the analysis of non-stationary signals, since the WT uses short windows at high frequencies and long windows at low frequencies. Since the discrete wavelet transform is essentially a subband coding system, and subband coders have been quite successful in speech and image compression, it is clear that the DWT has potential applications in compression problems.
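
    A sketch of the comparison this project describes, assuming scipy and pywt: compress a test image by keeping the largest 10% of 2-D DCT coefficients versus the largest 10% of wavelet coefficients, then compare RMS reconstruction error. The libraries, wavelet choice, and keep ratio are assumptions, not the project's exact setup.

```python
# Compare DCT-based and DWT-based compression by thresholding
# coefficients and measuring RMS reconstruction error.
import numpy as np
from scipy.fft import dctn, idctn
import pywt

def keep_largest(arr, ratio=0.1):
    """Zero all but the largest-magnitude fraction of coefficients."""
    flat = np.abs(arr).ravel()
    thresh = np.sort(flat)[int((1 - ratio) * flat.size)]
    return np.where(np.abs(arr) >= thresh, arr, 0)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

image = np.random.rand(256, 256)                 # stand-in test image

# DCT path: 2-D DCT, threshold, inverse DCT.
dct_recon = idctn(keep_largest(dctn(image, norm='ortho')), norm='ortho')

# DWT path: multilevel wavelet decomposition, threshold, reconstruct.
arr, slices = pywt.coeffs_to_array(pywt.wavedec2(image, 'db4', level=4))
dwt_recon = pywt.waverec2(
    pywt.array_to_coeffs(keep_largest(arr), slices, output_format='wavedec2'),
    'db4')

print("DCT RMSE:", rmse(image, dct_recon))
print("DWT RMSE:", rmse(image, dwt_recon))
```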

    Adaptive polyphase subband decomposition structures for image compression

    Subband decomposition techniques have been extensively used for data coding and analysis. In most filter banks, the goal is to obtain subsampled signals corresponding to different spectral regions of the original data. However, this approach leads to various artifacts in images having spatially varying characteristics, such as images containing text, subtitles, or sharp edges. In this paper, adaptive filter banks with the perfect reconstruction property are presented for such images. The filters of the decomposition structure, which can be either linear or nonlinear, vary according to the nature of the signal. This leads to improved image compression ratios. Simulation examples are presented.
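
    A minimal 1-D sketch of the adaptive, perfect-reconstruction idea above: after a polyphase (even/odd) split, each odd sample is predicted from the even samples alone, with the predictor chosen by a rule the decoder can repeat, so reconstruction is exact. The two candidate predictors and the edge threshold are illustrative assumptions, not the paper's filters.

```python
# Adaptive polyphase (lifting-style) decomposition with perfect
# reconstruction: the predictor is chosen from the retained (even)
# component only, so the synthesis side can repeat the same decision.
import numpy as np

def choose_predictor(left, right):
    """Pick a predictor using only data the decoder also has."""
    # Near a sharp edge, copying the nearer neighbour beats averaging.
    return "copy" if abs(left - right) > 50 else "average"

def analysis(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = np.empty_like(odd)
    for i in range(len(odd)):
        left, right = even[i], even[min(i + 1, len(even) - 1)]
        pred = left if choose_predictor(left, right) == "copy" else (left + right) / 2
        detail[i] = odd[i] - pred
    return even, detail

def synthesis(even, detail):
    odd = np.empty_like(detail)
    for i in range(len(detail)):
        left, right = even[i], even[min(i + 1, len(even) - 1)]
        pred = left if choose_predictor(left, right) == "copy" else (left + right) / 2
        odd[i] = detail[i] + pred
    x = np.empty(len(even) + len(detail))
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([10, 11, 12, 200, 201, 202, 14, 13])  # contains a sharp edge
e, d = analysis(signal)
assert np.allclose(synthesis(e, d), signal)              # perfect reconstruction
```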