
    Hyperspectral image compression: adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding

    Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images, studying and optimizing each step of the compression algorithm. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. It is then shown that a specific fixed decomposition has almost the same performance while being preferable in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it permits the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas presenting different statistical properties.
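
    As a hedged sketch of the anisotropic idea (the wavelet and level counts below are illustrative assumptions, not the paper's optimized choices), PyWavelets can decompose the spectral axis of a hyperspectral cube more deeply than the spatial axes:

        import numpy as np
        import pywt

        def anisotropic_dwt(cube, wavelet="db2", spectral_levels=4, spatial_levels=2):
            """cube: hyperspectral data shaped (bands, rows, cols)."""
            # Split the spectral axis repeatedly, keeping the approximation.
            approx, spectral_details = cube, []
            for _ in range(spectral_levels):
                approx, detail = pywt.dwt(approx, wavelet, axis=0)
                spectral_details.append(detail)
            # Then a separable 2-D decomposition over the spatial axes.
            spatial = pywt.wavedec2(approx, wavelet, level=spatial_levels, axes=(1, 2))
            return spatial, spectral_details

        cube = np.random.rand(64, 32, 32)  # toy stand-in for a real scene
        spatial, spectral_details = anisotropic_dwt(cube)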

    A zerotree wavelet video coder

    Embedded filter bank-based algorithm for ECG compression

    In this work, two ECG compression schemes are presented, using two types of filter banks to decompose the incoming signal: wavelet packets (WP) and nearly-perfect-reconstruction cosine-modulated filter banks. The conventional embedded zerotree wavelet (EZW) algorithm takes advantage of the hierarchical relationship among subband coefficients of the pyramidal wavelet decomposition, but it performs worse when used with WP, as the hierarchy becomes more complex. To address this problem, we propose a new technique that assumes no relationship among coefficients and is therefore suitable for use with WP. Furthermore, this approach makes it possible to apply the quantization method to M-channel maximally decimated filter banks. In this fashion, the proposed algorithm provides two efficient and effective ECG compressors that show better ECG compression performance than the conventional EZW algorithm.
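
    A toy sketch of the central idea (the function and pass structure are our illustration, not the paper's code): embedded bit-plane coding that treats the subband coefficients as a flat list, so it applies unchanged to wavelet packets or to any M-channel filter bank:

        import numpy as np
        import pywt

        def embedded_passes(coeffs, n_passes=6):
            """Yield (threshold, newly-significant map) pairs; no parent-child
            relationship among coefficients is assumed."""
            c = np.asarray(coeffs, dtype=float)
            T = 2.0 ** np.floor(np.log2(np.abs(c).max()))  # initial threshold
            significant = np.zeros(c.shape, dtype=bool)
            for _ in range(n_passes):
                newly = (~significant) & (np.abs(c) >= T)  # significance pass
                significant |= newly
                yield T, newly  # a real coder also sends refinement bits
                T /= 2.0

        # Toy ECG stand-in, decomposed with a depth-4 wavelet packet.
        x = np.sin(np.linspace(0, 40, 1024)) + 0.05 * np.random.randn(1024)
        wp = pywt.WaveletPacket(x, "db4", maxlevel=4)
        flat = np.concatenate([node.data for node in wp.get_level(4, "freq")])
        for T, newly in embedded_passes(flat):
            pass  # an arithmetic coder would entropy-code `newly` here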

    DCT Video Compositing with Embedded Zerotree Coding for Multi-Point Video Conferencing

    In this thesis, DCT domain video compositing with embedded zerotree coding for multi-point video conferencing is considered. In a typical video compositing system, video sequences coming from different sources are composited into one video stream and sent over a single channel to the receiver points. There are mainly three stages of video compositing: decoding of the incoming video streams, decimation of the video frames, and encoding of the composited video. Conventional spatial domain video compositing requires transformations between the DCT and spatial domains, increasing computational complexity. The advantage of DCT domain video compositing is that decoding, decimation and encoding remain fully in the DCT domain, resulting in faster processing and better quality of the composited videos. The composited videos are encoded via a DCT-based embedded zerotree coder, originally developed for wavelet coding. An adaptive arithmetic coder encodes the symbols obtained from the DCT-based zerotree coding, resulting in an embedded bit stream. By using the embedded zerotree coder, the quality of the composited videos is improved compared to a conventional encoder. An advanced version of the zerotree coder is also used to increase the performance of the compositing system. A further improvement comes from the use of the local cosine transform to decrease blocking effects at low bit rates. We also apply the proposed DCT decimation/interpolation to single-stream video coding, achieving better quality than the regular encoding process at low bit rates. The bit rate control problem is easily solved by taking advantage of the embedded property of zerotree coding, since the coding control parameter is the bit rate itself. We also achieve optimal bit rate allocation among the composited frames in a GOP without subframe-layer bit rate allocation, since zerotree coding uses successive approximation quantization, allowing DCT coefficients to be encoded in descending order of significance.
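
    A simplified sketch of the decimation stage (this is the standard DCT-domain 2:1 downsampling identity with illustrative block handling, not the thesis's full compositing pipeline): each 8x8 DCT block is reduced to its top-left 4x4 low-frequency corner, so no return to the spatial domain is needed:

        import numpy as np
        from scipy.fft import dctn, idctn

        def blockwise(img, n, fn):
            """Apply fn to each n-by-n block and reassemble the result."""
            h, w = img.shape
            return np.block([[fn(img[i:i + n, j:j + n]) for j in range(0, w, n)]
                             for i in range(0, h, n)])

        def dct_decimate_block(block):
            # Keep the 4x4 low-frequency corner; the factor 1/2 preserves
            # amplitude under the orthonormal DCT when halving block size.
            return 0.5 * block[:4, :4]

        img = np.random.rand(64, 64)
        dct_img = blockwise(img, 8, lambda b: dctn(b, norm="ortho"))
        small_dct = blockwise(dct_img, 8, dct_decimate_block)  # 32x32 output
        small = blockwise(small_dct, 4, lambda b: idctn(b, norm="ortho"))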

    Exploiting parallelism within multidimensional multirate digital signal processing systems

    The intense throughput requirements of multidimensional Digital Signal Processing systems in practical applications justify Application-Specific Integrated Circuit (ASIC) designs and parallel processing implementations. In this dissertation, we propose novel theories, methodologies and architectures for designing high-performance VLSI implementations of general multidimensional multirate Digital Signal Processing systems by exploiting the parallelism within those applications. To systematically exploit the parallelism within multidimensional multirate DSP algorithms, we develop novel transformations including (1) nonlinear I/O data space transforms, (2) intercalation transforms, and (3) multidimensional multirate unfolding transforms. These transformations are applied to the algorithms, leading to systematic methodologies for high-performance architectural design. With these design methodologies, we develop several architectures with parallel and distributed processing features for implementing multidimensional multirate applications. Experimental results show that these architectures are much more efficient in terms of execution time and/or hardware cost than existing hardware implementations.
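
    As one concrete, textbook instance of the parallelism that multirate structures expose (a standard polyphase identity, not the dissertation's specific transforms), an M-fold decimating FIR filter splits into M independent branches that can run concurrently at the low rate:

        import numpy as np

        def decimating_fir_polyphase(x, h, M):
            """Equivalent to np.convolve(x, h)[::M], computed as M
            independent (hence parallelizable) polyphase branches."""
            n_out = (len(x) + len(h) - 1 + M - 1) // M
            L = (len(h) + M - 1) // M * M
            h = np.pad(h, (0, L - len(h)))      # pad taps to a multiple of M
            y = np.zeros((len(x) + L - 1 + M - 1) // M)
            for k in range(M):
                hk = h[k::M]                    # k-th polyphase component
                xk = np.pad(x, (k, 0))[::M]     # delay by k, then decimate
                yk = np.convolve(xk, hk)
                y[:len(yk)] += yk
            return y[:n_out]

        x, h = np.random.rand(100), np.random.rand(12)
        assert np.allclose(decimating_fir_polyphase(x, h, 4),
                           np.convolve(x, h)[::4])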

    High ratio wavelet video compression through real-time rate-distortion estimation.

    Thesis (M.Sc.Eng.)--University of Natal, Durban, 2003. The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques, followed by an examination of wavelet video compression techniques. Since the most effective current video compression systems are built on the DCT-based framework, a comparison between these and the wavelet techniques is also given. Based on this review, the dissertation then presents a new, low-complexity wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, this scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles, so that the spatial clustering can be exploited through adaptive bit allocation between the tiles. This is the central idea of the method. To minimize the total distortion of the frame, the scheme uses the ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles, as sketched below. Each tile is then independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process computational efficiency was the imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. For local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods while producing output of similar quality. The algorithm's moderate memory and computational requirements make it suitable for implementation in mobile and embedded devices.
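
    A toy illustration of the ρ-domain model (after He and Mitra: rate is approximately linear in the fraction of nonzero quantized coefficients; the model constant and the allocation loop below are our assumptions, not the thesis's optimizer):

        import numpy as np

        THETA = 6.5  # assumed bits per nonzero coefficient (model constant)

        def estimated_rate(tile, T):
            """rho-domain estimate: R ~ THETA * (1 - rho) per coefficient,
            where rho is the fraction quantized to zero at threshold T."""
            rho = np.mean(np.abs(tile) < T)
            return THETA * (1.0 - rho) * tile.size

        def allocate(tiles, budget_bits, iters=40):
            """Bisect a single global threshold T meeting the frame budget,
            and report the per-tile bit shares it implies."""
            lo, hi = 0.0, max(np.abs(t).max() for t in tiles)
            for _ in range(iters):
                T = 0.5 * (lo + hi)
                total = sum(estimated_rate(t, T) for t in tiles)
                lo, hi = (lo, T) if total < budget_bits else (T, hi)
            return T, [estimated_rate(t, T) for t in tiles]

        tiles = [np.random.laplace(scale=s, size=(64, 64)) for s in (1, 3, 9)]
        T, shares = allocate(tiles, 20000)  # busier tiles receive more bits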

    Context-based compression algorithms for text and image data.

    Wong Ling. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 80-85). Contents: 1. Introduction (motivation; original contributions; thesis structure). 2. Background (information theory; early compression: Huffman, Tunstall and arithmetic codes; modern techniques: context modeling, state-based modeling, dictionary-based LZ compression, block sorting, context tree weighting). 3. Symbol remapping (review of block sorting: forward and inverse transformations; ordering method; discussion). 4. Content prediction (prediction and ranking schemes: content predictor, ranking technique; review of context sorting; general framework of content prediction: a baseline version, context length merge; discussion). 5. Bounded-length block sorting (block sorting with bounded context length: forward and reverse transformations; locally adaptive entropy coding; discussion). 6. Context coding for image data (digital images and redundancy; model of a compression system: representation, quantization, lossless coding; embedded zerotree wavelet coding: a simple zerotree-like implementation, analysis of zerotree coding, linkage between coefficients, design of a uniform threshold quantizer with dead zone; extensions on wavelet coding: coefficient scanning; discussion). 7. Conclusions and future research. Appendices: lossless compression results; image compression standards; human visual system characteristics; lossy compression results; compression gallery (context-based wavelet coding, RD-OPT-based JPEG compression, SPIHT wavelet compression); references.
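
    Since chapters 3 and 5 revolve around block sorting, here is a minimal forward Burrows-Wheeler transform for orientation (a plain rotation sort for clarity; the thesis's bounded-context-length variant differs):

        def bwt_forward(s):
            """Return the last column of the sorted rotations of s, plus the
            row index needed by the inverse transform."""
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            last_column = "".join(r[-1] for r in rotations)
            return last_column, rotations.index(s)

        assert bwt_forward("banana") == ("nnbaaa", 3)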

    Regularity scalable image coding based on wavelet singularity detection

    In this paper, we propose an adaptive algorithm for scalable wavelet image coding based on a general feature of images: regularity. In pattern recognition and computer vision, the regularity of an image is estimated from the oriented wavelet coefficients and quantified by Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has proven to be a better approach than tracing the wavelet transform modulus maxima (WTMM), because the irregular sampling nature of the WTMM complicates the reconstruction process. Moreover, examples exist showing that the WTMM representation cannot uniquely characterize a signal, which implies that reconstruction of a signal from its WTMM may not be consistently stable. Furthermore, the WTMM approach requires much more computational effort. We therefore use the WTMS approach to estimate the regularity of images from the separable wavelet transform coefficients. Since we are not concerned with the localization issue, we allow decimation to occur when evaluating the interscale evolution. Once estimated, the regularity information is utilized in our proposed adaptive regularity scalable wavelet image coding algorithm. The algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution scalable and signal-to-noise ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels with higher peak signal-to-noise ratios (PSNRs) and lower bit rates. In comparison with other feature-based wavelet scalable coding algorithms, the proposed algorithm performs better in terms of visual perception, computational complexity and coding efficiency.
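
    A simplified numerical sketch of the decay law that both WTMM and WTMS exploit (for brevity it tracks the peak modulus rather than the directional cone sum; under the L2-normalized CWT the modulus near a singularity decays like s^(alpha + 1/2), so the log2-slope across dyadic scales estimates alpha + 1/2; the test signal and scales are illustrative):

        import numpy as np
        import pywt

        x = np.zeros(1024)
        x[512:] = 1.0                       # step edge: Lipschitz alpha ~ 0
        scales = [4.0, 8.0, 16.0, 32.0]     # dyadic scale ladder
        coeffs, _ = pywt.cwt(x, scales, "gaus1")
        # Peak modulus near the edge at each scale.
        M = np.abs(coeffs[:, 512 - 64:512 + 64]).max(axis=1)
        alpha = np.mean(np.diff(np.log2(M))) - 0.5
        print(f"estimated Lipschitz exponent ~ {alpha:.2f}")  # near 0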

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution and archival systems.