
    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
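    As a rough illustration of the two-stage idea (not the paper's exact algorithm), the sketch below encodes each image block by trying every code in a small family of vector quantizers and keeping the one with the lowest Lagrangian cost; the codebooks, the crude rate model, and the multiplier lam are placeholder assumptions. The index of the chosen first-stage code is the side information that distinguishes the two-stage coder from a single-stage one.

        import numpy as np

        def two_stage_encode(blocks, codebooks, lam=0.1):
            """Toy sketch of a two-stage (weighted universal) coder.

            For each block, every first-stage code (here a VQ codebook) is tried
            and the one with the lowest Lagrangian cost D + lam * R is kept.  The
            output per block is the chosen codebook index plus the chosen codeword
            index; a real system would entropy-code both streams.
            """
            encoded = []
            for x in blocks:
                best = None
                for cb_idx, cb in enumerate(codebooks):
                    d2 = np.sum((cb - x) ** 2, axis=1)       # distortion to each codeword
                    cw_idx = int(np.argmin(d2))
                    rate = np.log2(len(codebooks)) + np.log2(len(cb))  # crude rate proxy
                    cost = d2[cw_idx] + lam * rate
                    if best is None or cost < best[0]:
                        best = (cost, cb_idx, cw_idx)
                encoded.append((best[1], best[2]))
            return encoded

        def two_stage_decode(encoded, codebooks):
            # Look up the selected codeword in the selected codebook.
            return np.array([codebooks[cb][cw] for cb, cw in encoded])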

    A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    A method for efficiently coding natural images using a vector-quantized, variable block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
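    The following sketch gives one plausible reading of the threshold-driven coder selection, assuming each candidate coder is a DCT followed by a uniform quantizer with a different step size; the step sizes and the distortion threshold are illustrative values, not those of the paper.

        import numpy as np
        from scipy.fftpack import dct, idct

        def dct2(b):
            return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

        def idct2(b):
            return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

        def mbc_encode_block(block, steps=(32.0, 16.0, 8.0), threshold=50.0):
            """Threshold-driven coder selection for one block.

            Each candidate 'coder' is modelled as a DCT followed by uniform
            quantization at one step size; the cheapest coder whose mean squared
            error falls below the threshold is selected (the finest coder is used
            as a fallback).
            """
            coeffs = dct2(block.astype(float))
            for idx, q in enumerate(steps):
                quant = np.round(coeffs / q)
                recon = idct2(quant * q)
                if np.mean((recon - block) ** 2) <= threshold or idx == len(steps) - 1:
                    return idx, quant  # coder index plus quantized coefficients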

    Weighted universal transform coding: universal image compression with the Karhunen-Loève transform

    We introduce a two-stage universal transform code for image compression. The code combines Karhunen-Loève transform coding with weighted universal bit allocation (WUBA) in a two-stage algorithm analogous to the algorithm for weighted universal vector quantization (WUVQ). The encoder uses a collection of transform/bit allocation pairs rather than a single transform/bit allocation pair (as in JPEG) or a single transform with a variety of bit allocations (as in WUBA). We describe both an encoding algorithm for achieving optimal compression using a collection of transform/bit allocation pairs and a technique for designing locally optimal collections of transform/bit allocation pairs. We demonstrate the performance using the mean squared error distortion measure. On a sequence of combined text and gray scale images, the algorithm achieves up to a 2 dB improvement over a JPEG-style coder using the discrete cosine transform (DCT) and an optimal collection of bit allocations, up to a 3 dB improvement over a JPEG-style coder using the DCT and a single (optimal) bit allocation, up to a 6 dB improvement over an entropy-constrained WUVQ with first- and second-stage vector dimensions equal to 16 and 4, respectively, and up to a 10 dB improvement over an entropy-constrained vector quantizer (ECVQ) with a vector dimension of 4.
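    A minimal sketch of the per-block first-stage decision described above follows, assuming each candidate is an orthonormal transform paired with a per-coefficient bit allocation and a toy uniform quantizer; the step-size rule derived from the bit allocation is an assumption made only for illustration.

        import numpy as np

        def wutc_choose_pair(block_vec, pairs):
            """Select the (transform, bit allocation) pair with the lowest
            distortion for this block; 'pairs' is a list of (T, bits) where T is
            an orthonormal transform matrix and bits gives bits per coefficient.
            """
            best = None
            for idx, (T, bits) in enumerate(pairs):
                coeffs = T @ block_vec
                step = 2.0 ** (1.0 - np.asarray(bits, dtype=float))  # toy step-size rule
                recon = T.T @ (np.round(coeffs / step) * step)
                dist = float(np.sum((recon - block_vec) ** 2))
                if best is None or dist < best[0]:
                    best = (dist, idx)
            return best[1]  # index sent as first-stage side information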

    Improved image decompression for reduced transform coding artifacts

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in the reconstructed image that best fits a non-Gaussian Markov random field (MRF) image model. This approach leads to a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected onto the quantization partition cells defined by the compressed image. Experimental results are shown for images compressed using scalar quantization of block DCT coefficients and vector quantization of subband wavelet transform coefficients. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
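    The sketch below conveys the gradient-projection iteration in a simplified form: a quadratic smoothness prior stands in for the paper's non-Gaussian MRF, and a whole-image DCT stands in for the block DCT, so the code is illustrative rather than a reproduction of the proposed method.

        import numpy as np
        from scipy.fftpack import dct, idct

        def dct2(x):
            return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

        def idct2(x):
            return idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

        def constrained_decode(coeff_lo, coeff_hi, n_iter=30, step=0.1):
            """Gradient-projection decoding with a quadratic smoothness prior.

            coeff_lo/coeff_hi bound each transform coefficient's quantization
            cell.  Each iteration takes a gradient step on the smoothness term
            (a discrete Laplacian) and then projects the coefficients back into
            their cells, so the decoded image stays consistent with the bitstream.
            """
            img = idct2((coeff_lo + coeff_hi) / 2.0)      # start from the cell centres
            for _ in range(n_iter):
                grad = 4 * img - (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                                  + np.roll(img, 1, 1) + np.roll(img, -1, 1))
                img = img - step * grad                   # smooth the estimate
                coeffs = np.clip(dct2(img), coeff_lo, coeff_hi)
                img = idct2(coeffs)                       # project onto the constraint set
            return img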

    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle, that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and this work adds another to the gamut. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.

    Parental finite state vector quantizer and vector wavelet transform-linear predictive coding.

    by Lam Chi Wah. Thesis (M.Phil.), Chinese University of Hong Kong, 1998; submitted December 1997. Includes bibliographical references (leaves 89-91). Abstract also in Chinese.
    Contents:
    Chapter 1, Introduction to Data Compression and Image Coding: Introduction; Fundamental Principle of Data Compression; Some Data Compression Algorithms; Image Coding Overview; Image Transformation; Quantization; Lossless Coding.
    Chapter 2, Subband Coding and Wavelet Transform: Subband Coding Principle; Perfect Reconstruction; Multi-Channel System; Discrete Wavelet Transform.
    Chapter 3, Vector Quantization (VQ): Introduction; Basic Vector Quantization Procedure; Codebook Searching and the LBG Algorithm (Codebook; LBG Algorithm); Problem of VQ and Variations of VQ (Classified VQ (CVQ); Finite State VQ (FSVQ)); Vector Quantization on Wavelet Coefficients.
    Chapter 4, Vector Wavelet Transform-Linear Predictor Coding: Image Coding Using Wavelet Transform with Vector Quantization (Future Standard; Drawback of DCT; "Wavelet Coding and VQ, the Future Trend"); Mismatch between Scalar Transformation and VQ; Vector Wavelet Transform (VWT); Example of Vector Wavelet Transform; Vector Wavelet Transform-Linear Predictive Coding (VWT-LPC); An Example of VWT-LPC.
    Chapter 5, Vector Quantization with Inter-band Bit Allocation (IBBA): Bit Allocation Problem; Bit Allocation for Wavelet Subband Vector Quantizer (Multiple Codebooks; Inter-band Bit Allocation (IBBA)).
    Chapter 6, Parental Finite State Vector Quantizers (PFSVQ): Introduction; Parent-Child Relationship Between Subbands; Wavelet Subband Vector Structures for VQ (VQ on Separate Bands; Inter-band Information for Intra-band Vectors; Cross-band Vector Methods); Parental Finite State Vector Quantization Algorithms (Scheme I: Parental Finite State VQ with Parent Index Equals Child Class Number; Scheme II: Parental Finite State VQ with Parent Index Larger than Child Class Number).
    Chapter 7, Simulation Results: Introduction; Vector Wavelet Transform (VWT); Vector Wavelet Transform-Linear Predictive Coding (VWT-LPC) (First Test; Second Test; Third Test); Vector Quantization Using Inter-band Bit Allocation (IBBA); Parental Finite State Vector Quantizers (PFSVQ).
    Chapter 8, Conclusion.
    References.

    Performance Evaluation of Hybrid Coding of Images Using Wavelet Transform and Predictive Coding

    Image compression techniques are necessary for the storage of huge amounts of digital images using reasonable amounts of space, and for their transmission over limited bandwidth. Several techniques such as predictive coding, transform coding, subband coding, wavelet coding, and vector quantization have been used in image coding. While each technique has some advantages, most practical systems use hybrid techniques which incorporate more than one scheme. They combine the advantages of the individual schemes and enhance the coding effectiveness. This paper proposes and evaluates a hybrid coding scheme for images using wavelet transforms and predictive coding. The performance evaluation is done over a variety of parameters such as the kind of wavelet, the number of decomposition levels, the type of quantizer, the predictor coefficients, and the number of quantization levels. The results of the evaluation are presented.
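    One possible, simplified instance of such a hybrid is sketched below: the wavelet approximation band is DPCM-coded with a left-neighbour predictor and the detail bands are scalar quantized. The use of pywt, the choice of wavelet, and the step size q are assumptions made for illustration and do not reflect the paper's specific configuration.

        import numpy as np
        import pywt

        def hybrid_encode(image, wavelet='haar', level=2, q=8.0):
            """Wavelet decomposition followed by simple predictive coding.

            The approximation band is DPCM-coded (each sample predicted by its
            left neighbour) and the prediction residual is scalar quantized; the
            detail bands are scalar quantized directly.
            """
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            approx, details = coeffs[0], coeffs[1:]
            pred = np.zeros_like(approx)
            pred[:, 1:] = approx[:, :-1]                  # left-neighbour predictor
            approx_residual_q = np.round((approx - pred) / q)
            details_q = [tuple(np.round(d / q) for d in band) for band in details]
            return approx_residual_q, details_q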

    An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform, with special emphasis on exploiting interband redundancy; its outcome included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model, resulting in improved image decompression. These studies are summarized, and the associated technical papers are included in the appendices.
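    As a hedged illustration of the error-detection statistic mentioned above, the snippet below computes a Huber-MRF energy over a decoded block; the Huber threshold and any decision rule for flagging channel errors are assumptions, since the report does not specify them here.

        import numpy as np

        def huber(t, delta=10.0):
            # Quadratic near zero, linear in the tails.
            a = np.abs(t)
            return np.where(a <= delta, 0.5 * t * t, delta * (a - 0.5 * delta))

        def huber_mrf_energy(block, delta=10.0):
            """Huber-MRF energy of a decoded block: the sum of Huber potentials
            over horizontal and vertical neighbour differences.  A block whose
            energy is far above that of its neighbours can be flagged as a likely
            channel error.
            """
            b = block.astype(float)
            dh = np.diff(b, axis=1)
            dv = np.diff(b, axis=0)
            return float(np.sum(huber(dh, delta)) + np.sum(huber(dv, delta)))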

    Rate-distortion adaptive vector quantization for wavelet image coding

    We propose a wavelet image coding scheme using rate-distortion adaptive tree-structured residual vector quantization. Wavelet transform coefficient coding is based on the pyramid hierarchy (zero-tree), but rather than determining the zero-tree relation from the coarsest subband to the finest by hard thresholding, the prediction in our scheme is achieved by rate-distortion optimization with adaptive vector quantization on the wavelet coefficients from the finest subband to the coarsest. The proposed method involves only integer operations and can be implemented with very low computational complexity. The preliminary experiments have shown some encouraging results: a PSNR of 30.93 dB is obtained at 0.174 bpp on the test image LENA (512×512).
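    The sketch below gives a toy version of the per-subtree rate-distortion decision implied by the abstract: a coefficient subtree is either declared a zero-tree or coded, whichever yields the lower Lagrangian cost. The quantizer, its rate model, and the multiplier lam are stand-ins for the paper's residual VQ and are not taken from it.

        import numpy as np

        def rd_prune_decision(subtree_coeffs, quantize, rate_of, lam=0.05):
            """Decide between signalling a zero-tree and coding the subtree.

            quantize(x) returns the quantized subtree and rate_of(xq) its rate in
            bits; the subtree is zeroed when D + lam * R favours the zero-tree.
            """
            x = np.asarray(subtree_coeffs, dtype=float)
            cost_zero = np.sum(x ** 2) + lam * 1.0        # one bit to signal the zero-tree
            xq = quantize(x)
            cost_code = np.sum((x - xq) ** 2) + lam * rate_of(xq)
            if cost_zero <= cost_code:
                return 'zerotree', np.zeros_like(x)
            return 'coded', xq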