
    Attractor image coding with low blocking effects.

    by Ho, Hau Lai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 97-103).
    Contents:
    1 Introduction
      1.1 Overview of Attractor Image Coding
      1.2 Scope of Thesis
    2 Fundamentals of Attractor Coding
      2.1 Notations
      2.2 Mathematical Preliminaries
      2.3 Partitioned Iterated Function Systems
        2.3.1 Mathematical Formulation of the PIFS
      2.4 Attractor Coding using the PIFS
        2.4.1 Quadtree Partitioning
        2.4.2 Inclusion of an Orthogonalization Operator
      2.5 Coding Examples
        2.5.1 Evaluation Criterion
        2.5.2 Experimental Settings
        2.5.3 Results and Discussions
      2.6 Summary
    3 Attractor Coding with Adjacent Block Parameter Estimations
      3.1 δ-Minimum Edge Difference
        3.1.1 Definition
        3.1.2 Theoretical Analysis
      3.2 Adjacent Block Parameter Estimation Scheme
        3.2.1 Joint Optimization
        3.2.2 Predictive Coding
      3.3 Algorithmic Descriptions of the Proposed Scheme
      3.4 Experimental Results
      3.5 Summary
    4 Attractor Coding using Lapped Partitioned Iterated Function Systems
      4.1 Lapped Partitioned Iterated Function Systems
        4.1.1 Weighting Operator
        4.1.2 Mathematical Formulation of the LPIFS
      4.2 Attractor Coding using the LPIFS
        4.2.1 Choice of Weighting Operator
        4.2.2 Range Block Preprocessing
        4.2.3 Decoder Convergence Analysis
      4.3 Local Domain Block Searching
        4.3.1 Theoretical Foundation
        4.3.2 Local Block Searching Algorithm
      4.4 Experimental Results
      4.5 Summary
    5 Conclusion
      5.1 Original Contributions
      5.2 Subjects for Future Research
    A Fundamental Definitions
    B Appendix B
    Bibliography
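    The PIFS machinery the contents list outlines can be sketched briefly: each small "range" block is approximated by a contractively scaled, intensity-adjusted copy of a larger "domain" block from the same image, and the decoder recovers the attractor by iterating those maps from an arbitrary starting image. The block sizes, the least-squares parameter fit and the contractivity bound below are illustrative choices, not the thesis's exact algorithm:

    ```python
    import numpy as np

    def downsample(block):
        # Average 2x2 pixel groups: maps an 8x8 domain block to range size (4x4).
        return block.reshape(4, 2, 4, 2).mean(axis=(1, 3))

    def fit_affine(dom, rng):
        # Least-squares scale s and offset o with s*dom + o approximating rng.
        d, r = dom.ravel(), rng.ravel()
        dc = d - d.mean()
        s = float(dc @ (r - r.mean())) / max(float(dc @ dc), 1e-12)
        s = max(-0.9, min(0.9, s))  # bound |s| < 1 so the decoder iteration contracts
        return s, float(r.mean() - s * d.mean())

    def encode(img, rs=4, ds=8):
        # For every range block, search all domain blocks for the best affine match.
        H, W = img.shape
        doms = [(y, x, downsample(img[y:y+ds, x:x+ds]))
                for y in range(0, H - ds + 1, ds)
                for x in range(0, W - ds + 1, ds)]
        code = []
        for ry in range(0, H, rs):
            for rx in range(0, W, rs):
                rng = img[ry:ry+rs, rx:rx+rs]
                best = None
                for dy, dx, d in doms:
                    s, o = fit_affine(d, rng)
                    e = float(((s * d + o - rng) ** 2).sum())
                    if best is None or e < best[0]:
                        best = (e, dy, dx, s, o)
                code.append((ry, rx) + best[1:])
        return code

    def decode(code, shape, rs=4, ds=8, iters=12):
        # Iterate the stored maps from a blank image; contractivity drives the
        # iteration toward the attractor that approximates the original.
        img = np.zeros(shape)
        for _ in range(iters):
            nxt = np.empty(shape)
            for ry, rx, dy, dx, s, o in code:
                d = downsample(img[dy:dy+ds, dx:dx+ds])
                nxt[ry:ry+rs, rx:rx+rs] = s * d + o
            img = nxt
        return img
    ```

    On a smooth image the fit is nearly exact and a dozen decoder iterations suffice; real encoders restrict or accelerate the domain search, which is where the schemes of Chapters 3 and 4 intervene.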

    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and another is added to the gamut by this work. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work extrapolates some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
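    The orientation selectivity the abstract refers to is visible already in the conventional separable transform: each scale yields three detail subbands tuned to horizontal, vertical and diagonal structure. A one-level 2D Haar decomposition, as a minimal stand-in for the conventional transform (not the extended transform this thesis proposes):

    ```python
    import numpy as np

    def haar2d_level(img):
        # One level of the separable 2D Haar transform (even image dimensions
        # assumed). The three detail subbands are the "orientation channels":
        # LH picks out horizontal edges, HL vertical edges, HH diagonals.
        a = img[0::2, :] + img[1::2, :]   # row sums (lowpass along rows)
        d = img[0::2, :] - img[1::2, :]   # row differences (highpass)
        LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
        LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
        HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
        HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
        # Total subband size equals the image size: data volume is preserved.
        return LL, LH, HL, HH

    def haar2d_inverse(LL, LH, HL, HH):
        # Exact inverse: the transform is orthogonal up to scaling, so the
        # decomposition loses no information.
        h, w = LL.shape
        a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
        a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
        d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
        img = np.empty((2 * h, 2 * w))
        img[0::2, :], img[1::2, :] = (a + d) / 2.0, (a - d) / 2.0
        return img
    ```

    Applying `haar2d_level` recursively to the LL band produces the multiscale pyramid across which self-similarity of image features can be exploited.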

    Hardware Implementation of a Novel Image Compression Algorithm

    Image-related communications form an increasingly large part of modern communications, bringing the need for efficient and effective compression. Image compression is important for the effective storage and transmission of images. Many techniques have been developed in the past, including transform coding, vector quantization and neural networks. In this thesis, a novel adaptive compression technique is introduced, based on adaptive rather than fixed transforms. The proposed technique is similar to Neural Network (NN)-based image compression, and its superiority over other techniques is presented. It is shown that the proposed algorithm results in higher image quality for a given compression ratio than existing Neural Network algorithms, and that its training is significantly faster than that of the NN-based algorithms. It is also compared to JPEG in terms of Peak Signal to Noise Ratio (PSNR) for a given compression ratio and computational complexity. Advantages of this approach over JPEG are also presented in this thesis.
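    The PSNR figure of merit used for these comparisons has a standard definition; a short sketch (the 8-bit peak value of 255 is an assumption):

    ```python
    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        # Peak Signal-to-Noise Ratio in dB; higher means the reconstruction
        # is closer to the original for the same bit budget.
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        if mse == 0.0:
            return float("inf")   # identical images
        return 10.0 * np.log10(peak ** 2 / mse)
    ```

    Comparisons like those in the thesis fix the compression ratio (original bits divided by compressed bits) and report the PSNR each coder achieves at that ratio.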

    Position Based Coding Scheme and Huffman Coding in JPEG2000: An Experimental analysis

    Abstract: The paper compares the novel position based coding scheme recently introduced by the authors with Huffman coding. In the Position Based Coding Scheme (PBCS), coding is performed by identifying unique elements and reducing redundancies. The results of JPEG2000 image compression with Huffman coding and of JPEG2000 based on PBCS are then compared, and show that PBCS achieves a better compression ratio with higher PSNR and better image quality. The study, which can be considered a logical extension of the image transformation matrix, applies statistical tools to arrive at the novel coding scheme as a direct extension of wavelet-based image compression. The coding scheme can greatly economise bandwidth without compromising picture quality; it is invariant to the existing compression standards and applies to both lossy and lossless compression, which opens the possibility of wide-ranging applications.
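    PBCS itself is the authors' contribution and is not reproduced here, but the Huffman baseline it is measured against can be sketched. A minimal frequency-based Huffman code builder (illustrative only, not the JPEG2000 entropy coder):

    ```python
    import heapq
    from collections import Counter

    def huffman_code(data):
        # Build a prefix-free code: rarer symbols get longer codewords.
        freq = Counter(data)
        if len(freq) == 1:                       # degenerate single-symbol input
            return {next(iter(freq)): "0"}
        # Heap entries: (count, tiebreak, partial codebook for this subtree).
        heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            n1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
            n2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, (n1 + n2, tie, merged))
            tie += 1
        return heap[0][2]
    ```

    For the string `"aaaabbc"` this assigns a one-bit code to `a` and two-bit codes to `b` and `c`, encoding the seven symbols in ten bits; any scheme claiming to beat Huffman must shorten the total coded length below this frequency-optimal baseline.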

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
    The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
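    The "self-affinity" in question can be probed numerically: approximate every range block, in the least-squares sense, by a scale-and-offset map of a spatially contracted domain block drawn from the same image, and record the residual. The block sizes and exhaustive search below are illustrative assumptions, not the dissertation's methodology:

    ```python
    import numpy as np

    def self_affinity_probe(img, rs=4, ds=8):
        # Mean RMS residual when each rs x rs range block is fitted by
        # s * (downsampled ds x ds domain block) + o, with the domain pool
        # drawn from the image itself (requires ds == 2 * rs here).
        # A small value indicates a strongly "self-affine" signal.
        H, W = img.shape
        pool = []
        for y in range(0, H - ds + 1, rs):
            for x in range(0, W - ds + 1, rs):
                d = img[y:y+ds, x:x+ds].reshape(rs, 2, rs, 2).mean(axis=(1, 3))
                pool.append(d.ravel())
        errs = []
        for ry in range(0, H, rs):
            for rx in range(0, W, rs):
                r = img[ry:ry+rs, rx:rx+rs].ravel()
                best = np.inf
                for d in pool:
                    dc = d - d.mean()
                    var = float(dc @ dc)
                    s = float(dc @ (r - r.mean())) / var if var > 0 else 0.0
                    o = r.mean() - s * d.mean()
                    best = min(best, float(((s * d + o - r) ** 2).mean()))
                errs.append(best ** 0.5)
        return float(np.mean(errs))
    ```

    A smooth ramp scores near zero (every block is an affine copy of every other), while white noise scores high, which mirrors the finding that self-affinity depends on signal structure beyond second-order statistics.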

    Image compression techniques using vector quantization
