
    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research. These are the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently and another is added to the gamut by this work. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
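    To make the wavelet machinery referred to above concrete, the following is a minimal sketch, assuming NumPy and an even-sized image, of a single-level 2-D Haar wavelet decomposition into one approximation band and three orientation-selective detail bands. It illustrates the conventional transform the thesis starts from, not the more orientation-selective transform it proposes; the function name haar2d_level is an illustrative assumption.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition.

    Splits an image (height and width assumed even) into an approximation
    band LL and three orientation-selective detail bands LH, HL, HH --
    the structure exploited by wavelet coders.
    """
    img = img.astype(float)
    # Transform rows: scaled sums and differences of adjacent columns.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Transform columns of each intermediate result.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

# Example: decompose a random 8x8 block; ll can be decomposed again
# to build the usual multi-level pyramid.
ll, lh, hl, hh = haar2d_level(np.random.rand(8, 8))
```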

    Wavelet and Fourier bases on Fractals

    In this thesis we first develop a geometric framework for spectral pairs and for orthonormal families of complex exponential functions in L2-spaces with respect to a given Borel probability measure that is compactly supported. Secondly, we develop wavelet bases on L2-spaces based on limit sets of different iteration systems. In the framework of spectral pairs we consider families of exponential functions with a countable index set G whose difference set is equal to all integers, and we determine the L2-spaces in which these functions are orthonormal or constitute a basis. We also consider invariant measures on Cantor sets and study for which measures there is a family of exponential functions that is an orthonormal basis for the L2-space with respect to this measure. For the case of Cantor sets the families of exponential functions are obtained via Hadamard matrices. For the study of wavelet bases, we set up a multiresolution analysis on fractal sets derived from limit sets of Markov Interval Maps. For this we consider the translation by integers of a non-atomic measure supported on the limit set of such a system and give a thorough investigation of the space of square integrable functions with respect to this measure. We define an abstract multiresolution analysis, prove the existence of mother wavelets and then apply these abstract results to Markov Interval Maps. Even though, in our setting, the corresponding scaling operators are in general not unitary, we are able to give a complete description of the multiresolution analysis in terms of multiwavelets. We also set up a multiresolution analysis for enlarged fractals in one and two dimensions, which are sets arising from fractals generated by iterated function systems, so that the enlarged fractals are dense in the line or plane, respectively. The measure supported on the fractal is also extended to a measure on the enlarged fractal. We then construct a wavelet basis via a multiresolution analysis on the L2-space with respect to the measure supported on the enlarged fractal, with the characteristic function of the original fractal as the father wavelet. In this construction we have two unitary operators. Finally, we also apply the wavelet bases on enlarged fractals in two dimensions to image compression.
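    For orientation, a standard example of a spectral pair on a Cantor-type set, due to Jorgensen and Pedersen, is sketched below. It is included only to illustrate the notions of an exponential orthonormal basis and an associated Hadamard matrix; it is not claimed to be the particular construction developed in the thesis.

```latex
% Classical quarter-Cantor example of a spectral pair (Jorgensen-Pedersen),
% shown only to illustrate the notions used in the abstract above.
\[
  \mu_4 \ \text{is the invariant measure of the IFS}\quad
  \tau_0(x)=\tfrac{x}{4},\qquad \tau_2(x)=\tfrac{x+2}{4},
\]
\[
  \Lambda=\Bigl\{\textstyle\sum_{k=0}^{n}4^{k}\ell_k \ :\ \ell_k\in\{0,1\},\ n\in\mathbb{N}\Bigr\},
  \qquad
  \{\,e^{2\pi i\lambda x}\ :\ \lambda\in\Lambda\,\}
  \ \text{is an orthonormal basis of}\ L^2(\mu_4),
\]
\[
  \text{with associated Hadamard matrix}\quad
  \tfrac{1}{\sqrt{2}}
  \begin{pmatrix} 1 & 1 \\ 1 & e^{2\pi i\,(2\cdot 1)/4} \end{pmatrix}
  =
  \tfrac{1}{\sqrt{2}}
  \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
\]
```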

    Méthodes hybrides pour la compression d'image (Hybrid methods for image compression)

    Abstract: The storage and transmission of images is the basis of digital electronic communication. In order to communicate a maximum amount of information in a given period of time, one needs to look for efficient ways to represent the information communicated. Designing optimal representations is the subject of data compression. The compression methods considered in this work generally consist of two steps: encoding and decoding. During encoding, the image is represented by less data than the original and this representation is stored; during decoding, the compressed data are decoded to reconstruct the decompressed image. In Chapter 1, we review some basic compression methods which are important in understanding the concepts of encoding and information theory as tools to build compression models and measure their efficiency. Further on, we focus on transform methods for compression; in particular, we discuss the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) in detail. We also analyse the hybrid method which combines the DCT and DWT to compress image data. For the sake of comparison, we discuss a totally different method, fractal image compression, which compresses image data by taking advantage of the self-similarity of images. We propose a hybrid method of fractal image compression and the DCT, based on their respective characteristics. Several experimental results are provided to show the outcome of the comparison between the discussed methods. These allow us to conclude that the hybrid method performs more efficiently and offers relatively better compressed-image quality than some particular methods, although some improvements can still be made in the future.
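    As a rough illustration of the transform-coding step shared by the DCT-based and hybrid methods discussed above, the following sketch, assuming NumPy, builds the orthonormal DCT-II matrix and keeps only the largest-magnitude coefficients of an 8x8 block. The function names and the choice of keeping 10 coefficients are illustrative assumptions, not the codec evaluated in the thesis.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C @ block @ C.T gives the 2-D DCT."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def compress_block(block, keep=10):
    """Keep only the `keep` largest-magnitude DCT coefficients of a block."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                      # forward 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0         # discard small coefficients
    return C.T @ coeffs @ C                       # inverse DCT -> approximation

block = np.random.rand(8, 8) * 255
approx = compress_block(block, keep=10)
mse = np.mean((block - approx) ** 2)              # distortion introduced
```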

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) as a model was conceptualized in 1989, and numerous models have since been developed from it. Fractals were initially observed and described through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires much less storage space than the actual image, which has led to images being represented in IFS form and has shaped how fractal image compression systems have developed. The time consumed by encoding must be addressed to achieve optimal compression, and the solutions surveyed in this study indicate that, despite the developments that have taken place, there remains considerable scope for improvement. From the exhaustive range of models reviewed, it is evident that numerous advancements have been made to the FIC model over time and that it has been adapted to image compression at varied levels. This study focuses on the existing literature on FIC, and insights into the various models are presented.
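    The encoding-time problem discussed above stems from the exhaustive range-domain search at the heart of plain fractal (IFS-based) encoding. The following sketch, assuming NumPy, shows that full search for a single range block; the function name and block sizes are illustrative, and the fast-search approaches surveyed in the paper aim to replace exactly this loop with cheaper strategies.

```python
import numpy as np

def encode_range_block(range_block, domain_blocks):
    """Exhaustive search for the best domain block for one range block.

    For each (downsampled) domain block D, fit the affine map s*D + o in
    the least-squares sense and keep the match with the smallest error.
    This full search is what makes plain fractal encoding slow.
    """
    r = range_block.ravel().astype(float)
    best = None
    for idx, D in enumerate(domain_blocks):
        d = D.ravel().astype(float)
        var = d.var()
        s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var  # contrast scale
        o = r.mean() - s * d.mean()                                   # brightness offset
        err = np.mean((s * d + o - r) ** 2)                           # collage error
        if best is None or err < best[3]:
            best = (idx, s, o, err)
    return best  # (domain index, scale, offset, error)

# Toy example: one 4x4 range block searched against 100 candidate domain blocks.
rng = np.random.default_rng(0)
result = encode_range_block(rng.random((4, 4)), [rng.random((4, 4)) for _ in range(100)])
```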

    Digital image compression


    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
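    The "approximate invariance" under block-wise affine transforms described above can be illustrated by the decoding side of fractal compression: starting from an arbitrary image, the stored maps are applied repeatedly until the image converges to their fixed point. The sketch below, assuming NumPy, 4x4 range blocks and 8x8 domain blocks, is a toy illustration of this iteration, not the dissertation's stochastic modelling framework.

```python
import numpy as np

def fractal_decode(transforms, shape, n_iter=12):
    """Iterate the block-wise affine maps from an arbitrary start image.

    `transforms` maps each 4x4 range-block position (i, j) to a tuple
    (di, dj, s, o): the top-left corner of an 8x8 domain block, a contrast
    scale s and an offset o. Because the combined map is contractive, the
    iteration converges to (an approximation of) the encoded image
    regardless of the starting point -- the approximate invariance above.
    """
    img = np.zeros(shape)
    for _ in range(n_iter):
        nxt = np.empty_like(img)
        for (i, j), (di, dj, s, o) in transforms.items():
            D = img[di:di + 8, dj:dj + 8]
            # Downsample the 8x8 domain block to 4x4 by 2x2 averaging.
            D = 0.25 * (D[0::2, 0::2] + D[1::2, 0::2] + D[0::2, 1::2] + D[1::2, 1::2])
            nxt[i:i + 4, j:j + 4] = s * D + o       # apply stored affine map
        img = nxt
    return img

# Toy usage: an 8x8 image of four range blocks, each mapped from the
# top-left 8x8 domain block (indices and parameters are illustrative).
t = {(i, j): (0, 0, 0.5, 10.0) for i in (0, 4) for j in (0, 4)}
decoded = fractal_decode(t, (8, 8))
```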