251 research outputs found

    An efficient algorithm for fractal image coding using kick-out and zero contrast conditions

    Refereed conference paper. 2003-2004 > Academic research: refereed > Refereed conference paper. Version of Record, published.

    A fast fractal image coding based on kick-out and zero contrast conditions

    2003-2004 > Academic research: refereed > Publication in refereed journal. Version of Record, published.

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) as a model was conceptualized in 1989, and numerous models have since been developed. Fractals were initially observed and depicted through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires much less space than the actual image, which led to representing images in IFS form and shaped how image compression systems developed. The time consumed for encoding must be addressed to achieve optimal compression, and the solutions reviewed in this study indicate that, despite the developments that have taken place, there remains considerable scope for improvement. From the exhaustive range of models reviewed, it is evident that numerous advancements in FIC have taken place over time and that it has been adapted to image compression at varied levels. This study focuses on the existing literature on FIC, and the insights of the various models are depicted in this study.
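The IFS decoding principle this abstract alludes to — iterating a contractive map until it converges to its fixed point, independently of the starting point — can be sketched in miniature (a one-dimensional toy example of the general idea, not any specific model from the review):

```python
def iterate(w, x0, n=50):
    """Iterate a contractive map w; by the Banach fixed-point theorem
    the orbit converges to the unique fixed point regardless of x0."""
    x = x0
    for _ in range(n):
        x = w(x)
    return x

# A contraction with ratio 0.5 and fixed point x = 4 (since 0.5*4 + 2 = 4).
w = lambda x: 0.5 * x + 2.0
print(round(iterate(w, 100.0), 6))  # 4.0 — same limit from any start
```

In fractal decoding the same principle applies with images in place of numbers: the stored affine maps are iterated from an arbitrary image until the attractor (the decoded image) emerges.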

    Efficient Fractal Image Coding using Fast Fourier Transform

    Fractal coding is a novel technique for image compression. Though the technique has many attractive features, the large encoding time makes it unsuitable for real-time applications. In this paper, an efficient algorithm for fractal encoding which operates on the entire domain image instead of overlapping domain blocks is presented. The algorithm drastically reduces the encoding time compared to the classical full search method. The reduction in encoding time is mainly due to the use of a modified cross-correlation based similarity measure. The implemented algorithm employs an exhaustive search of domain blocks and their isometry transformations to investigate their similarity with every range block. The application of the Fast Fourier Transform in the similarity measure calculation speeds up the encoding process. The proposed eight isometry transformations of a domain block exploit the properties of the Discrete Fourier Transform to minimize the number of Fast Fourier Transform calculations. Experimental studies on the proposed algorithm demonstrate that the encoding time is reduced drastically, with an average speedup factor of 538 with respect to the classical full search method and comparable values of Peak Signal-to-Noise Ratio.
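The core trick described above — computing the similarity of one block against every position of the domain image at once via the FFT — can be sketched as follows (a minimal illustration of the general technique; the function name, the zero-meaning step, and the test data are ours, not the paper's):

```python
import numpy as np

def fft_cross_correlation(domain_image, range_block):
    """Circular cross-correlation of a small range block against every
    shift of the domain image, computed as a product of FFT spectra.
    Both signals are zero-meaned so the peak marks the best structural
    match rather than simply the brightest region."""
    H, W = domain_image.shape
    d = domain_image - domain_image.mean()
    r = range_block - range_block.mean()
    # Zero-pad the block to the full image size; multiplying by the
    # conjugate spectrum turns the spectral product into a correlation.
    corr = np.fft.ifft2(np.fft.fft2(d) * np.conj(np.fft.fft2(r, s=(H, W))))
    return np.real(corr)

rng = np.random.default_rng(0)
domain = rng.random((64, 64))
block = domain[10:26, 20:36]          # a 16x16 block cut from the domain
corr = fft_cross_correlation(domain, block)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # (10, 20): the block correlates best at its own position
```

One FFT pair replaces thousands of per-shift block comparisons, which is the source of the large speedup the abstract reports.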

    Statistical Analysis of Fractal Image Coding and Fixed Size Partitioning Scheme

    Fractal Image Compression (FIC) is a state-of-the-art technique used for high compression ratios, but it lags behind in its encoding time requirements. In this method an image is divided into non-overlapping range blocks and overlapping domain blocks. The total number of domain blocks is larger than the number of range blocks, and the domain blocks are twice the size of the range blocks. Together, all domain blocks create a domain pool. A range block is compared with all possible domain blocks for similarity, so the domain pool is decimated to enable a proper domain-range comparison. In this paper a novel domain pool decimation and reduction technique has been developed which uses the median as the measure of central tendency instead of the mean (or average) of the domain pixel values. However, this process is very time consuming.
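The domain-pool construction and a median-based decimation step in the spirit of this abstract can be sketched as follows (the tolerance threshold, helper names, and block geometry are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def build_domain_pool(image, range_size=8):
    """All overlapping domain blocks (twice the range-block size),
    averaged down 2x so they can be compared with range blocks."""
    d = 2 * range_size
    H, W = image.shape
    pool = []
    for y in range(0, H - d + 1, range_size):
        for x in range(0, W - d + 1, range_size):
            block = image[y:y + d, x:x + d]
            # Average 2x2 neighbourhoods to match the range-block size.
            small = block.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
            pool.append(((y, x), small))
    return pool

def median_decimated_candidates(range_block, pool, tol=10.0):
    """Keep only domains whose median intensity lies within `tol` of the
    range block's median -- the median replacing the usual mean as the
    central-tendency measure (`tol` is an illustrative parameter)."""
    m = np.median(range_block)
    return [(pos, d) for pos, d in pool if abs(np.median(d) - m) <= tol]

img = (np.arange(64 * 64, dtype=float) % 97).reshape(64, 64)
pool = build_domain_pool(img)
r = img[0:8, 0:8]
cands = median_decimated_candidates(r, pool)
print(len(pool), len(cands))
```

Only the surviving candidates are then searched exhaustively for the best affine match, which is what makes decimation pay off.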

    Méthodes hybrides pour la compression d'image (Hybrid methods for image compression)

    Abstract: The storage and transmission of images is the basis of digital electronic communication. In order to communicate a maximum amount of information in a given period of time, one needs to look for efficient ways to represent the information communicated. Designing optimal representations is the subject of data compression. In this work, the compression methods generally consist of two steps, encoding and decoding. During encoding, one expresses the image with less data than the original and stores the resulting information; during decoding, one decodes the compressed data to produce the decompressed image. In Chapter 1, we review some basic compression methods which are important for understanding the concepts of encoding and information theory as tools to build compression models and measure their efficiency. Further on, we focus on transform methods for compression; in particular, we discuss in detail the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). We also analyse the hybrid method which combines DCT and DWT to compress image data. For the sake of comparison, we discuss a totally different method, fractal image compression, which compresses image data by taking advantage of the self-similarity of images. We propose a hybrid method of fractal image compression and DCT based on their characteristics. Several experimental results are provided to show the outcome of the comparison between the discussed methods. This allows us to conclude that the hybrid method performs more efficiently and offers relatively better compressed-image quality than some particular methods, though there are still improvements that can be made in the future.

    Map online system using internet-based image catalogue

    Digital maps carry geodata such as the coordinates that are important in a particular topographic or thematic map. These geodata are especially meaningful in the military field. Because the maps carry this information, the image files are very large; the bigger the size, the bigger the storage required to hold the image file, and loading times grow longer as well. These conditions make the images unsuitable for an image-catalogue approach in an internet environment. With compression techniques, the image size can be reduced while the quality of the image is still preserved without much change. This report focuses on one image compression technique based on wavelet technology, which performs better than many other image compression techniques available today. The compressed images were applied to a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online; users can also download the maps they have bought, in addition to searching for maps by several meaningful keywords. This system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.

    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional CPU processors is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for GPUs, there is a great need to develop software systems that use the GPU to full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the Discrete Cosine Transform.
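The block-DCT compression step that such a thesis accelerates can be sketched on the CPU using SciPy's `dctn`/`idctn` (keeping only low-frequency coefficients is a crude stand-in for quantisation, and the `keep` parameter is our illustrative choice, not the thesis's method):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress_block(block, keep=8):
    """Forward 2-D DCT of a block, retain only the top-left
    keep x keep low-frequency coefficients, then invert.
    With keep equal to the block size the transform is lossless."""
    coeffs = dctn(block, norm='ortho')
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm='ortho')

block = np.outer(np.arange(8, dtype=float), np.ones(8))  # smooth vertical ramp
print(np.allclose(dct_compress_block(block, keep=8), block))  # True: all coefficients kept is lossless
recon = dct_compress_block(block, keep=4)
print(float(np.max(np.abs(recon - block))))  # small but nonzero truncation error
```

On a GPU the same transform is applied to thousands of such blocks in parallel, which is where the speedup comes from.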

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic-signal-model-based examination of this property is the primary contribution of this dissertation.
    The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
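The per-block affine transform described above — a contrast scaling plus a brightness offset fitted to map a domain block onto a range block — can be sketched by least squares (a textbook formulation of the standard fractal-coding step, not the dissertation's exact notation):

```python
import numpy as np

def affine_fit(range_block, domain_block):
    """Least-squares contrast s and brightness o so that
    s * domain + o best approximates the range block -- the affine
    map applied block by block in fractal coding."""
    d = domain_block.ravel()
    r = range_block.ravel()
    n = d.size
    # Normal equations for the 1-D linear regression r ~ s*d + o.
    denom = n * (d @ d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * (d @ r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = float(np.sum((s * d + o - r) ** 2))
    return s, o, err

dom = np.arange(16, dtype=float).reshape(4, 4)
rng_blk = 0.5 * dom + 3.0                 # an exactly affine-related pair
s, o, err = affine_fit(rng_blk, dom)
print(round(s, 6), round(o, 6), round(err, 6))  # 0.5 3.0 0.0
```

"Self-affinity" in the dissertation's sense is precisely the property that, for most range blocks of a natural image, some domain block yields a small residual `err` under such a fit.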