    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press and, more recently, in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
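
    As a concrete illustration of the affine block representation described above, the following is a minimal sketch of the per-block least-squares fit at the heart of partitioned iterated function system (PIFS) coding. The exhaustive domain search, the 2x2 averaging, and the contrast-scale clamp are illustrative assumptions, not the dissertation's exact construction.

```python
# Minimal sketch of the affine block transform in fractal (PIFS) coding,
# assuming greyscale images as NumPy arrays. Search strategy and clamping
# are illustrative choices only.
import numpy as np

def encode_range_block(range_block, domain_pool):
    """Find the domain block and affine map (scale s, offset o) that best
    approximates range_block in the least-squares sense."""
    r = range_block.ravel().astype(float)
    best = None
    for idx, domain_block in enumerate(domain_pool):
        d = domain_block.ravel().astype(float)
        # Least-squares fit of r ~ s*d + o.
        var_d = d.var()
        s = 0.0 if var_d == 0 else np.cov(d, r, bias=True)[0, 1] / var_d
        s = np.clip(s, -1.0, 1.0)        # keep the intensity map contractive
        o = r.mean() - s * d.mean()
        err = np.sum((s * d + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (squared error, domain index, scale, offset)

def downsample(block):
    """Average 2x2 pixels so a domain block matches the range block size."""
    return block.reshape(block.shape[0] // 2, 2,
                         block.shape[1] // 2, 2).mean(axis=(1, 3))
```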

    A novel fast fractal image compression method based on distance clustering in high dimensional sphere surface

    The fractal encoding method is an effective image compression method because of its high compression ratio and short decompression time, but a known problem of the fractal compression method is its high computational complexity and consequently long compression time. To address this issue, in this paper, distance clustering on a high dimensional sphere surface is applied to speed up the fractal compression method. Firstly, as a preprocessing strategy, an image is divided into blocks, which are mapped onto a high dimensional sphere surface. Secondly, a novel image matching method is presented based on distance clustering on the high dimensional sphere surface. Then, the correctness and effectiveness of the method are analyzed. Finally, experimental results validate the performance gain of the method.
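
    To make the preprocessing idea concrete, here is a hedged sketch: each block is normalised to zero mean and unit norm, which places it on a high dimensional unit sphere, and blocks are then clustered by distance so that matching searches only the nearest cluster. The use of plain k-means is an assumption for illustration; the paper's own clustering rule may differ.

```python
# Sketch of sphere-surface preprocessing plus distance clustering.
# k-means is an assumed stand-in for the paper's clustering method.
import numpy as np

def to_sphere(blocks):
    """Map flattened blocks onto the unit sphere (zero mean, unit norm)."""
    x = blocks.reshape(len(blocks), -1).astype(float)
    x -= x.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # leave flat blocks at the origin
    return x / norms

def cluster_blocks(points, k=16, iters=20, seed=0):
    """Plain k-means on the sphere points; returns centroids and labels."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels
```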

    Distributed video through telecommunication networks using fractal image compression techniques

    The research presented in this thesis investigates the use of fractal compression techniques for a real time video distribution system. The motivation for this work was that the method has some useful properties which satisfy many requirements for video compression. In addition, as a novel technique, the fractal compression method has great potential. In this thesis, we initially develop an understanding of the state of the art in image and video compression and describe the mathematical concepts and basic terminology of the fractal compression algorithm. Several schemes which aim to improve the algorithm for still images are then examined. Amongst these, two novel contributions are described. The first is the partitioning of the image into sections, which resulted in a significant reduction of the compression time. In the second, the use of the median metric as an alternative to the RMS metric was considered but was not finally adopted, since the RMS proved to be a more efficient measure. The extension of the fractal compression algorithm from still images to image sequences is then examined, and three different schemes to reduce the temporal redundancy of the video compression algorithm are described. The reduction in the execution time of the compression algorithm that can be obtained by the techniques described is significant, although real time execution has not yet been achieved. Finally, the basic concepts of distributed programming and networks, as basic elements of a video distribution system, are presented, and the hardware and software components of a fractal video distribution system are described. The implementation of the fractal compression algorithm on a TMS320C40 is also considered for speed benefits, and it is found that a relatively large number of processors is needed for real time execution.
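
    The two block-distance measures compared in the thesis can be stated in a few lines. The sketch below shows the RMS metric that was retained alongside the median metric that was evaluated and rejected, assuming equally sized greyscale blocks as NumPy arrays.

```python
# The two block-distance measures discussed above, side by side.
import numpy as np

def rms_metric(a, b):
    """Root-mean-square difference between two blocks."""
    diff = a.astype(float) - b.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def median_metric(a, b):
    """Median absolute difference: less sensitive to outlier pixels,
    but (per the thesis) a less efficient measure in practice."""
    diff = np.abs(a.astype(float) - b.astype(float))
    return np.median(diff)
```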

    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and another is added to the gamut by this work. The tree structured vector quantizer presented here is on-line and self structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
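
    As a minimal illustration of the orientation selectivity at issue, the following single-level 2D Haar decomposition separates an image into subbands responding to horizontal, vertical, and diagonal structure. The Haar filter is an assumption chosen for brevity; the thesis's new transform is a more orientation-selective design.

```python
# One level of the 2D Haar wavelet transform (image dims must be even).
# Subband naming conventions vary; the orientation comments are indicative.
import numpy as np

def haar2d(image):
    x = image.astype(float)
    # Rows: average and difference of adjacent pixel pairs.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Columns: repeat on each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # smooth approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # horizontal structure
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # vertical structure
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh
```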

    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since clock speeds of conventional CPUs are no longer rising, designers are coming up with multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for utilizing GPUs, there is a great need to develop operating systems that handle the GPU to full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the Discrete Cosine Transform.
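
    The 8x8 block DCT that such a GPU implementation parallelises has a simple data-parallel structure: every block is transformed independently as C B C^T. The NumPy sketch below is an illustrative CPU reference for that structure, not the thesis's GPU kernel.

```python
# 8x8 block DCT-II as a batched matrix product: the per-block independence
# is exactly what a GPU exploits. CPU reference sketch only.
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal DCT-II basis matrix.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = 1.0 / np.sqrt(N)

def block_dct(image):
    """Apply the 8x8 DCT to every block of an image with dims divisible by 8."""
    h, w = image.shape
    blocks = image.astype(float).reshape(h // N, N, w // N, N).swapaxes(1, 2)
    coeffs = C @ blocks @ C.T          # broadcasts over all blocks at once
    return coeffs.swapaxes(1, 2).reshape(h, w)
```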

    Detectability model for the evaluation of lossy compression methods on radiographic images

    The purpose of image data compression is to represent data efficiently without loss of information. This involves identification and removal of unnecessary information. Uncompressed image data is typically represented in a way that is highly redundant. The need for data reduction arises due to limitations on storage space or transmission time. Although the storage capacity of magnetic media keeps increasing, the demand for data compression has been growing steadily. The Nuclear Regulatory Commission requires that radiographs be stored for 100 years. Film radiographs degrade with age, so the radiograph is generally digitized at between 35 and 100 micron spatial resolution and 12 bits per pixel. For an 11x14 inch radiograph this requires on the order of 30 Mbytes of storage. Data compression is necessary to increase the number of images that can be stored. Factors used in the evaluation of compression are the amount of compression provided, the speed of compression and decompression, memory requirements, and the mean square error (MSE). Since radiographs are viewed by the human eye, it is very important that compression does not introduce any visible artifacts, so it is necessary to evaluate the visual impact of the error due to compression. In this thesis, a method is presented which calculates the visual distortion of the compressed image as compared to the original image. This method is based on a model of the human eye.
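
    To make the evaluation criteria concrete, the sketch below computes the MSE and PSNR mentioned above, together with a frequency-weighted error as a crude stand-in for an eye-model-based distortion measure. The Gaussian frequency weighting is an assumption; the thesis's human visual system model is more elaborate.

```python
# Objective quality measures for a compressed radiograph vs the original.
# The weighted measure is an assumed illustration, not the thesis's model.
import numpy as np

def mse(original, compressed):
    return np.mean((original.astype(float) - compressed.astype(float)) ** 2)

def psnr(original, compressed, peak=4095.0):   # 12-bit data -> peak 4095
    e = mse(original, compressed)
    return np.inf if e == 0 else 10 * np.log10(peak ** 2 / e)

def weighted_visual_error(original, compressed, sigma=4.0):
    """Weight the error spectrum toward lower spatial frequencies, a crude
    proxy for visual sensitivity, before summing."""
    err = np.fft.fft2(original.astype(float) - compressed.astype(float))
    fy = np.fft.fftfreq(original.shape[0])[:, None]
    fx = np.fft.fftfreq(original.shape[1])[None, :]
    weight = np.exp(-(fx ** 2 + fy ** 2) * sigma ** 2)
    return np.sum(weight * np.abs(err) ** 2) / err.size
```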

    Parallel implementation of fractal image compression

    Thesis (M.Sc.Eng.), University of Natal, Durban, 2000. Fractal image compression exploits the piecewise self-similarity present in real images as a form of information redundancy that can be eliminated to achieve compression. This theory, based on Partitioned Iterated Function Systems, is presented. As an alternative to the established JPEG, it provides a similar compression-ratio to fidelity trade-off. Fractal techniques promise faster decoding and potentially higher fidelity, but the computationally intensive compression process has prevented commercial acceptance. This thesis presents an algorithm mapping the problem onto a parallel processor architecture, with the goal of reducing the encoding time. The experimental work involved implementation of this approach on the Texas Instruments TMS320C80 parallel processor system. Results indicate that the fractal compression process is unusually well suited to parallelism, with speed gains approximately linearly related to the number of processors used. Parallel processing issues such as coherency, management and interfacing are discussed. The code designed incorporates pipelining and parallelism on all conceptual and practical levels, ensuring that all resources are fully utilised and achieving close to optimal efficiency. The computational intensity was reduced by several means, including conventional classification of image sub-blocks by content, with comparisons across class boundaries prohibited. A faster approach adopted was to perform estimate comparisons between blocks based on pixel value variance, identifying candidates for more time-consuming, accurate RMS inter-block comparisons. These techniques, combined with the parallelism, allow compression of 512x512 pixel, 8 bit images in under 20 seconds, while maintaining a 30 dB PSNR. This is up to an order of magnitude faster than reported for conventional sequential processor implementations. Fractal based compression of colour images and video sequences is also considered. The work confirms the potential of fractal compression techniques, and demonstrates that a parallel implementation is appropriate for addressing the compression time problem. The processor system used in these investigations is faster than currently available PC platforms, but the relevance lies in the anticipation that future generations of affordable processors will exceed its performance. The advantages of fractal image compression may then be accessible to the average computer user, leading to commercial acceptance.
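
    The variance-based pre-screening described above can be sketched directly: cheap variance comparisons shortlist candidate blocks, and only the shortlist receives the expensive RMS comparison. The shortlist size is an illustrative assumption rather than the thesis's tuned value.

```python
# Variance pre-screening before the costly RMS comparison, per the idea
# described above. Shortlist size is an assumed parameter.
import numpy as np

def shortlist_by_variance(range_block, domain_blocks, keep=8):
    """Indices of the domain blocks whose pixel-value variance is closest
    to that of the range block."""
    rv = range_block.var()
    dv = np.array([d.var() for d in domain_blocks])
    return np.argsort(np.abs(dv - rv))[:keep]

def best_match(range_block, domain_blocks):
    """Full RMS comparison restricted to the variance shortlist."""
    r = range_block.astype(float)
    candidates = shortlist_by_variance(range_block, domain_blocks)
    errs = [np.sqrt(np.mean((domain_blocks[i].astype(float) - r) ** 2))
            for i in candidates]
    return candidates[int(np.argmin(errs))]
```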

    The Performance Of Fractal Image Compression On Different Imaging Modalities Using Objective Quality Measures

    In an era of high speed data transmission and of large volumes of data to be stored in the minimum available space, compression is a very prominent concern. Various techniques have been applied for this purpose; one of the most useful is fractal image compression, in which the main task is to reduce transmission time and storage requirements. In this paper, the primary objective is to determine the performance of fractal image compression using quad-tree decomposition applied to different imaging modalities, using objective quality factors such as mean square error, Peak Signal to Noise Ratio (PSNR), average difference, maximum difference, normalized correlation, mean absolute error, and structural correlation. From the results we observed that the proposed method performs better than the existing technique.
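
    A minimal sketch of the quad-tree decomposition step, assuming a square image and a variance-based split criterion (the paper's own split rule and thresholds may differ): a block is split into four quadrants whenever its detail exceeds a threshold, down to a minimum block size.

```python
# Quad-tree partitioning of an image into variable-size leaf blocks.
# Variance criterion and thresholds are illustrative assumptions.
import numpy as np

def quadtree(image, x=0, y=0, size=None, threshold=50.0, min_size=4):
    """Return a list of (x, y, size) leaf blocks covering the image."""
    if size is None:
        size = image.shape[0]          # assume a square, power-of-two image
    block = image[y:y + size, x:x + size]
    if size <= min_size or block.var() <= threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(image, x + dx, y + dy, half,
                               threshold, min_size)
    return leaves
```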

    Fractal compression and analysis on remotely sensed imagery

    Remote sensing images contain a huge amount of geographical information and reflect the complexity of geographical features and spatial structures. As a means of observing and describing geographical phenomena, the rapid development of remote sensing has provided an enormous amount of geographical information. This massive information is very useful in a variety of applications, but its sheer bulk has grown beyond what can be analyzed and used efficiently and effectively. This uneven growth between the technologies for gathering information and those for analyzing it has created difficulties in storage, transfer, and processing. Fractal geometry provides a means of describing and analyzing the complexity of different geographical features in remotely sensed images. It also provides a more powerful tool for compressing remote sensing data than traditional methods. This study applies, for the first time, this use of fractals to remotely sensed images. Based on fractal concepts, compression and decompression algorithms were developed and applied to Landsat TM images of eight study areas with different land cover types; the fidelity and efficiency of the algorithms and their relationship with the spatial complexity of the images were evaluated. Three research hypotheses were tested, and the fractal compression was compared with two commonly used compression methods, JPEG and WinZip. The effects of spatial complexity and pixel resolution on the compression rate were also examined. The results from this study show that the fractal compression method has a higher compression rate than JPEG and WinZip. As expected, higher compression rates were obtained from images of lower complexity and from images of lower spatial resolution (larger pixel size). This study shows that, in addition to fractals' use in measuring, describing, and simulating the roughness of landscapes in geography, fractal techniques are useful in remotely sensed image compression. Moreover, the compression technique can be seen as a new method of measuring diverse landscapes and geographical features. As such, this study has introduced a new and advantageous avenue for fractal applications in remote sensing.
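
    One common way to quantify the spatial complexity that the study relates to compression rate is a box-counting estimate of fractal dimension. The sketch below computes such an estimate for a binary feature mask; box-counting is an assumption here, as the abstract does not name the study's own complexity measure.

```python
# Box-counting estimate of fractal dimension for a binary feature mask
# (True = feature present), as one possible spatial-complexity measure.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a non-empty binary mask."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # Slope of log(count) vs log(1/size) gives the dimension estimate.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```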