
    A survey of parallel algorithms for fractal image compression

    This paper presents a short survey of the key research work that has been undertaken in applying parallel algorithms to fractal image compression. The interest in fractal image compression techniques stems from their ability to achieve high compression ratios whilst maintaining very high quality in the reconstructed image. The main drawback of this compression method is the very high computational cost associated with the encoding phase. Consequently, there has been significant interest in exploiting parallel computing architectures to speed up this phase whilst still retaining the advantageous features of the approach. The paper gives a brief introduction to fractal image compression, including the iterated function system theory upon which it is based, and then reviews the different techniques that have been, and can be, applied to parallelize the compression algorithm.
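
    As a concrete illustration of why the encoding phase parallelizes well, the sketch below distributes the independent range-block searches over worker processes. It assumes 8x8 range blocks, 16x16 pre-extracted domain blocks, and a mean-squared-error match; all names and parameters are illustrative and not taken from any particular surveyed system.

```python
# Minimal sketch of data-parallel fractal encoding (illustrative only).
import numpy as np
from multiprocessing import Pool

R = 8  # range block size; domain blocks are 2R x 2R, as in typical FIC schemes

def best_match(args):
    """Exhaustively search the domain pool for one range block (MSE criterion)."""
    range_block, domains = args
    best_err, best_idx = np.inf, -1
    for idx, dom in enumerate(domains):
        # Contract the 2R x 2R domain to R x R by 2x2 pixel averaging.
        shrunk = dom.reshape(R, 2, R, 2).mean(axis=(1, 3))
        # Least-squares contrast (s) and brightness (o) for the affine map.
        sm, rm = shrunk.mean(), range_block.mean()
        s = ((shrunk - sm) * (range_block - rm)).mean() / (shrunk.var() + 1e-12)
        o = rm - s * sm
        err = ((s * shrunk + o - range_block) ** 2).sum()
        if err < best_err:
            best_err, best_idx = err, idx
    return best_idx, best_err

def encode_parallel(image, domains, workers=4):
    """Range blocks are independent, so their searches map cleanly onto processes."""
    blocks = [image[i:i + R, j:j + R]
              for i in range(0, image.shape[0], R)
              for j in range(0, image.shape[1], R)]
    with Pool(workers) as pool:  # simple farm-style data parallelism
        return pool.map(best_match, [(b, domains) for b in blocks])
```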

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was first conceptualized as a model in 1989, and numerous models have since been developed from it. Fractals were initially observed and described through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires far less storage space than the actual image itself, which has led to images being represented in IFS form and has shaped the development of image compression systems. Reducing the time consumed by encoding is very important for achieving optimal compression conditions, and the solutions reviewed in this study indicate that, despite the developments that have taken place, there is still considerable scope for improvement. From the review of the exhaustive range of models presented here, it is evident that numerous advancements in the FIC model have taken place over time and that it has been adapted to image compression at varied levels. This study focuses on the existing range of literature on FIC, and insights into the various models are presented.
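
    To make the storage argument concrete, the toy example below shows the IFS idea the review builds on: three affine contractions of six coefficients each fully determine the Sierpinski triangle, so eighteen numbers replace an arbitrarily detailed raster. This is a generic illustration, not a scheme from the reviewed literature.

```python
# Chaos-game rendering of a three-map IFS attractor (illustrative only).
import random

# Each map is (a, b, c, d, e, f) for (x, y) -> (a*x + b*y + e, c*x + d*y + f).
SIERPINSKI = [
    (0.5, 0.0, 0.0, 0.5, 0.00, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.50, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]

def chaos_game(maps=SIERPINSKI, n_points=50_000):
    """Iterate randomly chosen contractions; the orbit settles onto the attractor."""
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f = random.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points[100:]  # drop the transient before convergence
```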

    Statistical Analysis of Fractal Image Coding and Fixed Size Partitioning Scheme

    Fractal Image Compression (FIC) is a state-of-the-art technique used to achieve high compression ratios, but it lags behind in its encoding time requirements. In this method an image is divided into non-overlapping range blocks and overlapping domain blocks. The total number of domain blocks is larger than the number of range blocks, and each domain block is twice the size of a range block. Together, all the domain blocks form the domain pool. A range block is compared with every candidate domain block for a similarity measure; this process is very time consuming, so the domain pool is decimated to permit a proper domain-range comparison. In this paper a novel domain pool decimation and reduction technique is developed which uses the median as the measure of central tendency of the domain pixel values, instead of the mean (or average).
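
    A minimal sketch of the median-based decimation idea as described above: each domain block is summarized by the median of its pixel values, and a range block is compared only against domains whose median lies close to its own. The function name and the tolerance are illustrative assumptions, not the paper's exact parameters.

```python
# Median-based domain pool decimation (illustrative sketch).
import numpy as np

def decimate_domain_pool(range_block, domain_pool, tol=10.0):
    """Keep only domains whose pixel median is within tol of the range block's."""
    r_med = np.median(range_block)
    return [dom for dom in domain_pool if abs(np.median(dom) - r_med) <= tol]
```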

    A reduced domain pool based on DCT for a fast fractal image encoding

    Fractal image compression is time consuming due to the search for matches between range and domain blocks. To improve this compression method, we first propose a fast method for reducing the computational complexity of fractal encoding by reducing the size of the domain pool. This reduction is based on the lowest horizontal and vertical DCT coefficients of the domain blocks. Experimental results on the test images show that the proposed method reduces the computation time and reaches a high speedup factor without decreasing image quality. Secondly, we combine our method with the AP2D approach, which uses two domain pools in two steps of encoding. A further reduction in encoding time is obtained, again without decreasing image quality.
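
    A sketch of the DCT-based pruning under stated assumptions: domain blocks are taken as already contracted to range size, and a domain survives only if its lowest horizontal and vertical DCT coefficients lie close to those of the range block. The threshold and helper names are illustrative, not the paper's exact design.

```python
# Domain pool reduction by low-order DCT coefficients (illustrative sketch).
import numpy as np
from scipy.fft import dctn

def low_dct_pair(block):
    """Return the lowest horizontal and vertical DCT-II coefficients of a block."""
    coeffs = dctn(block, norm='ortho')
    return np.array([coeffs[0, 1], coeffs[1, 0]])

def reduce_domain_pool(range_block, domain_pool, threshold=5.0):
    """Keep domains whose low-frequency DCT signature matches the range block's."""
    target = low_dct_pair(range_block)
    return [dom for dom in domain_pool
            if np.linalg.norm(low_dct_pair(dom) - target) <= threshold]
```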

    A study and some experimental work of digital image and video watermarking

    The rapid growth of digitized media and the emergence of digital networks have created a pressing need for copyright protection and anonymous communication schemes. Digital watermarking (or, in a more general term, data hiding) is a steganographic technique that adds information to a digital data stream. Several of the most important watermarking schemes applied to multilevel and binary still images and to digital videos were studied, including schemes based on the DCT (Discrete Cosine Transform), the DWT (Discrete Wavelet Transform), and fractal transforms. The question of whether these invisible watermarking techniques can resolve the issue of rightful ownership of intellectual property was discussed. The watermarking schemes were further studied from the point of view of malicious attacks, which is considered an effective way to advance watermarking techniques; in particular, the StirMark robustness tests based on geometrical distortion were carried out. A binary watermarking scheme applied in the DCT domain is presented in this research project. The effect of the binarization procedure necessarily encountered in dealing with binary document images is found to be so strong that most conventional embedding schemes fail when watermarking binary document images, and particular measures have to be taken. The initial simulation results indicate that the proposed technique is promising, though further efforts need to be made.
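
    For concreteness, here is a minimal sketch of DCT-domain embedding in the spirit of the schemes studied: one watermark bit per 8x8 block, carried in the sign of a mid-band coefficient. The coefficient position and embedding strength are assumptions for illustration, not the project's actual scheme.

```python
# Sign-based watermark embedding in the 8x8 DCT domain (illustrative sketch).
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, strength=8.0):
    """Force the sign of an assumed mid-band coefficient to carry one bit."""
    coeffs = dctn(block.astype(float), norm='ortho')
    coeffs[3, 4] = strength if bit else -strength  # assumed mid-band slot
    return idctn(coeffs, norm='ortho')

def extract_bit(block):
    """Read the bit back from the sign of the same coefficient."""
    return dctn(block.astype(float), norm='ortho')[3, 4] > 0
```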

    Combining Fractal Coding and Orthogonal Linear Transforms


    The Design and Implementation of an Image Segmentation System for Forest Image Analysis

    The United States Forest Service (USFS) is developing software systems to evaluate forest resources with respect to qualities such as scenic beauty and vegetation structure. Such evaluations usually involve a large amount of human labor. In this thesis, I discuss the design and implementation of a digital image segmentation system and how to apply it to the analysis of forest images so that automated forest resource evaluation can be achieved. The first major contribution of the thesis is the evaluation of various feature design schemes for segmenting forest images. The other major contribution is the development of a pattern recognition-based image segmentation algorithm. The best system performance was a 61.4% block classification error rate, achieved by combining color histograms with entropy. This performance is better than that obtained by an "intelligent" guess based on prior knowledge about the categories under study, which yields 68.0%.
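
    The winning feature combination, color histograms plus entropy, can be sketched roughly as below. The bin count, grayscale conversion, and nearest-centroid rule are illustrative assumptions rather than the thesis's exact design.

```python
# Per-block color-histogram-plus-entropy features (illustrative sketch).
import numpy as np

def block_features(block, bins=8):
    """Concatenate a per-channel color histogram with the gray-level entropy."""
    hist = np.concatenate([
        np.histogram(block[..., c], bins=bins, range=(0, 256))[0] / block[..., c].size
        for c in range(3)
    ])
    gray = block.mean(axis=2).astype(int)
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return np.append(hist, entropy)

def classify_block(block, centroids):
    """Assign the block to the category with the nearest feature centroid."""
    f = block_features(block)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))
```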

    Maximum Energy Subsampling: A General Scheme For Multi-resolution Image Representation And Analysis

    Image descriptors play an important role in image representation and analysis, and multi-resolution image descriptors can effectively characterize complex images and extract their hidden information. Wavelet descriptors have been widely used in multi-resolution image analysis; however, making the wavelet transform shift- and rotation-invariant produces redundancy and requires complex matching processes. Other multi-resolution descriptors usually depend on further theories or information, such as a filtering function or prior domain knowledge, which not only increases the computational complexity but also introduces errors. We propose a novel multi-resolution scheme that is capable of transforming any kind of image descriptor into its multi-resolution structure with high computational accuracy and efficiency. Our scheme is based on sub-sampling an image into an odd-even image tree; by applying image descriptors to the odd-even image tree, we obtain the corresponding multi-resolution image descriptors. Multi-resolution analysis is based on downsampling expansion with maximum energy extraction, followed by upsampling reconstruction. Since the maximum energy is usually retained in the lowest-frequency coefficients, we perform maximum energy extraction by keeping the lowest coefficients from each resolution level. Our multi-resolution scheme can analyze images recursively and effectively without introducing artifacts or changes to the original images; it produces multi-resolution representations, obtains higher-resolution images using only information from lower resolutions, compresses data, filters noise, extracts effective image features, and can be implemented in parallel.
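
    A simplified sketch of the odd-even subsampling step described above: an image splits into four half-resolution children by the parity of row and column indices, and the maximum-energy child is kept at each level. The sum-of-squares energy measure here is a stand-in for the lowest-coefficient selection the scheme actually uses.

```python
# Odd-even image tree with a maximum-energy selection rule (illustrative sketch).
import numpy as np

def odd_even_children(image):
    """Four parity sub-images: (even,even), (even,odd), (odd,even), (odd,odd)."""
    return [image[r::2, c::2] for r in (0, 1) for c in (0, 1)]

def max_energy_pyramid(image, levels=3):
    """Build a multi-resolution pyramid, keeping the highest-energy child per level."""
    pyramid = [image]
    for _ in range(levels):
        children = odd_even_children(pyramid[-1])
        pyramid.append(max(children, key=lambda s: float((s.astype(float) ** 2).sum())))
    return pyramid
```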