
    Efficient Fractal Image Coding using Fast Fourier Transform

    Fractal coding is a novel technique for image compression. Although the technique has many attractive features, its large encoding time makes it unsuitable for real-time applications. In this paper, an efficient algorithm for fractal encoding is presented which operates on the entire domain image instead of on overlapping domain blocks. The algorithm drastically reduces the encoding time compared with the classical full-search method, mainly through the use of a modified cross-correlation based similarity measure. The implemented algorithm employs an exhaustive search of domain blocks and their isometry transformations to investigate their similarity with every range block. The application of the Fast Fourier Transform in the similarity measure calculation speeds up the encoding process, and the proposed eight isometry transformations of a domain block exploit properties of the Discrete Fourier Transform to minimize the number of Fast Fourier Transform calculations. Experimental studies on the proposed algorithm demonstrate that the encoding time is reduced drastically, with an average speed-up factor of 538 with respect to the classical full-search method and comparable values of Peak Signal-to-Noise Ratio.
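    The key to the reported speed-up is that the similarity of one range block to every candidate position in the domain image can be evaluated with a few FFTs instead of a sliding-window search. The NumPy sketch below is not the paper's implementation; it only illustrates the FFT cross-correlation idea, and the mean removal is a simplification standing in for the paper's "modified" measure (in a full coder the domain FFT would be computed once and reused, and the isometries handled via DFT symmetries as the abstract describes).

```python
import numpy as np

def fft_correlation_map(domain_img, range_block):
    """Correlate one range block against every position of the domain image
    with two forward FFTs and one inverse FFT. corr[u, v] is the (circular)
    correlation with the domain patch whose top-left corner is (u, v); its
    peak marks the most similar location."""
    H, W = domain_img.shape
    h, w = range_block.shape
    # Removing the means makes the peak insensitive to overall brightness;
    # this stands in for the paper's modified similarity measure.
    d = domain_img - domain_img.mean()
    b = np.zeros((H, W))
    b[:h, :w] = range_block - range_block.mean()
    return np.real(np.fft.ifft2(np.fft.fft2(d) * np.conj(np.fft.fft2(b))))

# Toy usage: recover the position an 8x8 block was cut from.
rng = np.random.default_rng(0)
domain = rng.random((64, 64))
block = domain[10:18, 20:28]
corr = fft_correlation_map(domain, block)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(int(peak[0]), int(peak[1]))   # 10 20 for this synthetic example
```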

    An acceleration technique for fractal image compression (Une technique d'accélération pour la compression fractale d'images)

    Fractal image compression techniques still suffer from very long encoding times. We propose a new recursive optimization algorithm, coupled with a Jacquin-type compression scheme. A numerical predicate is used to decide which candidate blocks can possibly match a given initial block, which avoids comparisons that are costly in computation time. The method yields considerable speed-ups with a predicate as simple as inclusion of grey-level histograms, and it is moreover compatible with other acceleration methods.
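    As a rough illustration of how such a predicate prunes the search: the abstract does not define histogram inclusion precisely, so the bin-wise coverage test in this Python sketch is only one plausible reading, and the bin count is a hypothetical parameter.

```python
import numpy as np

def grey_histogram(block, bins=32):
    """Coarse grey-level histogram of a block (counts per intensity bin)."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    return hist

def may_match(range_block, domain_block, bins=32):
    """Cheap pruning predicate: keep the pair only if the domain block's
    histogram covers the range block's histogram bin by bin."""
    return bool(np.all(grey_histogram(domain_block, bins) >=
                       grey_histogram(range_block, bins)))

def candidate_domains(range_block, domain_blocks):
    """Filter the domain pool with the predicate; the expensive least-squares
    comparison of Jacquin-type coding is then run only on the survivors."""
    return [d for d in domain_blocks if may_match(range_block, d)]
```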

    A fast fractal image coding based on kick-out and zero contrast conditions


    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional CPU cores is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for GPU computing, there is a great need to develop operating systems that handle the GPU to its full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the Discrete Cosine Transform.
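    For reference, the operation being offloaded is the 2-D DCT-II applied independently to small image blocks, and that per-block independence is exactly what a GPU exploits. The sketch below is a plain CPU reference in Python/NumPy, not the thesis's GPU code; the 8x8 block size and JPEG-style level shift are assumptions.

```python
import numpy as np

def dct2_block(block):
    """Orthonormal 2-D DCT-II of a square block via the DCT basis matrix C,
    so that coeffs = C @ block @ C.T. A GPU version runs this (or an
    equivalent factorized form) over all blocks of the image in parallel."""
    N = block.shape[0]
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

# Toy usage: transform one 8x8 block of a random "image".
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
coeffs = dct2_block(img - 128.0)   # level shift as in JPEG
print(coeffs[0, 0])                # DC coefficient of the block
```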

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
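    In the standard Jacquin-style formulation (which the abstract alludes to but does not spell out), the "set of affine transforms on image blocks" takes the following form; this is textbook fractal coding rather than anything specific to the dissertation.

```latex
% Jacquin-style block transform: each range block R_i is approximated from a
% spatially contracted domain block D_i of the same image,
\[
  R_i \;\approx\; s_i\,\phi(D_i) + o_i\,\mathbf{1}, \qquad |s_i| < 1,
\]
% where \phi decimates D_i to the range-block size (and applies one of the
% usual isometries), s_i is a contrast scaling, o_i a brightness offset, and
% \mathbf{1} the all-ones block. Only s_i, o_i, the position of D_i and the
% isometry are stored; |s_i| < 1 is the usual sufficient contractivity
% condition guaranteeing that iterating the combined map from any starting
% image converges to an approximation of the original. "Self-affinity" asks
% how well natural images admit such a representation.
```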

    Flexible Network Binarization with Layer-wise Priority

    How to effectively approximate real-valued parameters with binary codes plays a central role in neural network binarization. In this work, we reveal an important fact: binarizing different layers has widely varying effects on the compression ratio of the network and the loss of performance. Based on this fact, we propose a novel and flexible neural network binarization method by introducing the concept of layer-wise priority, which binarizes parameters in inverse order of their layer depth. In each training step, our method selects a specific network layer, minimizes the discrepancy between the original real-valued weights and their binary approximations, and fine-tunes the whole network accordingly. During this iterative process, we can flexibly decide whether or not to binarize the remaining floating-point layers, exploring a trade-off between the loss of performance and the compression ratio of the model. The resulting binary network is applied to efficient pedestrian detection. Extensive experimental results on several benchmarks show that, under the same compression ratio, our method achieves a much lower miss rate and faster detection speed than the state-of-the-art neural network binarization method. Comment: More experiments on image classification are planned.
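    A minimal sketch of the per-layer step, assuming the discrepancy objective takes the usual closed-form-solvable shape ||W - aB||^2 with B in {-1, +1} (an XNOR-Net-style assumption; the paper may use a different objective), and reading "inverse order of layer depth" as deepest layers first; the fine-tuning between steps is omitted.

```python
import numpy as np

def binarize_layer(W):
    """Closed-form minimizer of ||W - alpha*B||_F over alpha and B in {-1,+1}:
    B = sign(W), alpha = mean(|W|) (assumed objective, see lead-in)."""
    B = np.where(W >= 0, 1.0, -1.0)
    alpha = np.abs(W).mean()
    return alpha, B

def binarize_by_priority(layers, num_to_binarize):
    """Binarize the deepest `num_to_binarize` layers, deepest first, leaving
    the remaining layers in floating point (fine-tuning omitted)."""
    out = [W.copy() for W in layers]
    for idx in range(len(layers) - 1, len(layers) - 1 - num_to_binarize, -1):
        alpha, B = binarize_layer(out[idx])
        out[idx] = alpha * B          # replace weights by their binary code
    return out

# Toy usage: a 4-layer "network" with only its last two layers binarized.
rng = np.random.default_rng(0)
net = [rng.standard_normal((16, 16)) for _ in range(4)]
mixed = binarize_by_priority(net, num_to_binarize=2)
print(np.unique(np.sign(mixed[-1])).tolist())   # [-1.0, 1.0]
```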

    A fractal image compression algorithm based on improved imperialist competitive algorithm

    Fractal image compression (FIC) is a lossy compression method that has the potential to improve the performance of image transmission and image storage and to provide security against illicit monitoring. The important features of FIC are a high compression ratio and high resolution of the decompressed images, but the main problem of FIC is the computational complexity of the algorithm. Besides that, FIC also suffers from a high number of Mean Square Error (MSE) computations in the search for the best match between range blocks and domain blocks, which limits the algorithm. In this thesis, two approaches are proposed. Firstly, a new algorithm based on the imperialist competitive algorithm (ICA) is introduced. This is followed by a two-tier algorithm as the second approach, to further improve the performance of the algorithm and reduce the MSE computations of FIC. In the first tier, all the range and domain blocks are classified by their edge properties using the Discrete Cosine Transform. In the second tier, ICA is applied to the classified blocks. In the ICA, the solutions are divided into two groups, known as developed and undeveloped countries, to maintain the quality of the retrieved image and accelerate the operation of the algorithm; the MSE value is calculated only for the developed countries. Experimental results show that the proposed algorithm performs better than genetic algorithms (GAs) and the full-search algorithm in terms of MSE computation. Moreover, in terms of Peak Signal-to-Noise Ratio, the approaches produce high-quality decompressed images, better than those of the GAs.
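    The first-tier classification can be pictured as follows. The thesis's exact DCT-based rule is not stated in the abstract, so the smooth/horizontal/vertical split on the two lowest-frequency AC coefficients below (a common choice in the FIC literature) is only an illustrative stand-in, and the threshold is hypothetical.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block."""
    N = block.shape[0]
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

def classify_block(block, smooth_thresh=10.0):
    """Coarse edge classification of a range/domain block from its two
    lowest-frequency AC coefficients (threshold is hypothetical)."""
    c = dct2(block)
    lr, tb = abs(c[0, 1]), abs(c[1, 0])   # left-right vs top-bottom variation
    if max(lr, tb) < smooth_thresh:
        return "smooth"
    # Dominant left-right variation corresponds to a vertically oriented edge.
    return "vertical_edge" if lr >= tb else "horizontal_edge"

# Only blocks in the same class are compared in the second (ICA) tier,
# which is where the saving in MSE computations comes from.
```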

    Digital Image Processing

    Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding. Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field which has an ever increasing number of areas of application. The strengths of this large book lie in:
    • excellent explanatory introduction to the subject;
    • thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
    • comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
    • discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
    • detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
    • coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
    • discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
    • investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
    • valuable summary of the important results obtained in each Chapter given at its end;
    • suggestions for further reading at the end of each Chapter.
    I warmly commend this text to all readers, and trust that they will find it to be invaluable.
    Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England

    Efficient architectures of heterogeneous FPGA-GPU for 3-D medical image compression

    The advent of three-dimensional (3-D) imaging modalities has generated a massive amount of volumetric data in 3-D images such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US). An existing survey reveals a large gap for further research in exploiting reconfigurable computing for 3-D medical image compression. This research proposes an FPGA-based co-processing solution to accelerate such a medical imaging system. The HWT block is implemented on the sbRIO-9632 prototyping board, which carries a Spartan-3 (XC3S2000) FPGA chip. Analysis and performance evaluation of the 3-D images were conducted. Furthermore, a novel architecture of the context-based adaptive binary arithmetic coder (CABAC), the advanced entropy coding tool employed by the main and higher profiles of H.264/AVC, is proposed. This research focuses on a GPU implementation of CABAC and a comparative study of 3-D medical image compression systems with and without the discrete wavelet transform (DWT). Implementation results on MRI and CT images show the GPU significantly outperforming a single-threaded CPU implementation. Overall, CT and MRI modalities with DWT outperform images without the DWT process in terms of compression ratio, peak signal-to-noise ratio (PSNR) and latency. For heterogeneous computing, MRI images of various sizes and formats, such as JPEG and DICOM, were processed. Evaluation results show that, for each memory iteration, transfers from GPU to CPU consume more bandwidth or throughput. For a JPEG image of 786,486 bytes, the bandwidth consumed in the two directions tends to balance. Bandwidth is relative to the transfer size: larger transfers incur higher latency and throughput. Next, an OpenCL implementation of concurrent tasks on a dedicated FPGA was developed. The findings reveal that OpenCL in batch-processing mode with AOC techniques offers substantial results, where the amount of logic, area, registers and memory increases proportionally with the number of batches. This is because the kernel block is replicated according to the batch number; therefore memory-bank usage increases correspondingly with the number of kernel blocks. A comparative study found that the balanced-tree and unrolled-loop architecture provides better results in terms of local memory, latency and throughput.
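    As a plain CPU reference for the transform stage, the sketch below computes one level of the separable 3-D Haar wavelet transform of a volume. It is not the FPGA or GPU architecture described in the thesis, only an illustration of the HWT/DWT operation being accelerated; even volume dimensions are assumed.

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the orthonormal Haar transform along one axis:
    pairwise averages (low band) and differences (high band)."""
    x = np.moveaxis(x, axis, 0)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def haar_3d_level(volume):
    """One level of the separable 3-D Haar wavelet transform, producing the
    eight subbands (LLL, LLH, ..., HHH) of a volume with even dimensions."""
    bands = {"": volume}
    for axis in range(3):
        new = {}
        for name, data in bands.items():
            lo, hi = haar_1d(data, axis)
            new[name + "L"] = lo
            new[name + "H"] = hi
        bands = new
    return bands

# Toy usage on a small synthetic volume (e.g. a 16x16x16 block of a scan).
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
subbands = haar_3d_level(vol)
print(sorted(subbands))            # ['HHH', ..., 'LLL']
print(subbands["LLL"].shape)       # (8, 8, 8) approximation band
```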