
    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the clock speeds of conventional CPUs are no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming increasingly attractive for many applications. With the growing demand for GPU computing, there is a great need for operating systems and system software that can drive the GPU to its full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the Discrete Cosine Transform (DCT).
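    The DCT used in image compression is typically applied blockwise, and the blocks are mutually independent, which is exactly what maps well onto a GPU's thread blocks. Below is a minimal CPU reference sketch in Python; the 8×8 block size and the orthonormal SciPy transform are illustrative assumptions (JPEG-style), not the thesis's actual kernel. A CUDA version would assign one thread block per image tile.

```python
import numpy as np
from scipy.fft import dct

def block_dct2(image, block=8):
    """Orthonormal 2-D DCT-II applied independently to each tile.

    Tile independence is what a GPU implementation exploits:
    every tile can be transformed by its own thread block.
    """
    h, w = image.shape
    h -= h % block          # drop partial border tiles for simplicity
    w -= w % block
    out = np.empty((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block].astype(float)
            # Separable transform: 1-D DCT over columns, then rows.
            out[i:i + block, j:j + block] = dct(
                dct(tile, axis=0, norm="ortho"), axis=1, norm="ortho")
    return out
```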

    Computer Vision for Timber Harvesting


    Medical image processing using fractal functions

    In this paper, a comparison was made between modified fractal modeling methods aimed at increasing the accuracy of medical images, and the variants were compared in terms of classification accuracy. The lacunarity feature was also used to reduce the noise ratio in the received images. The results showed the importance of fractal IFS in medical image compression: a ratio of 98% was obtained in noise reduction, and a value of 0.421 was obtained for the gap (lacunarity) coefficient. Diseased tissue was separated from healthy tissue by applying several multi-fractal factors. Fractal image compression relies on self-similarity, with one part of the image resembling another part of the same image. Fractal coding is conventionally applied to grayscale images; a color RGB image is divided into three channels (red, green, and blue), and each channel is compressed independently as a separate grayscale image. Using an intelligent neural network, the patterns in the medical images were distinguished with a short learning time and a positive error of 0.22%.
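    The lacunarity feature mentioned above is conventionally estimated with the gliding-box algorithm of Allain and Cloitre. The sketch below is that standard estimator, not necessarily the paper's exact variant; the input is assumed to be a binarized (e.g., thresholded) medical image.

```python
import numpy as np

def gliding_box_lacunarity(binary_img, box):
    """Gliding-box lacunarity at one box size (Allain-Cloitre).

    Lacunarity = <M^2> / <M>^2 over the masses M of all overlapping
    box x box windows; values near 1 indicate a homogeneous texture,
    larger values indicate gappier (more lacunar) structure.
    """
    h, w = binary_img.shape
    masses = np.asarray([
        binary_img[i:i + box, j:j + box].sum()
        for i in range(h - box + 1)
        for j in range(w - box + 1)
    ], dtype=float)
    mean = masses.mean()
    if mean == 0:
        return np.nan       # empty image: lacunarity undefined
    return masses.var() / mean**2 + 1.0
```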

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was conceptualized as a model in 1989, and numerous models have since been developed from it. The existence of fractals was initially observed and described through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires far less space to record than the image itself, which led to representing images in IFS form and shaped how fractal image compression systems evolved. It is very important that the time consumed by encoding be addressed in order to achieve optimal compression conditions, and the solutions reviewed in this study indicate that, despite the developments that have taken place, there is still considerable scope for improvement. From the review of the exhaustive range of models in the literature, it is evident that numerous advancements have been made to the FIC model over time and that it has been adapted to image compression at various levels. This study focuses on the existing literature on FIC, and insights into the various models are presented in this study.
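    To see why encoding time dominates FIC, consider the baseline exhaustive search that the fast-search literature tries to prune: every range block is matched against every candidate domain block by a least-squares fit of a contrast factor s and a brightness offset o. The Python sketch below shows that inner loop under simplifying assumptions (domain blocks already downsampled to the range-block size; the eight block isometries omitted); it is an illustration of the generic scheme, not any one reviewed model.

```python
import numpy as np

def encode_range_block(range_blk, domain_blks):
    """Exhaustive-search fractal coding of one range block.

    For every candidate domain block D, fit the affine map
    R ~ s*D + o by least squares and keep the best match.
    """
    best = None
    r = range_blk.astype(float).ravel()
    for idx, dom in enumerate(domain_blks):
        d = dom.astype(float).ravel()
        n = d.size
        denom = n * (d @ d) - d.sum() ** 2
        if denom == 0:                       # flat domain block
            s, o = 0.0, r.mean()
        else:                                # least-squares contrast/brightness
            s = (n * (d @ r) - d.sum() * r.sum()) / denom
            o = (r.sum() - s * d.sum()) / n
        err = np.sum((r - (s * d + o)) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, contrast s, brightness o)
```

    Fast-search schemes replace the linear scan over the domain pool with classification, feature indexing, or spatial pruning; this is precisely the axis along which the reviewed models differ.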

    Data hiding in images based on fractal modulation and diversity combining

    The current work provides a new data-embedding infrastructure based on fractal modulation. The embedding problem is tackled from a communications point of view. The data to be embedded becomes the signal to be transmitted through a watermark channel. The channel could be the image itself or some manipulation of the image. The image self noise and noise due to attacks are the two sources of noise in this paradigm. At the receiver, the image self noise has to be suppressed, while noise due to attacks may sometimes be predicted and inverted. The concepts of fractal modulation and deterministic self-similar signals are extended to two-dimensional images. These novel techniques are used to build a deterministic bi-homogeneous watermark signal that embodies the binary data to be embedded. The binary data to be embedded is repeated and scaled with different amplitudes at each level and is used as the wavelet decomposition pyramid. The binary data is appended with special marking data, which is used during demodulation to identify and correct unreliable or distorted blocks of wavelet coefficients. This specially constructed pyramid is inverted using the inverse discrete wavelet transform to obtain the self-similar watermark signal. In the data-embedding stage, the well-established linear additive technique is used to add the watermark signal to the cover image to generate the watermarked (stego) image. Data extraction from a potential stego image is done using diversity combining. Neither the original image nor the original binary sequence (or watermark signal) is required during extraction. A prediction of the original image is obtained using a cross-shaped window and is used to suppress the image self noise in the potential stego image. The resulting signal is then decomposed using the discrete wavelet transform. The number of levels and the wavelet used are the same as those used in the watermark signal generation stage. A thresholding process similar to wavelet de-noising is used to identify whether a particular coefficient is reliable or not. A decision is made as to whether a block is reliable or not based on the marking data present in each block, and sometimes corrections are applied to the blocks. Finally, the selected blocks are combined according to the diversity combining strategy to extract the embedded binary data.
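    The pyramid construction described above can be sketched in a few lines with PyWavelets. The code below is an illustrative reconstruction, not the authors' implementation: the per-level amplitude schedule (halving at each finer scale), the Haar wavelet, and the three decomposition levels are all assumptions, and the marking data used for block validation is omitted.

```python
import numpy as np
import pywt

def self_similar_watermark(bits, shape, wavelet="haar", levels=3, amp=8.0):
    """Fractal-modulation watermark: tile +/-1 data symbols into every
    detail subband of a wavelet pyramid, then invert the DWT.

    The same bit sequence appears at every scale, so an extractor can
    diversity-combine the per-scale estimates of each bit.
    """
    symbols = 2.0 * np.asarray(bits, dtype=float) - 1.0   # {0,1} -> {-1,+1}
    # Decompose a zero image only to learn each level's subband shape.
    template = pywt.wavedec2(np.zeros(shape), wavelet, level=levels)
    coeffs = [np.zeros_like(template[0])]                 # empty approximation
    for lvl, (cH, _cV, _cD) in enumerate(template[1:]):
        tiled = np.resize(symbols, cH.shape)              # repeat the data
        scale = amp / 2.0 ** lvl                          # assumed level gain
        coeffs.append((scale * tiled, scale * tiled, scale * tiled))
    return pywt.waverec2(coeffs, wavelet)

# Linear additive embedding, as described in the abstract:
#   stego = cover.astype(float) + self_similar_watermark(bits, cover.shape)
```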