989 research outputs found

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was first conceptualized in 1989, and numerous models have since been developed. Fractals were initially observed and described through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires far less storage than the actual image, which motivated representing images in IFS form and shaped the development of fractal image compression systems. Encoding time must be addressed to achieve optimal compression, and the solutions reviewed in this study show that, despite the developments that have taken place, considerable scope for improvement remains. From the exhaustive range of models reviewed, it is evident that numerous advancements in FIC have occurred over time and that the approach has been adapted to image compression at varied levels. This study focuses on the existing literature on FIC and presents insights into the various models.
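
    To make the encoding-time bottleneck concrete, below is a minimal NumPy sketch of the exhaustive range-domain search that the fast-search methods surveyed here try to accelerate. It assumes a square grayscale image whose sides are multiples of 16; all function and parameter names are illustrative, not taken from any surveyed paper.

        import numpy as np

        def encode_block(range_block, domain_pool):
            # Find the contrast s and brightness o minimizing ||s*d + o - r||^2
            # over every candidate domain block d (least-squares fit).
            r = range_block.ravel().astype(float)
            best = None
            for idx, domain in enumerate(domain_pool):
                d = domain.ravel().astype(float)
                var = d.var()
                s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
                s = np.clip(s, -1.0, 1.0)  # keep the affine map contractive
                o = r.mean() - s * d.mean()
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, idx, s, o)
            return best  # (error, domain index, contrast, brightness)

        def fractal_encode(img, r_size=8):
            # Brute-force encoder: every r_size x r_size range block is matched
            # against every 2x-downsampled domain block. This O(ranges * domains)
            # search is exactly the cost that fast-search FIC schemes reduce.
            h, w = img.shape
            d_size = 2 * r_size
            domains = [img[y:y + d_size, x:x + d_size]
                       .reshape(r_size, 2, r_size, 2).mean(axis=(1, 3))
                       for y in range(0, h - d_size + 1, d_size)
                       for x in range(0, w - d_size + 1, d_size)]
            return [encode_block(img[y:y + r_size, x:x + r_size], domains)
                    for y in range(0, h, r_size)
                    for x in range(0, w, r_size)]

    Even this small non-overlapping domain pool makes the quadratic cost of the search visible; classification and nearest-neighbour schemes from the literature prune the inner loop.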

    Significant medical image compression techniques: a review

    Telemedicine applications allow the patient and doctor to communicate with each other through network services. Several medical image compression techniques have been suggested by researchers in past years. This review paper compares these algorithms and their performance by analysing three factors that influence the choice of compression algorithm: image quality, compression ratio, and compression speed. Previous research has shown a need for effective algorithms for medical imaging without data loss, which is why lossless compression is used for medical records. Lossless compression, however, achieves only a minimal compression ratio. The way to obtain an optimum compression ratio is to segment the image into region of interest (ROI) and non-ROI zones, so that the power and time needed can be minimised by working at the smaller scale. Recently, several researchers have attempted to create hybrid compression algorithms that integrate different compression techniques to increase compression efficiency.
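
    As a loose illustration of the ROI/non-ROI hybrid idea described above, here is a short Python sketch using Pillow and NumPy. The segmentation itself is assumed to be given as a boolean mask; file names, the JPEG quality setting, and the function name are illustrative.

        import numpy as np
        from PIL import Image

        def hybrid_compress(img, roi_mask, out_prefix):
            # Store the diagnostically relevant region losslessly (PNG) and the
            # background lossily (JPEG). `img` is a 2-D uint8 array and
            # `roi_mask` a boolean array of the same shape.
            roi = np.where(roi_mask, img, 0).astype(np.uint8)
            background = np.where(roi_mask, 0, img).astype(np.uint8)
            Image.fromarray(roi).save(f"{out_prefix}_roi.png", optimize=True)
            Image.fromarray(background).save(f"{out_prefix}_bg.jpg", quality=30)
            # The mask is needed to recombine the two parts on decode.
            np.savez_compressed(f"{out_prefix}_mask.npz", mask=roi_mask)

    A decoder would reload both files and paste the lossless ROI pixels over the lossy background wherever the mask is set, so no diagnostic detail is lost while the bulk of the image compresses aggressively.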

    An efficient color image compression technique

    We present a new image compression method that improves the visual perception of decompressed images while achieving a higher compression ratio. The method balances compression rate against image quality by preserving the essential parts of the image: its edges. Edge regions, which typically carry the key subject, are of more significance than the non-edge background. Taking into account the value of image components and the effect of smoothness on compression, the method classifies image components as edge or non-edge; low-quality lossy compression is applied to non-edge components, whereas high-quality lossy compression is applied to edge components. Results show that the suggested method is efficient in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio.
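
    The abstract does not specify the edge classifier, so the Python sketch below uses a simple block-wise gradient-magnitude test as a stand-in; `block` and `threshold` are illustrative choices, not the paper's settings.

        import numpy as np

        def classify_blocks(img, block=8, threshold=10.0):
            # Label each block of a grayscale image as edge (True) or
            # non-edge (False) from its mean gradient magnitude.
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            h, w = img.shape
            labels = np.zeros((h // block, w // block), dtype=bool)
            for by in range(h // block):
                for bx in range(w // block):
                    tile = mag[by*block:(by+1)*block, bx*block:(bx+1)*block]
                    labels[by, bx] = tile.mean() > threshold
            return labels

    Edge blocks would then be routed through the high-quality lossy path and non-edge blocks through a much coarser one, matching the two-tier scheme the abstract describes.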

    Enhanced SVD Based Image Compression Technique

    With the growth of technology and the move into the digital world, we find ourselves surrounded by massive quantities of data, and handling such volumes often creates difficulties in transmission or storage. One feasible solution is data compression. Image compression reduces the storage or processing space of an image without degrading its quality, and it also reduces the time needed for images to be uploaded to or downloaded from the Internet. JPEG is a widely used image compression standard. In this research work, an SVD-based algorithm is used for compression, giving better results without reduction in quality. An optimized Singular Value Decomposition (SVD) for JPEG image compression is modeled and implemented in MATLAB, with the SVD forming the core of the compression stage; a quantizer follows the SVD, and this structure helps reduce the complexity of the whole JPEG compression/encoding chain. To address the drawbacks of lossy compression, the implemented algorithm is designed to enhance performance with respect to evaluation parameters such as compression ratio, bits per pixel, peak signal-to-noise ratio, mean squared error, and signal-to-noise ratio.
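
    The core rank-truncation step of SVD compression is standard and easy to show. A minimal NumPy sketch (the quantizer and entropy-coding stages of the paper's pipeline are omitted; PSNR is included since it is one of the metrics listed above):

        import numpy as np

        def svd_compress(img, k):
            # Rank-k approximation: keep only the k largest singular values.
            # Storage drops from h*w values to k*(h + w + 1).
            U, S, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
            approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
            return np.clip(approx, 0, 255).astype(np.uint8)

        def psnr(original, approx):
            # Peak signal-to-noise ratio for 8-bit images.
            mse = np.mean((original.astype(float) - approx.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    For a 512x512 grayscale image, k = 50 stores about 51,250 values instead of 262,144, and the PSNR against the original quantifies the quality cost of the truncation.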

    Texture Structure Analysis

    Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed.

    One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the most popular visual attention models are evaluated on their ability to predict visual saliency on textures. Since there is no publicly available database with ground-truth saliency maps on images with exclusively texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and it combines two texture regularity scores: a textural similarity score and a spatial distribution score. In order to evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX is built as part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture regularity metrics in predicting perceived regularity.

    The impact of the proposed metric on the performance of many image-processing applications is also presented. The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated by building a synthesized-textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures, where perceived granularity is quantified through a new granularity metric proposed in this work.
    It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established. (Dissertation/Thesis, Ph.D. Electrical Engineering)
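
    The saliency-based metric itself cannot be reconstructed from the abstract, but the underlying idea of scoring perceived regularity can be illustrated. The Python sketch below is a deliberately simple autocorrelation proxy, plainly a swapped-in technique rather than the proposed VSM-based metric: a periodic texture shows strong secondary peaks in its normalized autocorrelation, while an irregular one does not.

        import numpy as np

        def regularity_score(texture):
            # Crude regularity proxy (NOT the dissertation's saliency metric):
            # strength of the largest non-trivial autocorrelation peak.
            t = texture.astype(float) - texture.mean()
            power = np.abs(np.fft.fft2(t)) ** 2
            acorr = np.real(np.fft.ifft2(power))   # Wiener-Khinchin theorem
            if acorr[0, 0] == 0:
                return 0.0                         # constant texture: no structure
            acorr /= acorr[0, 0]                   # lag-0 peak normalized to 1
            acorr[0, 0] = 0.0                      # discard the trivial peak
            return float(acorr.max())              # near 1 => highly regular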

    An Intelligent Multi-Resolutional and Rotational Invariant Texture Descriptor for Image Retrieval Systems

    Finding identical or comparable images in large rotated databases with high retrieval accuracy and low search time is a challenging task in content-based image retrieval (CBIR) systems. To address this problem, an intelligent and efficient technique is proposed for texture images. First, a new joint feature vector is created that inherits the properties of the local binary pattern (LBP), which is robust to changes in illumination and rotation, and of the discrete wavelet transform (DWT), which is multi-resolutional and multi-oriented with high directionality. Second, to increase the accuracy of the system, classifiers are applied to the combined LBP and DWT features. Two machine learning classifiers are evaluated: the support vector machine (SVM) and the extreme learning machine (ELM). Both proposed methods, P1 (LBP+DWT+SVM) and P2 (LBP+DWT+ELM), are tested on the rotated Brodatz dataset of 1,456 texture images and on the MIT VisTex dataset of 640 images. In both experiments the proposed methods outperform the plain DWT+LBP combination and many other state-of-the-art methods in precision and accuracy across different numbers of retrieved images, with the ELM variant improving further on the SVM: when the top 25 images are retrieved, precision reaches 94% on the Brodatz database and 96% on the MIT VisTex database with the ELM classifier, which is far superior to other existing texture retrieval methods.
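
    The joint descriptor is straightforward to picture; below is a hedged Python sketch of the P1-style feature-extraction stage using scikit-image, PyWavelets, and scikit-learn. The histogram binning, wavelet choice, and SVM kernel are illustrative guesses, not the paper's exact settings.

        import numpy as np
        import pywt
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_dwt_features(img, P=8, R=1):
            # Joint feature vector: a rotation-invariant uniform-LBP histogram
            # concatenated with the mean absolute energy of each DWT sub-band.
            lbp = local_binary_pattern(img, P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db1")
            energies = [np.mean(np.abs(band)) for band in (cA, cH, cV, cD)]
            return np.concatenate([hist, energies])

        # P1-style pipeline: hybrid features fed to an SVM classifier.
        # `train_imgs`, `train_labels`, and `query` are assumed to exist.
        # clf = SVC(kernel="rbf").fit(
        #     [lbp_dwt_features(i) for i in train_imgs], train_labels)
        # predicted_class = clf.predict([lbp_dwt_features(query)])

    Swapping the SVC for an extreme learning machine (a single hidden layer with random input weights and a least-squares readout) would give the P2 variant on the same feature vector.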