291 research outputs found

    K-means based clustering and context quantization

    Get PDF

    EBCOT ALGORITHM BASED INVERSE HALFTONING ON CBIR

    Get PDF
    A procedure for Content-Based Image Retrieval (CBIR) is presented for constructing an image content descriptor that exploits the low-complexity advantage of Ordered Dither Block Truncation Coding (ODBTC). In the encoding step, ODBTC compresses an image into quantizers and a bitmap image; no decoding is performed in this method. Two image features, the Color Co-occurrence Feature (CCF) and the Bit Pattern Feature (BPF), are used to index an image, and both are obtained directly from the ODBTC encoded data stream. Experimental results show that the proposed method is superior to the BTC image retrieval system and other earlier methods. ODBTC is suitable for image compression, and it yields a simple and effective descriptor for indexing images in CBIR. Content-based image retrieval extracts images on the basis of their content, such as texture, color, shape, and spatial layout, and many concepts have been introduced to narrow the gap between low-level features and image semantics. Images can be stored and retrieved based on various features, and one of the most distinctive of these is texture
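    As a rough illustration of how block-truncation features can be derived without decoding, the Python/NumPy sketch below uses plain BTC thresholding against the block mean where ODBTC would threshold against an ordered dither array, and a hashed bit-pattern histogram standing in for a trained bit-pattern codebook; all function names and the `n_bins` parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def btc_encode(img, block=4):
    """Simplified BTC encoding. ODBTC would threshold each pixel against
    an ordered dither array; here the block mean is used instead."""
    h, w = img.shape
    bitmaps, lows, highs = [], [], []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            b = img[y:y + block, x:x + block].astype(float)
            bm = b >= b.mean()                      # block bit pattern
            lo = b[~bm].mean() if (~bm).any() else b.mean()
            hi = b[bm].mean() if bm.any() else b.mean()
            bitmaps.append(bm)
            lows.append(lo)
            highs.append(hi)
    return bitmaps, np.array(lows), np.array(highs)

def bit_pattern_feature(bitmaps, n_bins=16):
    """BPF-style descriptor: histogram of hashed block bitmaps
    (a real system would match bitmaps to a trained pattern codebook)."""
    weights = 1 << np.arange(bitmaps[0].size, dtype=np.uint64)
    hashes = [int(bm.flatten().astype(np.uint64).dot(weights)) % n_bins
              for bm in bitmaps]
    hist = np.bincount(hashes, minlength=n_bins).astype(float)
    return hist / hist.sum()
```

    A CCF-style color feature would analogously histogram co-occurrences of the quantized low/high values over neighboring blocks.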

    Multi-image classification and compression using vector quantization

    Get PDF
    Vector Quantization (VQ) is an image processing technique based on statistical clustering, and designed originally for image compression. In this dissertation, several methods for multi-image classification and compression based on a VQ design are presented. It is demonstrated that VQ can perform joint multi-image classification and compression by associating a class identifier with each multi-spectral signature codevector. We extend the Weighted Bayes Risk VQ (WBRVQ) method, previously used for single-component images, which explicitly incorporates a Bayes risk component into the distortion measure used in the VQ quantizer design and thereby permits a flexible trade-off between classification and compression priorities. In the specific case of multi-spectral images, we investigate the application of the Multi-scale Retinex algorithm as a preprocessing stage, before classification and compression, that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The goals of this research are four-fold: (1) to study the interrelationship between statistical clustering, classification and compression in a multi-image VQ context; (2) to study mixed-pixel classification and combined classification and compression for simulated and actual multispectral and hyperspectral multi-images; (3) to study the effects of multi-image enhancement on class spectral signatures; and (4) to study the preservation of scientific data integrity as a function of compression. In this research, a key issue is not just the subjective quality of the resulting images after classification and compression but also the effect of multi-image dimensionality on the complexity of the optimal coder design
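    A minimal sketch of the core idea, joint compression and classification by attaching a class identifier to each codevector, might look as follows; plain k-means with majority-vote labels stands in here for WBRVQ's Bayes-risk-weighted distortion, and all names and sizes are assumptions.

```python
import numpy as np

def design_labeled_codebook(X, y, n_codes=16, iters=20, seed=0):
    """K-means codebook in which each codevector carries the majority
    class of the spectral vectors it quantizes -- a simplified stand-in
    for the Bayes-risk-weighted distortion used in WBRVQ design.
    X: (n, bands) spectral vectors; y: non-negative integer class labels."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), n_codes, replace=False)].astype(float)
    labels = np.zeros(n_codes, dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d.argmin(1)                      # nearest-codevector assignment
        for k in range(n_codes):
            m = a == k
            if m.any():
                C[k] = X[m].mean(0)          # centroid update
                labels[k] = np.bincount(y[m]).argmax()
    return C, labels

def encode(X, C, labels):
    """Joint coding: the codevector index compresses, its label classifies."""
    idx = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
    return idx, labels[idx]
```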

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Get PDF
    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme suitable for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent output-rate accuracy and rate-distortion performance, and is extremely competitive with state-of-the-art transform coding
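    The paper's exact quantizer-selection scheme is not reproduced here, but a generic textbook stand-in conveys the flavor: estimate rate and distortion per region for each candidate quantizer, then bisect a Lagrange multiplier until the allocation meets the target rate. The table-driven form and all names are assumptions of this sketch.

```python
import numpy as np

def allocate_quantizers(rates, dists, target_rate, iters=40):
    """Lagrangian rate allocation: for each block choose the quantizer
    minimizing D + lambda*R, bisecting lambda until the total rate meets
    the target. rates[b][q] / dists[b][q] are per-block estimates for
    each candidate quantizer (a generic scheme, not the paper's exact
    onboard algorithm)."""
    lo, hi = 0.0, 1e6
    choice = None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        choice = (dists + lam * rates).argmin(axis=1)
        total = rates[np.arange(len(rates)), choice].sum()
        if total > target_rate:
            lo = lam          # too many bits: penalize rate more
        else:
            hi = lam
    return choice

# toy example: 3 blocks, 4 candidate quantizers each
rates = np.array([[8., 6, 4, 2]] * 3)
dists = np.array([[0., 1, 4, 16], [0., 2, 8, 32], [0., 1, 3, 9]])
print(allocate_quantizers(rates, dists, target_rate=14.0))
```

    In an onboard encoder the rate/distortion tables would come from the predictor's residual statistics rather than being known in advance.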

    Perceptual Copyright Protection Using Multiresolution Wavelet-Based Watermarking And Fuzzy Logic

    Full text link
    In this paper, an efficient DWT-based watermarking technique is proposed to embed signatures in images to attest owner identification and discourage unauthorized copying. The paper uses a fuzzy inference filter to select the higher-entropy coefficients in which to embed watermarks. Unlike most previous watermarking frameworks, which embedded watermarks in the larger coefficients of inner coarser subbands, the proposed technique utilizes a context model and a fuzzy inference filter, embedding watermarks in the higher-entropy coefficients of the coarser DWT subbands. The proposed approach allows an adaptive casting degree of the watermark, providing transparency and robustness to common image-processing attacks such as smoothing, sharpening, and JPEG compression. The approach does not require the original host image to extract the watermarks. Our schemes have been shown to provide very good results in both image transparency and robustness.
    Comment: 13 pages, 7 figures
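    A hedged illustration of the embedding step using the PyWavelets library is sketched below: magnitude ranking stands in for the paper's context-model/fuzzy entropy filter, and `alpha` plays the role of the adaptive casting degree. A truly blind extractor would re-derive the host-coefficient locations from the context model rather than transmitting `idx`.

```python
import numpy as np
import pywt

def embed_watermark(img, bits, alpha=8.0, wavelet="haar", level=2):
    """Embed +/- alpha into the largest-magnitude coefficients of a
    coarse detail subband (a stand-in for selecting the higher-entropy
    coefficients via a fuzzy inference filter)."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    cH, cV, cD = coeffs[1]                        # coarsest detail subbands
    flat = cH.flatten()
    idx = np.argsort(-np.abs(flat))[:len(bits)]   # host coefficients
    flat[idx] += alpha * (2 * np.asarray(bits) - 1)
    coeffs[1] = (flat.reshape(cH.shape), cV, cD)
    return pywt.waverec2(coeffs, wavelet), idx
```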

    Speech Recognition Using Vector Quantization through Modified K-meansLBG Algorithm

    Get PDF
    In Vector Quantization, the main task is to generate a good codebook, such that the distortion between the original pattern and the reconstructed pattern is minimal. In this paper, a proposed algorithm called the Modified K-meansLBG algorithm is used to obtain a good codebook. The system has shown good performance on limited-vocabulary tasks.
    Keywords: K-means algorithm, LBG algorithm, Vector Quantization, Speech Recognition
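    For reference, the classic LBG splitting procedure that the algorithm modifies can be sketched as follows; the paper's specific K-meansLBG modifications are not reproduced here, and `n_codes` is assumed to be a power of two.

```python
import numpy as np

def lbg(X, n_codes=8, eps=0.01, tol=1e-4):
    """Classic LBG: start from the global centroid and repeatedly split
    each codevector by +/- eps, refining with k-means passes until the
    distortion stops improving."""
    C = X.mean(0, keepdims=True)
    while len(C) < n_codes:
        C = np.vstack([C * (1 + eps), C * (1 - eps)])   # split step
        prev = np.inf
        while True:
            d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            a = d.argmin(1)
            dist = d[np.arange(len(X)), a].mean()
            if prev - dist < tol * max(prev, 1e-12):
                break
            prev = dist
            for k in range(len(C)):                      # centroid update
                if (a == k).any():
                    C[k] = X[a == k].mean(0)
    return C
```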

    Weighted Mahalanobis Distance for Hyper-Ellipsoidal Clustering

    Get PDF
    Cluster analysis is widely used in many applications, ranging from image and speech coding to pattern recognition. A new method that uses the weighted Mahalanobis distance (WMD), computed via the covariance matrix of the individual clusters, as the basis for grouping is presented in this thesis. In this algorithm, the Mahalanobis distance is used as a measure of similarity between the samples in each cluster. This thesis discusses some difficulties associated with using the Mahalanobis distance in clustering, and the proposed method provides solutions to these problems. The new algorithm is an approximation to the well-known expectation maximization (EM) procedure used to find the maximum likelihood estimates in a Gaussian mixture model. Unlike the EM procedure, WMD eliminates the requirement of having initial parameters such as the cluster means and variances, as it starts from the raw data set. Properties of the new clustering method are presented by examining the clustering quality for codebooks designed with the proposed method and with competing methods on a variety of data sets. The competing methods are the Linde-Buzo-Gray (LBG) algorithm and the Fuzzy c-means (FCM) algorithm, both of which use the Euclidean distance. The neural network for hyper-ellipsoidal clustering (HEC), which uses the Mahalanobis distance, is also studied and compared to the WMD method and the other techniques. The new method provides better results than the competing methods, making it another useful tool for clustering
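    A minimal sketch of Mahalanobis-distance clustering with per-cluster covariances is shown below; ridge regularization is one practical answer to the covariance-singularity difficulty such methods face, and the thesis's particular weighting of the distance is not reproduced.

```python
import numpy as np

def mahalanobis_cluster(X, k=3, iters=30, ridge=1e-3, seed=0):
    """Hyper-ellipsoidal clustering: assign each sample to the cluster
    with smallest Mahalanobis distance under that cluster's own
    covariance, then re-estimate means and covariances.
    Covariances are ridge-regularized to stay invertible."""
    rng = np.random.default_rng(seed)
    d_ = X.shape[1]
    mu = X[rng.choice(len(X), k, replace=False)].astype(float)
    cov = np.stack([np.cov(X.T) + ridge * np.eye(d_)] * k)
    for _ in range(iters):
        d = np.stack([
            np.einsum("ni,ij,nj->n", X - mu[j],
                      np.linalg.inv(cov[j]), X - mu[j])
            for j in range(k)], axis=1)
        a = d.argmin(1)
        for j in range(k):
            m = a == j
            if m.sum() > d_:                 # enough points for a covariance
                mu[j] = X[m].mean(0)
                cov[j] = np.cov(X[m].T) + ridge * np.eye(d_)
    return a, mu, cov
```

    Adding a log-determinant penalty to the distance would bring this closer to the EM procedure for a Gaussian mixture that the thesis approximates.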

    The Improvement of Automatic Skin Cancer Detection Algorithm Based on CVQ technique

    Get PDF
    Nowadays, with the increasing number of deaths related to skin cancer, this kind of cancer has become one of the most important health issues. The key is early detection of skin cancer in order to save lives. Given the close similarity between cancerous moles and normal ones, automated systems capable of distinguishing between these kinds of moles are undoubtedly very important. The accuracy of such a system must be carefully considered in order to obtain better results, especially in cases that affect human life. In this paper, given that the incidence of one kind of skin cancer, melanoma, is increasing, we have employed neural networks with the aim of improving the performance of an approach based on an image compression technique, namely the Classified Vector Quantization (CVQ) technique. The suggested method has been evaluated on a set of images, and the results show that it is a suitable approach for automatic skin cancer detection
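    A bare-bones sketch of the CVQ classification idea follows, assuming one codebook per class (e.g. normal vs. melanoma patches) and nearest-codevector reconstruction error as the decision rule; the neural-network refinement the paper adds is omitted, and all names and sizes are illustrative.

```python
import numpy as np

def train_class_codebooks(patches_by_class, n_codes=32, iters=15, seed=0):
    """Classified VQ: fit one k-means codebook per class.
    patches_by_class: dict mapping class name -> (n, d) patch vectors."""
    rng = np.random.default_rng(seed)
    books = {}
    for cls, P in patches_by_class.items():
        C = P[rng.choice(len(P), n_codes, replace=False)].astype(float)
        for _ in range(iters):
            a = ((P[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
            for k in range(n_codes):
                if (a == k).any():
                    C[k] = P[a == k].mean(0)   # centroid update
        books[cls] = C
    return books

def classify(patch, books):
    """Label a patch by whichever class codebook reconstructs it best."""
    err = {cls: ((patch - C) ** 2).sum(-1).min() for cls, C in books.items()}
    return min(err, key=err.get)
```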