8 research outputs found

    Adaptive CSLBP compressed image hashing

    Hashing is a popular image-authentication technique: it identifies malicious attacks while allowing controlled changes in an image's appearance. Image hashing is a quality summarization of an image, meaning the extraction and compact representation of robust low-level features. The proposed adaptive CSLBP compressed hashing method uses a modified CSLBP (Center-Symmetric Local Binary Pattern) as the basic texture extractor, together with a color weight factor derived from the L*a*b* color space. The image hash is generated from image texture, and the color weight factors are applied adaptively, in average and difference forms, to enhance the discrimination capability of the hash: for smooth regions the colors are averaged, while for non-smooth regions color differencing is used. The adaptive CSLBP histogram is a compressed form of CSLBP whose quality is improved by the adaptive color weight factor. Experimental results are reported against two benchmarks, normalized Hamming distance and ROC characteristics; the proposed method successfully differentiates between content-changing and content-preserving modifications of color images
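A minimal sketch of the plain CSLBP texture code the abstract builds on: each pixel's four center-symmetric neighbour pairs are compared, giving a 4-bit code (0..15) and hence a 16-bin histogram. The threshold value and the histogram normalization are assumptions, and the adaptive color weighting from the paper is omitted:

```python
import numpy as np

def cslbp(image, t=0.01):
    """Center-Symmetric LBP: compare the 4 center-symmetric neighbour
    pairs of each 3x3 window, yielding a 4-bit code per pixel and
    a 16-bin histogram over the interior of the image."""
    img = image.astype(np.float64)
    interior = img[1:-1, 1:-1]
    # the 4 center-symmetric pairs around each interior pixel
    pairs = [
        (img[:-2, 1:-1], img[2:, 1:-1]),   # north / south
        (img[:-2, 2:],   img[2:, :-2]),    # north-east / south-west
        (img[1:-1, 2:],  img[1:-1, :-2]),  # east / west
        (img[2:, 2:],    img[:-2, :-2]),   # south-east / north-west
    ]
    codes = np.zeros_like(interior, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        codes |= ((a - b) > t).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=16, range=(0, 16))
    return hist / hist.sum()  # normalized 16-bin descriptor
```

The threshold `t` makes the comparison tolerant of small intensity noise, which is what gives CSLBP its robustness relative to raw pairwise comparison.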

    Modified CSLBP

    Image hashing is an efficient way to handle the digital-data authentication problem: it is a quality summarization of image features in a compact form. In this paper, a modified center-symmetric local binary pattern (CSLBP) image hashing algorithm is proposed. Unlike the 16-bin CSLBP histogram, modified CSLBP generates an 8-bin histogram, producing a more compact hash without compromising quality. Uniform quantization of a histogram with more bins causes greater precision loss; to reduce this loss, modified CSLBP generates two 4-bin histograms, since uniform quantization of a 4-bin histogram loses less precision than that of a 16-bin histogram. The first histogram covers the nearest neighbours and the second the diagonal neighbours. To enhance discrimination power, two local weight factors are applied during histogram generation for the nearest and the diagonal neighbours: the Standard Deviation (SD), which captures local variation about the mean, and the Laplacian of Gaussian (LoG), a second-order derivative edge-detection operator that detects edges well in the presence of noise. The proposed algorithm is resilient to various kinds of attacks. It is tested on a database of malicious and non-malicious images using benchmarks such as NHD and ROC, confirming the theoretical analysis; the experimental results show good performance under various attacks despite the short hash length
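The two-histogram idea above can be sketched by splitting the four CSLBP comparisons into nearest (horizontal/vertical) and diagonal pairs, each giving a 2-bit code and a 4-bin histogram, concatenated to 8 bins. This is a simplified reading of the paper: the SD and LoG weight factors are replaced here by equal per-pixel votes, and the threshold is an assumption:

```python
import numpy as np

def modified_cslbp_hash(image, t=0.01):
    """Two 4-bin histograms instead of one 16-bin histogram:
    one over the nearest-neighbour pairs, one over the diagonal
    pairs. Every pixel votes equally (the paper weights votes by
    local SD and LoG responses)."""
    img = image.astype(np.float64)
    nearest = [
        (img[:-2, 1:-1], img[2:, 1:-1]),   # north / south
        (img[1:-1, 2:],  img[1:-1, :-2]),  # east / west
    ]
    diagonal = [
        (img[:-2, 2:], img[2:, :-2]),      # north-east / south-west
        (img[2:, 2:],  img[:-2, :-2]),     # south-east / north-west
    ]
    hists = []
    for pairs in (nearest, diagonal):
        codes = np.zeros(img[1:-1, 1:-1].shape, dtype=np.uint8)
        for bit, (a, b) in enumerate(pairs):
            codes |= ((a - b) > t).astype(np.uint8) << bit
        h, _ = np.histogram(codes, bins=4, range=(0, 4))
        hists.append(h / h.sum())
    return np.concatenate(hists)  # compact 8-bin descriptor
```

Quantizing each 4-bin half separately is what limits the precision loss relative to quantizing a single 16-bin histogram.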

    Analysis of color image features extraction using texture methods

    Digital color images are among the most widely exchanged types of data and are used in many vital applications; hence, a compact representation of an image is an important need. This paper analyzes different methods of extracting texture features from a color image. These features can be used as a primary key to identify and recognize the image. The proposed discrete wave equation (DWE) method of generating a color-image key is presented, implemented, and tested; it achieves an 85% reduction in key size compared with the other methods

    Image authentication using LBP-based perceptual image hashing

    Feature extraction is a main step in every perceptual image hashing scheme, and robust features lead to better perceptual robustness. Simplicity, discriminative power, computational efficiency, and robustness to illumination changes are distinguishing properties of Local Binary Pattern features. In this paper, we investigate the use of local binary patterns for perceptual image hashing. For feature extraction, we propose using both the sign and the magnitude information of local differences, so the algorithm combines gradient-based and LBP-based descriptors. To meet security needs, two secret keys are incorporated in the feature-extraction and hash-generation steps. The performance of the proposed hashing method is evaluated on an important application of perceptual image hashing: image authentication. Experiments show that the method has acceptable robustness against perceptual content-preserving manipulations. Moreover, it can localize the tampered area, a capability not offered by all hashing schemes
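A minimal sketch of using both sign and magnitude of local differences, in the spirit of completed-LBP descriptors; the paper's secret-key steps and hash quantization are omitted, and the global magnitude threshold is an assumption:

```python
import numpy as np

def sign_magnitude_features(image):
    """For each interior pixel, take the 8 local differences
    neighbour - center; encode their signs into one 8-bit code and
    their magnitudes (vs. the mean magnitude) into another, then
    concatenate the two 256-bin histograms."""
    img = image.astype(np.float64)
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],   img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2]]
    diffs = np.stack([n - c for n in neighbours])   # 8 x H-2 x W-2
    mean_mag = np.abs(diffs).mean()                 # magnitude threshold
    sign_code = np.zeros_like(c, dtype=np.uint8)
    mag_code = np.zeros_like(c, dtype=np.uint8)
    for bit in range(8):
        sign_code |= (diffs[bit] >= 0).astype(np.uint8) << bit
        mag_code |= (np.abs(diffs[bit]) >= mean_mag).astype(np.uint8) << bit
    hs, _ = np.histogram(sign_code, bins=256, range=(0, 256))
    hm, _ = np.histogram(mag_code, bins=256, range=(0, 256))
    return np.concatenate([hs, hm])  # sign + magnitude feature vector
```

The sign half is the classical LBP code; the magnitude half recovers the gradient-strength information that plain LBP discards, which is what the combination of descriptors above refers to.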

    Comparison of hashing algorithms for the identification of TV advertisements

    In this document, we study different algorithms that fingerprint images based on their features. These algorithms can be used to compare video files: taking the hash of the first non-black frame of each file, that hash serves both to distinguish each video from the others and to identify a video despite modifications such as noise addition or bit-rate reduction, among others. Initially, a set of test video files corresponding to TV advertisements is taken, and a series of modifications is applied to them: noise addition, bit-rate reduction, rescaling, etc. Then the hash of the non-black frames of both the original and the modified videos is computed, and the hashes are compared to check how similar they are: the Hamming distance between hashes is used to set a custom threshold for each algorithm and to evaluate its performance, and the computational cost of each algorithm is also measured. With all these data, the algorithms are compared and those offering the most attractive characteristics are identified.
    Universidad de Sevilla. Máster en Ingeniería de Telecomunicación
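The threshold-based comparison described above can be sketched as follows; the threshold value is a placeholder, since the whole point of the study is that each hashing algorithm needs its own calibrated threshold:

```python
import numpy as np

def normalized_hamming(h1, h2):
    """Fraction of differing bits between two equal-length binary hashes."""
    h1, h2 = np.asarray(h1), np.asarray(h2)
    return float(np.mean(h1 != h2))

def same_content(h1, h2, threshold=0.25):
    """Decision rule with a hypothetical per-algorithm threshold:
    at or below it the frames are treated as the same advertisement,
    above it as different content."""
    return normalized_hamming(h1, h2) <= threshold
```

A distance of 0 means identical hashes; a distance near 0.5 is what two unrelated frames would typically produce, so the usable threshold sits somewhere in between.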

    Enhanced Block-Based Copy-Move Image Forgery Detection Using K-Means Clustering Technique

    In this thesis, the effect of feature type and matching method on copy-move image forgery detection is analyzed by comparing different matching-method/feature-type combinations. The results show an interaction between some of the features and some of the matching methods. Given the importance of the matching process, the thesis focuses on improving it by proposing an enhanced block-based copy-move forgery detection pipeline: image blocks are first clustered, and matching is then performed independently within each cluster, which reduces the time required for matching and increases the true positive ratio (TPR) as well. Two matching-method/feature-type combinations are considered to deploy the proposed pipeline. In the first, Zernike Moments (ZMs) are combined with Locality Sensitive Hashing (LSH) and tested on three datasets; the experimental results show that the pipeline reduces processing time by 73.05% to 84.70% and improves detection accuracy by 5.56% to 25.43%. In the second, the Polar Cosine Transform (PCT) is combined with Lexicographical Sort (LS); although the pipeline does not reduce processing time in this case, it improves detection accuracy by 32.46%. A statistical analysis of the results, including a comparison with two other methods, confirms that the proposed pipeline significantly improves detection accuracy
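The cluster-then-match idea can be sketched as below. This is a toy stand-in, not the thesis pipeline: raw pixel vectors replace the Zernike/PCT features, a minimal k-means replaces a library implementation, the image is tiled rather than slid block-by-block, and block size, k, and the match tolerance are all assumptions:

```python
import numpy as np

def block_features(image, bs=8):
    """Tile the image into bs x bs blocks; the raw pixel vector
    stands in for Zernike-moment or PCT features."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            feats.append(image[y:y + bs, x:x + bs].ravel())
            coords.append((y, x))
    return np.array(feats, dtype=np.float64), coords

def kmeans(feats, k=4, iters=10, seed=0):
    """Minimal k-means over block features (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

def matches_within_clusters(feats, coords, labels, tol=1e-6):
    """Compare blocks only within the same cluster, shrinking the
    quadratic matching step that dominates block-based detection."""
    pairs = []
    for j in set(labels):
        idx = np.where(labels == j)[0]
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i1, i2 = idx[a], idx[b]
                if np.linalg.norm(feats[i1] - feats[i2]) <= tol:
                    pairs.append((coords[i1], coords[i2]))
    return pairs
```

Because duplicated regions produce near-identical block features, they land in the same cluster, so restricting comparisons to within-cluster pairs loses few true matches while cutting most of the candidate pairs.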