10 research outputs found

    Content Authentication and Forge Detection using Perceptual Hash for Image Database

    Get PDF
    The popularity of digital technology is very high, and large numbers of digital images are created and stored every day. This introduces problems for managing image databases and for the security of images. One cannot determine whether an image already exists in a database without exhaustively searching through all the entries. A further complication arises from the fact that two images appearing identical to the human eye may have distinct digital representations, making it difficult to compare a pair of images. The security of the database server is also questionable. The proposed framework provides content authentication and forgery detection for images. This is done by generating a perceptual image hash using the SIFT algorithm. The perceptual image hash, also known as a perceptual image signature, has been proposed as a primitive for solving problems of image content authentication. The hash is generated from perceptual features that accord with human visual characteristics. It tolerates modification of images to a permissible extent, e.g. slight improvements to brightness or contrast in an image. A perceptual image hash is expected to survive unintentional distortion and to reject malicious tampering beyond an acceptable extent. It therefore provides a more efficient approach to analyzing changes in an image's perceptual content and to verifying whether the database server is authenticated.
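    The core idea above can be illustrated with a minimal sketch. Note this uses a simple average hash on toy pixel grids rather than the paper's SIFT-based scheme: perceptually similar images should map to hashes that differ in only a few bits, while tampered content should not.

```python
# Minimal sketch of the perceptual-hash idea (average hash), NOT the
# paper's SIFT-based method: each bit records whether a pixel exceeds
# the image mean, so small brightness shifts leave the hash intact.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
brighter = [[20, 210], [230, 40]]   # slight brightness increase
tampered = [[200, 10], [30, 220]]   # content rearranged

h0, h1, h2 = map(average_hash, (original, brighter, tampered))
THRESHOLD = 1  # bits of difference tolerated as "same content"
print(hamming(h0, h1) <= THRESHOLD)  # True: incidental distortion survives
print(hamming(h0, h2) <= THRESHOLD)  # False: malicious change rejected
```

    The threshold is the tuning knob: raising it tolerates stronger incidental distortion at the cost of accepting more tampering.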

    A Short Survey on Perceptual Hash Function

    Get PDF
    The authentication of digital images has become more important as these images can be easily manipulated using image processing tools, leading to various problems such as copyright infringement and hostile tampering with the image contents. It is almost impossible to distinguish subjectively which images are original and which have been manipulated. There are several cryptographic hash functions that map input data to short binary strings, but these traditional cryptographic hash functions are not suitable for image authentication, as they are very sensitive to every single bit of input data. When using a cryptographic hash function, the change of even one bit of the original data results in a radically different value. A maliciously modified image should be detected as inauthentic by the hash function, which at the same time must be robust against incidental and legitimate modifications to the multimedia data. The main aim of this paper is to present a survey of perceptual hash functions for image authentication.
    Keywords: hash function, image authentication.
    Cite as: Arambam Neelima, Kh. Manglem Singh, "A Short Survey on Perceptual Hash Function", ADBU-J.Engg Tech, 1(2014) 0011405 (8pp).
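    The bit-level sensitivity that disqualifies cryptographic hashes is easy to demonstrate. The sketch below flips a single bit of some stand-in "image bytes" and shows that the SHA-256 digest diverges completely, even though a real image altered by one bit would look identical to a viewer.

```python
# Why cryptographic hashes fail for image authentication: flipping one
# bit of the input yields a completely different SHA-256 digest, so a
# visually identical image would be rejected as inauthentic.
import hashlib

data = bytearray(b"example image bytes")   # stand-in for image content
digest_before = hashlib.sha256(bytes(data)).hexdigest()

data[0] ^= 0x01                            # flip a single bit
digest_after = hashlib.sha256(bytes(data)).hexdigest()

print(digest_before != digest_after)       # True: digests diverge
changed = sum(a != b for a, b in zip(digest_before, digest_after))
print(changed)                             # a large fraction of hex chars differ
```

    A perceptual hash inverts this design goal: it is deliberately insensitive to small input changes.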

    First steps toward image phylogeny

    Full text link
    Abstract—In this paper, we introduce and formally define a new problem, the Image Phylogeny Tree (IPT): finding the structure of transformations, and their parameters, that generate a given set of near-duplicate images. This problem has direct applications in security, forensics, and copyright enforcement. We devise a method for calculating an asymmetric dissimilarity matrix from a set of near-duplicate images, describe a new algorithm to build an IPT, and analyze the algorithm's computational complexity. Finally, we perform experiments that show near-perfect reconstructed IPT results when an appropriate dissimilarity function is used.
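    The construction described above can be sketched as a greedy tree-building pass over the dissimilarity matrix. This is a simplified illustration, not the paper's exact algorithm, and the matrix values below are invented: d[u][v] is the (asymmetric) cost of explaining image v as a transformation of image u.

```python
# Simplified sketch of building an Image Phylogeny Tree from an
# asymmetric dissimilarity matrix: take the cheapest directed edges,
# giving each image at most one parent and never closing a cycle.
INF = float("inf")
d = [                       # invented example values; image 0 is the root
    [INF, 1.0, 2.5, 2.0],
    [4.0, INF, 1.2, 3.0],
    [5.0, 4.5, INF, 4.2],
    [4.8, 3.5, 4.4, INF],
]
n = len(d)

def root_of(parent, x):
    while parent[x] is not None:
        x = parent[x]
    return x

edges = sorted((d[u][v], u, v) for u in range(n) for v in range(n) if u != v)
parent = [None] * n
for cost, u, v in edges:
    # v must still be parentless; since a parentless v is the root of its
    # component, a cycle appears only if u's root is v itself.
    if parent[v] is None and root_of(parent, u) != v:
        parent[v] = u
    if sum(p is None for p in parent) == 1:
        break  # exactly one root left: the tree is complete

print(parent)  # → [None, 0, 1, 0]: 0 spawned 1 and 3, 1 spawned 2
```

    With a good dissimilarity function, the cheapest edges coincide with the true generation order, which is why the paper's reconstructions are near-perfect.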

    Hough transform generated strong image hashing scheme for copy detection

    Get PDF
    The rapid development of image editing software has resulted in widespread unauthorized duplication of original images. This has given rise to the need for a robust image hashing technique that can easily identify duplicate copies of an original image while differentiating it from distinct images. In this paper, we propose an image hashing technique based on the discrete wavelet transform and the Hough transform, which is robust to a large number of image processing attacks, including shifting and shearing. The input image is initially pre-processed to remove minor distortions. The discrete wavelet transform is then applied to the pre-processed image to produce wavelet coefficients, from which edges are detected using a Canny edge detector. The Hough transform is finally applied to the edge-detected image to generate an image hash, which is used for image identification. Experiments show that the proposed hashing technique has better robustness and discrimination performance than state-of-the-art techniques. The normalized average mean value difference is also calculated to show the performance of the proposed technique against various image processing attacks. The proposed copy detection scheme can perform copy detection over large databases and can be considered a prototype for developing an online real-time copy detection system.
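    The final stage of this pipeline, the Hough voting step, can be sketched in a few lines. The edge pixels below are hand-made rather than the output of a real DWT-plus-Canny stage, and the strongest line's (theta, rho) bin is only a toy stand-in for the paper's hash features.

```python
# Toy discrete Hough transform: each edge pixel votes for every line
# (theta, rho) passing through it; collinear pixels pile votes into one
# bin, whose peak would feed the hash in the paper's scheme.
import math

# Edge pixels on the diagonal y = x, plus one outlier.
edge_points = [(0, 0), (1, 1), (2, 2), (3, 3), (5, 1)]

N_THETA = 180
votes = {}
for x, y in edge_points:
    for t in range(N_THETA):                       # theta in 1-degree steps
        theta = math.pi * t / N_THETA
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        votes[(t, rho)] = votes.get((t, rho), 0) + 1

(best_t, best_rho), count = max(votes.items(), key=lambda kv: kv[1])
print(best_t, best_rho, count)  # the diagonal's bin wins with 4 votes
```

    Because votes depend on line geometry rather than exact pixel values, the resulting features survive shifts and shears far better than raw-pixel hashes, which is the robustness the paper exploits.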

    A Review of Hashing based Image Copy Detection Techniques

    Get PDF
    Images are considered natural carriers of information, and a large number of images are created, exchanged, and made available online. Beyond the creation of new images, the availability of numerous duplicate copies of images is a critical problem. Hashing-based image copy detection techniques are a promising way to address this problem. In this approach, a hash is constructed from a set of unique features extracted from the image for identification. This article provides a comprehensive review of state-of-the-art image hashing techniques. The reviewed techniques are categorized by the mechanism used and compared across a set of functional and performance parameters. The article finally highlights the current issues faced by such systems and possible future directions to motivate further research work.
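    The copy-detection workflow these surveyed systems share can be sketched as a thresholded lookup over stored hashes. The file names and bit strings below are invented for illustration; real systems would use perceptual hashes and an index rather than a linear scan.

```python
# Sketch of hashing-based copy detection: every database image is
# indexed by a short binary hash, and a query counts as a copy when its
# hash lies within a Hamming-distance threshold of a stored one.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

database = {                      # invented toy hashes
    "sunset.jpg":   "1100101011",
    "portrait.jpg": "0011010100",
    "skyline.jpg":  "1110001011",
}

def find_copies(query_hash, db, threshold=2):
    """Return ids of images whose hash is within `threshold` bits."""
    return [name for name, h in db.items()
            if hamming(query_hash, h) <= threshold]

# A mildly edited copy of sunset.jpg: its hash has two bits flipped.
print(find_copies("1100111001", database))  # → ['sunset.jpg']
```

    The surveyed techniques differ mainly in how the hash bits are derived and how the lookup is made sublinear for large databases.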

    Geometric Distortion-Resilient Image Hashing Scheme and Its Applications on Copy Detection and Authentication

    No full text
    Sponsorship: Institute of Information Science; Research Center for Information Technology Innovation. Note: published; peer-reviewed; representative work.

    Efficient image duplicate detection based on image analysis

    Get PDF
    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other, unrelated images. The proposed method is referred to as content-based, since it relies only on content analysis techniques rather than on image tagging as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the known original images. The classification is performed in four steps. In the first step, the test image is described using global statistics about its content. In the second step, the most likely original images are efficiently selected using a spatial indexing technique called the R-Tree. The third step consists in using binary detectors to estimate the probability that the test image is a duplicate of the original images selected in the second step. Each original image known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. Finally, the fourth and last step consists in choosing the most probable original by picking the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors that use the same image description but rely on simpler distance functions rather than a classification algorithm. Additional experiments were carried out to compare the proposed system with existing state-of-the-art methods. It also outperforms the perceptual distance function method, which uses similar statistics to describe the image. While the proposed method is slightly outperformed by the key-points method, it is five to ten times less complex in terms of computational requirements. Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
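    The four-step classification described above can be sketched as follows. Steps 1-3 (global statistics, the R-Tree candidate search, and the per-original SVM detectors) are replaced here by invented stand-in functions; only the final argmax over estimated probabilities reflects the thesis's step 4 faithfully.

```python
# Sketch of the four-step duplicate classification; the candidate
# search and the per-original detectors are stand-ins.

def candidate_originals(test_features):
    # Stand-in for step 2's R-Tree lookup over global statistics.
    return ["original_A", "original_B"]

def duplicate_probability(original_id, test_features):
    # Stand-in for step 3: each original's binary SVM detector.
    fake_scores = {"original_A": 0.91, "original_B": 0.12}
    return fake_scores[original_id]

def classify(test_features):
    candidates = candidate_originals(test_features)
    # Step 4: pick the original with the highest estimated probability.
    return max(candidates, key=lambda c: duplicate_probability(c, test_features))

print(classify([0.3, 0.7]))  # → original_A
```

    Training one calibrated detector per original is what lets the system outperform plain distance functions: each detector learns which distortions its own image tends to undergo.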

    A picture is worth a thousand words : content-based image retrieval techniques

    Get PDF
    In my dissertation I investigate techniques for improving the state of the art in content-based image retrieval. To place my work into context, I highlight the current trends and challenges in my field by analyzing over 200 recent articles. Next, I propose a novel paradigm called "artificial imagination", which gives the retrieval system the power to imagine and think along with the user in terms of what she is looking for. I then introduce a new user interface for visualizing and exploring image collections, empowering the user to navigate large collections based on her own needs and preferences, while simultaneously providing her with an accurate sense of what the database has to offer. In the later chapters I present work dealing with millions of images and focus in particular on high-performance techniques that minimize memory and computational use for both near-duplicate image detection and web search. Finally, I show early work on a scene-completion-based image retrieval engine, which synthesizes realistic imagery that matches what the user has in mind.