
    A New Digital Watermarking Algorithm Using Combination of Least Significant Bit (LSB) and Inverse Bit

    In this paper, we introduce a new digital watermarking algorithm using the least significant bit (LSB). LSB is used because of its small effect on the image. The new algorithm uses LSB by inverting the binary values of the watermark text and shifting the watermark according to the odd or even pixel coordinates of the image before embedding. The proposed algorithm is flexible with respect to the length of the watermark text: if the length exceeds ((M×N)/8)-2, the algorithm also embeds the excess watermark text in the second LSB. We compare our proposed algorithm with the 1-LSB algorithm and Lee's algorithm using peak signal-to-noise ratio (PSNR). The new algorithm improves the quality of the watermarked image. We also attack the watermarked image by cropping and adding noise, obtaining good results as well. Comment: 8 pages, 6 figures and 4 tables; Journal of Computing, Volume 3, Issue 4, April 2011, ISSN 2151-961
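The inverse-bit LSB step described above can be sketched as follows. This is a minimal illustration for an 8-bit greyscale image, not the authors' exact algorithm: the coordinate-parity shifting rule and the second-LSB overflow handling are omitted.

```python
import numpy as np

def embed_lsb_inverted(image, text):
    """Embed an ASCII watermark into image LSBs, inverting each bit first."""
    flat = image.flatten().astype(np.uint8)
    bits = np.unpackbits(np.frombuffer(text.encode("ascii"), dtype=np.uint8))
    bits = 1 - bits  # the inverse-bit step
    if bits.size > flat.size:
        raise ValueError("watermark too long for cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb_inverted(stego, n_chars):
    """Read back n_chars ASCII characters, undoing the bit inversion."""
    bits = stego.flatten()[: n_chars * 8] & 1
    bits = 1 - bits
    return np.packbits(bits).tobytes().decode("ascii")
```

Because only the lowest bit plane changes, no pixel moves by more than one grey level, which is why PSNR stays high.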

    Data hiding in images based on fractal modulation and diversity combining

    The current work provides a new data-embedding infrastructure based on fractal modulation. The embedding problem is tackled from a communications point of view: the data to be embedded becomes the signal to be transmitted through a watermark channel. The channel could be the image itself or some manipulation of the image. The image self-noise and noise due to attacks are the two sources of noise in this paradigm. At the receiver, the image self-noise has to be suppressed, while noise due to attacks may sometimes be predicted and inverted. The concepts of fractal modulation and deterministic self-similar signals are extended to 2-dimensional images. These novel techniques are used to build a deterministic bi-homogeneous watermark signal that embodies the binary data to be embedded. The binary data is repeated and scaled with different amplitudes at each level and is used as the wavelet decomposition pyramid. The binary data is appended with special marking data, which is used during demodulation to identify and correct unreliable or distorted blocks of wavelet coefficients. This specially constructed pyramid is inverted using the inverse discrete wavelet transform to obtain the self-similar watermark signal. In the data embedding stage, the well-established linear additive technique is used to add the watermark signal to the cover image, generating the watermarked (stego) image. Data extraction from a potential stego image is done using diversity combining. Neither the original image nor the original binary sequence (or watermark signal) is required during the extraction. A prediction of the original image is obtained using a cross-shaped window and is used to suppress the image self-noise in the potential stego image. The resulting signal is then decomposed using the discrete wavelet transform, with the same number of levels and the same wavelet as in the watermark signal generation stage. A thresholding process similar to wavelet de-noising is used to identify whether a particular coefficient is reliable or not. A decision is made as to whether a block is reliable based on the marking data present in each block, and corrections are sometimes applied to the blocks. Finally, the selected blocks are combined based on the diversity combining strategy to extract the embedded binary data.
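The diversity-combining idea above, stripped of the wavelet machinery, reduces to repeating each data bit and voting over the received copies. This sketch uses plain repetition in place of the multi-amplitude repetition across pyramid levels, so it illustrates only the combining principle:

```python
import numpy as np

def embed_repetition(bits, copies=5):
    # Repeat each data bit `copies` times, a stand-in for the repetition
    # of the binary data across wavelet pyramid levels described above.
    return np.repeat(np.asarray(bits), copies)

def extract_majority(received, copies=5):
    # Diversity combining: majority vote over the copies of each bit,
    # so isolated channel or attack errors are outvoted.
    blocks = np.asarray(received).reshape(-1, copies)
    return (blocks.sum(axis=1) * 2 > copies).astype(int)
```

With five copies per bit, up to two corrupted copies of any single bit are still decoded correctly.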

    A New Watermarking Algorithm Based on Human Visual System for Content Integrity Verification of Region of Interest

    This paper proposes a semi-fragile, robust-to-JPEG2000-compression watermarking method based on the Human Visual System (HVS). This method is designed to verify the content integrity of the Region of Interest (ROI) in tele-radiology images. Designing watermarking systems around the HVS makes it possible to embed watermarks in places that are not obvious to the human eye; in this way, it becomes possible to hide more watermark data while gaining capacity and robustness. Based on a perceptual model of the HVS, we propose a new watermarking scheme that embeds the watermarks using a replacement method. Thus, the proposed method not only detects the watermarks but also extracts them. The novelty of our ROI-based method is in the way we interpret the coefficients obtained from the HVS perceptual model: instead of interpreting these coefficients as weights, we treat them as embedding locations. In our method, the information to be embedded is extracted from inter-subband statistical relations of the ROI. Then, the semi-fragile watermarks are embedded in the obtained places in level 3 of the DWT decomposition of the Region of Background (ROB). The consistency of the embedded signatures and extracted watermarks is used to verify the content of the ROI. Our simulations confirm improved fidelity and robustness.
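A replacement-style embedding of the kind mentioned above can be sketched with quantization index modulation (QIM), where a coefficient is replaced by its nearest point on one of two lattices. This is an illustrative stand-in, not the paper's scheme: in the method above the targets would be HVS-selected level-3 DWT coefficients of the ROB, while here a bare coefficient is used.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    # Replace the coefficient with the nearest point of the lattice that
    # encodes `bit` (step delta; the two lattices are offset by delta/2).
    q = np.round(coeff / delta - bit / 2.0)
    return (q + bit / 2.0) * delta

def qim_extract(coeff, delta=8.0):
    # The nearer of the two lattices reveals the embedded bit, without
    # any reference to the original coefficient.
    return int(np.round(coeff / delta * 2.0)) % 2
```

Because extraction needs only the lattice step, the watermark can be both detected and read back, matching the detect-and-extract property claimed in the abstract.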

    Human Visual System Models in Digital Image Watermarking

    In this paper some Human Visual System (HVS) models used in digital image watermarking are presented. Four different HVS models, which exploit various properties of the human eye, are described. Two of them operate in the transform domains of the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). The HVS model in the DCT domain consists of Just Noticeable Difference thresholds for the corresponding DCT basis functions, corrected by luminance sensitivity and self- or neighborhood contrast masking. The HVS model in the DWT domain is based on the differing HVS sensitivity in the various DWT subbands. The third HVS model is composed of contrast thresholds as a function of spatial frequency and the eye's eccentricity. We also present a way of combining these three basic models to get a better tradeoff between the conflicting requirements of digital watermarks. The fourth HVS model is based on noise visibility in an image and is described by the so-called Noise Visibility Function (NVF). Possible ways of exploiting the described HVS models in digital image watermarking are also briefly discussed.
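The NVF mentioned above can be sketched from local image statistics. One common formulation (under a non-stationary Gaussian image model) maps local variance to visibility: NVF is near 1 in flat regions, where added noise is visible, and near 0 in textured regions, where it is masked. The window size and the tuning constant D below are illustrative choices, not values from the paper.

```python
import numpy as np

def local_variance(img, k=3):
    # Local variance over a k x k window (edge-padded).
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-2, -1))

def noise_visibility_function(img, k=3, D=75.0):
    # NVF = 1 / (1 + theta * local variance), with theta normalised by
    # the maximum local variance; D is an illustrative tuning constant.
    var = local_variance(img, k)
    theta = D / max(var.max(), 1e-9)
    return 1.0 / (1.0 + theta * var)
```

A watermark embedder would then scale its strength by (1 - NVF), putting more energy where the eye masks it.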

    A dual watermarking scheme for identity protection

    A novel dual watermarking scheme with potential applications in identity protection, media integrity maintenance and copyright protection in both electronic and printed media is presented. The proposed watermarking scheme uses the owner's signature and fingerprint as watermarks, through which the ownership and validity of the media can be proven and kept intact. To begin with, the proposed watermarking scheme is implemented on continuous-tone/greyscale images, and later extended to images produced via multitoning, an advanced version of halftoning-based printing. The proposed watermark embedding is robust and imperceptible. Experimental simulations and evaluations of the proposed method show excellent results from both objective and subjective viewpoints.

    Watermarking Based Image Authentication for Secure Color Image Retrieval in Large Scale Image Databases

    An important facet of traditional retrieval models is that they retrieve images and videos while considering their content and context reliable. Nevertheless, this assumption is no longer valid, since content can be faked for many reasons and to different degrees using powerful multimedia manipulation software. Our goal is to investigate new ways of detecting possible fakes on social network platforms. In this paper, we propose an approach that assists in identifying faked images by combining standard content-based image retrieval (CBIR) techniques and watermarking. We prepare a watermarked image database covering all images using LSB-based watermarking. Using Gabor features and a trained KNN classifier, the user is able to retrieve the image matching a query. The retrieved image is then authenticated by extracting the watermark and matching it against the test image.
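The final authentication step above can be sketched as follows. This is a minimal illustration of the watermark-matching check only; the Gabor feature extraction and KNN retrieval stages are not reproduced, and the 0.95 match threshold is an assumed parameter.

```python
import numpy as np

def extract_lsb_plane(img, n_bits):
    # Read the first n_bits least-significant bits in raster order.
    return img.flatten()[:n_bits] & 1

def authenticate(retrieved, expected_bits, min_match=0.95):
    # Compare the extracted LSB watermark against the expected one;
    # a low match ratio indicates tampering.
    extracted = extract_lsb_plane(retrieved, expected_bits.size)
    return float(np.mean(extracted == expected_bits)) >= min_match
```

LSB watermarks are fragile by design, so almost any manipulation of the retrieved image disturbs the bit plane and drops the match ratio.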

    Digital image watermarking techniques

    The ability to resolve ownership disputes and copyright infringement is difficult in the worldwide digital age. There is an increasing need to develop techniques that protect the owner of digital data. Digital watermarking is a technique used to embed a known piece of digital data within another piece of digital data. The embedded piece of data acts as a fingerprint for the owner, allowing the protection of copyright, authentication of the data, and tracing of illegal copies. The goal of this thesis is to produce two watermarking tools and compare their effectiveness with that of other watermarking tools. One of the tools uses a spatial watermarking technique, while the other uses a frequency-based spread spectrum technique. These represent the two current approaches to digital watermarking. Use of a standard benchmark is necessary to advance the science of digital watermarking. Until recently, there have been no standard metrics for determining the effectiveness of a particular watermarking scheme. Several recent papers propose standard procedures and metrics for comparing watermarking techniques. The proposed metrics and test-bed imagery are used as the basis for comparison with other watermark techniques. Overall, the most successful techniques model themselves after data communications techniques. In this case, the image is similar to the atmosphere (medium) and the watermark message is the signal communicated through the medium. The spread spectrum technique yields results that in some cases are comparable to commercial watermarking tools. The spatial domain tool as implemented is inadequate for comparison with the commercial tools.
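The communications analogy above can be sketched with a spread spectrum embedder and a correlation detector. For brevity this sketch works in the spatial domain rather than a frequency domain, and the embedding strength alpha and detection threshold are assumed values: the keyed ±1 spreading sequence is the signal, and the cover image plays the role of channel noise that averages out under correlation.

```python
import numpy as np

def ss_embed(image, key, alpha=2.0):
    # Add a keyed pseudo-random +/-1 spreading sequence to the image.
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=image.shape)
    return image + alpha * w

def ss_detect(suspect, key, alpha=2.0, threshold=0.5):
    # Correlate against the same keyed sequence: the cover image behaves
    # like zero-mean noise, while an embedded watermark pushes the
    # statistic toward 1.
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=suspect.shape)
    stat = np.mean((suspect - suspect.mean()) * w) / alpha
    return stat > threshold
```

Detection needs only the key, not the original image, which is what makes blind spread spectrum detection practical.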

    A digital signature and watermarking based authentication system for JPEG2000 images

    In this thesis, a digital-signature-based authentication system is introduced, which is able to protect JPEG2000 images in different flavors, including fragile authentication and semi-fragile authentication. Fragile authentication protects the image at the code-stream level, while semi-fragile authentication protects it at the content level. Semi-fragile authentication can be further classified into lossy and lossless authentication; with lossless authentication, the original image can be recovered after verification. Lossless authentication and the new image compression standard, JPEG2000, are the main topics of this thesis.
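The fragile, code-stream-level flavor above can be sketched with stdlib primitives. As an assumption for illustration, a keyed HMAC stands in for a public-key digital signature; either way, changing a single byte of the code-stream invalidates the tag.

```python
import hashlib
import hmac

def sign_codestream(codestream: bytes, key: bytes) -> bytes:
    # Authenticate the raw JPEG2000 code-stream bytes; an HMAC is used
    # here as a stand-in for a public-key signature.
    return hmac.new(key, codestream, hashlib.sha256).digest()

def verify_codestream(codestream: bytes, key: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_codestream(codestream, key), tag)
```

Semi-fragile content-level protection would instead sign features that survive recompression, which is why it tolerates JPEG2000 coding while still catching content changes.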

    Image data hiding

    Image data hiding represents a class of processes used to embed data into cover images. Robustness is one of the basic requirements for image data hiding. In the first part of this dissertation, 2D and 3D interleaving techniques combined with error-correction codes (ECC) are proposed to significantly improve the robustness of hidden data against burst errors. In most cases, the cover image cannot be inverted back to the original image after the hidden data are retrieved. In this dissertation, a novel reversible (lossless) data hiding technique is therefore introduced. This technique is based on histogram modification, which can embed a large amount of data while keeping very high visual quality for all images; its performance is hence better than that of most existing reversible data hiding algorithms. However, most existing lossless data hiding algorithms are fragile in the sense that the hidden data cannot be extracted correctly after compression or small alterations. In the last part of this dissertation, we therefore propose a novel robust lossless data hiding technique based on the patchwork idea and spatial-domain pixel modification. This technique does not generate annoying salt-and-pepper noise at all, which is unavoidable in other existing robust lossless data hiding algorithms. The technique has been successfully applied to many commonly used images, thus demonstrating its generality.
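Histogram-modification reversible hiding of the kind described above can be sketched as classic histogram shifting: pixels between the histogram's peak bin and an empty bin are shifted by one to open a slot next to the peak, and each peak-valued pixel then encodes one bit. This simplified sketch assumes the empty bin lies above the peak and that capacity suffices; it is an illustration of the principle, not the dissertation's algorithm.

```python
import numpy as np

def hs_embed(img, bits):
    hist = np.bincount(img.flatten(), minlength=256)
    peak = int(hist.argmax())
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # emptiest bin above peak
    assert hist[zero] == 0, "no empty bin available"
    out = img.astype(np.int64).copy()
    out[(out > peak) & (out < zero)] += 1            # open a slot at peak+1
    flat = out.flatten()
    idx = np.flatnonzero(flat == peak)
    assert bits.size <= idx.size, "not enough capacity"
    flat[idx[: bits.size]] += bits                   # peak -> peak+1 encodes a 1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, n_bits):
    flat = stego.astype(np.int64).flatten()
    idx = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = (flat[idx] == peak + 1).astype(int)
    flat[idx[bits == 1]] -= 1                        # undo the embedded 1s
    flat[(flat > peak) & (flat <= zero)] -= 1        # undo the shift
    return flat.reshape(stego.shape).astype(np.uint8), bits
```

Because every operation is an invertible +1/-1 shift, the cover image is recovered bit-exactly after extraction, which is the defining property of reversible hiding.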