
    A robust image watermarking technique based on quantization noise visibility thresholds

    A tremendous amount of digital multimedia data is broadcast daily over the internet. Since digital data can be duplicated quickly and easily, intellectual property protection techniques have become important; the first ones appeared about fifty years ago (see I.J. Cox, M.L. Miller, The First 50 Years of Electronic Watermarking, EURASIP J. Appl. Signal Process. 2 (2002) 126-132, for an extended review). Digital watermarking was born. Since its inception, many watermarking techniques have appeared, in all possible transformed spaces. However, an important gap in the watermarking literature concerns human visual system models. Several human visual system (HVS) model based watermarking techniques were designed in the late 1990s. Due to weak robustness results, especially against geometrical distortions, interest in such studies declined. In this paper, we take advantage of recent advances in HVS models and watermarking techniques to revisit this issue. We demonstrate that HVS-based watermarking algorithms can resist many attacks, including geometrical distortions. The perceptual model used here takes into account advanced features of the HVS identified from psychophysics experiments conducted in our laboratory. This model has been successfully applied in quality assessment and image coding schemes (M. Carnec, P. Le Callet, D. Barba, An image quality assessment method based on perception of structural information, IEEE Internat. Conf. Image Process. 3 (2003) 185-188; N. Bekkat, A. Saadane, D. Barba, Masking effects in the quality assessment of coded images, in: SPIE Human Vision and Electronic Imaging V, 3959 (2000) 211-219). In this paper the HVS model is used to create a perceptual mask in order to optimize the watermark strength; the optimal watermark obtained satisfies both invisibility and robustness requirements. Contrary to most watermarking schemes using advanced perceptual masks, and in order to best thwart the de-synchronization problem induced by geometrical distortions, we propose a Fourier domain embedding and detection technique that optimizes the amplitude of the watermark. Finally, the robustness of the resulting scheme is assessed against all attacks provided by the Stirmark benchmark. This work proposes a new digital rights management technique using an advanced human visual system model that is able to resist various kinds of attacks, including many geometrical distortions.
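
    The scheme above embeds the watermark in the Fourier domain with a perceptual mask controlling its strength. The following is a minimal sketch of that idea only, not the authors' implementation: a crude magnitude-based visibility weight stands in for the full HVS mask, and the mid-frequency carrier region, pseudo-noise carrier, `strength` parameter and grayscale input are illustrative assumptions.

```python
import numpy as np

def embed_fourier_watermark(image, watermark_bits, strength=2.0, seed=0):
    """Sketch: additive watermark in the DFT magnitude, scaled by a crude
    visibility weight (stand-in for the paper's HVS perceptual mask)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    mag, phase = np.abs(spectrum), np.angle(spectrum)

    # Restrict embedding to an assumed mid-frequency annulus.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    carrier = (radius > min(h, w) / 8) & (radius < min(h, w) / 4)

    pn = rng.choice([-1.0, 1.0], size=mag.shape)          # pseudo-noise carrier
    payload = np.resize(np.where(np.array(watermark_bits) > 0, 1.0, -1.0),
                        mag.shape)                        # tiled payload symbols
    mask = 1.0 + mag / (mag.mean() + 1e-9)                # crude visibility weight
    mag_w = mag + strength * mask * pn * payload * carrier

    marked = np.fft.ifft2(np.fft.ifftshift(mag_w * np.exp(1j * phase)))
    return np.real(marked)
```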

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. The consumer can now benefit from hardware and software that was considered state of the art only a few years ago. The advantages offered by digital technologies are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue environment the subsequent copies suffered an inherent loss in quality. This was a natural way of limiting the multiple copying of video material. With digital technology this barrier disappears, and it is possible to make as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark and ensure its invisibility. The combination of these methods led to a major improvement, yet the system was still not robust to several important geometrical attacks. In order to reach this last milestone, the system uses two distinct watermarks: a spatial domain reference watermark and the main watermark embedded in the wavelet domain. By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of the attack and revert it. Once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks. (BBC Research & Development)
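
    As a rough illustration of the spread-spectrum, wavelet-domain embedding and correlation-based detection described above (not the thesis implementation, which adds HVS weighting, error correction and a spatial reference watermark for registration), the sketch below assumes PyWavelets is available; the `alpha` and `key` parameters, the single-level Haar decomposition and even-sized grayscale frames are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def embed_ss_dwt(frame, bits, alpha=4.0, key=42):
    """Sketch: each payload bit modulates a pseudo-noise sequence added to
    one detail sub-band of a single-level Haar DWT."""
    rng = np.random.default_rng(key)
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), 'haar')
    flat = cH.ravel().copy()
    chip_len = flat.size // len(bits)
    for i, b in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chip_len)
        seg = slice(i * chip_len, (i + 1) * chip_len)
        flat[seg] += alpha * (1.0 if b else -1.0) * pn
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), 'haar')

def detect_ss_dwt(frame, n_bits, key=42):
    """Blind detector: regenerate the PN sequences and take the sign of the
    correlation with the received detail coefficients."""
    rng = np.random.default_rng(key)
    _, (cH, _, _) = pywt.dwt2(frame.astype(float), 'haar')
    flat = cH.ravel()
    chip_len = flat.size // n_bits
    bits = []
    for i in range(n_bits):
        pn = rng.choice([-1.0, 1.0], size=chip_len)
        corr = np.dot(flat[i * chip_len:(i + 1) * chip_len], pn)
        bits.append(int(corr > 0))
    return bits
```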

    Application of Discrete Wavelet Transform in Watermarking


    Image adaptive watermarking using wavelet transform

    The availability of versatile multimedia processing software and the far-reaching coverage of interconnected networks have facilitated flawless copying, manipulation and distribution of digital multimedia (digital video, audio, text and images). The ever-advancing storage and retrieval technologies have also smoothed the way for large-scale multimedia database applications. However, abuses of these facilities and technologies pose pressing threats to multimedia security management in general, and to multimedia copyright protection and content integrity verification in particular. Although cryptography has a long history of application to information and multimedia security, its undesirable characteristic of providing no protection to the media once decrypted has limited the feasibility of its widespread use. For example, an adversary can obtain the decryption key by purchasing a legal copy of the media but then redistribute the decrypted copies of the original. In response to these challenges, digital watermarking techniques have been proposed in the last decade. Digital watermarking is the procedure whereby secret information (the watermark) is embedded into the host multimedia content, such that it is: (1) hidden, i.e., not perceptually visible; and (2) recoverable, even after the content is degraded by different attacks such as filtering, JPEG compression, noise, cropping, etc. The two basic requirements for an effective watermarking scheme, imperceptibility and robustness, conflict with each other. The main focus of this thesis is to provide a good tradeoff between the perceptual quality of the watermarked image and its robustness against different attacks. For this purpose, we discuss two robust digital watermarking techniques in the discrete wavelet transform (DWT) domain: one is fusion-based watermarking, the other is spread-spectrum-based watermarking. Both techniques are image adaptive and employ a contrast-sensitivity-based human visual system (HVS) model. HVS models give a direct way to determine the maximum strength of watermark signal that each portion of an image can tolerate without affecting its visual quality. In the fusion-based watermarking technique, a grayscale image (logo) is used as the watermark. In the embedding process, both the host image and the watermark image are transformed into the DWT domain, where their coefficients are fused according to a combination rule that takes into account contrast sensitivity characteristics of the HVS. The method merges the watermark coefficients more strongly into the more salient components at the various resolution levels of the host image, which provides simultaneous spatial localization and frequency spread of the watermark and thus robustness against different attacks. The extraction process requires the original image. In the spread-spectrum-based watermarking technique, a visually recognizable binary image is used as the watermark. In the embedding process, the host image is transformed into the DWT domain and, using the contrast-sensitivity-based HVS model, the watermark bits are adaptively embedded through a pseudo-noise sequence into the middle-frequency sub-bands to provide robustness against different attacks. No original image is required for watermark extraction. Simulation results for various attacks are presented to demonstrate the robustness of both algorithms, verify the theoretical observations and demonstrate the feasibility of the digital watermarking algorithms for use in multimedia standards.
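
    A minimal sketch of the fusion-based embedding idea follows; it is not the thesis algorithm (which weights the fusion with a contrast-sensitivity-based HVS model rather than a constant `alpha`), and it assumes PyWavelets is available and that the logo has already been resized to the host image dimensions.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fusion_embed(host, logo, levels=2, alpha=0.05):
    """Sketch: decompose host and same-sized logo with a 2-level Haar DWT,
    then add scaled logo coefficients into every host sub-band (the thesis
    replaces the constant alpha with an HVS/CSF-derived weight)."""
    host_c = pywt.wavedec2(host.astype(float), 'haar', level=levels)
    logo_c = pywt.wavedec2(logo.astype(float), 'haar', level=levels)

    fused = [host_c[0] + alpha * logo_c[0]]                 # approximation band
    for (hH, hV, hD), (lH, lV, lD) in zip(host_c[1:], logo_c[1:]):
        fused.append((hH + alpha * lH, hV + alpha * lV, hD + alpha * lD))
    return pywt.waverec2(fused, 'haar')
```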

    Digital watermarking in medical images

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/12/2005. This thesis addresses authenticity and integrity of medical images using watermarking. Hospital Information Systems (HIS), Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) now form the information infrastructure for today's healthcare, as they provide new ways to store, access and distribute medical data that also involve some security risks. Watermarking can be seen as an additional tool for security measures. As the medical tradition is very strict with the quality of biomedical images, the watermarking method must be reversible or, if not, a Region of Interest (ROI) needs to be defined and left intact. Watermarking should also serve as an integrity control and should be able to authenticate the medical image. Three watermarking techniques were proposed. First, Strict Authentication Watermarking (SAW) embeds the digital signature of the image in the ROI, and the image can be reverted to its original values bit by bit if required. Second, Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) uses the same principle as SAW, but is able to survive some degree of JPEG compression. Third, Authentication Watermarking with Tamper Detection and Recovery (AW-TDR) is able to localise tampering, whilst simultaneously reconstructing the original image.
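
    A minimal sketch of the strict-authentication idea is given below. It is not the thesis method: a SHA-256 hash stands in for the digital signature, the LSB plane of the ROI is used as the embedding channel, and exact reversibility (re-inserting the replaced bits) is omitted; the `roi_slice` parameter and uint8 grayscale input are assumptions.

```python
import hashlib
import numpy as np

def saw_embed(image, roi_slice):
    """Sketch: hash the ROI with its LSB plane cleared, then write the hash
    bits into those LSBs (assumes uint8 image and ROI of >= 256 pixels)."""
    marked = image.copy()
    roi = marked[roi_slice]
    digest = hashlib.sha256((roi & 0xFE).tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    flat = roi.ravel()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    marked[roi_slice] = flat.reshape(roi.shape)
    return marked

def saw_verify(image, roi_slice):
    """Recompute the hash of the LSB-cleared ROI and compare it with the
    bits stored in the ROI's least significant bits."""
    roi = image[roi_slice]
    digest = hashlib.sha256((roi & 0xFE).tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bool(np.array_equal(roi.ravel()[:bits.size] & 1, bits))
```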

    A Novel HVS-based Watermarking Scheme in CT Domain

    In this paper, a novel watermarking technique in the contourlet transform (CT) domain is presented. The proposed algorithm takes advantage of a multiscale framework and multi-directionality to extract the significant frequency, luminance and texture components of an image. Unlike conventional methods in the contourlet domain, the mask function is computed pixel by pixel, taking into account the frequency, luminance and texture content of all the image subbands, including the low-pass subband and the directional subbands. This adaptive nature allows the scheme to balance imperceptibility and robustness. The watermark is detected by computing the correlation. Finally, the experimental results demonstrate the imperceptibility of the scheme and its robustness against standard watermarking attacks.
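
    Since the paper detects the watermark by computing a correlation, a generic sketch of such a detector is shown below; it operates on any vector of transform coefficients (contourlet sub-band coefficients in the paper), and the zero threshold and key-seeded Gaussian watermark are illustrative assumptions.

```python
import numpy as np

def correlation_detect(coeffs, key=7, threshold=0.0):
    """Sketch of blind correlation detection: regenerate the pseudo-random
    watermark from the key and correlate it with the received coefficients.
    The threshold would normally be derived from a target false-alarm
    probability rather than fixed at zero."""
    rng = np.random.default_rng(key)
    w = rng.standard_normal(coeffs.size)
    rho = float(np.dot(coeffs.ravel().astype(float), w)) / coeffs.size
    return rho > threshold, rho
```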

    Data hiding in images based on fractal modulation and diversity combining

    The current work provides a new data-embedding infrastructure based on fractal modulation. The embedding problem is tackled from a communications point of view. The data to be embedded becomes the signal to be transmitted through a watermark channel. The channel could be the image itself or some manipulation of the image. The image self-noise and the noise due to attacks are the two sources of noise in this paradigm. At the receiver, the image self-noise has to be suppressed, while noise due to attacks may sometimes be predicted and inverted. The concepts of fractal modulation and deterministic self-similar signals are extended to two-dimensional images. These novel techniques are used to build a deterministic bi-homogeneous watermark signal that embodies the binary data to be embedded. The binary data to be embedded is repeated and scaled with different amplitudes at each level and is used as the wavelet decomposition pyramid. The binary data is appended with special marking data, which is used during demodulation to identify and correct unreliable or distorted blocks of wavelet coefficients. This specially constructed pyramid is inverted using the inverse discrete wavelet transform to obtain the self-similar watermark signal. In the data embedding stage, the well-established linear additive technique is used to add the watermark signal to the cover image and generate the watermarked (stego) image. Data extraction from a potential stego image is done using diversity combining. Neither the original image nor the original binary sequence (or watermark signal) is required during extraction. A prediction of the original image is obtained using a cross-shaped window and is used to suppress the image self-noise in the potential stego image. The resulting signal is then decomposed using the discrete wavelet transform. The number of levels and the wavelet used are the same as those used in the watermark signal generation stage. A thresholding process similar to wavelet de-noising is used to identify whether a particular coefficient is reliable or not. A decision is made as to whether a block is reliable based on the marking data present in each block, and corrections are sometimes applied to the blocks. Finally, the selected blocks are combined based on the diversity combining strategy to extract the embedded binary data.
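
    A minimal sketch of the fractal-modulation construction described above follows, assuming PyWavelets is available; the Haar wavelet, the per-level amplitude decay and the simple tiling of the payload are illustrative stand-ins for the thesis' construction, and the marking data used for block reliability decisions is omitted.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fractal_watermark(bits, shape, levels=3, base_amp=4.0, decay=0.5):
    """Sketch: tile the payload symbols into every detail sub-band of a
    wavelet pyramid, scaling the amplitude per level, then invert the
    pyramid to obtain a self-similar watermark signal."""
    symbols = np.where(np.array(bits) > 0, 1.0, -1.0)
    # Build an empty pyramid just to get the sub-band shapes.
    template = pywt.wavedec2(np.zeros(shape), 'haar', level=levels)
    coeffs = [np.zeros_like(template[0])]                 # empty approximation
    for lvl, (cH, _, _) in enumerate(template[1:]):
        amp = base_amp * (decay ** lvl)                   # per-level scaling
        tile = np.resize(symbols, cH.shape)               # repeat the payload
        coeffs.append((amp * tile, amp * tile, amp * tile))
    return pywt.waverec2(coeffs, 'haar')

# Linear additive embedding, as in the thesis:
# stego = cover.astype(float) + fractal_watermark(bits, cover.shape)
```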

    Towards Optimal Copyright Protection Using Neural Networks Based Digital Image Watermarking

    In the field of digital watermarking, digital image watermarking for copyright protection has attracted a lot of attention in the research community. Digital watermarking encompasses various techniques for protecting digital content. Among these techniques, the Discrete Wavelet Transform (DWT) provides high image imperceptibility and robustness. Over the years, researchers have designed watermarking techniques with robustness in mind, so that the watermark resists image processing operations. Furthermore, the requirements of a good watermarking technique include a tradeoff between robustness, image quality (imperceptibility) and capacity. In this paper, we present an extensive literature review of existing DWT techniques and of those combined with other techniques such as neural networks. In addition, we discuss the contribution of neural networks to copyright protection. Finally, we identify the research gaps in current watermarking schemes, so that optimal techniques can more easily be obtained that make the watermark robust to attacks while maintaining imperceptibility, thereby enhancing copyright protection.

    DCT-Based Image Feature Extraction and Its Application in Image Self-Recovery and Image Watermarking

    Feature extraction is a critical element in the design of image self-recovery and watermarking algorithms, and its quality can have a major influence on the performance of these processes. The objective of the work presented in this thesis is to develop an effective methodology for feature extraction in the discrete cosine transform (DCT) domain and to apply it in the design of adaptive image self-recovery and image watermarking algorithms. The methodology uses the most significant DCT coefficients, which may lie in any frequency range, to detect and classify gray-level patterns. In this way, gray-level variations with a wider range of spatial frequencies can be examined without increasing computational complexity, and the methodology is able to distinguish gray-level patterns rather than only the orientations of simple edges, as in many existing DCT-based methods. The proposed image self-recovery algorithm uses the developed feature extraction methodology to detect and classify blocks that contain significant gray-level variations. According to the profile of each block, the critical frequency components representing the specific gray-level pattern of the block are chosen for encoding. The code lengths are made variable depending on the importance of these components in defining the block's features, which makes the encoding of critical frequency components more precise while keeping the total length of the reference code short. The proposed image self-recovery algorithm has resulted in remarkably shorter reference codes that are only 1/5 to 3/5 the length of those produced by existing methods, and consequently in superior visual quality in the embedded images. As the shorter codes contain the critical image information, the proposed algorithm has also achieved above-average reconstruction quality for various tampering rates. The proposed image watermarking algorithm is computationally simple and designed for blind extraction of the watermark. The principle of the algorithm is to embed the watermark in the locations where image data alterations are least visible. To this end, the properties of the HVS are used to identify the gray-level image features of such locations. The characteristics of the frequency components representing these features are identified by applying the DCT-based feature extraction methodology developed in this thesis. The strength with which the watermark is embedded is made adaptive to the local gray-level characteristics. Simulation results have shown that the proposed watermarking algorithm results in significantly higher visual quality in the watermarked images than that of the reported methods, with a difference in PSNR of about 2.7 dB, while the embedded watermark is highly robust against JPEG compression, even at low quality factors, and against other common image processing operations. The good performance of the proposed image self-recovery and watermarking algorithms is an indication of the effectiveness of the developed feature extraction methodology. This methodology can be applied in a wide range of applications and is suitable for any process where DCT data is available.
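
    As an illustration of the block-based DCT feature extraction described above (a sketch, not the thesis algorithm), the code below keeps the positions and signs of the largest-magnitude AC coefficients of each 8x8 block as a crude descriptor of its gray-level pattern; it assumes SciPy is available and a 2D grayscale input.

```python
import numpy as np
from scipy.fft import dctn  # SciPy, assumed available

def block_dct_features(image, block=8, n_coeffs=5):
    """Sketch: for each block, record the flat indices and signs of the
    n_coeffs AC coefficients with the largest magnitude (any frequency)."""
    h, w = image.shape
    feats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(image[y:y + block, x:x + block].astype(float),
                     norm='ortho')
            ac = c.copy()
            ac[0, 0] = 0.0                                # ignore the DC term
            idx = np.argsort(np.abs(ac), axis=None)[::-1][:n_coeffs]
            feats[(y, x)] = [(int(i), float(np.sign(ac.flat[i]))) for i in idx]
    return feats
```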

    Scalable image quality assessment with 2D mel-cepstrum and machine learning approach

    Measurement of image quality is of fundamental importance to numerous image and video processing applications. Objective image quality assessment (IQA) is a two-stage process comprising the following: (a) extraction of important information and discarding of the redundant information, and (b) pooling of the detected features using appropriate weights. These two stages are not easy to tackle due to the complex nature of the human visual system (HVS). In this paper, we first investigate image features based on the two-dimensional (2D) mel-cepstrum for the purpose of IQA. It is shown that these features are effective since they can represent the structural information, which is crucial for IQA. Moreover, they are also beneficial in a reduced-reference scenario where only partial reference image information is used for quality assessment. We address the second issue by exploiting machine learning. In our opinion, the well-established methodology of machine learning/pattern recognition has not been adequately used for IQA so far; we believe it will be an effective tool for feature pooling, since the required weights/parameters can be determined in a more convincing way via training with the ground truth obtained from subjective scores. This helps to overcome the limitations of existing pooling methods, which tend to be overly simplistic and lack theoretical justification. Therefore, we propose a new metric by formulating IQA as a pattern recognition problem. Extensive experiments conducted using six publicly available image databases (3211 images in total, with diverse distortions) and one video database (with 78 video sequences) demonstrate the effectiveness and efficiency of the proposed metric in comparison with seven relevant existing metrics. (C) 2011 Elsevier Ltd. All rights reserved.
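
    The pooling stage formulated as a pattern-recognition problem can be illustrated with a small sketch: a regressor is trained to map per-image feature vectors to subjective scores. The use of scikit-learn's SVR and the specific hyperparameters are assumptions for illustration; the paper's 2D mel-cepstrum feature computation is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR  # scikit-learn, assumed available

def train_iqa_metric(features, mos):
    """Sketch of learning-based pooling: fit a regressor that maps per-image
    feature vectors (e.g. 2D mel-cepstrum differences between reference and
    test images) to subjective mean opinion scores (MOS)."""
    model = SVR(kernel='rbf', C=10.0, epsilon=0.1)
    model.fit(np.asarray(features), np.asarray(mos))
    return model

# Usage: predicted = train_iqa_metric(train_feats, train_mos).predict(test_feats)
```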