53 research outputs found

    Spread spectrum-based video watermarking algorithms for copyright protection

    Get PDF
    Digital technologies have seen an unprecedented expansion in recent years. Consumers can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technologies are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but because of the analogue medium each subsequent copy suffered an inherent loss in quality. This was a natural limit on the multiple copying of video material. With digital technology this barrier disappears: it is possible to make as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust yet invisible mark. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To achieve this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain.
    By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of the attack and revert it. Once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind, DWT-based video watermarking system, robust to a wide range of attacks. (BBC Research & Development)
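
The spread-spectrum casting mentioned above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual embedder: the chip length, the strength `alpha`, and the zero-mean pseudo-noise (PN) sequence are illustrative choices, and the host here is a flat coefficient vector rather than wavelet data.

```python
import numpy as np

def embed_spread_spectrum(coeffs, bits, alpha=0.1, seed=42):
    """Cast watermark bits onto host coefficients via spread spectrum.

    Each bit is spread over a run of `chips` pseudo-noise samples and
    added to the host coefficients, scaled by the strength alpha.
    """
    rng = np.random.default_rng(seed)
    chips = len(coeffs) // len(bits)
    pn = rng.choice([-1.0, 1.0], size=chips * len(bits))
    symbols = np.repeat([1.0 if b else -1.0 for b in bits], chips)
    marked = coeffs.copy()
    marked[:chips * len(bits)] += alpha * pn * symbols
    return marked, pn

def detect_spread_spectrum(marked, pn, n_bits):
    """Blind detection: correlate received coefficients with the PN sequence."""
    chips = len(pn) // n_bits
    prod = marked[:len(pn)] * pn
    return [bool(prod[i * chips:(i + 1) * chips].sum() > 0)
            for i in range(n_bits)]
```

Because detection only needs the PN sequence (regenerable from the seed), the host is never required, which is what makes the scheme blind; the host coefficients simply act as noise that averages out over the chips.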

    Estimating Watermarking Capacity in Gray Scale Images Based on Image Complexity

    Get PDF
    Capacity is one of the most important parameters in image watermarking. Different works have addressed this subject under different assumptions about the image and the communication channel; however, there is no global agreement on how to estimate watermarking capacity. In this paper, we suggest a method to find the capacity of images based on their complexity. We propose a new method to estimate image complexity based on the concept of Region Of Interest (ROI). Our experiments on 2000 images showed that the proposed measure correlates best with watermarking capacity in comparison with other complexity measures. In addition, we propose a new method to calculate capacity using the proposed image complexity measure. Our capacity estimation method shows better robustness and image quality in comparison with recent works in this field.
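
As an illustration of tying capacity to a scalar complexity score, the sketch below uses block-wise gray-level entropy as a stand-in measure; the paper's ROI-based measure and its actual capacity mapping are not reproduced, and `bits_per_unit` is an invented constant.

```python
import numpy as np

def block_entropy_complexity(img, block=8):
    """Stand-in complexity score: mean gray-level entropy over 8x8 blocks.

    Flat regions score 0; busy textured regions score higher, mirroring
    the intuition that complex images can hide more watermark bits.
    """
    h, w = img.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            hist, _ = np.histogram(img[r:r + block, c:c + block],
                                   bins=256, range=(0, 256))
            p = hist[hist > 0] / hist.sum()
            scores.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(scores))

def estimate_capacity_bits(img, bits_per_unit=0.02):
    # Illustrative linear mapping from complexity to embeddable bits.
    return int(img.size * bits_per_unit * block_entropy_complexity(img))
```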

    Fast watermarking of MPEG-1/2 streams using compressed-domain perceptual embedding and a generalized correlator detector

    Get PDF
    A novel technique is proposed for watermarking of MPEG-1 and MPEG-2 compressed video streams. The proposed scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degradation of the video quality. The watermark is detected without the use of the original video sequence. A modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme is able to withstand several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.
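
The generalized-correlator idea, nonlinear preprocessing before correlation, can be sketched as follows. The hard-limiter (sign) nonlinearity is a common textbook choice and an assumption here; the paper's exact preprocessing function and detection threshold are not reproduced.

```python
import numpy as np

def generalized_correlator(received, watermark, nonlinearity=np.sign):
    """Detection statistic: correlate a nonlinearly preprocessed signal
    with the watermark pattern.

    A sign (hard-limiter) preprocessor makes the correlator robust to
    heavy-tailed host statistics such as DCT coefficient distributions.
    """
    g = nonlinearity(received)
    return float(np.dot(g, watermark)) / len(watermark)

def is_marked(received, watermark, threshold=0.2):
    # Threshold is illustrative; in practice it is set from a target
    # false-positive probability.
    return generalized_correlator(received, watermark) > threshold
```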

    Geometry-based spherical JND modeling for 360° display

    Full text link
    360° videos have received widespread attention due to the realistic and immersive experience they offer users. To date, accurately modeling user perception on a 360° display remains a challenging issue. In this paper, we exploit the visual characteristics of 360° projection and display and extend the popular just noticeable difference (JND) model to a spherical JND (SJND). First, we propose a quantitative 2D-JND model by jointly considering spatial contrast sensitivity, luminance adaptation, and the texture masking effect. In particular, our model introduces an entropy-based region classification and uses different parameters for different types of regions for better modeling performance. Second, we extend our 2D-JND model to SJND by jointly exploiting latitude projection and field of view during 360° display. With this extension, SJND reflects both the characteristics of the human visual system and those of the 360° display. Third, our SJND model is more consistent with user perception in subjective tests and tolerates more distortion at lower bit rates during 360° video compression. To further examine the effectiveness of our SJND model, we embed it in Versatile Video Coding (VVC) compression. Compared with the state of the art, our SJND-VVC framework significantly reduces the bit rate with negligible loss in visual quality.
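
As a rough illustration of a 2D-JND profile combining luminance adaptation and texture masking, the sketch below uses a classic piecewise luminance-adaptation curve; the constants are illustrative, and the paper's fitted parameters, entropy-based region classification, and spherical extension are not reproduced.

```python
import numpy as np

def jnd_2d(luminance, texture_contrast):
    """Illustrative 2D-JND: luminance adaptation plus texture masking.

    The luminance term is high in dark regions, dips around mid-gray,
    and rises slowly toward white; the masking term grows with local
    texture contrast, since busy regions hide more distortion.
    """
    lum = np.where(
        luminance <= 127,
        17.0 * (1.0 - np.sqrt(luminance / 127.0)) + 3.0,
        3.0 / 128.0 * (luminance - 127.0) + 3.0,
    )
    masking = 0.01 * texture_contrast
    return lum + masking
```

A pixel's distortion stays invisible as long as it remains below its JND value, which is what lets a JND-guided encoder spend fewer bits where the model predicts tolerance.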

    Global motion compensated visual attention-based video watermarking

    Get PDF
    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions, resulting in poor visual quality in the host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion to arrive at a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher-strength watermarking in the visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (non-blind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use a VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
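
The two-level weighting idea can be sketched directly: threshold a saliency map and assign a low embedding strength to attentive regions and a high strength elsewhere. The threshold and strength values below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def two_level_weight_map(saliency, threshold=0.5, low=0.02, high=0.1):
    """Two-level embedding-strength map from a saliency map in [0, 1].

    Salient (attentive) regions get the low strength so distortion stays
    invisible; non-salient regions take the high strength for robustness.
    """
    return np.where(saliency >= threshold, low, high)

def embed(host, watermark_pattern, saliency):
    """Add the watermark pattern scaled by the per-pixel strength map."""
    alpha = two_level_weight_map(saliency)
    return host + alpha * watermark_pattern
```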

    DCT-Based Image Feature Extraction and Its Application in Image Self-Recovery and Image Watermarking

    Get PDF
    Feature extraction is a critical element in the design of image self-recovery and watermarking algorithms and its quality can have a big influence on the performance of these processes. The objective of the work presented in this thesis is to develop an effective methodology for feature extraction in the discrete cosine transform (DCT) domain and apply it in the design of adaptive image self-recovery and image watermarking algorithms. The methodology is to use the most significant DCT coefficients that can be at any frequency range to detect and to classify gray level patterns. In this way, gray level variations with a wider range of spatial frequencies can be looked into without increasing computational complexity and the methodology is able to distinguish gray level patterns rather than the orientations of simple edges only as in many existing DCT-based methods. The proposed image self-recovery algorithm uses the developed feature extraction methodology to detect and classify blocks that contain significant gray level variations. According to the profile of each block, the critical frequency components representing the specific gray level pattern of the block are chosen for encoding. The code lengths are made variable depending on the importance of these components in defining the block’s features, which makes the encoding of critical frequency components more precise, while keeping the total length of the reference code short. The proposed image self-recovery algorithm has resulted in remarkably shorter reference codes that are only 1/5 to 3/5 of those produced by existing methods, and consequently a superior visual quality in the embedded images. As the shorter codes contain the critical image information, the proposed algorithm has also achieved above average reconstruction quality for various tampering rates. The proposed image watermarking algorithm is computationally simple and designed for the blind extraction of the watermark. 
    The principle of the algorithm is to embed the watermark in the locations where image data alterations are the least visible. To this end, the properties of the HVS are used to identify the gray level image features of such locations. The characteristics of the frequency components representing these features are identified by applying the DCT-based feature extraction methodology developed in this thesis. The strength with which the watermark is embedded is made adaptive to the local gray level characteristics. Simulation results have shown that the proposed watermarking algorithm results in significantly higher visual quality in the watermarked images than that of the reported methods, with a difference in PSNR of about 2.7 dB, while the embedded watermark is highly robust against JPEG compression even at low quality factors, and against some other common image processing operations. The good performance of the proposed image self-recovery and watermarking algorithms is an indication of the effectiveness of the developed feature extraction methodology. This methodology can be applied in a wide range of applications, and it is suitable for any process where DCT data is available.
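
The selection of significant DCT coefficients at arbitrary frequencies can be sketched as follows. The block profiling and variable-length encoding of the thesis are not reproduced; `keep` is an illustrative parameter, and the DCT is computed from the orthonormal DCT-II basis matrix to keep the sketch dependency-free.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    mat[0] *= 1 / np.sqrt(2)
    return mat * np.sqrt(2 / n)

def significant_dct_features(block, keep=5):
    """Indices (flattened) of the largest-magnitude AC coefficients.

    Unlike edge-orientation methods fixed to a few low frequencies, the
    selected coefficients may fall anywhere in the block's spectrum,
    which is what lets the method distinguish gray level patterns.
    """
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T
    flat = np.abs(coeffs).ravel()
    flat[0] = -1.0  # exclude the DC term
    return np.argsort(flat)[-keep:][::-1]
```

For a block varying only vertically, all selected coefficients land in the first column of the DCT plane, i.e. pure vertical frequencies.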

    Rapid intelligent watermarking system for high-resolution grayscale facial images

    Get PDF
    Facial captures are widely used in many access control applications to authenticate individuals and grant access to protected information and locations. For instance, in passport or smart card applications, facial images must be secured during the enrollment process, prior to exchange and storage. Digital watermarking may be used to assure the integrity and authenticity of these facial images against unauthorized manipulation, through fragile and robust watermarking, respectively. Other biometric traits can also be embedded as invisible watermarks in these facial captures to improve individual verification. Evolutionary Computation (EC) techniques have been proposed in the Intelligent Watermarking (IW) literature to optimize watermark embedding parameters. The goal of such an optimization problem is to find the trade-off between the conflicting objectives of watermark quality and robustness. Securing streams of high-resolution biometric facial captures results in a large number of optimization problems with high-dimensional search spaces. For homogeneous image streams, the optimal solutions for one image block can be reused for other image blocks having the same texture features. Therefore, the computational complexity of handling a stream of high-resolution facial captures is significantly reduced by recalling such solutions from an associative memory instead of re-optimizing each whole facial capture. In this thesis, an associative memory is proposed that stores the previously calculated solutions for different categories of texture, using the optimization results of the whole image for a few training facial images. A multi-hypothesis approach is adopted: the associative memory stores solutions for different clustering resolutions (numbers of block clusters based on texture features), and the optimal clustering resolution is finally selected based on the watermarking metrics for each facial image during generalization.
    This approach was verified using streams of facial captures from the PUT database (Kasinski et al., 2008). It was compared against a baseline system representing traditional IW methods with full optimization for all stream images. Both the proposed and baseline systems are compared with respect to the quality of the solutions produced and the computational complexity, measured in fitness evaluations. The proposed approach resulted in a decrease of 95.5% in computational burden with little impact on watermarking performance for a stream of 198 facial images. The proposed framework, Blockwise Multi-Resolution Clustering (BMRC), has been published in Machine Vision and Applications (Rabil et al., 2013a). With this framework, the stream of high-dimensionality optimization problems is replaced by a few training optimizations followed by recalls from an associative memory storing the training artifacts. Optimization problems with high-dimensional search spaces are challenging and complex, and can reach a dimensionality of 49k variables, represented using 293k bits, for high-resolution facial images. In this thesis, this large-dimensionality problem is decomposed into smaller problems representing image blocks, which resolves the convergence problems encountered when handling the larger problem. Local watermarking metrics are used in cooperative coevolution at the block level to reach the overall solution. The elitism mechanism is modified such that, for each position, the blocks with the highest local watermarking metrics are fetched across all candidate solutions and concatenated to form the elite candidate solutions. This approach resolves the premature convergence of traditional EC methods, and a 17% improvement in watermarking fitness is accomplished for facial images of resolution 2048×1536. This improved fitness is achieved in few iterations, implying an optimization speedup.
    The proposed algorithm, Blockwise Coevolutionary Genetic Algorithm (BCGA), has been published in Expert Systems with Applications (Rabil et al., 2013c). The concepts and frameworks presented in this thesis can be generalized to any stream of optimization problems with a large search space, where the candidate solutions consist of solutions to smaller-granularity subproblems that affect the overall solution. The challenge in applying this approach is finding the significant feature of this smaller granularity that affects the overall optimization problem. In this thesis, the texture features of the smaller-granularity blocks represented in the candidate solutions affect the watermarking fitness optimization of the whole image, and the local metrics of these smaller-granularity problems indicate the fitness produced for the larger problem. Another application proposed in this thesis is to embed offline signature features as an invisible watermark in the facial captures in passports, to be used for individual verification during border crossing. The offline signature is captured from forms signed at the border and verified against the embedded features. The individual verification thus relies on one physical biometric trait, represented by the facial captures, and one behavioral trait, represented by the offline signature.
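
The modified elitism mechanism can be sketched abstractly: for each block position, the best block (by its local metric) is fetched across all candidate solutions, and the winners are concatenated into one elite candidate. The function names and data layout below are illustrative, not the thesis's implementation.

```python
def blockwise_elite(candidates, local_fitness):
    """Assemble an elite candidate from per-block winners.

    `candidates` is a list of candidate solutions, each a list of
    per-block parameter sets; `local_fitness(block)` scores one block
    (higher is better). The elite concatenates, position by position,
    the best block found in any candidate.
    """
    n_blocks = len(candidates[0])
    elite = []
    for pos in range(n_blocks):
        best = max((cand[pos] for cand in candidates), key=local_fitness)
        elite.append(best)
    return elite
```

Because each position is optimized independently of the others, the elite can outperform every individual candidate, which is the mechanism credited with escaping premature convergence.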

    Visual attention-based image watermarking

    Get PDF
    Imperceptibility and robustness are two complementary but fundamental requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often introduce distortions, resulting in poor visual quality in the host media. If the distortion due to high-strength watermarking avoids visually attentive regions, it is unlikely to be noticeable to any viewer. In this paper, we exploit this concept and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower- and higher-strength watermarks in visually salient and non-salient regions, respectively. A new low-complexity wavelet-domain visual attention model is proposed that allows us to design new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art method in joint saliency detection and low computational complexity. In evaluating watermarking performance, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% and 40% against JPEG2000 compression and common filtering attacks, respectively, are reported over existing algorithms that do not use a visual attention model.
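
A low-complexity wavelet-domain saliency map in the spirit described above can be sketched with a one-level Haar transform, pooling detail-subband energy. This is an illustrative stand-in, not the paper's model, and the normalization to [0, 1] is an assumption.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_saliency(img):
    """Stand-in saliency: pooled detail-subband energy, scaled to [0, 1].

    Regions with strong high-frequency structure (edges, texture) light
    up; flat regions stay at zero, marking them safe for high-strength
    embedding under the scheme described above.
    """
    _, lh, hl, hh = haar_dwt2(np.asarray(img, dtype=float))
    energy = lh ** 2 + hl ** 2 + hh ** 2
    peak = energy.max()
    return energy / peak if peak > 0 else energy
```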

    Video Quality Metrics

    Get PDF