
    Image watermarking based on the space/spatial-frequency analysis and Hermite functions expansion

    An image watermarking scheme that combines Hermite functions expansion and space/spatial-frequency analysis is proposed. In the first step, the Hermite functions expansion is employed to select busy regions for watermark embedding. In the second step, the space/spatial-frequency representation and Hermite functions expansion are combined to design an imperceptible watermark, using the host's local frequency content. The Hermite expansion is performed using the fast Hermite projection method. A recursive realization of the Hermite functions significantly speeds up the algorithms for region selection and watermark design. Watermark detection is performed within the space/spatial-frequency domain. Detection performance is increased due to the high information redundancy in that domain compared with the space or frequency domain alone. The performance of the proposed procedure has been tested experimentally for different watermark strengths, i.e., for different values of the peak signal-to-noise ratio (PSNR). The proposed approach provides high detection performance even for high PSNR values and offers a good compromise between detection performance (including robustness to a wide variety of common attacks) and imperceptibility.
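The recursive realization mentioned in the abstract follows the standard three-term recurrence for Hermite functions. The sketch below is not the authors' implementation; the grid, order, and test signal are illustrative assumptions:

```python
import numpy as np

def hermite_functions(n_max, x):
    # Hermite functions psi_0..psi_{n_max} via the three-term recurrence:
    # psi_n = sqrt(2/n) * x * psi_{n-1} - sqrt((n-1)/n) * psi_{n-2}
    psi = np.zeros((n_max + 1, x.size))
    psi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(2, n_max + 1):
        psi[n] = (np.sqrt(2.0 / n) * x * psi[n - 1]
                  - np.sqrt((n - 1) / n) * psi[n - 2])
    return psi

# Project a toy signal onto the first few Hermite functions
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
psi = hermite_functions(5, x)
f = x * np.exp(-x ** 2 / 2)             # odd test signal
coeffs = psi @ f * dx                   # Riemann-sum inner products
```

The recurrence avoids evaluating Hermite polynomials explicitly, which is what makes the fast projection practical for high orders.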

    Audio, Text, Image, and Video Digital Watermarking Techniques for Digital Media Security

    The proliferation of multimedia content as digital media assets, encompassing audio, text, images, and video, has led to increased risks of unauthorized usage and copyright infringement. Online piracy serves as a prominent example of such misuse. To address these challenges, watermarking techniques have been developed to protect the copyright of digital media while maintaining the integrity of the underlying content. Key characteristics evaluated in watermarking methods include capacity, security, robustness, and imperceptibility, with robustness playing a crucial role. This paper presents a comparative analysis of digital watermarking methods, highlighting the superior security and effective watermark image recovery offered by singular value decomposition. The research community has shown significant interest in watermarking, resulting in the development of various methods in both the spatial and transform domains. Transform-domain approaches such as the Discrete Cosine Transform, Discrete Wavelet Transform, and Singular Value Decomposition, along with their combinations, have been explored to enhance the effectiveness of digital watermarking methods.
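As a rough illustration of SVD-based embedding, the sketch below perturbs the leading singular values of a host block and recovers the watermark using the original singular values as side information. The block size, strength `alpha`, and payload length are illustrative assumptions, not parameters from the surveyed methods:

```python
import numpy as np

rng = np.random.default_rng(0)
host = rng.random((64, 64))             # stand-in for a host image block
k, alpha = 4, 0.01                      # illustrative capacity and strength
w = rng.standard_normal(k)              # watermark sequence (assumed form)

# Embed: perturb the k leading singular values, then rebuild the image
U, S, Vt = np.linalg.svd(host)
S_marked = S.copy()
S_marked[:k] += alpha * w
marked = (U * S_marked) @ Vt            # same as U @ diag(S_marked) @ Vt

# Extraction with the original singular values as side information
S_rec = np.linalg.svd(marked, compute_uv=False)
w_rec = (S_rec[:k] - S[:k]) / alpha
```

Because the leading singular values of natural image blocks are well separated, a small perturbation leaves their ordering intact, which is what makes the recovery step work.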

    A Robust Algorithm of Digital Image Watermarking Based on Discrete Wavelet Transform

    In this paper, a robust blind digital image watermarking algorithm based on the discrete wavelet transform is introduced. Digital image watermarking is a technology developed to protect digital images from illegal manipulation. Watermarking algorithms based on the discrete wavelet transform have been widely recognized as more prevalent than others, owing to the wavelets' excellent spatial localization, frequency spread, and multi-resolution characteristics, which resemble the theoretical models of the human visual system.
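One common way to make a DWT scheme blind is to quantize detail coefficients (quantization index modulation). The following one-level Haar sketch is a generic illustration of that idea, not the paper's algorithm; the image size, step `delta`, and choice of the HL subband are assumptions:

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar transform (invertible; unnormalized averaging form)
    a = (img[0::2] + img[1::2]) / 2     # row pairs: average
    d = (img[0::2] - img[1::2]) / 2     # row pairs: detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.zeros_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

rng = np.random.default_rng(1)
img = rng.random((8, 8)) * 255          # toy host image
bits = rng.integers(0, 2, size=16)      # payload
delta = 8.0                             # quantization step (strength)

# Embed by quantization index modulation (QIM) of the HL coefficients:
# bit 0 -> nearest multiple of delta, bit 1 -> nearest odd multiple of delta/2
LL, LH, HL, HH = haar2d(img)
c = HL.flatten()
c[:16] = delta * np.round((c[:16] - bits * delta / 2) / delta) + bits * delta / 2
marked = ihaar2d(LL, LH, c.reshape(HL.shape), HH)

# Blind extraction: re-transform and read the lattice parity, no original needed
_, _, HL_r, _ = haar2d(marked)
bits_rec = np.round(HL_r.flatten()[:16] / (delta / 2)).astype(int) % 2
```

The extractor needs only `delta`, which is what "blind" means here: neither the original image nor the original coefficients are required.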

    Wavelet Domain Watermark Detection and Extraction using the Vector-based Hidden Markov Model

    Multimedia data piracy is a growing problem in view of the ease and simplicity provided by the internet in transmitting and receiving such data. A possible solution to preclude unauthorized duplication or distribution of digital data is watermarking: the embedding of an identifiable piece of information that provides security against multimedia piracy. This thesis is concerned with the investigation of various image watermarking schemes in the wavelet domain using the statistical properties of the wavelet coefficients. The wavelet subband coefficients of natural images have significantly non-Gaussian features that are best described by heavy-tailed distributions. Moreover, the wavelet coefficients of images have strong inter-scale and inter-orientation dependencies. In view of this, the vector-based hidden Markov model is found to be best suited to characterize the wavelet coefficients. In this thesis, this model is used to develop new digital image watermarking schemes. Additive and multiplicative watermarking schemes in the wavelet domain are developed in order to provide improved detection and extraction of the watermark. Blind watermark detectors using the log-likelihood ratio test, and watermark decoders using the maximum likelihood criterion to blindly extract the embedded watermark bits from the observation data, are designed. Extensive experiments are conducted throughout this thesis using a number of databases selected from a wide variety of natural images. Simulation results are presented to demonstrate the effectiveness of the proposed image watermarking schemes and their superiority over some state-of-the-art techniques. It is shown that, by using the hidden Markov model to characterize the distributions of the wavelet coefficients of images, the proposed watermarking algorithms achieve higher detection and decoding rates both before and after subjecting the watermarked image to various kinds of attacks.
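A log-likelihood-ratio detector of the kind described can be sketched with a simple heavy-tailed prior. Here an i.i.d. Laplacian stands in for the thesis's vector-based hidden Markov model, and the watermark, strength, and sample size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma, b = 4096, 0.5, 1.0
w = rng.choice([-1.0, 1.0], size=n)     # bipolar watermark sequence
x = rng.laplace(0.0, b, size=n)         # heavy-tailed stand-in for subband coefficients

def llr(y, w, gamma, b):
    # Log-likelihood ratio for H1: y = x + gamma*w versus H0: y = x,
    # with x modelled as i.i.d. Laplacian(0, b); decide "watermarked" when positive.
    # For the Laplacian, log p(y) = -|y|/b up to a constant that cancels.
    return np.sum((np.abs(y) - np.abs(y - gamma * w)) / b)

score_marked = llr(x + gamma * w, w, gamma, b)   # H1 observation
score_clean = llr(x, w, gamma, b)                # H0 observation
```

The same structure carries over to richer priors: only the per-coefficient log-density inside the sum changes, which is where the vector HMM would enter.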

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Information hiding, which embeds a watermark/message over a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered as an appealing technology to complement conventional cryptographic processes in the field of multimedia security by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking tries to emphasize the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium has a hidden message in it, and, if possible, recovering that hidden message. It can be used to measure the security performance of information hiding techniques, meaning a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to Human Vision Systems (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking into account this trade-off; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of the hidden information in 3D models and introduce a universal 3D steganalytic method under this framework.
The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis will be studying. Chapter 2 conducts a survey on the previous information hiding techniques for digital images, 3D models and other media, and also on image steganalysis algorithms. Motivated by the observation that the knowledge of the spatial accuracy of the mesh vertices does not easily translate into information related to the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying vertex coordinates of 3D triangle models on the mesh normals. Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information. Motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model. The use of PCA makes the watermarking method robust against common 3D operations, such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of the hidden messages embedded by one well-known watermarking method.
By contrast, the focus of Chapter 7 is on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework which has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models with existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated on five state-of-the-art 3D watermarking/steganographic methods. Moreover, being universal, the steganalytic algorithm can be used as a benchmark for measuring the anti-steganalysis performance of existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes this thesis and also suggests some potential directions for future work.
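The invariance argument behind the Chapter 6 scheme (distances from the vertices to a reference point derived from the model itself survive rotation, translation and vertex reordering) can be illustrated as follows. The centroid stands in for the PCA-derived reference point, and the vertex cloud, bin count, and attack parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
verts = rng.standard_normal((200, 3))   # stand-in mesh vertex cloud

def distance_histogram(verts, bins=16):
    # Histogram of normalized vertex-to-reference distances; the centroid
    # stands in here for the PCA-derived reference point of the actual scheme
    center = verts.mean(axis=0)
    d = np.linalg.norm(verts - center, axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist

h0 = distance_histogram(verts)

# "Attack": rotate about z, translate, and reorder the vertices
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
attacked = verts @ R.T + np.array([5.0, -2.0, 1.0])
attacked = attacked[rng.permutation(len(attacked))]
h1 = distance_histogram(attacked)
```

A watermark embedded by modifying this histogram therefore survives exactly those operations, which is why the PCA step matters.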

    Digital image watermarking methods based on moment-based normalization [Moment tabanlı normalleştirmeye dayalı sayısal görüntü damgalama yöntemleri]

    In this study, robust digital image watermarking algorithms in the two-dimensional discrete wavelet and complex wavelet domains were developed using moment-based image normalization. In the proposed methods, normalization provides robustness against geometric distortions, while embedding the watermark in the wavelet domain adds immunity to attacks such as noise, linear and non-linear filtering, and JPEG compression. By taking the properties of the human visual system into account, the embedded watermark meets the transparency and robustness requirements simultaneously. The proposed method was compared to two commonly used digital image watermarking algorithms. Simulation results have shown that the proposed method outperforms both of them under JPEG and JPEG2000 compression, various geometric distortions, and several image processing attacks. The effect of normalization on watermarking capacity in the discrete cosine and wavelet domains was then investigated using the information-theoretic capacity estimation method developed by Moulin and Mıhçak. The capacity analysis demonstrates that the sparsity of an image's transform coefficients determines the capacity. Since the normalization process increases this sparsity, it yields better capacity estimates when used as a preprocessing step in watermarking algorithms. As wavelet models capture sparsity better than DCT models, the wavelet transform should be preferred when capacity is the main concern.
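The simplest stage of moment-based normalization can be sketched as centering the image at its intensity centroid, computed from the geometric moments m00, m10, m01. Full schemes also normalize scale and orientation from higher-order moments; the circular shift below is an illustrative simplification, not the thesis's method:

```python
import numpy as np

def centroid(img):
    # Intensity centroid from the geometric moments m00, m10, m01
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    return (xs * img).sum() / m00, (ys * img).sum() / m00

def center_by_moments(img):
    # Translate (circularly, for simplicity) so the centroid sits at the
    # geometric centre of the frame: the first stage of moment-based
    # normalization, which makes the representation translation-invariant
    h, w = img.shape
    cx, cy = centroid(img)
    return np.roll(np.roll(img, int(round(h / 2 - cy)), axis=0),
                   int(round(w / 2 - cx)), axis=1)

img = np.zeros((32, 32))
img[4:8, 20:24] = 1.0                   # off-centre bright patch
normalized = center_by_moments(img)
```

Because embedding and detection both happen after normalization, a translated (or, in the full scheme, rotated/scaled) copy maps back to the same canonical frame before the detector runs.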

    Probabilistic modeling of wavelet coefficients for processing of image and video signals

    Statistical estimation and detection techniques are widely used in signal processing, including wavelet-based image and video processing. The probability density function (PDF) of the wavelet coefficients of image and video signals plays a key role in the development of techniques for such processing. Due to their fixed number of parameters, the conventional PDFs used in estimators and detectors usually ignore higher-order moments. Consequently, estimators and detectors designed using such PDFs do not provide a satisfactory performance. This thesis is concerned with first developing a probabilistic model that is capable of incorporating an appropriate number of parameters that depend on higher-order moments of the wavelet coefficients. This model is then used as the prior in estimation and detection techniques for denoising and watermarking of image and video signals. Towards developing the probabilistic model, the Gauss-Hermite series expansion is chosen, since the wavelet coefficients have non-compact support and their empirical density function resembles the standard Gaussian function. A modification is introduced in the series expansion so that only a finite number of terms can be used for modeling the wavelet coefficients without rendering the resulting PDF negative. The parameters of the resulting PDF, called the modified Gauss-Hermite (MGH) PDF, are evaluated in terms of the higher-order sample moments. It is shown that the MGH PDF fits the empirical density function better than the existing PDFs that use a limited number of parameters. The proposed MGH PDF is used as the prior of image and video signals in designing maximum a posteriori and minimum mean squared error-based estimators for denoising of image and video signals, and a log-likelihood ratio-based detector for watermarking of image signals. The performance of the estimation and detection techniques is then evaluated in terms of the commonly used metrics.
It is shown through extensive experiments that the estimation and detection techniques developed using the proposed MGH PDF perform substantially better than those that use the conventional PDFs. These results confirm that the superior fit of the MGH PDF to the empirical density function, which results from the flexibility of the MGH PDF in choosing the number of parameters as functions of higher-order moments of the data, leads to the better performance. Thus, the proposed MGH PDF should play a significant role in wavelet-based image and video signal processing.
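The idea of building a density from higher-order sample moments via a Gauss-Hermite series can be illustrated with the classical (unmodified) Gram-Charlier truncation; the thesis's MGH PDF additionally guards against negativity, which this sketch does not. The Laplacian data is a heavy-tailed stand-in for wavelet coefficients:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.laplace(0.0, 1.0, 200_000)   # heavy-tailed stand-in for wavelet coefficients
z = (data - data.mean()) / data.std()   # standardized samples

# Sample moments beyond second order enter as series coefficients
skew = np.mean(z ** 3)
exkurt = np.mean(z ** 4) - 3.0

def gh_pdf(x):
    # Truncated Gauss-Hermite (Gram-Charlier) series around the Gaussian:
    # phi(x) * (1 + skew/6 * He3(x) + exkurt/24 * He4(x)),
    # with He3 = x^3 - 3x, He4 = x^4 - 6x^2 + 3 (probabilists' Hermite polynomials)
    phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
    return phi * (1 + skew / 6 * (x ** 3 - 3 * x)
                  + exkurt / 24 * (x ** 4 - 6 * x ** 2 + 3))
```

For leptokurtic data the fourth-order term raises the peak of the fitted density toward the empirical one, which a plain Gaussian with only two parameters cannot do; that is the flexibility the abstract attributes to moment-indexed parameters.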