
    Protection of Relational Databases by Means of Watermarking: Recent Advances and Challenges

    Databases today represent a major economic and strategic asset for both enterprises and public institutions. In this context, where data leaks, theft, and accidental or even hostile data degradation are real dangers, watermarking appears as an interesting protection tool. Watermarking is based on the imperceptible embedding of a message, or watermark, into a database in order, for instance, to determine its origin or to detect whether it has been modified. A major advantage of watermarking over other digital content protection mechanisms is that it leaves the data accessible while keeping them protected by means of a watermark, independently of the storage format. Nevertheless, it is necessary to ensure that the introduced distortion does not perturb the exploitation of the database. In this chapter, we give a general overview of the latest database watermarking methods, focusing on those dealing with distortion control. In particular, we present a recent technique based on an ontological modeling of the database semantics, i.e., the relationships between attributes, which should be preserved in order to avoid the appearance of incoherent and unlikely records.
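    As a point of reference for how key-based database watermarking works in practice, the sketch below marks the least significant bit of a numeric attribute on key-selected tuples, in the spirit of classic Agrawal-Kiernan-style schemes. It is a minimal illustration, not the ontology-driven distortion-control technique presented in the chapter; the secret key, the GAMMA selection ratio, and the field names are hypothetical.

```python
# Minimal sketch of key-based database watermarking (classic LSB-on-selected-tuples idea),
# not the ontology-driven distortion control described in the chapter.
import hashlib

SECRET_KEY = b"owner-secret"   # hypothetical owner key
GAMMA = 4                      # mark roughly 1 out of GAMMA tuples

def _prf(key: bytes, value: str) -> int:
    """Keyed pseudo-random integer derived from a tuple's primary key."""
    return int.from_bytes(hashlib.sha256(key + value.encode()).digest()[:8], "big")

def embed(rows, pk_field, attr_field):
    """Set the least significant bit of a numeric attribute on key-selected tuples."""
    for row in rows:
        h = _prf(SECRET_KEY, str(row[pk_field]))
        if h % GAMMA == 0:                        # tuple selected by the key
            bit = (h >> 3) & 1                    # key-dependent watermark bit
            row[attr_field] = (int(row[attr_field]) & ~1) | bit
    return rows

def detect(rows, pk_field, attr_field):
    """Count how many selected tuples still carry the expected bit."""
    matches = total = 0
    for row in rows:
        h = _prf(SECRET_KEY, str(row[pk_field]))
        if h % GAMMA == 0:
            total += 1
            matches += (int(row[attr_field]) & 1) == ((h >> 3) & 1)
    return matches, total

rows = [{"id": i, "salary": 30000 + 7 * i} for i in range(20)]
print(detect(embed(rows, "id", "salary"), "id", "salary"))
```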

    Topology-preserving watermarking of vector graphics

    Watermarking techniques for vector graphics dislocate vertices in order to embed imperceptible, yet detectable, statistical features into the input data. The embedding process may result in a change of the topology of the input data, e.g., by introducing self-intersections, which is undesirable or even disastrous for many applications. In this paper we present a watermarking framework for two-dimensional vector graphics that employs conventional watermarking techniques but still provides the guarantee that the topology of the input data is preserved. The geometric part of this framework computes so-called maximum perturbation regions (MPRs) of vertices. We propose two efficient algorithms to compute MPRs based on Voronoi diagrams and constrained triangulations. Furthermore, we present two algorithms to conditionally correct the watermarked data in order to increase the watermark embedding capacity and still guarantee topological correctness. While we focus on the watermarking of input formed by straight-line segments, one of our approaches can also be extended to circular arcs. We conclude the paper by demonstrating and analyzing the applicability of our framework in conjunction with two well-known watermarking techniques.
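    To make the clamping idea concrete, the following sketch scales each vertex's watermark displacement so that the vertex stays inside a conservative per-vertex radius. The radius used here (half the distance to the nearest other vertex) is only a crude stand-in for the Voronoi- and triangulation-based MPRs of the paper, and it does not by itself rule out segment intersections.

```python
# Minimal sketch of the "clamp displacement to a per-vertex safety region" idea.
# The radius is a crude stand-in (half the nearest-neighbour distance), not the
# Voronoi- or constrained-triangulation-based MPRs of the paper.
import numpy as np

def naive_mpr_radii(vertices: np.ndarray) -> np.ndarray:
    """Conservative per-vertex perturbation radius: half the nearest-neighbour distance."""
    diff = vertices[:, None, :] - vertices[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    return 0.5 * dist.min(axis=1)

def clamp_watermark(vertices: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Scale each watermark displacement so the vertex stays inside its region."""
    radii = naive_mpr_radii(vertices)
    norms = np.linalg.norm(displacement, axis=1)
    scale = np.minimum(1.0, radii / np.maximum(norms, 1e-12))
    return vertices + displacement * scale[:, None]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
delta = 0.3 * np.random.default_rng(0).standard_normal(pts.shape)
print(clamp_watermark(pts, delta))
```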

    Digital watermarking, information embedding, and data hiding systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 139-142). By Brian Chen.
    Digital watermarking, information embedding, and data hiding systems embed information, sometimes called a digital watermark, inside a host signal, which is typically an image, audio signal, or video signal. The host signal is not degraded unacceptably in the process, and one can recover the watermark even if the composite host and watermark signal undergo a variety of corruptions and attacks, as long as these corruptions do not unacceptably degrade the host signal. These systems play an important role in meeting at least three major challenges that result from the widespread use of digital communication networks to disseminate multimedia content: (1) the relative ease with which one can generate perfect copies of digital signals creates a need for copyright protection mechanisms, (2) the relative ease with which one can alter digital signals creates a need for authentication and tamper-detection methods, and (3) the increase in sheer volume of transmitted data creates a demand for bandwidth-efficient methods to either backwards-compatibly increase the capacities of existing legacy networks or deploy new networks that are backwards-compatible with legacy networks. We introduce a framework within which to design and analyze digital watermarking and information embedding systems. In this framework, performance is characterized by achievable rate-distortion-robustness trade-offs, and the framework leads quite naturally to a new class of embedding methods called quantization index modulation (QIM). These QIM methods, especially when combined with postprocessing called distortion compensation, achieve provably better rate-distortion-robustness performance than previously proposed classes of methods, such as spread spectrum methods and generalized low-bit modulation methods, in a number of different scenarios that include both intentional and unintentional attacks. Indeed, we show that distortion-compensated QIM methods can achieve capacity, the information-theoretically best possible rate-distortion-robustness performance, against both additive Gaussian noise attacks and arbitrary squared-error distortion-constrained attacks. These results also have implications for the problem of communicating over broadcast channels. We also present practical implementations of QIM methods called dither modulation and demonstrate their performance both analytically and through empirical simulations.
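    Since the abstract centers on quantization index modulation with dithering and distortion compensation, a minimal sketch of the standard binary dither-modulation construction may help. The step size DELTA, the compensation factor ALPHA, and the one-bit-per-sample scalar embedding are illustrative choices, not taken from the thesis itself.

```python
# Minimal sketch of binary dither-modulation QIM with distortion compensation,
# following the standard construction rather than the thesis's exact implementation.
import numpy as np

DELTA = 1.0   # quantizer step (controls the robustness/distortion trade-off)
ALPHA = 0.8   # distortion-compensation factor; ALPHA = 1 is plain QIM

def _dither(bit: int) -> float:
    return -DELTA / 4 if bit == 0 else DELTA / 4

def embed(x: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed one bit per host sample by quantizing onto a bit-dependent lattice."""
    d = np.where(bits == 0, -DELTA / 4, DELTA / 4)
    q = DELTA * np.round((x - d) / DELTA) + d   # plain QIM embedding
    return x + ALPHA * (q - x)                  # distortion compensation

def decode(y: np.ndarray) -> np.ndarray:
    """Minimum-distance decoding: pick the lattice whose reconstruction is closer."""
    out = np.empty(len(y), dtype=int)
    for i, yi in enumerate(y):
        errs = [abs(yi - (DELTA * np.round((yi - _dither(b)) / DELTA) + _dither(b)))
                for b in (0, 1)]
        out[i] = int(np.argmin(errs))
    return out

rng = np.random.default_rng(1)
host = rng.standard_normal(16)
bits = rng.integers(0, 2, 16)
noisy = embed(host, bits) + 0.05 * rng.standard_normal(16)
print((decode(noisy) == bits).mean())   # fraction of bits recovered after noise
```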

    Robust watermarking methods for the protection of 3D digital imagery

    The explosion in stereoscopic video distribution increases concerns over its copyright protection. Watermarking can be considered the most flexible property-right protection technology. The applicative issue for watermarking is to reach a trade-off among transparency, robustness, data payload, and computational cost. While the capture and display of 3D content are based solely on the two left/right views, alternative representations, such as disparity maps, should also be considered during transmission and storage. A specific study of the optimal insertion domain (with respect to the above-mentioned properties) is therefore also required. The present thesis tackles these challenges. First, a new disparity map (3D video-New Three Step Search, 3DV-NTSS) is designed. The performance of 3DV-NTSS was evaluated in terms of visual quality of the reconstructed image and computational cost. Compared with state-of-the-art methods (NTSS and FS-MPEG), average gains of 2 dB in PSNR and 0.1 in SSIM are obtained, and the computational cost is reduced by average factors between 1.3 and 13. Second, a comparative study of the main classes of 2D-inherited watermarking methods and their related optimal insertion domains is carried out. Four insertion methods are considered, belonging to the SS, SI, and hybrid (Fast-IProtect) families. The experiments brought to light that Fast-IProtect performed in the new disparity map domain (3DV-NTSS) is generic enough to serve a large variety of applications. The statistical relevance of the results is given by the 95% confidence limits and their underlying relative errors lower than 0.1.
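    As background for how a disparity map is obtained from the left/right views, the sketch below performs plain exhaustive block matching with a sum-of-absolute-differences criterion. It is only a generic baseline, not the 3DV-NTSS three-step search of the thesis, and the block size and search range are arbitrary.

```python
# Minimal sketch of exhaustive block-matching disparity estimation (SAD criterion).
# Generic baseline only; the 3DV-NTSS map in the thesis uses a specific
# three-step search pattern that is not reproduced here.
import numpy as np

def block_disparity(left: np.ndarray, right: np.ndarray,
                    block: int = 8, max_disp: int = 16) -> np.ndarray:
    """Per-block horizontal disparity from the right to the left view (grayscale inputs)."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = right[y:y + block, x:x + block].astype(np.int32)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, w - x - block) + 1):
                cand = left[y:y + block, x + d:x + d + block].astype(np.int32)
                sad = np.abs(ref - cand).sum()   # sum of absolute differences
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

rng = np.random.default_rng(2)
L = rng.integers(0, 256, (32, 64)).astype(np.uint8)
R = np.roll(L, -4, axis=1)                 # synthetic 4-pixel horizontal shift
print(block_disparity(L, R))
```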

    Steganalysis of video sequences using collusion sensitivity

    In this thesis we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting, with high probability, the presence of covert data in multimedia. Existing steganalysis algorithms target the detection of covert information in still images; when applied directly to video sequences, these approaches are suboptimal. In this thesis we present methods that overcome this limitation by using the redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. In particular, we target the spread spectrum steganography method because of its widespread use. Our gains are achieved by exploiting the collusion attack, which has recently been studied in the field of digital video watermarking, together with more sophisticated pattern recognition tools. Through analysis and simulations, we evaluate the effectiveness of the video steganalysis method based on an averaging-based collusion scheme. Other forms of the collusion attack, namely weighted linear collusion and block-based collusion schemes, are proposed to improve detection performance. The proposed steganalysis methods were successful in detecting hidden watermarks with low SNR at high accuracy. The simulation results also show the improved performance of the proposed temporal methods over spatial methods. We conclude that the essence of future video steganalysis techniques lies in the exploitation of temporal redundancy.
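    To illustrate the averaging-based collusion idea, the sketch below estimates the host content of a frame from its temporal neighbours and inspects the energy of the residual. The variance test and its threshold are stand-ins for the more sophisticated pattern-recognition stage described in the thesis, and the synthetic static scene is purely for demonstration.

```python
# Minimal sketch of averaging-based temporal collusion for video steganalysis:
# neighbouring frames approximate the host, so the residual of the current frame
# against their average exposes an additive spread-spectrum watermark.
import numpy as np

def collusion_residual(frames: np.ndarray, t: int, window: int = 2) -> np.ndarray:
    """Residual of frame t against the average of its temporal neighbours."""
    idx = [i for i in range(t - window, t + window + 1) if i != t and 0 <= i < len(frames)]
    host_estimate = frames[idx].mean(axis=0)
    return frames[t] - host_estimate

def looks_watermarked(frames: np.ndarray, t: int, threshold: float) -> bool:
    """Flag frame t if its collusion residual carries unexpectedly high energy."""
    return collusion_residual(frames, t).var() > threshold

rng = np.random.default_rng(3)
clean = np.repeat(rng.normal(128, 20, (1, 64, 64)), 7, axis=0)   # static scene
marked = clean.copy()
marked[3] += rng.choice([-2.0, 2.0], size=(64, 64))              # additive spread-spectrum mark
print(looks_watermarked(clean, 3, threshold=2.0),
      looks_watermarked(marked, 3, threshold=2.0))
```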

    Human-Centric Deep Generative Models: The Blessing and The Curse

    Over the past years, deep neural networks have achieved significant progress in a wide range of real-world applications. In particular, my research focuses on deep generative models, a neural network solution that has proved effective in visual (re)creation. But is generative modeling a niche topic that should be researched on its own? My answer is emphatically no. In this thesis, I present the two sides of deep generative models: their blessing and their curse to human beings. Regarding what deep generative models can do for us, I demonstrate improvements in the performance and steerability of visual (re)creation. Regarding what we can do for deep generative models, my answer is to mitigate the security concerns around DeepFakes and to improve the minority inclusion of deep generative models. For performance, I investigate applying attention modules and a dual contrastive loss to generative adversarial networks (GANs), which pushes photorealistic image generation to a new state of the art. For steerability, I introduce Texture Mixer, a simple yet effective approach to achieve steerable texture synthesis and blending. For security, my research spans a series of GAN fingerprinting solutions that enable the detection and attribution of GAN-generated image misuse. For inclusion, I investigate the biased behavior of generative models and present my solution for enhancing the minority inclusion of GAN models over underrepresented image attributes. All in all, I propose to project actionable insights onto the applications of deep generative models, and finally to contribute to human-generator interaction.