16 research outputs found

    Optimal Watermark Embedding and Detection Strategies Under Limited Detection Resources

    An information-theoretic approach to watermark embedding and detection under limited detector resources is proposed. First, we consider the attack-free scenario, for which asymptotically optimal decision regions in the Neyman-Pearson sense are proposed, along with the optimal embedding rule. Next, we explore the case of a zero-mean i.i.d. Gaussian covertext distribution with unknown variance, still in the attack-free scenario. For this case, we propose a lower bound on the exponential decay rate of the false-negative probability and prove that the optimal embedding and detection strategy is superior, in the exponential sense, to the customary linear, additive embedding strategy. Finally, these results are extended to memoryless attacks and general worst-case attacks: optimal decision regions and embedding rules are offered, and the worst attack channel is identified.

    Comment: 36 pages, 5 figures. Revised version. Submitted to IEEE Transactions on Information Theory
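As a rough illustration of the baseline the paper improves upon, the sketch below implements the customary linear, additive embedding together with a Neyman-Pearson correlation detector for a zero-mean i.i.d. Gaussian covertext. All parameter values (`n`, `alpha`, the 10⁻⁴ false-alarm level) are hypothetical, and the variance is assumed known here for simplicity, unlike the unknown-variance case studied in the paper:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

n = 10_000          # covertext length (hypothetical)
sigma2 = 1.0        # covertext variance, assumed known for this sketch
alpha = 0.1         # additive embedding strength (hypothetical)

w = rng.choice([-1.0, 1.0], n)            # watermark pattern known to the detector
x = rng.normal(0.0, np.sqrt(sigma2), n)   # zero-mean i.i.d. Gaussian covertext

y = x + alpha * w                         # customary linear, additive embedding

# The correlation statistic T = <y, w>/n is Gaussian under both hypotheses,
# so a Neyman-Pearson test at false-alarm level p_fa simply thresholds T.
p_fa = 1e-4
tau = NormalDist().inv_cdf(1.0 - p_fa) * np.sqrt(sigma2 / n)

T = float(y @ w) / n
print("detected:", T > tau)               # True: the mark is present
```

The paper's point is precisely that this linear scheme is exponentially suboptimal: jointly optimizing the embedding rule and the decision region yields a strictly better false-negative exponent at the same false-alarm constraint.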

    Connected Attribute Filtering Based on Contour Smoothness


    Tatouage robuste d’images imprimées

    Invisible watermarking of ID images printed on a plastic card support is a challenging problem of industrial interest. In this study, we developed a watermarking algorithm robust to the attacks present in this setting, which stem mainly from the print/scan process on the plastic support and from the degradations an ID card can undergo over its lifetime. The watermarking scheme operates in the Fourier domain, as this transform has invariance properties against global geometric transformations. A preventive method pre-processes the host image before embedding so as to reduce the variance of the vector that carries the mark. A curative method adds two counter-attacks that correct blurring and color variations. For a false-alarm probability of 10⁻⁴, the preventive method alone yields an average improvement of 22% over the reference method, and combining the preventive and curative methods brings the detection rate above 99%. Detection takes less than 1 second for a 512×512 image on a conventional computer, which is compatible with the targeted industrial application.
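The invariance that motivates working in the Fourier domain can be shown in a few lines: the magnitude of the 2-D DFT is unchanged by a global (cyclic) translation of the image, so a mark carried by the magnitude spectrum survives that class of geometric attacks. A minimal sketch, with a 64×64 random image standing in for the real 512×512 ID photo:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                 # stand-in for the 512x512 ID image

mag = np.abs(np.fft.fft2(img))             # magnitude spectrum of the host

# Simulate a global translation of the printed/scanned card (cyclic here).
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
mag_shifted = np.abs(np.fft.fft2(shifted))

# A translation only multiplies the DFT by a unit-modulus phase ramp, so the
# magnitude -- where the mark would be embedded -- is left invariant.
print(np.allclose(mag, mag_shifted))       # True
```

Real print/scan translations are not cyclic and also involve cropping, rotation, and scaling; the magnitude spectrum is exactly invariant only to the cyclic case shown, which is why the study still needs the preventive and curative methods on top.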

    Strategies for Unbridled Data Dissemination: An Emergency Operations Manual

    This project is a study of free data dissemination and the impediments to it. Drawing on post-structuralism, Actor Network Theory, Participatory Action Research, and theories of the political stakes of the posthuman by way of Stirnerian egoism and illegalism, it uses a range of theoretical, technical and legal texts to develop a hacker methodology that emphasizes close analysis and disassembly of existing systems of content control. Specifically, two tiers of content-control mechanisms are examined: a legal tier, exemplified by Intellectual Property Rights in the form of copyright and copyleft licenses, and a technical tier in the form of audio, video and text-based watermarking technologies. A series of demonstrative case studies highlights various means of restricting content distribution. A close reading of a copyright notice is performed to expose its internal contradictions. Examples of watermarking employed by academic e-book and journal publishers and by film distributors are also examined, and counter-forensic techniques for removing such watermarks are developed. The project finds that both legal and technical mechanisms for restricting the flow of content can be circumvented, which prompts the development of new control mechanisms and, in turn, another wave of evasion procedures. The methodological approach thus reveals an ongoing mutation and adaptation of in-between states of resistance. Finally, existing filesharing applications are analyzed, and a new Tor-based BitTorrent tracker is set up to strengthen the anonymization of established filesharing methods. Potential de-anonymization attacks are found against all analyzed file-sharing tools, with the potentially more secure filesharing options also seeing less user adoption.

    Effectiveness of exhaustive search and template matching against watermark desynchronization

    Focusing on a simple example, we investigate the effectiveness of exhaustive watermark detection and of resynchronization through template matching against watermark desynchronization. We find that, as long as the size of the search space does not grow exponentially, both methods provide asymptotically good results; moreover, the exhaustive-search approach outperforms template matching from the point of view of reliable detection.
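The exhaustive-search side of the comparison is easy to picture for a cyclic-shift desynchronization: correlate the received signal against every shift of the watermark pattern (all n circular correlations at once via the FFT) and keep the maximum. A sketch with made-up parameters (`n`, `alpha`, and the shift are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 4096, 0.5                      # length and strength (hypothetical)

w = rng.choice([-1.0, 1.0], n)            # watermark pattern
y = rng.normal(0.0, 1.0, n) + alpha * w   # marked signal
attacked = np.roll(y, 137)                # unknown cyclic desynchronization

# All n circular cross-correlations in O(n log n) via the FFT; exhaustive
# search simply keeps the best-matching shift.
corr = np.real(np.fft.ifft(np.fft.fft(attacked) * np.conj(np.fft.fft(w)))) / n
best = int(np.argmax(corr))
print("recovered shift:", best)           # 137
```

Here the search space (n shifts) grows only linearly with the signal length, which is the regime in which the paper finds both exhaustive search and template matching asymptotically reliable.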

    Contribution des filtres LPTV et des techniques d'interpolation au tatouage numérique

    Periodic Clock Changes (PCC) and Linear Periodically Time-Varying (LPTV) filters have previously been applied to multi-user telecommunications in the Signal and Communications group of the IRIT laboratory. In this thesis, we show that in any digital watermarking scheme based on spread spectrum, they can be substituted for modulation by a pseudo-noise code. The additional steps of optimal decoding, resynchronization, interference pre-cancellation and spread-transform quantization also apply to PCCs and LPTV filters. For white Gaussian stationary signals, these techniques perform identically to classical Direct-Sequence (DS) spreading. However, we show that for locally correlated signals, such as the luminance of a natural image, the periodicity of PCCs and LPTV filters combined with a Peano-Hilbert scan of the image leads to better performance. Moreover, LPTV filters are a more powerful tool than simple DS modulation: we use them to perform spectral masking simultaneously with spreading, as well as rejection of image interference in the spectral domain, and the latter technique achieves very good decoding performance. The second axis of this thesis is the study of the links between interpolation and digital watermarking. We first highlight the role of interpolation in attacks on watermark robustness, then construct watermarking techniques that benefit from the perceptual properties of interpolation. The first consists of perceptual masks built from the interpolation noise. In the second, an informed watermarking scheme is built around interpolation; this algorithm, which can be related to random-binning techniques, uses original embedding and decoding rules that include an intrinsic perceptual mask. Besides these good perceptual properties, it offers host-interference rejection and robustness to various attacks such as valumetric transforms. Its security level is assessed with practical attack algorithms.
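For readers unfamiliar with the direct-sequence baseline that the PCC/LPTV constructions replace, here is a minimal sketch of DS spread-spectrum embedding and despreading on a white Gaussian host, the model under which the thesis finds all three techniques equivalent. Spreading factor, strength, and message length are hypothetical; a PCC would substitute a periodic permutation of the samples for the pseudo-noise modulation shown here:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 256                                     # spreading factor (hypothetical)
alpha = 0.5                                 # embedding strength (hypothetical)
bits = rng.integers(0, 2, 16) * 2 - 1       # message bits in {-1, +1}

pn = rng.choice([-1.0, 1.0], L)             # pseudo-noise spreading code
host = rng.normal(0.0, 1.0, bits.size * L)  # white Gaussian stationary host

# Direct-sequence embedding: each bit modulates one period of the code.
y = host + alpha * np.repeat(bits, L) * np.tile(pn, bits.size)

# Despreading: correlate each period with the code and take the sign.
decoded = np.sign((y.reshape(-1, L) * pn).sum(axis=1)).astype(int)
print("bit errors:", int((decoded != bits).sum()))   # 0
```

On a locally correlated host such as image luminance, this white-host equivalence breaks down, which is where the thesis shows the periodic structure of PCCs and LPTV filters, combined with a Peano-Hilbert scan, pays off.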