225 research outputs found

    On the Saddle-point Solution and the Large-Coalition Asymptotics of Fingerprinting Games

    Full text link
    We study a fingerprinting game in which the number of colluders and the collusion channel are unknown. The encoder embeds fingerprints into a host sequence and provides the decoder with the capability to trace back pirated copies to the colluders. Fingerprinting capacity has recently been derived as the limit value of a sequence of maximin games with mutual information as their payoff functions. However, these games generally do not admit saddle-point solutions and are very hard to solve numerically. Here, under the so-called Boneh-Shaw marking assumption, we reformulate the capacity as the value of a single two-person zero-sum game and show that it is achieved by a saddle-point solution. If the maximal coalition size is k and the fingerprinting alphabet is binary, we show that capacity decays quadratically with k. Furthermore, we prove rigorously that the asymptotic capacity is 1/(2 k^2 ln 2), and we confirm our earlier conjecture that Tardos' choice of the arcsine distribution asymptotically maximizes the mutual information payoff function while the interleaving attack minimizes it. Along with the asymptotic behavior, numerical solutions to the game for small k are also presented. Comment: submitted to IEEE Trans. on Information Forensics and Security.
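
    As a quick numerical illustration of the quantities named above, the sketch below evaluates the mutual-information payoff (1/k) I(X_1,...,X_k; Y | P) of the binary fingerprinting game when the bias P follows Tardos' arcsine distribution and the coalition applies the interleaving attack (each output symbol is copied from a uniformly random colluder), and compares it with the stated asymptote 1/(2 k^2 ln 2). This is a minimal sketch under the standard marking-assumption model, not the authors' computation; the quadrature grid size is an arbitrary choice.

    import numpy as np
    from scipy.stats import binom

    def h(q):
        # Binary entropy in bits, with the convention 0*log(0) = 0.
        q = np.clip(q, 1e-12, 1 - 1e-12)
        return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

    def interleaving_payoff(k, n_grid=4000):
        # E_P[(1/k) I(X_1..X_k; Y | P)] with P ~ arcsine, attack = interleaving.
        # Substituting p = sin^2(theta) turns the arcsine average into a plain
        # average over theta uniform on (0, pi/2).
        theta = (np.arange(n_grid) + 0.5) * (np.pi / 2) / n_grid
        p = np.sin(theta) ** 2
        m = np.arange(k + 1)
        pmf = binom.pmf(m[None, :], k, p[:, None])        # m ~ Binomial(k, p)
        h_cond = (pmf * h(m / k)[None, :]).sum(axis=1)     # H(Y | X^k, P=p)
        return np.mean(h(p) - h_cond) / k                  # payoff in bits

    for k in (2, 3, 5, 10, 20, 50):
        print(k, interleaving_payoff(k), 1 / (2 * k**2 * np.log(2)))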

    Asymptotically false-positive-maximizing attack on non-binary Tardos codes

    Full text link
    We use a method recently introduced by Simone and Skoric to study accusation probabilities for non-binary Tardos fingerprinting codes. We generalize the pre-computation steps in this approach to include a broad class of collusion attack strategies. We analytically derive properties of a special attack that asymptotically maximizes false accusation probabilities. We present numerical results on sufficient code lengths for this attack and explain the abrupt transitions that occur in these results.
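
    For context on what drives the accusation probabilities studied above, the sketch below computes symmetric Tardos accusation scores for a q-ary code under an interleaving collusion attack, where colluders' scores drift well above those of innocent users. The bias distribution (a symmetric Dirichlet with a crude cutoff), the alphabet size, and the code length are placeholder assumptions for illustration only, not the construction analyzed by Simone and Skoric.

    import numpy as np

    rng = np.random.default_rng(0)

    def tardos_scores(X, y, P):
        # X: (n_users, m) code matrix over {0..q-1}; y: (m,) pirated sequence;
        # P: (m, q) per-position symbol biases. Symmetric score: a user gains
        # sqrt((1-p)/p) where their symbol matches the pirated symbol and loses
        # sqrt(p/(1-p)) otherwise, with p = P[i, y[i]].
        p = P[np.arange(len(y)), y]                 # bias of the output symbol
        match = (X == y[None, :])
        g1 = np.sqrt((1 - p) / p)
        g0 = -np.sqrt(p / (1 - p))
        return np.where(match, g1[None, :], g0[None, :]).sum(axis=1)

    # Toy run: q = 4 symbols, biases from a symmetric Dirichlet with a crude
    # cutoff (a stand-in for the usual Tardos cutoff parameter), and pirates
    # using the interleaving attack (copy a random colluder per position).
    q, m, n_users, c = 4, 2000, 100, 5
    P = rng.dirichlet(np.full(q, 0.5), size=m)
    P = np.clip(P, 1e-2, None)
    P /= P.sum(axis=1, keepdims=True)
    X = np.array([[rng.choice(q, p=P[i]) for i in range(m)] for _ in range(n_users)])
    colluders = rng.choice(n_users, size=c, replace=False)
    y = X[rng.choice(colluders, size=m), np.arange(m)]
    scores = tardos_scores(X, y, P)
    print("mean colluder score:", scores[colluders].mean(),
          "mean innocent score:", np.delete(scores, colluders).mean())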

    Integration and optimization of collusion secure fingerprinting in image watermarking

    Get PDF
    Internship carried out at Fraunhofer SIT, supervised by Dr. Huajian Liu and Dr. Marcel Schäfer. Integrated master's thesis. Electrical and Computer Engineering. Faculty of Engineering, University of Porto. 201

    Multimedia Protection using Content and Embedded Fingerprints

    Get PDF
    Improved digital connectivity has made the Internet an important medium for multimedia distribution and consumption in recent years. At the same time, this increased proliferation of multimedia has raised significant challenges in secure multimedia distribution and intellectual property protection. This dissertation examines two complementary aspects of the multimedia protection problem that utilize content fingerprints and embedded collusion-resistant fingerprints. The first aspect considered is the automated identification of multimedia using content fingerprints, which is emerging as an important tool for detecting copyright violations on user-generated content websites. A content fingerprint is a compact identifier that captures robust and distinctive properties of multimedia content, which can be used for uniquely identifying the multimedia object. In this dissertation, we describe a modular framework for theoretical modeling and analysis of content fingerprinting techniques. Based on this framework, we analyze the impact of distortions in the features on the corresponding fingerprints and also consider the problem of designing a suitable quantizer for encoding the features in order to improve the identification accuracy. The interaction between the fingerprint designer and a malicious adversary seeking to evade detection is studied under a game-theoretic framework, and optimal strategies for both parties are derived. We then focus on analyzing and understanding the matching process at the fingerprint level. Models for fingerprints with different types of correlations are developed and the identification accuracy under each model is examined. Through this analysis we obtain useful guidelines for designing practical systems and also uncover connections to other areas of research.
    A complementary problem considered in this dissertation concerns tracing the users responsible for unauthorized redistribution of multimedia. Collusion-resistant fingerprints, which are signals that uniquely identify the recipient, are proactively embedded in the multimedia before redistribution and can be used for identifying the malicious users. We study the problem of designing collusion-resistant fingerprints for embedding in compressed multimedia. Our study indicates that directly adapting traditional fingerprinting techniques to this new setting of compressed multimedia results in low collusion resistance. To withstand attacks, we propose an anti-collusion dithering technique for embedding fingerprints that significantly improves the collusion resistance compared to traditional fingerprints.
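
    As a concrete, simplified illustration of the fingerprint-level matching step discussed above (not the dissertation's specific models), the sketch below identifies a distorted binary content fingerprint against a small database by Hamming distance with a fixed decision threshold; the fingerprint length, distortion level, and threshold are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def hamming(a, b):
        # Fraction of differing bits between two binary fingerprints.
        return np.mean(a != b)

    def identify(query, database, threshold=0.35):
        # Return the index of the closest registered fingerprint, or None if
        # even the best match exceeds the distance threshold.
        dists = np.array([hamming(query, fp) for fp in database])
        best = int(np.argmin(dists))
        return (best, dists[best]) if dists[best] <= threshold else (None, dists[best])

    # Toy database of 1000 random 256-bit fingerprints.
    database = rng.integers(0, 2, size=(1000, 256))
    # A distorted copy of item 42: flip 20% of its bits to mimic feature-level noise.
    query = database[42].copy()
    flip = rng.random(256) < 0.20
    query[flip] ^= 1
    print(identify(query, database))   # expected: (42, ~0.20)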

    Wide-Sense Fingerprinting Codes and Honeycomb Arrays

    Get PDF

    Group testing: an information theory perspective

    Get PDF
    The group testing problem concerns discovering a small number of defective items within a large population by performing tests on pools of items. A test is positive if the pool contains at least one defective, and negative if it contains no defectives. This is a sparse inference problem with a combinatorial flavour, with applications in medical testing, biology, telecommunications, information technology, data science, and more. In this monograph, we survey recent developments in the group testing problem from an information-theoretic perspective. We cover several related developments: efficient algorithms with practical storage and computation requirements, achievability bounds for optimal decoding methods, and algorithm-independent converse bounds. We assess the theoretical guarantees not only in terms of scaling laws, but also in terms of the constant factors, leading to the notion of the rate of group testing, indicating the amount of information learned per test. Considering both noiseless and noisy settings, we identify several regimes where existing algorithms are provably optimal or near-optimal, as well as regimes where there remains greater potential for improvement. In addition, we survey results concerning a number of variations on the standard group testing problem, including partial recovery criteria, adaptive algorithms with a limited number of stages, constrained test designs, and sublinear-time algorithms. Comment: Survey paper, 140 pages, 19 figures. To be published in Foundations and Trends in Communications and Information Theory.
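
    To make the setting concrete, the sketch below runs the simplest noiseless decoder from this literature, COMP (any item that appears in a negative test is declared non-defective, everything else defective), on a Bernoulli random test design, and reports the rate log2 C(n,k) / T in bits per test. The population size, number of defectives, test count, and per-test inclusion probability are assumptions made for illustration.

    import numpy as np
    from scipy.special import comb

    rng = np.random.default_rng(2)

    n, k, T = 500, 10, 250                 # items, defectives, tests (assumed sizes)
    defective = np.zeros(n, dtype=bool)
    defective[rng.choice(n, size=k, replace=False)] = True

    # Bernoulli design: each item joins each test independently with prob ln(2)/k.
    design = rng.random((T, n)) < np.log(2) / k
    outcomes = (design & defective).any(axis=1)          # noiseless OR of each pool

    # COMP: anything that appears in a negative test cannot be defective.
    cleared = (design & ~outcomes[:, None]).any(axis=0)
    estimate = ~cleared

    print("exact recovery:", np.array_equal(estimate, defective))
    print("rate (bits/test):", np.log2(comb(n, k, exact=True)) / T)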

    Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

    Get PDF
    In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortion caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. The watermarked signal can then be used for different purposes ranging from copyright protection, data authentication, and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical computation viewpoint.
    From the information-theoretic viewpoint, we first study a new digital watermarking scenario in which watermarks and covertexts are generated from a joint memoryless watermark and covertext source. The configuration of this scenario is different from that treated in existing digital watermarking works, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, by using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem proposed recently by Cox et al. is solved. Interestingly, from the necessary and sufficient condition we can show that, in light of the correlation between the watermark and covertext, watermarks can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.
    We then extend the above watermarking scenario to a case of joint compression and watermarking, where the watermark and covertext are correlated and the watermarked signal has to be further compressed. Given an additional constraint on the compression rate of the watermarked signals, a necessary and sufficient condition is again determined under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. The above two joint compression and watermarking models are further investigated under a less stringent environment where the reproduced watermark at the end of decoding is allowed to be within certain distortion of the original watermark. Sufficient conditions are determined in both cases under which the original watermark can be reproduced with distortion less than a given distortion level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.
    Watermarking capacities and joint compression and watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research. However, this does not mean that they can be calculated easily. In this thesis we first derive closed forms of the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and a fixed arbitrary additive attack, respectively. Then, based on the idea of the Blahut-Arimoto algorithm for computing channel capacities and rate distortion functions, two iterative algorithms are proposed for calculating private watermarking capacities and compression and watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms are developed for calculating public watermarking capacities and compression and watermarking rate regions of joint compression and public watermarking systems with finite alphabets, based on the Blahut-Arimoto algorithm and Shannon's strategy.
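
    Since the algorithms described above build on the Blahut-Arimoto iteration, the following sketch shows the classic form of that iteration for the capacity of a discrete memoryless channel; the watermarking-specific and joint compression variants of the thesis are not reproduced here, and the binary symmetric channel used as input is an arbitrary example.

    import numpy as np

    def blahut_arimoto(W, tol=1e-9, max_iter=10000):
        # W[x, y] = P(Y=y | X=x). Returns (capacity in bits, optimal input p(x)).
        nx = W.shape[0]
        p = np.full(nx, 1.0 / nx)                   # start from the uniform input
        for _ in range(max_iter):
            q = p @ W                               # output distribution q(y)
            with np.errstate(divide='ignore', invalid='ignore'):
                # D(W(.|x) || q) for each x, in nats
                d = np.sum(np.where(W > 0, W * np.log(W / q), 0.0), axis=1)
            c = np.exp(d)
            il = np.log(p @ c)                      # lower bound on C (nats)
            iu = np.log(c.max())                    # upper bound on C (nats)
            p = p * c / (p @ c)                     # Blahut-Arimoto update
            if iu - il < tol:
                break
        return il / np.log(2), p

    # Example: binary symmetric channel with crossover 0.1; capacity = 1 - h(0.1).
    W = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    C, p_opt = blahut_arimoto(W)
    print(C, p_opt)    # ~0.531 bits, uniform input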