12 research outputs found
Experimental assessment of the reliability for watermarking and fingerprinting schemes
We introduce the concept of reliability in watermarking as the ability to certify that the probability of false alarm is very low and below a given significance level. We propose an iterative, self-adapting algorithm that estimates very low probabilities of error, performing much faster and more accurately than a classical Monte Carlo estimator. The article concludes with applications to zero-bit watermarking (probability of false alarm, error exponent) and to probabilistic fingerprinting codes (probability of wrongly accusing a given user, code-length estimation).
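The abstract contrasts the proposed estimator with a classical Monte Carlo estimator, which needs on the order of 1/p samples before it even observes one rare event. As a rough illustration of the rare-event-simulation idea (a minimal multilevel-splitting sketch for a Gaussian tail probability, not the authors' exact algorithm), one could write:

```python
import math
import random

def splitting_estimate(target, n=2000, n_levels=5, seed=1):
    """Multilevel-splitting estimate of p = P(X > target) for X ~ N(0, 1).
    The rare event is reached through a chain of easier conditional events,
    whose estimated probabilities are multiplied together."""
    rng = random.Random(seed)
    levels = [target * k / n_levels for k in range(1, n_levels + 1)]
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    p_hat = 1.0
    for lev in levels:
        survivors = [x for x in particles if x > lev]
        if not survivors:
            return 0.0
        p_hat *= len(survivors) / n
        # Replenish the population from the survivors, then apply a few
        # Metropolis moves that keep N(0,1) conditioned on X > lev invariant.
        particles = [rng.choice(survivors) for _ in range(n)]
        for _ in range(10):
            for i, x in enumerate(particles):
                cand = x + rng.gauss(0.0, 0.5)
                if cand > lev and rng.random() < math.exp(min(0.0, (x * x - cand * cand) / 2)):
                    particles[i] = cand
    return p_hat
```

With `target=4.0`, the true tail probability is about 3.2e-5; a naive estimator with the same 2000 samples would almost always return 0.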
The Stable Signature: Rooting Watermarks in Latent Diffusion Models
Generative image modeling enables a wide range of applications but raises
ethical concerns about responsible deployment. This paper introduces an active
strategy combining image watermarking and Latent Diffusion Models. The goal is
for all generated images to conceal an invisible watermark allowing for future
detection and/or identification. The method quickly fine-tunes the latent
decoder of the image generator, conditioned on a binary signature. A
pre-trained watermark extractor recovers the hidden signature from any
generated image and a statistical test then determines whether it comes from
the generative model. We evaluate the invisibility and robustness of the
watermarks on a variety of generation tasks, showing that Stable Signature
works even after the images are modified. For instance, it detects the origin
of an image generated from a text prompt and then cropped to keep only part
of the content, with high accuracy at a low false positive rate.
Comment: Website at https://pierrefdz.github.io/publications/stablesignatur
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, a practice broadly known as Internet censorship. As most countries improve their Internet infrastructure, they become able to implement more advanced censorship techniques, and advances in the application of machine learning to network traffic analysis have enabled even more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call compressive traffic analysis. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called DeepCorr, that outperforms the state of the art by significant margins in correlating network connections. DeepCorr leverages an advanced deep learning architecture to learn a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable adversarial perturbations to the patterns of live network traffic.
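As a point of reference for what a learned flow correlation function replaces, a classical hand-crafted baseline simply correlates the timing patterns of a pair of flows. The function names and threshold below are illustrative assumptions, not the thesis's method:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def same_connection(ingress_ipds, egress_ipds, threshold=0.7):
    """Decide whether two observed flows carry the same connection by
    correlating their inter-packet-delay vectors. The threshold is an
    arbitrary choice for illustration."""
    return pearson(ingress_ipds, egress_ipds) > threshold
```

Hand-crafted statistics like this degrade quickly under the jitter and padding of real anonymity networks, which is the gap a learned correlation function targets.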
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools. This allows us to analyze the effect of different parameters and censoring behaviors on the performance of censorship circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we designed a new censorship circumvention tool, called \name. \name aims at increasing the collateral damage of censorship by employing a "mass" of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.
On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator
Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by different sources of perturbation (e.g., sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and this challenge has attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are computed using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classifier without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
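A drastically simplified 1-D caricature of push-pull inhibition (not the actual CORF operator, which uses oriented 2-D receptive fields) can illustrate why the mechanism suppresses noise while preserving features: the filter's response to the positive (ON) part of the signal is inhibited by its response to the negative (OFF) part, and broadband noise excites both channels while a one-sided feature excites only the push.

```python
def push_pull_1d(signal, kernel, alpha=1.0):
    """Toy 1-D push-pull inhibition. The ON channel drives the 'push'
    response; the OFF channel drives the 'pull' response that inhibits it.
    alpha controls the inhibition strength."""
    on = [max(v, 0.0) for v in signal]
    off = [max(-v, 0.0) for v in signal]
    def conv(s):
        half = len(kernel) // 2
        return [sum(kernel[j] * s[i + j - half]
                    for j in range(len(kernel))
                    if 0 <= i + j - half < len(s))
                for i in range(len(s))]
    return [max(p - alpha * q, 0.0) for p, q in zip(conv(on), conv(off))]
```

With a smoothing kernel, an alternating-sign noise pattern is cancelled (push and pull responses match), while an isolated positive bump passes through untouched.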
Data Exfiltration: A Review of External Attack Vectors and Countermeasures
Context: One of the main targets of cyber-attacks is data exfiltration, which is the leakage of sensitive or private data to an unauthorized entity. Data exfiltration can be perpetrated by an outsider or an insider of an organization. Given the increasing number of data exfiltration incidents, a large number of data exfiltration countermeasures have been developed. These countermeasures aim to detect, prevent, or investigate exfiltration of sensitive or private data. With the growing interest in data exfiltration, it is important to review data exfiltration attack vectors and countermeasures to support future research in this field. Objective: This paper aims to identify and critically analyse data exfiltration attack vectors and countermeasures, to report the state of the art, and to determine gaps for future research. Method: We followed a structured process to select 108 papers from seven publication databases, and applied thematic analysis to the data extracted from the reviewed papers. Results: We have developed a classification of (1) data exfiltration attack vectors used by external attackers and (2) the countermeasures against external attacks, and have mapped the countermeasures to attack vectors. Furthermore, we have explored the applicability of various countermeasures for different states of data (i.e., in use, in transit, or at rest).
Conclusion: This review has revealed that (a) most of the state of the art focuses on preventive and detective countermeasures, and significant research is required on developing investigative countermeasures, which are equally important; (b) several data exfiltration countermeasures cannot respond in real time, so research efforts need to be invested in enabling them to do so; (c) a number of data exfiltration countermeasures do not take privacy and ethical concerns into consideration, which may become an obstacle to their full adoption; (d) existing research is primarily focused on protecting data in the "in use" state, so future research needs to be directed towards securing data in the "at rest" and "in transit" states; and (e) there is no standard or framework for the evaluation of data exfiltration countermeasures. We assert the need for developing such an evaluation framework.
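As one concrete example of the detective countermeasures surveyed, a common heuristic (illustrative, not taken from this paper) flags high-entropy outbound payloads, since compressed or encrypted exfiltrated data has near-maximal byte entropy:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (0..8)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_exfiltration(payload: bytes, threshold: float = 7.5) -> bool:
    """Illustrative detective check: plain text rarely exceeds ~5 bits/byte,
    while encrypted or compressed data sits close to the 8-bit maximum."""
    return byte_entropy(payload) > threshold
```

This simplicity is also the heuristic's weakness: it produces false positives on legitimate encrypted traffic, one reason the review calls for evaluation frameworks for such countermeasures.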
Watermarking of the MPEG-4 AVC compressed stream (Tatouage du flux compressé MPEG-4 AVC)
The present thesis addresses MPEG-4 AVC stream watermarking and considers two theoretical and applicative challenges, namely ownership protection and content integrity verification. From the theoretical point of view, the main challenge is to develop a unitary watermarking framework (insertion/detection) able to serve the two above-mentioned applications in the compressed domain. From the methodological point of view, the challenge is to instantiate this theoretical framework to serve the targeted applications. The first main contribution consists in building the theoretical framework for multi-symbol watermarking based on quantization index modulation (m-QIM). The insertion rule is designed analytically by extending the binary QIM rule, and the detection rule is optimized so as to ensure minimal probability of error under additive white Gaussian noise attacks. It is thus demonstrated that the data payload can be increased by a factor of log2(m) for prescribed transparency and additive Gaussian noise power. A data payload of 150 bits per minute, i.e. about 20 times larger than the limit imposed by the DCI standard, is obtained. The second main contribution consists in specifying a preprocessing MPEG-4 AVC shaping operation which eliminates the intra-frame drift effect, i.e. the distortion spread in the compressed stream induced by the MPEG encoding paradigm. The drift distortion propagation problem in MPEG-4 AVC is expressed algebraically and the corresponding system of equations is solved under drift-free constraints. The drift-free shaping results in a transparency gain of 2 dB in PSNR.
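The m-QIM insertion and minimum-distance detection rules described above can be sketched for the scalar case. Each symbol selects one of m cosets of a quantization lattice of step delta; the detector picks the closest coset. The values of m and delta below are illustrative, not the thesis's operating point:

```python
def qim_embed(x, symbol, m=4, delta=8.0):
    """Scalar m-ary QIM: quantize the host sample x onto the lattice of step
    `delta` shifted by symbol * delta / m (one coset per symbol)."""
    shift = symbol * delta / m
    return round((x - shift) / delta) * delta + shift

def qim_detect(y, m=4, delta=8.0):
    """Minimum-distance decoding: return the symbol whose coset lies closest
    to the received sample (optimal under additive white Gaussian noise)."""
    def dist(s):
        shift = s * delta / m
        return abs(y - (round((y - shift) / delta) * delta + shift))
    return min(range(m), key=dist)
```

Decoding survives any perturbation smaller than delta / (2m), which is the robustness/payload trade-off behind the log2(m) payload gain: larger m packs more bits per sample but shrinks the decision margin.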
Impacts of Watermarking Security on Tardos-based Fingerprinting
This article presents a study of the embedding of Tardos binary fingerprinting codes with watermarking techniques. By taking into account the security of the embedding scheme, we present a new approach to colluding strategies which relies on the possible estimation error rate of the code symbols (denoted epsilon). We derive a new attack strategy called the "epsilon-Worst Case Attack" and show its efficiency using the computation of achievable rates for simple decoding. We then consider the interplay between security and robustness with respect to the accusation performance of the fingerprinting scheme and show (1) that for the same accusation rate, secure schemes can afford to be less robust than insecure ones, and (2) that secure schemes make it possible to cast the Worst Case Attack into an interleaving attack. Additionally, we use the security analysis of the watermarking scheme to derive from epsilon a security attack on a fingerprinting scheme based on Tardos codes and a new scheme called stochastic spread-spectrum watermarking. We compare a removal attack against an AWGN robustness attack and show that, for the same distortion, the combination of a fingerprinting attack and a security attack easily outperforms classical attacks, even with a small number of observations.
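For readers unfamiliar with Tardos codes, a simplified sketch of code generation and the symmetric accusation score may help; this is not the article's embedding-aware construction, and the cutoff t = 1/(300c) with c = 2 colluders is an assumption:

```python
import math
import random

def tardos_code(n_users, code_len, seed=0):
    """Simplified binary Tardos code: each column i draws a bias p_i from the
    arcsine density on [t, 1 - t], then each user's bit in that column is
    Bernoulli(p_i). Returns the code matrix and the biases."""
    rng = random.Random(seed)
    t = 1.0 / (300 * 2)  # cutoff for c = 2 colluders (assumed)
    lo, hi = math.asin(math.sqrt(t)), math.asin(math.sqrt(1 - t))
    p = [math.sin(rng.uniform(lo, hi)) ** 2 for _ in range(code_len)]
    code = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(n_users)]
    return code, p

def accusation_score(user_bits, pirated, p):
    """Symmetric accusation score: agreement with the pirated copy raises the
    score, weighted so an innocent user's score has zero mean."""
    score = 0.0
    for x, y, pi in zip(user_bits, pirated, p):
        g = math.sqrt((1 - pi) / pi) if y == 1 else math.sqrt(pi / (1 - pi))
        score += g if x == y else -1.0 / g
    return score
```

A user is accused when their score exceeds a threshold calibrated so that the probability of wrongly accusing an innocent user stays below a target, which is exactly the quantity the article's epsilon-driven attacks aim to degrade.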