Robust Watermarking Using Inverse Gradient Attention
Watermarking is the procedure of encoding desired information into an image
to resist potential noises while ensuring the embedded image has little
perceptual perturbations from the original image. Recently, with the tremendous
successes gained by deep neural networks in various fields, digital
watermarking has attracted increasing attention. However, deep neural models
that neglect the varying importance of individual pixels within the cover image
inevitably sacrifice robustness for information hiding. Targeting this problem,
in this paper we propose a novel deep watermarking scheme with Inverse Gradient
Attention (IGA), combining the ideas of adversarial learning and attention
mechanisms to assign different importance to different pixels. With the
proposed method, the model is able to spotlight the pixels that are most robust
for embedding data. In addition, from an orthogonal point of view, we propose a
complementary message coding module to increase the model embedding capacity.
Extensive experiments show that the proposed model outperforms the
state-of-the-art methods on two prevalent datasets under multiple settings.
Comment: 9 pages, 6 figures
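The abstract leaves the attention computation itself implicit. A minimal sketch of one plausible reading, in which the mask is the normalized inverse of the per-pixel gradient magnitude of the decoding loss (the function name, normalization, and toy gradient map are illustrative assumptions, not the paper's code):

```python
import numpy as np

def inverse_gradient_attention(grad, eps=1e-8):
    """Map per-pixel loss gradients to an attention mask.

    Pixels whose gradient magnitude is small perturb the decoding
    loss the least, so they are treated as more robust carriers and
    receive higher weight (hence "inverse" gradient attention).
    """
    mag = np.abs(grad)
    inv = 1.0 / (mag + eps)   # invert: small gradient -> large weight
    return inv / inv.max()    # normalize to [0, 1]

# Toy 4x4 "gradient map" of the message loss w.r.t. cover pixels.
rng = np.random.default_rng(0)
grad = rng.normal(size=(4, 4))
mask = inverse_gradient_attention(grad)
# The watermark residual would then be scaled pixel-wise by `mask`
# before being added to the cover image.
```

Under this reading, the mask simply redistributes embedding strength toward pixels the decoder is least sensitive to.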
Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography
Data hiding is the process of embedding information into a noise-tolerant
signal such as a piece of audio, video, or image. Digital watermarking is a
form of data hiding where identifying data is robustly embedded so that it can
resist tampering and be used to identify the original owners of the media.
Steganography, another form of data hiding, embeds data for the purpose of
secure and secret communication. This survey summarises recent developments in
deep learning techniques for data hiding for the purposes of watermarking and
steganography, categorising them based on model architectures and noise
injection methods. The objective functions, evaluation metrics, and datasets
used for training these data hiding models are comprehensively summarised.
Finally, we propose and discuss possible future directions for research into
deep data hiding techniques.
Robust Distortion-free Watermarks for Language Models
We propose a methodology for planting watermarks in text from an
autoregressive language model that are robust to perturbations without changing
the distribution over text up to a certain maximum generation budget. We
generate watermarked text by mapping a sequence of random numbers -- which we
compute using a randomized watermark key -- to a sample from the language
model. To detect watermarked text, any party who knows the key can align the
text to the random number sequence. We instantiate our watermark methodology
with two sampling schemes: inverse transform sampling and exponential minimum
sampling. We apply these watermarks to three language models -- OPT-1.3B,
LLaMA-7B and Alpaca-7B -- to experimentally validate their statistical power
and robustness to various paraphrasing attacks. Notably, for both the OPT-1.3B
and LLaMA-7B models, we find we can reliably detect watermarked text even after
corrupting a substantial fraction of the tokens via random edits (i.e.,
substitutions, insertions, or deletions). For the
Alpaca-7B model, we conduct a case study on the feasibility of watermarking
responses to typical user instructions. Due to the lower entropy of the
responses, detection is more difficult: only a fraction of the responses are
detectable at a statistically significant level, and
the watermark is also less robust to certain automated paraphrasing attacks we
implement.
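The abstract names exponential minimum sampling as one of the two instantiations. A minimal sketch of that sampling rule, with plain uniforms standing in for the key-derived pseudorandom sequence (the PRF and detection-by-alignment machinery are omitted):

```python
import numpy as np

def exp_min_sample(probs, uniforms):
    """Exponential minimum sampling: pick argmin_i Exp_i / p_i, where
    Exp_i = -log(u_i) and u_i comes from the watermark key.

    The minimum of independent Exp(1)/p_i variables lands on token i
    with probability p_i, so the output is an exact sample from the
    model distribution -- the watermark is distortion-free.
    """
    exp = -np.log(uniforms)
    return int(np.argmin(exp / probs))

# Toy vocabulary of 5 tokens; in the real scheme the uniforms would be
# PRF(key, step), which is what the detector later aligns against.
rng = np.random.default_rng(42)
probs = np.array([0.1, 0.4, 0.2, 0.25, 0.05])
counts = np.zeros(5)
for _ in range(20000):
    u = rng.uniform(size=5)
    counts[exp_min_sample(probs, u)] += 1
# Empirical frequencies track `probs` closely.
```

Because the sampled token is a deterministic function of the key stream, a detector holding the key can check whether observed text is unusually well aligned with that stream.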
Publicly Detectable Watermarking for Language Models
We construct the first provable watermarking scheme for language models with
public detectability or verifiability: we use a private key for watermarking
and a public key for watermark detection. Our protocol is the first
watermarking scheme that does not embed a statistical signal in generated text.
Rather, we directly embed a publicly-verifiable cryptographic signature using a
form of rejection sampling. We show that our construction meets strong formal
security guarantees and preserves many desirable properties found in schemes in
the private-key watermarking setting. In particular, our watermarking scheme
retains distortion-freeness and model agnosticity. We implement our scheme and
make empirical measurements over open models in the 7B parameter range. Our
experiments suggest that our watermarking scheme meets our formal claims while
preserving text quality.
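The abstract describes embedding a publicly verifiable signature via rejection sampling but not the mechanics. A toy sketch of the rejection-sampling idea, assuming each signature bit is carried by a hash bit of the (context, token) pair; the vocabulary, hash choice, and fallback scan are illustrative assumptions, and a real scheme would embed actual cryptographic signature bits:

```python
import hashlib
import random

def token_bit(context, token):
    """One pseudorandom bit per (context, token) pair, via a hash."""
    return hashlib.sha256((context + token).encode()).digest()[0] & 1

def embed_bit(sample, context, bit, vocab, max_tries=64):
    """Rejection sampling: keep drawing tokens from the model until the
    hash bit of (context, token) matches the bit to embed."""
    for _ in range(max_tries):
        tok = sample()
        if token_bit(context, tok) == bit:
            return tok
    # Fallback: scan the vocabulary for any token carrying the bit.
    for tok in vocab:
        if token_bit(context, tok) == bit:
            return tok
    raise RuntimeError("no token carries the required bit")

# Toy "language model": uniform over a 32-token vocabulary.
vocab = [f"tok{i} " for i in range(32)]
rng = random.Random(7)
sample = lambda: rng.choice(vocab)

sig_bits = [1, 0, 1, 1]          # stand-in for real signature bits
context, text = "seed:", []
for b in sig_bits:
    tok = embed_bit(sample, context, b, vocab)
    text.append(tok)
    context += tok

# Public detection: anyone can recompute the hash bits from the text
# and verify them against the signature with the public key.
recovered, ctx = [], "seed:"
for tok in text:
    recovered.append(token_bit(ctx, tok))
    ctx += tok
print(recovered == sig_bits)  # True
```

Since each bit only rejects roughly half of the model's samples, the text distribution per position is only lightly constrained, which is consistent with the distortion-freeness claim in the abstract.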
WM-NET: Robust Deep 3D Watermarking with Limited Data
The goal of 3D mesh watermarking is to imperceptibly embed a message in a 3D
mesh such that it withstands various attacks and can be reconstructed
accurately from the watermarked mesh. Traditional methods are not robust
against these attacks. Recent DNN-based methods either introduce excessive distortions or
fail to embed the watermark without the help of texture information. However,
embedding the watermark in textures is insecure because replacing the texture
image can completely remove the watermark. In this paper, we propose a robust
deep 3D mesh watermarking network, WM-NET, which leverages attention-based convolutions
in watermarking tasks to embed binary messages in vertex distributions without
texture assistance. Furthermore, our WM-NET exploits the property that
simplified meshes inherit similar relations from the original ones, where the
relation is the offset vector directed from one vertex to its neighbor. By
doing so, our method can be trained on simplified meshes (limited data) but
remains effective on large-sized meshes (size adaptable) and unseen categories
of meshes (geometry adaptable). Extensive experiments demonstrate that our
method introduces 50% less distortion and achieves 10% higher bit accuracy than
previous work. WM-NET is robust against various mesh attacks, e.g., Gaussian
noise, rotation, translation, scaling, and cropping.
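The "relation" the abstract defines is the offset vector from a vertex to its neighbor. A minimal sketch of computing these relations and of the translation invariance that makes them a natural training signal (the function and toy tetrahedron are illustrative, not the paper's code):

```python
import numpy as np

def vertex_relations(vertices, edges):
    """Offset vector from each edge's source vertex to its neighbor.

    Relations live in local geometry rather than absolute coordinates,
    which is why simplified meshes inherit similar relations from the
    originals they were decimated from.
    """
    src, dst = edges[:, 0], edges[:, 1]
    return vertices[dst] - vertices[src]

# Toy tetrahedron: 4 vertices, a few directed edges.
verts = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.]])
edges = np.array([[0, 1], [0, 2], [0, 3], [1, 2]])
rel = vertex_relations(verts, edges)

# Relations are invariant to translating the whole mesh:
shifted = vertex_relations(verts + np.array([5., -2., 3.]), edges)
print(np.allclose(rel, shifted))  # True
```

A watermark embedded in such relations is therefore unaffected by translation attacks by construction, while robustness to the other listed attacks must be learned.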