Spread spectrum-based video watermarking algorithms for copyright protection
Digital technologies have expanded at an unprecedented pace in recent years.
The consumer can now benefit from hardware and software that was considered
state-of-the-art only a few years ago. The advantages offered by digital
technology are major, but the same technology opens the door to unlimited
piracy. Copying an analogue VCR tape was certainly possible and relatively
easy, in spite of various forms of protection, but because of the analogue
medium each subsequent copy suffered an inherent loss in quality. This
naturally limited repeated copying of video material. With digital technology
this barrier disappears: as many copies as desired can be made without any
loss in quality. Digital watermarking is one of the best available tools for
fighting this threat.
The aim of the present work was to develop a digital watermarking system for
video broadcast monitoring, compliant with the recommendations drawn up by the
EBU. Since the watermark can be inserted in either the spatial domain or a
transform domain, this aspect was investigated and led to the conclusion that
the wavelet transform is one of the best solutions available. Because
watermarking is difficult, especially with regard to robustness under various
attacks, several techniques were employed to increase the capacity and
robustness of the system: spread-spectrum and modulation techniques to cast
the watermark, powerful error correction to protect the mark, and human visual
models to insert a robust mark while ensuring its invisibility. The
combination of these methods brought a major improvement, yet the system was
still not robust to several important geometrical attacks. To reach this last
milestone, the system uses two distinct watermarks: a spatial-domain reference
watermark and the main watermark embedded in the wavelet domain. Using the
reference watermark and techniques specific to image registration, the system
can determine the parameters of a geometrical attack and revert it. Once the
attack is reverted, the main watermark is recovered. The final result is a
high-capacity, blind, DWT-based video watermarking system robust to a wide
range of attacks.
BBC Research & Development
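The spread-spectrum casting in the wavelet domain that the abstract describes can be sketched in a few lines. This is a toy single-bit, single-level Haar illustration; the PN key and strength `alpha` are assumed values, not the thesis's actual parameters:

```python
import numpy as np

def haar_dwt2(img):
    # Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + hl; a[:, 1::2] = ll - hl
    d[:, 0::2] = lh + hh; d[:, 1::2] = lh - hh
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2] = a + d; img[1::2] = a - d
    return img

def embed_bit(img, bit, key, alpha=2.0):
    # Spread one watermark bit over the HH detail subband with a keyed
    # pseudo-random +/-1 sequence (spread-spectrum casting).
    ll, lh, hl, hh = haar_dwt2(img)
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=hh.shape)
    hh = hh + alpha * (1.0 if bit else -1.0) * pn
    return haar_idwt2(ll, lh, hl, hh)

def detect_bit(img, key):
    # Blind detection: correlate the HH subband with the same PN sequence.
    _, _, _, hh = haar_dwt2(img)
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=hh.shape)
    return float(np.mean(hh * pn)) > 0.0
```

Detection is blind: it needs only the key, not the original frame, matching the blind design the abstract claims.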
PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Models
Deep neural networks (DNNs) have achieved tremendous success in artificial
intelligence (AI) fields. However, DNN models can be easily illegally copied,
redistributed, or abused by criminals, seriously damaging the interests of
model inventors. The copyright protection of DNN models by neural network
watermarking has been studied, but the establishment of a traceability
mechanism for determining the authorized users of a leaked model is a new
problem driven by the demand for AI services. Because existing traceability
mechanisms are designed for models without watermarks, they generate a small
number of false positives. Existing black-box active protection schemes have
loose authorization control and are vulnerable to forgery attacks. Therefore,
based on black-box neural network watermarking combined with video framing and
an image perceptual hashing algorithm, a passive copyright protection and
traceability framework, PCPT, is proposed that uses an additional class of the
DNN model, improving the existing traceability mechanism and reducing its
false positives. Based on an authorization control strategy and the image
perceptual hashing algorithm, an active copyright protection and traceability
framework, ACPT, is also proposed. This framework uses an authorization
control center constructed from a detector and a verifier. It realizes
stricter authorization control, establishes a strong connection between users
and the model owner, improves framework security, and supports traceability
verification.
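Both frameworks rely on an image perceptual hash to match query images. The abstract does not specify which algorithm is used, so the sketch below substitutes a plain average hash; the block size and function names are illustrative assumptions:

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Perceptual "average hash": shrink the image by block averaging,
    # compare each cell to the global mean, and emit the resulting bits.
    # Perceptually similar images give similar bit strings.
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(a, b):
    # Distance between two hashes; small distance => perceptually similar.
    return int(np.sum(a != b))
```

A small Hamming distance between hashes flags perceptually similar images, which is what lets a verifier match slightly distorted queries against registered ones.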
Fingerprint Authentication by Wavelet-based Digital Watermarking
In this manuscript, a wavelet-based blind watermarking scheme is proposed to protect against false matching of a possibly tampered fingerprint, by embedding a binary name label of the fingerprint owner in the fingerprint itself. Embedding the watermark in the detail regions increases its robustness at little to no additional cost in image quality. It is shown experimentally that when a binary watermark is embedded into the detail coefficients of an indexed fingerprint image in a spread-spectrum fashion, perceptual invisibility and robustness respond in opposite directions to changes in the amplification factor "K", and smaller watermarks have better transparency than larger ones. The DWT-based technique gives better robustness against noise, geometrical distortions, filtering and JPEG compression attacks than other frequency-domain watermarking techniques.
DOI: http://dx.doi.org/10.11591/ijece.v2i4.50
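The reported opposite ("anticlinal") response to the amplification factor K can be seen in a toy model: raising K strengthens the detector statistic but grows the embedding distortion quadratically. The Gaussian stand-in for a DWT detail subband and all parameters below are illustrative, not the paper's data:

```python
import numpy as np

# Pretend DWT detail subband of a fingerprint image, plus a keyed
# +/-1 spreading sequence (both synthetic, for illustration only).
rng = np.random.default_rng(7)
detail = rng.normal(0.0, 4.0, size=(64, 64))
pn = rng.choice([-1.0, 1.0], size=detail.shape)

def tradeoff(K):
    # Embed with strength K; return (detector statistic, distortion).
    marked = detail + K * pn
    statistic = float(np.mean(marked * pn))              # grows like K
    distortion = float(np.mean((marked - detail) ** 2))  # equals K**2
    return statistic, distortion
```

Sweeping K (e.g. 0.5, 2.0, 8.0) shows both quantities rising together: robustness improves exactly as invisibility degrades.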
ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples
The training of deep neural networks (DNNs) is costly, so a trained DNN can be
regarded as the intellectual property (IP) of its owner. To date, most
existing protection works focus on verifying ownership after the model has
been stolen, and thus cannot resist piracy in advance. To this end, we propose
an active DNN IP protection method based on adversarial examples against DNN
piracy, named ActiveGuard. ActiveGuard aims to achieve authorization control
and users' fingerprints management through adversarial examples, and can
provide ownership verification. Specifically, ActiveGuard exploits the
elaborate adversarial examples as users' fingerprints to distinguish authorized
users from unauthorized users. Legitimate users can enter fingerprints into DNN
for identity authentication and authorized usage, while unauthorized users will
obtain poor model performance due to an additional control layer. In addition,
ActiveGuard enables the model owner to embed a watermark into the weights of
DNN. When the DNN is illegally pirated, the model owner can extract the
embedded watermark and perform ownership verification. Experimental results
show that, for authorized users, the test accuracies of the LeNet-5 and Wide
Residual Network (WRN) models are 99.15% and 91.46%, respectively, while for
unauthorized users, the accuracies of the two DNNs are only 8.92% (LeNet-5)
and 10% (WRN). Besides, each authorized user can pass fingerprint
authentication with a high success rate (up to 100%). For ownership
verification, the embedded watermark can be successfully extracted while the
normal performance of the DNN model is unaffected. Further, ActiveGuard is
demonstrated to be robust against fingerprint forgery, model fine-tuning and
pruning attacks.
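As rough intuition for adversarial-example fingerprints, the sketch below crafts an FGSM-style perturbation on a toy logistic model so that a user-specific input lands in an assigned target class. This only illustrates the mechanism; ActiveGuard's actual construction, control layer, and DNNs are not reproduced here:

```python
import numpy as np

# Toy fixed "model": a logistic unit with weights w. An FGSM-style
# perturbation crafted toward a user-specific target class acts as
# that user's fingerprint (illustrative stand-in for a DNN).
w = np.array([1.0, -2.0, 0.5, 1.5])

def predict(x):
    # Probability of class 1 under the logistic model.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def craft_fingerprint(x, eps, target):
    # One FGSM step: move x against the sign of the loss gradient so the
    # output is pushed toward the assigned target class.
    grad = (predict(x) - target) * w   # d(cross-entropy)/dx for this model
    return x - eps * np.sign(grad)

x = np.array([0.2, 0.1, -0.3, 0.4])   # some benign carrier input
fp = craft_fingerprint(x, eps=1.0, target=1.0)
```

A verifier that knows the key (here, the target class and carrier) can check whether a submitted fingerprint is classified as intended, distinguishing authorized from unauthorized users.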
Deep Intellectual Property: A Survey
With the widespread application in industrial manufacturing and commercial
services, well-trained deep neural networks (DNNs) are becoming increasingly
valuable and crucial assets due to the tremendous training cost and excellent
generalization performance. These trained models can be utilized by users
without much expert knowledge benefiting from the emerging ''Machine Learning
as a Service'' (MLaaS) paradigm. However, this paradigm also exposes the
expensive models to various potential threats like model stealing and abuse. As
an urgent requirement for defending against these threats, Deep Intellectual
Property (DeepIP) protection, covering private training data, painstakingly
tuned hyperparameters, and costly learned model weights, has become a
consensus of both industry and academia. To this end, numerous approaches have been proposed
to achieve this goal in recent years, especially to prevent or discover model
stealing and unauthorized redistribution. Given this period of rapid evolution,
the goal of this paper is to provide a comprehensive survey of the recent
achievements in this field. More than 190 research contributions are included
in this survey, covering many aspects of Deep IP Protection:
challenges/threats, invasive solutions (watermarking), non-invasive solutions
(fingerprinting), evaluation metrics, and performance. We finish the survey by
identifying promising directions for future research.
Comment: 38 pages, 12 figures
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Current learning machines have successfully solved hard application problems,
reaching high accuracy and displaying seemingly "intelligent" behavior. Here we
apply recent techniques for explaining decisions of state-of-the-art learning
machines and analyze various tasks from computer vision and arcade games. This
showcases a spectrum of problem-solving behaviors ranging from naive and
short-sighted, to well-informed and strategic. We observe that standard
performance evaluation metrics can be oblivious to distinguishing these diverse
problem solving behaviors. Furthermore, we propose our semi-automated Spectral
Relevance Analysis that provides a practically effective way of characterizing
and validating the behavior of nonlinear learning machines. This helps to
assess whether a learned model indeed delivers reliably for the problem that it
was conceived for. Furthermore, our work intends to add a voice of caution to
the ongoing excitement about machine intelligence and pledges to evaluate and
judge some of these recent successes in a more nuanced manner.
Comment: Accepted for publication in Nature Communications
Intellectual Property Protection for Deep Learning Models: Taxonomy, Methods, Attacks, and Evaluations
The training and creation of a deep learning model is usually costly, so the
model can be regarded as the intellectual property (IP) of its creator. However,
malicious users who obtain high-performance models may illegally copy,
redistribute, or abuse the models without permission. To deal with such
security threats, a few deep neural networks (DNN) IP protection methods have
been proposed in recent years. This paper attempts to provide a review of the
existing DNN IP protection works and also an outlook. First, we propose the
first taxonomy for DNN IP protection methods in terms of six attributes:
scenario, mechanism, capacity, type, function, and target models. Then, we
present a survey on existing DNN IP protection works in terms of the above six
attributes, especially focusing on the challenges these methods face, whether
these methods can provide proactive protection, and their resistance to
different levels of attacks. After that, we analyze the potential attacks on
DNN IP protection methods from the aspects of model modifications, evasion
attacks, and active attacks. Besides, a systematic evaluation method for DNN IP
protection methods with respect to basic functional metrics, attack-resistance
metrics, and customized metrics for different application scenarios is given.
Lastly, future research opportunities and challenges in DNN IP protection are
presented.
Print-Scan Resilient Text Image Watermarking Based on Stroke Direction Modulation for Chinese Document Authentication
Print-scan resilient watermarking has emerged as an attractive way to secure documents. This paper proposes a stroke direction modulation technique for watermarking Chinese text images. The watermark produced by this idea offers robustness to print-photocopy-scan, yet provides relatively high embedding capacity without losing transparency. During the embedding phase, the angles of rotatable strokes are quantized to embed the bits. This requires several stages of preprocessing, including stroke generation, junction searching, rotatable-stroke decision and character partition. Moreover, shuffling is applied to equalize the uneven embedding capacity. For data detection, denoising and deskewing mechanisms compensate for the distortions induced by hardcopy. Experimental results show that the technique attains high detection accuracy against distortions from print-scan operations, good-quality photocopies and benign attacks, in accord with the future goal of soft authentication.
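Quantizing stroke angles to carry bits is essentially quantization-index modulation. A minimal angle-parity sketch follows; the step size is an assumed value, and the paper's stroke extraction pipeline is omitted entirely:

```python
# Angle quantization-index modulation: snap each rotatable stroke's angle
# to the nearest quantizer cell whose index parity encodes the bit.
STEP = 6.0  # quantization step in degrees (assumed, not from the paper)

def embed_bit(angle, bit):
    # Rotate the stroke to the closest multiple of STEP with the right parity.
    q = round(angle / STEP)
    if q % 2 != bit:
        q += 1 if angle / STEP >= q else -1  # adjacent cell, correct parity
    return q * STEP

def detect_bit(angle):
    # Recover the bit from the parity of the nearest quantizer index.
    return round(angle / STEP) % 2

angle = 37.3                   # original stroke direction in degrees
marked = embed_bit(angle, 1)   # stroke rotated slightly to carry a 1
noisy = marked + 1.5           # print-scan adds small angular noise
```

Detection tolerates angular noise up to STEP/2, which is what buys robustness to print-photocopy-scan distortion; larger steps trade transparency for robustness.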