PreGIP: Watermarking the Pretraining of Graph Neural Networks for Deep Intellectual Property Protection
Pretraining of Graph Neural Networks (GNNs) has shown great power in
facilitating various downstream tasks. As pretraining generally requires huge
amounts of data and computational resources, pretrained GNNs are high-value
Intellectual Property (IP) of their legitimate owners. However, adversaries may
illegally copy and deploy pretrained GNN models for their own downstream tasks.
Though initial efforts have been made to watermark GNN classifiers for IP
protection, these methods require the target classification task for
watermarking and thus are not applicable to self-supervised pretraining of GNN
models. Hence, in this work, we propose a novel framework named PreGIP to
watermark the pretraining of a GNN encoder for IP protection while maintaining
the high quality of the embedding space. PreGIP incorporates a task-free
watermarking loss to watermark the embedding space of the pretrained GNN
encoder. A finetuning-resistant watermark injection is further deployed.
Theoretical analysis and extensive experiments show the effectiveness of PreGIP
in IP protection and in maintaining high performance on downstream tasks.
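The abstract does not spell out the form of the task-free watermarking loss. One plausible sketch (the function name, the pairwise formulation, and the margin are assumptions for illustration, not PreGIP's actual design) is to push the encoder to map secret pairs of watermark graphs to near-identical embeddings, so ownership can later be verified from the embedding space alone, without any downstream labels:

```python
import numpy as np

def watermark_loss(emb_a, emb_b, margin=0.1):
    """Hypothetical task-free watermark loss on an embedding space.

    emb_a, emb_b: (n_pairs, dim) encoder embeddings of secret
    watermark-graph pairs. The loss penalizes any pair whose embeddings
    are farther apart than `margin`; a suspect deployed model is flagged
    as stolen if these pairwise distances remain small.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)        # per-pair distance
    return float(np.maximum(d - margin, 0.0).mean())

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
print(watermark_loss(a, a))   # identical pair embeddings incur zero loss
```

In training, such a term would simply be added to the self-supervised pretraining objective, which is what makes it task-free.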
An Embarrassingly Simple Approach for Intellectual Property Rights Protection on Recurrent Neural Networks
Capitalising on deep learning models, offering Natural Language Processing
(NLP) solutions as part of Machine Learning as a Service (MLaaS) has
generated handsome revenues. At the same time, it is known that the creation of
these lucrative deep models is non-trivial. Therefore, protecting these
inventions' intellectual property rights (IPR) from being abused, stolen, and
plagiarized is vital. This paper proposes a practical approach for IPR
protection on recurrent neural networks (RNNs) without all the bells and
whistles of existing IPR solutions. In particular, we introduce the Gatekeeper
concept, which resembles the recurrent nature of the RNN architecture, to embed
keys. We also design the model training scheme such that the protected RNN
model retains its original performance iff a genuine key is presented.
Extensive experiments show that our protection scheme is robust and effective
against ambiguity and removal attacks in both white-box and black-box
protection schemes on different RNN variants. Code is available at
https://github.com/zhiqin1998/RecurrentIPR
Comment: Accepted at AACL-IJCNLP 2022 (Fig. 1 updated)
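The key idea — the model works at full quality iff the genuine key is presented — can be pictured with a toy key-conditioned gate. This is only a minimal sketch under assumptions (the gate form, the key shape, and all names are invented here, not the paper's Gatekeeper design):

```python
import numpy as np

SECRET_KEY = np.array([5.0, -5.0, 5.0, 5.0, -5.0])  # owner's embedded key

def gatekeeper(hidden, key):
    """Toy key-conditioned gate on an RNN hidden state.

    The hidden state is multiplied element-wise by sigmoid(key * SECRET_KEY).
    With the genuine key every product is large and positive, so the gate
    is ~1 and the model keeps its original behaviour; a wrong key drives
    some gate entries toward 0, scrambling the hidden state and degrading
    performance.
    """
    gate = 1.0 / (1.0 + np.exp(-key * SECRET_KEY))   # element-wise sigmoid
    return hidden * gate

h = np.ones(5)
print(gatekeeper(h, SECRET_KEY))    # genuine key: hidden state passes intact
```

Training with such a gate in the loop ties the weights to the key, which is what makes later key forgery (an ambiguity attack) hard.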
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
With the widespread use of machine learning (ML) techniques, ML as a service
has become increasingly popular. In this setting, an ML model resides on a
server and users can query it with their data via an API. However, if the
user's input is sensitive, sending it to the server is undesirable and
sometimes not even legally possible. Equally, the service provider does not
want to share the model by sending it to the client, in order to protect its
intellectual property and pay-per-query business model.
In this paper, we propose MLCapsule, a guarded offline deployment of machine
learning as a service. MLCapsule executes the model locally on the user's side
and therefore the data never leaves the client. Meanwhile, MLCapsule offers the
service provider the same level of control and security of its model as the
commonly used server-side execution. In addition, MLCapsule is applicable to
offline applications that require local execution. Beyond protecting against
direct model access, we couple the secure offline deployment with defenses
against advanced attacks on machine learning models such as model stealing,
reverse engineering, and membership inference.
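The contract MLCapsule offers can be sketched as a toy wrapper: the model ships sealed, inference runs entirely on the client, and the provider's controls (here a query budget standing in for pay-per-query) are enforced inside the guarded execution. This is purely illustrative — the real system relies on hardware enclaves, and every name below is an assumption:

```python
class MLCapsule:
    """Toy sketch of a guarded offline deployment wrapper.

    The provider's weights arrive sealed (opaque bytes) and are only
    unsealed inside the guarded boundary; user inputs never leave the
    client machine.
    """

    def __init__(self, sealed_weights, query_budget=100):
        self._sealed_weights = sealed_weights   # opaque to the client
        self._query_budget = query_budget       # stands in for pay-per-query

    def predict(self, x):
        # Runs locally: x never leaves the device, weights never leave
        # the capsule.
        if self._query_budget <= 0:
            raise PermissionError("query budget exhausted")
        self._query_budget -= 1
        w = self._unseal()                      # decrypt inside the boundary
        return sum(wi * xi for wi, xi in zip(w, x))

    def _unseal(self):
        # Stand-in for enclave-sealed decryption of the model.
        return [float(b) for b in self._sealed_weights]

cap = MLCapsule(bytes([1, 2, 3]))
print(cap.predict([1.0, 1.0, 1.0]))   # 6.0 — a toy linear model's output
```

In the actual design, the unsealing and the forward pass would both happen inside a trusted enclave, which is what gives the provider server-side levels of control on the client's hardware.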
Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era--The Human-like Authors are Already Here- A New Model
Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, either for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems' functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears behind AI systems.
In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article reflects on the accountability for AI systems in other legal regimes, such as tort or criminal law, and in various industries using these systems.
Intellectual Property Protection for Deep Learning Models: Taxonomy, Methods, Attacks, and Evaluations
The training and creation of deep learning models is usually costly; thus a
model can be regarded as intellectual property (IP) of its creator. However,
malicious users who obtain high-performance models may illegally copy,
redistribute, or abuse the models without permission. To deal with such
security threats, a number of deep neural network (DNN) IP protection methods
have been proposed in recent years. This paper attempts to provide a review of
the existing DNN IP protection works and also an outlook. First, we propose the
first taxonomy for DNN IP protection methods in terms of six attributes:
scenario, mechanism, capacity, type, function, and target models. Then, we
present a survey of existing DNN IP protection works in terms of the above six
attributes, focusing especially on the challenges these methods face, whether
they can provide proactive protection, and their resistance to different levels
of attacks. After that, we analyze the potential attacks on DNN IP protection
methods from the aspects of model modifications, evasion attacks, and active
attacks. In addition, we give a systematic evaluation method for DNN IP
protection methods with respect to basic functional metrics, attack-resistance
metrics, and customized metrics for different application scenarios. Lastly,
future research opportunities and challenges in DNN IP protection are
presented.
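The six-attribute taxonomy can be pictured as one record per protection method. The field names below follow the attributes listed in the abstract; the example values are invented for illustration and are not taken from the survey:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DNNIPMethod:
    """One entry in a six-attribute taxonomy of DNN IP protection methods."""
    scenario: str       # e.g. white-box vs. black-box verification
    mechanism: str      # e.g. weight watermarking, backdoor trigger set
    capacity: str       # e.g. zero-bit vs. multi-bit watermark
    type: str           # e.g. static vs. dynamic
    function: str       # e.g. ownership verification, usage control
    target_models: str  # e.g. CNN, RNN, GNN

# Hypothetical entry, for illustration only:
example = DNNIPMethod(
    scenario="black-box",
    mechanism="backdoor trigger set",
    capacity="zero-bit",
    type="dynamic",
    function="ownership verification",
    target_models="CNN classifiers",
)
print(example.mechanism)   # backdoor trigger set
```

Tagging each surveyed method this way is what lets the paper compare resistance to attacks attribute by attribute.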