
    Intellectual Property Protection for Deep Learning Models: Taxonomy, Methods, Attacks, and Evaluations

    Training and creating a deep learning model is usually costly, so the model can be regarded as intellectual property (IP) of its creator. However, malicious users who obtain high-performance models may illegally copy, redistribute, or abuse them without permission. To deal with such security threats, a number of deep neural network (DNN) IP protection methods have been proposed in recent years. This paper provides a review of existing DNN IP protection work along with an outlook. First, we propose the first taxonomy for DNN IP protection methods in terms of six attributes: scenario, mechanism, capacity, type, function, and target models. Then, we survey existing DNN IP protection work in terms of these six attributes, focusing in particular on the challenges these methods face, whether they can provide proactive protection, and their resistance to different levels of attack. After that, we analyze potential attacks on DNN IP protection methods from the aspects of model modifications, evasion attacks, and active attacks. In addition, we give a systematic evaluation method for DNN IP protection methods with respect to basic functional metrics, attack-resistance metrics, and customized metrics for different application scenarios. Lastly, future research opportunities and challenges in DNN IP protection are presented.
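    To make the six attributes concrete, here is a minimal sketch of how one taxonomy entry might be encoded in Python; the attribute names follow the abstract, but the enum values and the example entry are illustrative assumptions, not the paper's own vocabulary.

    ```python
    from dataclasses import dataclass
    from enum import Enum, auto

    class Scenario(Enum):
        WHITE_BOX = auto()   # verifier can inspect model internals
        BLACK_BOX = auto()   # verifier can only query the model

    @dataclass
    class DnnIpMethod:
        """One taxonomy entry over the paper's six attributes."""
        scenario: Scenario   # where ownership verification happens
        mechanism: str       # e.g. weight watermark, backdoor trigger set
        capacity: str        # zero-bit (detect only) vs. multi-bit payload
        type: str            # e.g. static vs. dynamic watermark
        function: str        # e.g. copyright proof, integrity checking
        target_models: str   # e.g. CNN image classifiers

    # Example entry: a black-box, trigger-set-based ownership watermark.
    example = DnnIpMethod(
        scenario=Scenario.BLACK_BOX,
        mechanism="backdoor trigger set",
        capacity="zero-bit",
        type="dynamic",
        function="ownership verification",
        target_models="image classifiers",
    )
    ```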

    Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks

    The commercial use of Machine Learning (ML) is spreading; at the same time, ML models are becoming more complex and more expensive to train, which makes Intellectual Property Protection (IPP) of trained models a pressing issue. Unlike other domains, which can build on a solid understanding of the threats, attacks, and defenses available to protect their IP, ML research in this regard is still very fragmented. This is due in part to the lack of a unified view and a common taxonomy of these aspects. In this paper, we systematize our findings on IPP in ML, focusing on the threats and attacks identified and the defenses proposed at the time of writing. We develop a comprehensive threat model for IP in ML, categorizing attacks and defenses within a unified and consolidated taxonomy, thus bridging research from the ML and security communities.

    A Hybrid Digital Watermarking Approach Using Wavelets and LSB

    This paper proposes a novel approach, Wavelet-based Least Significant Bit Watermarking (WLSBWM), for strong authentication, security, and copyright protection. An Alphabet Pattern (AP) approach is used to generate a shuffled image in the first stage, and Pell's Cat Map (PCM), applied to each 5×5 sub-image, provides additional security and strong protection against attacks. Wavelets are used to reduce the dimensionality of the image until it equals the size of the watermark image: the Discrete Cosine Transform is applied in the first stage, and an N-level Discrete Wavelet Transform (DWT) then reduces the image to the size of the watermark. The watermark image is inserted into the LHn sub-band of the wavelet image using the LSB concept. Simulation results show that the proposed technique produces better PSNR and similarity measures, and the experimental results indicate that the approach is more reliable, secure, and efficient. The robustness of the proposed scheme is evaluated against various image-processing attacks.
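    A minimal sketch of the core embedding step, assuming NumPy and PyWavelets and taking pywt's horizontal-detail band as the LH sub-band; the AP shuffling, Pell's Cat Map, and DCT stages of the paper are omitted, so this illustrates the DWT+LSB idea rather than the full WLSBWM pipeline.

    ```python
    import numpy as np
    import pywt

    def embed_lsb_in_lh(cover, watermark_bits, level=3):
        """Embed watermark bits in the LSBs of the level-n LH sub-band."""
        coeffs = pywt.wavedec2(cover.astype(float), 'haar', level=level)
        lh, hl, hh = coeffs[1]           # detail triple at the coarsest level
        flat = np.rint(lh).astype(np.int64).ravel()
        n = min(watermark_bits.size, flat.size)
        flat[:n] = (flat[:n] & ~1) | watermark_bits[:n]   # overwrite LSBs
        coeffs[1] = (flat.astype(float).reshape(lh.shape), hl, hh)
        return pywt.waverec2(coeffs, 'haar')

    # Usage: a 32x32 binary watermark fits exactly in the level-3 LH band
    # of a 256x256 grayscale cover (32 * 32 = 1024 coefficients).
    cover = np.random.randint(0, 256, (256, 256))
    bits = np.random.randint(0, 2, 32 * 32)
    watermarked = embed_lsb_in_lh(cover, bits)
    ```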

    Are Social Networks Watermarking Us or Are We (Unawarely) Watermarking Ourself?

    In the last decade, Social Networks (SNs) have deeply changed many aspects of society, and one of the most widespread behaviours is the sharing of pictures. However, malicious users often exploit shared pictures to create fake profiles, leading to the growth of cybercrime. With this scenario in mind, authorship attribution and verification through image watermarking techniques are becoming more and more important. In this paper, we first investigate how thirteen of the most popular SNs treat uploaded pictures, in order to identify any image watermarking techniques implemented by those SNs. Second, we test the robustness of several image watermarking algorithms on these thirteen SNs. Finally, we verify whether a method based on the Photo-Response Non-Uniformity (PRNU) technique, usually used in digital forensics and image-forgery detection, can be successfully employed as a watermarking approach for authorship attribution and verification of pictures on SNs. The proposed method is sufficiently robust despite the fact that pictures are often downgraded during upload to the SNs. Moreover, in comparison to conventional watermarking methods, the proposed method can successfully pass through different SNs, solving related problems such as profile linking and fake-profile detection. The results of our analysis on a real dataset of 8400 pictures show that the proposed method is more effective than other watermarking techniques and can help address serious questions about privacy and security on SNs. Moreover, it paves the way for multi-factor online authentication mechanisms based on robust digital features.
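    For intuition, here is a simplified sketch of PRNU fingerprint estimation and matching, assuming NumPy and SciPy and float grayscale images; real forensic pipelines use a wavelet-based denoiser and peak-to-correlation-energy statistics rather than the plain Gaussian filter and normalized correlation used here.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        """Sensor-noise estimate: the image minus a denoised version of it."""
        return img - gaussian_filter(img, sigma=1.0)

    def estimate_fingerprint(images):
        """PRNU estimate K from several images taken with the same camera."""
        num = sum(noise_residual(i) * i for i in images)
        den = sum(i * i for i in images) + 1e-8
        return num / den

    def correlation(img, fingerprint):
        """Normalized correlation between the test residual and K * image."""
        w = noise_residual(img).ravel()
        s = (fingerprint * img).ravel()
        w, s = w - w.mean(), s - s.mean()
        return float(w @ s / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-8))

    # Attribution: accept authorship if the correlation exceeds a threshold
    # calibrated on images known not to come from the claimed source.
    ```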

    Black-box Dataset Ownership Verification via Backdoor Watermarking

    Deep learning, especially deep neural networks (DNNs), has been widely and successfully adopted in many critical applications for its high effectiveness and efficiency. The rapid development of DNNs has benefited from the existence of some high-quality datasets (e.g., ImageNet), which allow researchers and developers to easily verify the performance of their methods. Currently, almost all released datasets stipulate that they may be adopted only for academic or educational purposes, not for commercial purposes, without permission. However, there is still no good way to enforce that. In this paper, we formulate the protection of released datasets as verifying whether they were adopted for training a (suspicious) third-party model, where defenders can only query the model and have no information about its parameters or training details. Based on this formulation, we propose to embed external patterns via backdoor watermarking for ownership verification. Our method contains two main parts: dataset watermarking and dataset verification. Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification. We also provide some theoretical analyses of our methods. Experiments on multiple benchmark datasets for different tasks verify the effectiveness of our method. The code for reproducing the main experiments is available at https://github.com/THUYimingLi/DVBW. Comment: This paper is accepted by IEEE TIFS. 15 pages. A preliminary short version was posted on arXiv (arXiv:2010.05821) and presented at a non-archival NeurIPS Workshop (2020).
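    A minimal sketch of the two parts, assuming NumPy/SciPy and a black-box model exposing an sklearn-style `predict_proba`; the trigger shape, poisoning rate, and significance level are illustrative assumptions, and the paper's actual implementation is in the linked repository.

    ```python
    import numpy as np
    from scipy import stats

    def watermark_dataset(images, labels, target=0, rate=0.1, patch=3):
        """Dataset watermarking: stamp a BadNets-style white corner patch on
        a small fraction of images and relabel them to the target class."""
        imgs, labs = images.copy(), labels.copy()
        idx = np.random.choice(len(imgs), int(rate * len(imgs)), replace=False)
        imgs[idx, -patch:, -patch:] = 255      # white trigger patch
        labs[idx] = target
        return imgs, labs

    def verify_ownership(model, clean, triggered, target=0, alpha=0.01):
        """Dataset verification: a paired one-sided t-test asking whether the
        trigger raises the target-class probability; rejecting the null
        supports 'this model was trained on our watermarked dataset'."""
        p_clean = model.predict_proba(clean)[:, target]
        p_trig = model.predict_proba(triggered)[:, target]
        _, p_value = stats.ttest_rel(p_trig, p_clean, alternative='greater')
        return p_value < alpha
    ```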

    Working Phases and Novel Techniques of Digital Watermarking

    Digital watermarking is the process of encoding hidden copyright information in an image by making small alterations to its pixel content. In this setting, watermarking does not restrict access to the image data; its principal function is to remain present in the data for verification of ownership. The use of digital watermarking is not limited to copyright protection: it can also be used for owner identification (identifying the content's owner), fingerprinting (identifying the purchaser of the content), broadcast monitoring, and authentication (determining whether the information has been altered from its original form).
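    As a concrete illustration of the embedding and verification phases described above, a minimal spatial-domain LSB sketch, assuming NumPy and an 8-bit grayscale image; the function names are hypothetical.

    ```python
    import numpy as np

    def embed(cover, bits):
        """Embedding phase: hide copyright bits in pixel LSBs, a small
        alteration that does not restrict access to the image data."""
        stego = cover.ravel().copy()
        stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
        return stego.reshape(cover.shape)

    def extract(stego, n_bits):
        """Verification phase: recover the bits later to prove ownership."""
        return stego.ravel()[:n_bits] & 1

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    mark = np.random.randint(0, 2, 128, dtype=np.uint8)
    assert np.array_equal(extract(embed(cover, mark), 128), mark)
    ```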

    A Systematic Review on Model Watermarking for Neural Networks

    Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered intellectual property of the legitimate parties who have trained them, which makes their protection against stealing, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and thereby offers protection against those threats. This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for ML models. It introduces a unified threat model to allow structured reasoning on, and comparison of, the effectiveness of watermarking methods in different scenarios. Furthermore, it systematizes desired security requirements and attacks against ML model watermarking. Based on that framework, representative literature from the field is surveyed to illustrate the taxonomy. Finally, shortcomings and general limitations of existing approaches are discussed, and an outlook on future research directions is given.

    A Novel Approach for Digital Image Watermarking Using Cryptography

    Medical images can be made more secure by using an enhanced watermarking technique: embedding the related information, together with an encrypted digital signature, in the image provides secrecy, integrity, and validation. The diverse characteristics of watermarking algorithms are discussed in this paper. The performance of embedding the watermark in the DWT domain is analyzed, taking PSNR and MSE as the evaluation parameters. In this paper, data hiding and cryptographic techniques are combined into one simple, secure algorithm, so the original image is not required at the time of watermark recovery. Because the final watermark is inserted in the DWT domain, the procedure is robust against many attacks.
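    Since the abstract evaluates embedding quality with PSNR and MSE, here is a short sketch of those two metrics, assuming NumPy and 8-bit images; higher PSNR means the watermark is less perceptible.

    ```python
    import numpy as np

    def mse(original, watermarked):
        """Mean squared error between cover and watermarked image."""
        diff = original.astype(float) - watermarked.astype(float)
        return float(np.mean(diff ** 2))

    def psnr(original, watermarked, peak=255.0):
        """Peak signal-to-noise ratio in dB for 8-bit images."""
        m = mse(original, watermarked)
        return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
    ```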