
    Helina fratercula (Zetterstedt, 1845) (Diptera: Muscidae) newly recorded from China, with a redescription of male

    Helina fratercula (Zetterstedt, 1845), so far known only from Central Europe, is newly recorded from China. The species is redescribed in detail based on its morphological characters. Photographs of characteristic features and illustrations of the male terminalia, based on specimens from Xinjiang, are provided, and the species is incorporated into the existing key to the males of Helina from China.

    Helina subpyriforma sp. n., a new muscid fly (Diptera: Muscidae) from Yunnan, China

    Helina subpyriforma Wang sp. n., a species from Yunnan, China, is described and illustrated as new to science. The new species is assigned to the Helina quadrum-group on the basis of male morphological and genitalic structures. The species is also incorporated into the existing key to the males of the H. quadrum-group from China.

    Intellectual Property Protection for Deep Learning Models: Taxonomy, Methods, Attacks, and Evaluations

    Training and creating a deep learning model is usually costly, so the model can be regarded as intellectual property (IP) of its creator. However, malicious users who obtain high-performance models may illegally copy, redistribute, or abuse them without permission. To deal with such security threats, a number of deep neural network (DNN) IP protection methods have been proposed in recent years. This paper provides a review of existing DNN IP protection works together with an outlook. First, we propose the first taxonomy for DNN IP protection methods in terms of six attributes: scenario, mechanism, capacity, type, function, and target models. Then, we survey existing DNN IP protection works in terms of these six attributes, focusing in particular on the challenges these methods face, whether they can provide proactive protection, and their resistance to different levels of attacks. After that, we analyze potential attacks on DNN IP protection methods from the aspects of model modifications, evasion attacks, and active attacks. In addition, a systematic evaluation method for DNN IP protection methods is given, covering basic functional metrics, attack-resistance metrics, and customized metrics for different application scenarios. Lastly, future research opportunities and challenges for DNN IP protection are presented.
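    To give a concrete sense of how a six-attribute taxonomy like the one described above could be used to catalogue and query protection methods, here is a minimal Python sketch. The record type, the attribute value choices, and the example entry are illustrative assumptions for demonstration, not definitions taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical record type for cataloguing a DNN IP protection method along
# the six taxonomy attributes named in the abstract (scenario, mechanism,
# capacity, type, function, target models). All concrete values below are
# illustrative assumptions, not the paper's definitions.
@dataclass
class IPProtectionMethod:
    name: str
    scenario: str        # e.g. "white-box" or "black-box" verification
    mechanism: str       # e.g. "watermarking", "fingerprinting", "authorization control"
    capacity: str        # e.g. "zero-bit" or "multi-bit" payload
    type: str            # e.g. "static" (in weights) or "dynamic" (behaviour on triggers)
    function: str        # e.g. "ownership verification" or "usage control"
    target_models: str   # e.g. "image classifiers" or "any DNN"

catalogue = [
    IPProtectionMethod(
        name="backdoor-based watermark (example entry)",
        scenario="black-box",
        mechanism="watermarking",
        capacity="zero-bit",
        type="dynamic",
        function="ownership verification",
        target_models="image classifiers",
    ),
]

# A survey-style query: which catalogued methods offer proactive usage control?
proactive = [m.name for m in catalogue if m.function == "usage control"]
print(proactive)
```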

    Characterising the shear resistance of a unidirectional non‑crimp glass fabric using modified picture frame and uniaxial bias extension test methods

    The forming behaviour of a unidirectional non-crimp fabric (UD-NCF) consisting of polyamide stitches with a tricot-chain stitching pattern is explored. Notably, this fabric has no stabilising tows orientated transverse to the main tow direction, a common feature in many ‘quasi’ UD-NCFs; this allows extension of the stitch in the transverse direction under certain loading conditions. The lack of stabilising tows introduces a possible low-energy deformation mode in the UD-NCF, which is absent in biaxial fabrics and, to a large extent, in ‘quasi’ UD-NCFs. The in-plane shear behaviour is initially investigated using both standard ‘tightly-clamped’ picture frame tests and uniaxial bias extension tests. Preliminary tests show a dramatic difference between the results produced by the two methods. During the picture frame test, fibres can be subjected to unintended tension due to sample misalignment in the picture frame rig. To mitigate error arising from this effect, the picture frame test procedure is modified in two different ways: by using an intentional pre-displacement of the picture frame rig, and by changing the clamping condition of the test specimen. Results show that the modified picture frame test data contain less error than the standard ‘tightly-clamped’ test, but also that the shear stiffness of the UD-NCF is notably lower when measured in the bias extension test than in the picture frame test, mainly due to the difference in loading conditions imposed during the two tests.
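    For context on how shear angle is usually inferred from crosshead displacement in a picture frame test, the sketch below evaluates the standard ideal trellis-shear kinematic relation. It assumes pure shear with no tow slip or unintended tension (the very assumption that sample misalignment violates), and the frame side length and displacement values are made-up example numbers, not values from the paper.

```python
import math

def picture_frame_shear_angle(frame_side_mm: float, displacement_mm: float) -> float:
    """Ideal shear angle (radians) of a picture frame rig under pure trellis shear.

    The diagonal through the pulled corners is sqrt(2)*L at zero displacement and
    grows by the crosshead displacement d, so the inter-tow (frame) angle is
    phi = 2*acos((sqrt(2)*L + d) / (2*L)) and the shear angle is gamma = pi/2 - phi.
    """
    L, d = frame_side_mm, displacement_mm
    phi = 2.0 * math.acos((math.sqrt(2.0) * L + d) / (2.0 * L))
    return math.pi / 2.0 - phi

# Illustrative values (assumed): 145 mm frame side, displacement in 5 mm steps.
for d in range(0, 35, 5):
    gamma = picture_frame_shear_angle(145.0, float(d))
    print(f"d = {d:2d} mm -> ideal shear angle = {math.degrees(gamma):5.1f} deg")
```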

    Detect and remove watermark in deep neural networks via generative adversarial networks

    Deep neural networks (DNN) have achieved remarkable performance in various fields. However, training a DNN model from scratch requires a lot of computing resources and training data, which are difficult for most individual users to obtain. Model copyright infringement has become an emerging problem in recent years. For instance, pre-trained models may be stolen or abused by illegal users without the authorization of the model owner. Recently, many works on protecting the intellectual property of DNN models have been proposed; among them, embedding watermarks into the DNN based on a backdoor is one of the most widely used methods. However, when the DNN model is stolen, the backdoor-based watermark may face the risk of being detected and removed by an adversary. In this paper, we propose a scheme to detect and remove watermarks in deep neural networks via generative adversarial networks (GAN). We demonstrate that backdoor-based DNN watermarks are vulnerable to the proposed GAN-based watermark removal attack. The proposed attack method includes two phases. In the first phase, we use a GAN and a few clean images to detect and reverse the watermark in the DNN model. In the second phase, we fine-tune the watermarked DNN based on the reversed backdoor images. Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that the proposed method can effectively remove about 98% of the watermark in DNN models, as the watermark retention rate drops from 100% to less than 2% after applying the proposed attack. Meanwhile, the proposed attack hardly affects the model's performance: the test accuracy of the watermarked DNN on the MNIST and CIFAR10 datasets drops by less than 1% and 3%, respectively.
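    To make the two-phase structure concrete, here is a minimal PyTorch-style sketch of the second (fine-tuning) phase, assuming a hypothetical `reverse_watermark_with_gan` helper from phase one has already produced reversed backdoor images relabelled with their clean classes. The model, data handling, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def remove_watermark_by_finetuning(model: nn.Module,
                                   reversed_images: torch.Tensor,
                                   clean_labels: torch.Tensor,
                                   epochs: int = 5,
                                   lr: float = 1e-4) -> nn.Module:
    """Phase 2 (sketch): fine-tune the watermarked model on reversed backdoor
    images relabelled with their clean classes, so the backdoor trigger no
    longer maps to the watermark's target label."""
    loader = DataLoader(TensorDataset(reversed_images, clean_labels),
                        batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# Usage (hypothetical): phase 1 would supply the reversed images, e.g.
#   reversed_images, clean_labels = reverse_watermark_with_gan(model, few_clean_images)
#   model = remove_watermark_by_finetuning(model, reversed_images, clean_labels)
```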

    ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples

    The training of Deep Neural Networks (DNNs) is costly, so DNNs can be considered the intellectual property (IP) of their owners. To date, most existing protection works focus on verifying ownership after the DNN model is stolen, which cannot resist piracy in advance. To this end, we propose an active DNN IP protection method against DNN piracy based on adversarial examples, named ActiveGuard. ActiveGuard aims to achieve authorization control and user fingerprint management through adversarial examples, and can also provide ownership verification. Specifically, ActiveGuard exploits elaborately crafted adversarial examples as users' fingerprints to distinguish authorized users from unauthorized ones. Legitimate users can enter their fingerprints into the DNN for identity authentication and authorized usage, while unauthorized users will obtain poor model performance due to an additional control layer. In addition, ActiveGuard enables the model owner to embed a watermark into the weights of the DNN. When the DNN is illegally pirated, the model owner can extract the embedded watermark and perform ownership verification. Experimental results show that, for authorized users, the test accuracy of the LeNet-5 and Wide Residual Network (WRN) models is 99.15% and 91.46%, respectively, while for unauthorized users the test accuracy of the two DNNs is only 8.92% (LeNet-5) and 10% (WRN). Moreover, each authorized user can pass the fingerprint authentication with a high success rate (up to 100%). For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model is not affected. Further, ActiveGuard is demonstrated to be robust against fingerprint forgery, model fine-tuning, and pruning attacks.
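    To illustrate the authorization-control idea of degrading outputs for unauthenticated users, here is a minimal PyTorch-style sketch of a wrapper "control layer". The class structure, the use of reserved fingerprint classes, and the noise-based degradation are assumptions made for illustration; the abstract does not specify how ActiveGuard's control layer is actually implemented.

```python
import torch
import torch.nn as nn

class ControlLayerWrapper(nn.Module):
    """Illustrative sketch of an authorization control layer: the wrapped model
    only returns its true logits after a session is authorized by submitting a
    registered fingerprint (an adversarial example the model maps to a class
    reserved for that user); otherwise the logits are randomly perturbed so
    unauthorized queries obtain poor accuracy."""

    def __init__(self, model: nn.Module, fingerprint_classes: set, noise_scale: float = 10.0):
        super().__init__()
        self.model = model
        self.fingerprint_classes = fingerprint_classes  # reserved user-ID classes (assumed)
        self.noise_scale = noise_scale
        self.authorized = False

    def authenticate(self, fingerprint_example: torch.Tensor) -> bool:
        # A user authenticates by submitting their adversarial-example fingerprint;
        # authorization succeeds if it is classified into a reserved fingerprint class.
        with torch.no_grad():
            pred = self.model(fingerprint_example).argmax(dim=1).item()
        self.authorized = pred in self.fingerprint_classes
        return self.authorized

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.model(x)
        if self.authorized:
            return logits
        # Unauthorized session: degrade predictions by adding large random noise.
        return logits + self.noise_scale * torch.randn_like(logits)
```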