
    Applications in security and evasions in machine learning: a survey

    In recent years, machine learning (ML) has become an important means of providing security and privacy in a variety of applications. ML is used to address serious issues such as real-time attack detection and data leakage vulnerability assessment, and it supports the demanding requirements of modern security and privacy across areas such as real-time decision-making, big data processing, reduced learning cycle time, cost efficiency and error-free processing. In this paper, we therefore review state-of-the-art approaches in which ML is applied effectively to meet current real-world security requirements. We examine the security applications in which ML models play an essential role and compare their reported accuracy along several dimensions; this analysis of ML algorithms in security applications provides a blueprint for an interdisciplinary research area. Even with sophisticated technology and tools, attackers can evade ML models by mounting adversarial attacks, so the vulnerability of ML models to such attacks needs to be assessed at development time. Accordingly, we also analyze the different types of adversarial attacks on ML models. To visualize the relevant security properties, we present the threat model and defense strategies against adversarial attack methods. Moreover, we categorize adversarial attacks by the attacker's knowledge of the model and identify the points in the ML pipeline at which attacks may be mounted. Finally, we investigate the different properties of adversarial attacks.
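    As a minimal, hedged illustration of the evasion attacks this survey catalogues (not taken from the paper itself), the Python/NumPy sketch below perturbs an input against a simple logistic-regression classifier using the fast gradient sign method, assuming white-box access to the weights; the weights, the input and the perturbation budget are all hypothetical.

```python
# Minimal white-box evasion sketch: fast gradient sign method (FGSM) against
# a logistic-regression classifier. Illustrative only; the weights, the input
# and the perturbation budget eps are arbitrary assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_evasion(x, y, w, b, eps):
    """Perturb x by eps in the direction that increases the logistic loss for label y."""
    # Gradient of the binary cross-entropy loss w.r.t. the input: (p - y) * w
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=20)      # hypothetical model weights
b = 0.0
x = 0.2 * w                  # a benign input the model scores well above 0.5
y = 1.0                      # its true label

x_adv = fgsm_evasion(x, y, w, b, eps=0.3)
print("clean score:      ", sigmoid(w @ x + b))      # near 1 (correct class)
print("adversarial score:", sigmoid(w @ x_adv + b))  # pushed toward class 0
```

    Because the perturbation follows the sign of the loss gradient, even a small budget moves the classifier's score toward the wrong class, which is the kind of gradient-based evasion the surveyed threat models cover.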

    DNN model extraction attacks using prediction interfaces

    Machine learning (ML) and deep learning methods have become common and publicly available, while ML security to date struggles to cope with rising threats. One such threat is model extraction attacks, in which adversaries reproduce a target model almost perfectly. The attack is widely deployable, since the attacker only needs access to the model's predictions. Stolen ML models can be used either for personal advantage, to abuse paid prediction services, or to create transferable adversarial examples that undermine the integrity of prediction services, i.e. their prediction quality. This is a significant threat in several application areas, such as autonomous driving, which rely heavily on computer vision via deep neural networks. In this thesis, we reproduce existing model extraction attacks and evaluate novel techniques for extracting deep neural network (DNN) classifiers. We introduce new synthetic query generation strategies and demonstrate their efficiency at extracting models for creating transferable targeted adversarial examples from the stolen DNNs.
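    The core mechanics of such an extraction attack can be sketched in a few lines: query the victim's prediction interface with synthetic inputs, record the returned labels, and fit a surrogate on the query-label pairs. The Python/scikit-learn example below is a generic illustration, not the thesis's query generation strategies; the victim model, the uniform random query distribution and the agreement metric are assumptions.

```python
# Generic model extraction sketch: train a surrogate from the labels returned
# by a black-box prediction interface. Illustrative only; the victim model,
# the uniform random query strategy and the agreement metric are assumptions,
# not the query generation strategies evaluated in the thesis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Stand-in for the victim: a small neural network hidden behind a prediction API.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)

def prediction_api(queries):
    """All the attacker can see: predicted labels for arbitrary inputs."""
    return victim.predict(queries)

# Attacker side: issue synthetic queries, collect the labels, fit a surrogate.
rng = np.random.default_rng(1)
queries = rng.uniform(-4.0, 4.0, size=(5000, 10))   # naive synthetic query strategy
stolen_labels = prediction_api(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement with the victim on fresh queries approximates extraction quality.
probe = rng.uniform(-4.0, 4.0, size=(1000, 10))
agreement = np.mean(surrogate.predict(probe) == prediction_api(probe))
print(f"surrogate/victim label agreement: {agreement:.2%}")
```

    Agreement between surrogate and victim on held-out queries is a common proxy for extraction quality; better synthetic query strategies, the subject of the thesis, aim to reach high agreement with far fewer queries.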

    Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

    In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of the training data (they know neither the decision boundaries nor the user's template). Based on their own templates, adversaries craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. Our method copes with these weak attacker assumptions, and we evaluate its effectiveness on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases with fewer than ten injection attempts. Even in black-box scenarios, exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate this countermeasure against a set of intra-user variability factors that may exhibit the same directionality characteristics, obtaining detection equal error rates of 7-14% and detecting over 99% of attacks after only two sample injections.
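    To make the attack mechanics concrete, the NumPy sketch below (a generic illustration, not the paper's face recognition pipeline; the acceptance threshold, the running-mean update rule and the 2-D feature space are assumptions) drifts a centroid-based template toward the attacker by injecting intermediate samples that each stay inside the acceptance threshold, and then computes the directionality cue that the proposed countermeasure exploits.

```python
# Generic sketch of a template poisoning attack on a centroid-based biometric
# matcher with unsupervised template updating. Illustrative only; the acceptance
# threshold, the running-mean update and the 2-D feature space are assumptions.
import numpy as np

THRESHOLD = 1.0                            # samples within this distance are accepted

victim_template = np.array([0.0, 0.0])     # legitimate user's template (feature-space centroid)
attacker_point = np.array([6.0, 2.0])      # where the attacker's own samples naturally fall

template = victim_template.copy()
n_updates = 0
directions = []                            # update directions, used by the countermeasure

while np.linalg.norm(attacker_point - template) > THRESHOLD:
    # Craft an intermediate sample: step toward the attacker, but stay just
    # inside the acceptance threshold so the sample is not rejected.
    direction = attacker_point - template
    direction /= np.linalg.norm(direction)
    sample = template + 0.9 * THRESHOLD * direction

    # Unsupervised template update: running mean drifts toward the accepted sample.
    new_template = 0.8 * template + 0.2 * sample
    step = new_template - template
    directions.append(step / np.linalg.norm(step))
    template = new_template
    n_updates += 1

print(f"attacker accepted after {n_updates} poisoned updates")

# Countermeasure cue: poisoned updates share a consistent direction in feature
# space, so the average pairwise cosine similarity of update directions is high.
D = np.array(directions)
cos = D @ D.T
avg_cos = (cos.sum() - len(D)) / (len(D) * (len(D) - 1))
print(f"average cosine similarity of update directions: {avg_cos:.2f}")
```

    Legitimate intra-user variation usually scatters update directions, whereas a poisoning attack pulls the template consistently toward the attacker; some legitimate variability factors can show similar directionality, which is why the paper reports detection equal error rates rather than perfect separation.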

    Artificial intelligence in the cyber domain: Offense and defense

    Artificial intelligence techniques have grown rapidly in recent years, and their practical applications can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools, but malicious actors are aware of these new prospects too and will likely attempt to use them to improve their methods of attack. This survey paper provides an overview of how artificial intelligence can be used in the context of cybersecurity, in both offense and defense.