
    StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection

    Over the years, most research on defenses against adversarial attacks on machine learning models has been in the image recognition domain. The malware detection domain has received less attention despite its importance. Moreover, most work exploring these defenses has evaluated individual methods with no strategy for applying them. In this paper, we introduce StratDef, a strategic defense system based on a moving target defense approach. We overcome challenges related to the systematic construction, selection, and strategic use of models to maximize adversarial robustness. StratDef dynamically and strategically chooses the best models to increase uncertainty for the attacker, while minimizing critical risks in the adversarial ML domain, such as attack transferability. We provide the first comprehensive evaluation of defenses against adversarial attacks on machine learning for malware detection, with a threat model that explores different levels of threat, attacker knowledge, capabilities, and attack intensities. We show that StratDef performs better than other defenses even when facing the peak adversarial threat. We also show that, among existing defenses, only a few adversarially trained models provide substantially better protection than vanilla models, and even these are still outperformed by StratDef.
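    The Python sketch below illustrates the core moving-target idea this abstract describes: answering each query with a model drawn at random from a pool according to a strategy distribution. It is a minimal sketch, not StratDef itself; the class name, the weights, and the .predict interface are illustrative assumptions.

```python
import random

class StrategicDefense:
    """Toy moving-target defense: serve each query with a model drawn
    from a pool, so the attacker cannot tell which classifier will
    score a given sample (hypothetical sketch, not the paper's system)."""

    def __init__(self, models, weights):
        # models: trained malware classifiers, each exposing .predict(x)
        # weights: selection probabilities, e.g. from a precomputed strategy
        assert len(models) == len(weights)
        self.models = models
        self.weights = weights

    def predict(self, sample):
        # Per-query randomization increases attacker uncertainty and
        # limits the transferability of an attack tuned to any one model.
        model = random.choices(self.models, weights=self.weights, k=1)[0]
        return model.predict(sample)
```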

    Towards Adversarial Resilience in Proactive Detection of Botnet Domain Names by using MTD

    Artificial Intelligence is often part of state-of-the-art Intrusion Detection Systems (IDSs). However, attackers also use Artificial Intelligence to improve their attacks and circumvent IDSs; botnets, for instance, use it to improve their Domain Name Generation Algorithms (DGAs). Botnets pose a serious threat to networks connected to the Internet: they enable many cyber-criminal activities (e.g., DDoS attacks, banking fraud, and cyber-espionage) and cause substantial economic damage. To circumvent detection and prevent takedown actions, botmasters use DGAs to create, maintain, and hide C&C infrastructures. Furthermore, botmasters often release their source code to hamper detection, leading to numerous similar botnets that are created and maintained by different botmasters. As these botnets are based on nearly the same code base, they often share similar observable behavior. Current work on DGA detection often applies machine learning techniques, as they are able to generalize and thus also detect yet-unknown derivatives of known botnets. However, such machine-learning-based classifiers can be circumvented by adversarial learning techniques. As a consequence, current Intrusion Detection Systems need resilience against adversarial learning. In our work, we focus on adversarial learning in DNS-based IDSs from the perspective of a network operator. Further, we present our concept for making existing and future machine-learning-based IDSs more resilient against adversarial learning attacks by applying multi-level Moving Target Defense strategies.
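    As a concrete illustration of one level of such a Moving Target Defense, the Python sketch below rotates the active DGA classifier once per time epoch. The epoch length, the deterministic seeding, and the featurize helper are hypothetical choices for illustration, not the authors' concrete design.

```python
import random
import time

def active_classifier(classifiers, epoch_seconds=3600):
    # Rotate the active model once per epoch. Seeding the RNG with the
    # epoch makes the choice deterministic, so distributed IDS nodes can
    # agree on the active model without extra coordination (assumption).
    epoch = int(time.time()) // epoch_seconds
    return random.Random(epoch).choice(classifiers)

def is_dga_domain(domain, classifiers, featurize):
    # featurize: hypothetical helper mapping a domain name to the
    # feature vector format the classifiers were trained on.
    clf = active_classifier(classifiers)
    return clf.predict([featurize(domain)])[0] == 1
```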

    MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

    Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image, a trained network is picked at random from this ensemble according to a strategy obtained by formulating the interaction between the Defender (who hosts the classification networks) and its (legitimate and malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that this approach, MTDeep, reduces misclassification on perturbed images across datasets such as MNIST, FashionMNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than these mechanisms can afford on their own. Lastly, to quantify the increase in robustness of an ensemble-based classification system under MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability. Comment: Accepted to the Conference on Decision and Game Theory for Security (GameSec), 201
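    A minimal Python sketch of the randomized-ensemble mechanism this abstract describes: a network is drawn per input according to the defender's mixed strategy. Per the abstract, that strategy would come from solving the Bayesian Stackelberg Game; here it is just a placeholder probability vector, and the .predict interface is an assumption.

```python
import numpy as np

def mtdeep_predict(image, networks, strategy):
    # networks: trained DNNs, each exposing .predict(batch) -> class scores
    # strategy: defender's probabilities over the networks (placeholder
    #           here; MTDeep derives it from the BSG equilibrium)
    idx = np.random.choice(len(networks), p=strategy)
    scores = networks[idx].predict(image[None, ...])
    # The randomization helps only insofar as the networks fail differently
    # under attack -- the "differential immunity" the abstract formalizes.
    return int(np.argmax(scores))
```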