Artificial intelligence in the cyber domain: Offense and defense
Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and help adversaries improve methods of attack. However, malicious actors are aware of the new prospects too and will probably attempt to use them for nefarious purposes. This survey paper aims at providing an overview of how artificial intelligence can be used in the context of cybersecurity in both offense and defense.
MDEA: Malware Detection with Evolutionary Adversarial Learning
Malware detection systems have used machine learning to detect malware in programs.
These systems feed raw or processed binary data into neural network models that
classify files as benign or malicious. Even though this approach has proven
effective against dynamic changes, such as encryption, obfuscation, and packing
techniques, it is vulnerable to specific evasion attacks in which small changes
in the input data cause misclassification at test time. This paper proposes a
new approach: MDEA, an adversarial malware detection model that uses
evolutionary optimization to create attack samples and make the network robust
against evasion attacks. By retraining the model with the evolved malware
samples, its performance improves by a significant margin.
The Threat of Offensive AI to Organizations
AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI to enhance their attacks and expand their campaigns.
Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today and what will be their impact on the future?
In this study, we explore the threat of offensive AI to organizations. First, we present the background and discuss how AI changes the adversary’s methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 32 offensive AI capabilities which adversaries can use to enhance their attacks. Finally, through a panel survey spanning industry, government, and academia, we rank the AI threats and provide insights on the adversaries.
Query-Free Evasion Attacks Against Machine Learning-Based Malware Detectors with Generative Adversarial Networks
Malware detectors based on machine learning (ML) have been shown to be
susceptible to adversarial malware examples. However, current methods to
generate adversarial malware examples still have their limits. They either rely
on detailed model information (gradient-based attacks), or on detailed outputs
of the model, such as class probabilities (score-based attacks); neither is
available in real-world scenarios. Alternatively, adversarial
examples might be crafted using only the label assigned by the detector
(label-based attack) to train a substitute network or an agent using
reinforcement learning. Nonetheless, label-based attacks may require querying
a black-box system anywhere from a handful to thousands of times, depending on
the approach, which may not be feasible against real-world malware detectors.
This work
presents a novel query-free approach to craft adversarial malware examples to
evade ML-based malware detectors. To this end, we have devised a GAN-based
framework to generate adversarial malware examples that look similar to benign
executables in the feature space. To demonstrate the suitability of our
approach, we have applied the GAN-based attack to three common types of features
usually employed by static ML-based malware detectors: (1) Byte histogram
features, (2) API-based features, and (3) String-based features. Results show
that our model-agnostic approach performs on par with MalGAN, while generating
more realistic adversarial malware examples without requiring any query to the
malware detectors. Furthermore, we have tested the generated adversarial
examples against state-of-the-art multimodal and deep learning malware
detectors, showing a decrease in detection performance, as well as a decrease
in the average number of detections by the anti-malware engines in VirusTotal.
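The abstract's key idea is that the attacker never queries the detector: the generator is trained only to make malware feature vectors resemble the benign feature distribution. The paper does this with a GAN; as a much simpler stand-in for the generator's objective, the sketch below shifts a malware byte histogram toward an assumed benign profile using only additive changes (appending content can only add byte counts, which preserves functionality). The 4-bin histograms and the budget parameter are illustrative assumptions.

```python
# Hedged sketch of the query-free, feature-space evasion idea: move a
# malware feature vector toward benign statistics without ever querying
# the target detector. Toy 4-bin histograms stand in for the paper's
# 256-bin byte histograms; all values are assumptions.
import numpy as np

benign_profile = np.array([0.40, 0.30, 0.20, 0.10])  # assumed benign average
malware_hist   = np.array([0.10, 0.10, 0.30, 0.50])

def additive_shift(hist, target, budget=0.5):
    """Append content (only add mass) so the histogram moves toward
    `target`. Appending bytes can only increase raw counts, so the edit
    is modeled as adding `budget` extra mass where the target exceeds
    the current histogram, followed by renormalization."""
    deficit = np.clip(target - hist, 0.0, None)
    if deficit.sum() == 0:
        return hist                       # already at or past the target
    added = budget * deficit / deficit.sum()
    new = hist + added
    return new / new.sum()

adv = additive_shift(malware_hist, benign_profile)
print(np.abs(malware_hist - benign_profile).sum(),
      np.abs(adv - benign_profile).sum())
# The shifted histogram is strictly closer (in L1 distance) to the
# benign profile than the original malware histogram.
```

A trained GAN generator plays the same role more flexibly: it learns where to add mass so a discriminator cannot tell generated feature vectors from benign ones.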
A Deep-Learning Based Robust Framework Against Adversarial P.E. and Cryptojacking Malware
This graduate thesis introduces novel, deep-learning based frameworks that are resilient to adversarial P.E. and cryptojacking malware. We propose a method that uses a convolutional neural network (CNN) to classify image representations of malware, which provides robustness against numerous adversarial attacks. Our evaluation concludes that the image-based malware classifier is significantly more robust to adversarial attacks than a state-of-the-art ML-based malware classifier, and remarkably drops the evasion rate of adversarial samples to 0% in certain attacks. Further, we develop MINOS, a novel, lightweight cryptojacking detection system that accurately detects the presence of unwarranted mining activity in real time. MINOS can detect mining activity with a low FNR and FPR, in an average of 25.9 milliseconds, while using a maximum of 4% of CPU and 6.5% of RAM. Therefore, it can be concluded that the frameworks presented in this thesis attain high accuracy, are computationally inexpensive, and are resistant to adversarial perturbations.
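The image representation that the CNN classifier consumes is commonly built by treating each byte of the executable as one grayscale pixel and wrapping the byte stream into fixed-width rows. The sketch below shows that conversion; the row width and zero-padding are assumptions, not the thesis's exact parameters.

```python
# Hedged sketch of a byte-to-grayscale-image conversion for malware
# classification. Width and padding value are illustrative assumptions.
import math

def bytes_to_image(data: bytes, width: int = 16, pad: int = 0):
    """Reshape a raw byte string into a width-column grayscale 'image'
    (a list of rows, each pixel an intensity in 0..255)."""
    height = math.ceil(len(data) / width)
    padded = data + bytes([pad]) * (height * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(height)]

img = bytes_to_image(bytes(range(40)), width=16)
print(len(img), len(img[0]))   # 3 rows x 16 columns (last row zero-padded)
```

A CNN trained on such images picks up on local texture patterns in code and data sections, which is one intuition for why small byte-level perturbations, which move only a few pixels, struggle to flip its decision.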