2 research outputs found
Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection
As an increasing number of deep-learning-based malware scanners have been
proposed, the existing evasion techniques, including code obfuscation and
polymorphic malware, are found to be less effective. In this work, we propose a
reinforcement-learning-based, semantics-preserving
(i.e., functionality-preserving) attack against black-box GNNs (Graph Neural
Networks) for malware detection. The key factor in adversarial malware
generation via semantic Nops insertion is selecting the appropriate
semantic Nops and their corresponding basic blocks.
reinforcement learning to automatically make these "how to select" decisions.
To evaluate the attack, we have trained two kinds of GNNs with five types (i.e.,
Backdoor, Trojan-Downloader, Trojan-Ransom, Adware, and Worm) of Windows
malware samples and various benign Windows programs. The evaluation results
have shown that the proposed attack can achieve a significantly higher evasion
rate than three baseline attacks, namely the semantics-preserving random
instruction insertion attack, the semantics-preserving accumulative instruction
insertion attack, and the semantics-preserving gradient-based instruction
insertion attack.
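The "how to select" decision at the heart of the attack can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's actual agent: the list of semantic Nop sequences, the stand-in detector score, and the greedy search (a simpler stand-in for the reinforcement-learning policy):

```python
# Illustrative x86 semantic Nops: instruction sequences that leave
# program state (and hence functionality) unchanged.
SEMANTIC_NOPS = [
    ["nop"],
    ["push eax", "pop eax"],
    ["xchg eax, eax"],
    ["add eax, 0"],
]

def toy_detector_score(blocks):
    """Hypothetical black-box maliciousness score in [0, 1].
    Stand-in only: decays as instructions are inserted, assuming the
    original program has 10 instructions."""
    total = sum(len(b) for b in blocks)
    return max(0.0, 1.0 - 0.05 * (total - 10))

def greedy_insertion_attack(blocks, budget, threshold=0.5):
    """Greedily pick (basic block, semantic Nop) pairs that most reduce
    the detector score -- a simplified stand-in for the RL policy's
    'which Nop, which block' decisions."""
    for _ in range(budget):
        best, best_score = None, toy_detector_score(blocks)
        for i in range(len(blocks)):
            for nop in SEMANTIC_NOPS:
                trial = [b[:] for b in blocks]
                trial[i].extend(nop)  # insert Nop sequence into block i
                score = toy_detector_score(trial)
                if score < best_score:
                    best, best_score = (i, nop), score
        if best is None:
            break  # no insertion lowers the score any further
        i, nop = best
        blocks[i].extend(nop)
        if best_score < threshold:
            break  # detector no longer flags the sample
    return blocks, toy_detector_score(blocks)
```

On a toy program of two five-instruction blocks, the loop keeps inserting the score-minimizing Nop sequence until the stand-in detector's score drops below the evasion threshold.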
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Recent work has shown that adversarial Windows malware samples - referred to
as adversarial EXEmples in this paper - can bypass machine learning-based
detection relying on static code analysis by perturbing relatively few input
bytes. To preserve malicious functionality, previous attacks either add bytes
to existing non-functional areas of the file, potentially limiting their
effectiveness, or require running computationally-demanding validation steps to
discard malware variants that do not correctly execute in sandbox environments.
In this work, we overcome these limitations by developing a unifying framework
that not only encompasses and generalizes previous attacks against
machine-learning models, but also includes three novel attacks based on
practical, functionality-preserving manipulations to the Windows Portable
Executable (PE) file format. These attacks, named Full DOS, Extend, and Shift,
inject the adversarial payload by respectively manipulating the DOS header,
extending it, and shifting the content of the first section. Our experimental
results show that these attacks outperform existing ones in both white-box and
black-box scenarios, achieving a better trade-off in terms of evasion rate and
size of the injected payload, while also enabling evasion of models that have
been shown to be robust to previous attacks. To facilitate reproducibility of
our findings, we open source our framework and all the corresponding attack
implementations as part of the secml-malware Python library. We conclude this
work by discussing the limitations of current machine learning-based malware
detectors, along with potential mitigation strategies based on embedding domain
knowledge coming from subject-matter experts directly into the learning
process.
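The Full DOS idea can be sketched in a few lines using only standard PE-format facts: a PE file starts with the two-byte `MZ` magic, and the four-byte little-endian `e_lfanew` field at offset 0x3C points to the `PE\0\0` header. Everything else before the PE header is ignored by the Windows loader and can, in principle, carry an adversarial payload. This is a simplified illustration of that manipulation, not the paper's secml-malware implementation, and real parsers may differ, so sandbox validation remains advisable:

```python
import struct

MZ_MAGIC = b"MZ"
E_LFANEW_OFFSET = 0x3C  # location of the 4-byte pointer to the PE header

def editable_dos_offsets(pe_bytes):
    """Offsets before the PE header that the loader ignores: everything
    except the 'MZ' magic and the e_lfanew field itself."""
    assert pe_bytes[:2] == MZ_MAGIC, "not a PE file"
    e_lfanew = struct.unpack_from("<I", pe_bytes, E_LFANEW_OFFSET)[0]
    protected = set(range(2)) | set(range(E_LFANEW_OFFSET, E_LFANEW_OFFSET + 4))
    return [off for off in range(e_lfanew) if off not in protected]

def inject_payload(pe_bytes, payload):
    """Write an adversarial payload into the editable DOS region,
    leaving the magic, e_lfanew, and everything after the PE header intact."""
    slots = editable_dos_offsets(pe_bytes)
    if len(payload) > len(slots):
        raise ValueError("payload larger than editable DOS region")
    data = bytearray(pe_bytes)
    for off, byte in zip(slots, payload):
        data[off] = byte
    return bytes(data)
```

The Extend and Shift attacks then enlarge this editable region, by growing the DOS header or shifting the first section's content, which is why they trade a larger injected payload for a higher evasion rate.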