
    X-ware: a proof of concept malware utilizing artificial intelligence

    Recent years have witnessed dramatic growth in the use of computational intelligence techniques across many domains. Correspondingly, malicious actors can be expected to turn these techniques against current security solutions. Despite the importance of these potential threats, there remains a paucity of evidence on how techniques from the research literature could be weaponized. This article investigates the possibility of combining artificial neural networks and swarm intelligence to generate a new type of malware. We created a proof-of-concept malware named X-ware, which we tested against Windows-based systems. Developing this proof of concept may allow the characteristics of this potential threat to be identified so that mitigation methods can be developed in the future. Furthermore, a method for recording the virus's behavior and propagation throughout a file system is presented. The proposed virus prototype acts as a swarm system with an integrated neural network for its operations. The virus's behavioral data is recorded and presented as a complex network that describes the behavior and communication of the swarm. This paper demonstrates that malware strengthened with computational intelligence is a credible threat. We envisage that our study can assist current and future security researchers in implementing more effective countermeasures.
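    A minimal illustrative sketch of the behaviour-recording idea described above, assuming a networkx graph representation; the function and its traversal logic are hypothetical stand-ins, not X-ware's actual code:

```python
# Hypothetical sketch (not X-ware's code): log how an agent propagates
# through a file system as a directed graph, so its behaviour can be
# analysed with complex-network metrics as the abstract describes.
import os
import networkx as nx

def record_propagation(start_dir: str, max_nodes: int = 100) -> nx.DiGraph:
    """Walk a directory tree and record parent -> child 'visit' edges."""
    graph = nx.DiGraph()
    for root, _dirs, files in os.walk(start_dir):
        for name in files:
            graph.add_edge(root, os.path.join(root, name))
            if graph.number_of_nodes() >= max_nodes:
                return graph
    return graph

if __name__ == "__main__":
    g = record_propagation(".")
    # The degree distribution is a standard complex-network descriptor.
    print(nx.degree_histogram(g))
```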

    MDEA: Malware Detection with Evolutionary Adversarial Learning

    Malware detection systems have used machine learning to detect malware in programs. These applications feed raw or processed binary data to neural network models that classify files as benign or malicious. Even though this approach has proven effective against dynamic changes such as encryption, obfuscation, and packing techniques, it is vulnerable to specific evasion attacks in which small changes to the input data cause misclassification at test time. This paper proposes a new approach: MDEA, an adversarial malware detection model that uses evolutionary optimization to create attack samples and make the network robust against evasion attacks. By retraining the model with the evolved malware samples, its performance improves by a significant margin.
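    A minimal sketch of the evolutionary-adversarial idea described above; `classifier` (a callable returning a malicious-probability score) and `mutate` are hypothetical stand-ins, not MDEA's published implementation:

```python
# Evolve byte-level perturbations that lower a classifier's malicious
# score; the surviving evaders can then be used for retraining.
import random

def mutate(sample: bytes) -> bytes:
    """Append a small random byte sequence (a functionality-preserving edit)."""
    return sample + bytes(random.randrange(256) for _ in range(8))

def evolve_evader(sample, classifier, pop_size=20, generations=10):
    population = [mutate(sample) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: a lower malicious probability is better for the attacker.
        population.sort(key=classifier)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(population, key=classifier)
```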

    DRLDO: A Novel DRL-based De-obfuscation System for Defence Against Metamorphic Malware

    In this paper, we propose a novel mechanism to normalise metamorphic and obfuscated malware down at the opcode level and hence create an advanced metamorphic malware de-obfuscation and defence system. We name this system DRLDO, for Deep Reinforcement Learning based De-Obfuscator. With DRLDO as a sub-component, an existing intrusion detection system (IDS) can be augmented with defensive capabilities against 'zero-day' attacks from obfuscated and metamorphic variants of existing malware. This is important not only because no system to date uses advanced DRL to intelligently and automatically normalise obfuscation down to the opcode level, but also because DRLDO does not mandate any changes to the existing IDS. It does not even require the IDS's classifier to be retrained on a new dataset containing obfuscated samples. Hence DRLDO can easily be retrofitted into any existing IDS deployment. We designed, developed, and conducted experiments to evaluate the system against multiple simultaneous attacks using obfuscations generated from malware samples from a standardised dataset containing multiple generations of malware. Experimental results show that DRLDO successfully made otherwise-undetectable obfuscated variants of the malware detectable by an existing pre-trained malware classifier: the detection probability was raised to 0.6, well above the classifier's cut-off mark, so the obfuscated malware was detected unambiguously. Further, the de-obfuscated variants generated by DRLDO achieved a very high correlation (≈ 0.99) with the base malware, validating that the DRLDO system is actually learning to de-obfuscate and not exploiting a trivial trick.
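    A conceptual sketch of framing opcode-level normalisation as a reinforcement-learning problem, as the abstract describes; the action names, the placeholder rewrite rule, and the bandit-style reward update are illustrative assumptions, not DRLDO's published design:

```python
# State: an opcode sequence. Actions: candidate normalisation rewrites.
# Reward: the downstream detector's score on the rewritten sequence.
import random

ACTIONS = ["remove_dead_code", "collapse_push_pop", "reorder_canonical"]

def normalise(opcodes: list[str], action: str) -> list[str]:
    """Placeholder rewrite rule; a real system would transform opcodes here."""
    if action == "remove_dead_code":
        return [op for op in opcodes if op != "nop"]
    return opcodes

def episode(opcodes, detector, epsilon=0.1, q=None):
    q = q if q is not None else {a: 0.0 for a in ACTIONS}
    for _ in range(len(ACTIONS)):
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(q, key=q.get))
        opcodes = normalise(opcodes, action)
        reward = detector(opcodes)  # e.g. classifier's detection probability
        q[action] += 0.1 * (reward - q[action])  # simple value update
    return opcodes, q
```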

    Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability

    In recent years, the topic of explainable machine learning (ML) has been extensively researched. Until now, this research has focused on regular ML users' use cases, such as debugging an ML model. This paper takes a different posture and shows that adversaries can leverage explainable ML to bypass malware classifiers that use multiple feature types. Previous adversarial attacks against such classifiers only added new features, and did not modify existing ones, to avoid harming the functionality of the modified malware executable. Current attacks use a single algorithm that both selects which features to modify and modifies them blindly, treating all features the same. In this paper, we present a different approach. We split the adversarial example generation task into two parts: first we find the importance of all features for a specific sample using explainability algorithms, and then we conduct a feature-specific modification, feature by feature. To apply our attack in black-box scenarios, we introduce the concept of transferability of explainability: applying explainability algorithms to different classifiers, using different feature subsets and trained on different datasets, still results in a similar subset of important features. We conclude that explainability algorithms can be leveraged by adversaries, and thus advocates of training more interpretable classifiers should consider the trade-off of those classifiers' higher vulnerability to adversarial attacks.
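    A minimal sketch of the two-stage attack described above: rank features by an explainability score, then modify only permitted features in order of importance. The scikit-learn-style `predict_proba` interface and the occlusion-style importance proxy are assumptions for illustration, not the paper's exact explainability algorithm:

```python
# Stage 1: estimate per-feature importance; stage 2: feature-specific edits.
import numpy as np

def importance(model, x: np.ndarray) -> np.ndarray:
    """Occlusion proxy: score drop when each feature is zeroed out."""
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.empty(x.size, dtype=float)
    for i in range(x.size):
        x_off = x.copy()
        x_off[i] = 0.0
        scores[i] = base - model.predict_proba(x_off[None, :])[0, 1]
    return scores

def attack(model, x, modifiable, budget=10):
    x_adv = x.copy()
    ranked = np.argsort(-importance(model, x_adv))
    for i in [j for j in ranked if j in modifiable][:budget]:
        x_adv[i] = 0.0  # feature-specific edit; real attacks must stay functionality-safe
        if model.predict_proba(x_adv[None, :])[0, 1] < 0.5:
            break  # sample now classified as benign
    return x_adv
```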