MalDet: Malware Detection Using Deep Learning and LSTM-based Approach to Classify Malware
Malware detection is central to computer security, yet much recent machine-learning research still relies on manually engineered features. This paper proposes MalDet, a malware detection method that classifies malware with a stacking ensemble, learning from grayscale images with a CNN and from opcode sequences with an LSTM network. In our evaluation, MalDet reaches 99.89% validation accuracy for malware detection, and on the Microsoft malware dataset it outperforms prior work with 99.36% detection accuracy and a significant detection speedup. MalDet classifies samples into nine malware families.
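The grayscale-image representation that MalDet-style pipelines learn from is typically produced by reading a binary's bytes as pixel intensities. A minimal sketch follows; the fixed row width of 64 pixels is an arbitrary choice for illustration, since the abstract does not state MalDet's image dimensions:

```python
import numpy as np

def bytes_to_grayscale(blob: bytes, width: int = 64) -> np.ndarray:
    """Reshape a raw binary into a 2-D grayscale image.

    Each byte (0-255) becomes one pixel; the file is zero-padded so
    it fills complete rows of `width` pixels.
    """
    buf = np.frombuffer(blob, dtype=np.uint8)
    rows = -(-len(buf) // width)            # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(buf)] = buf
    return padded.reshape(rows, width)

# Example: a 200-byte "binary" becomes a 4-row image padded to 256 bytes.
img = bytes_to_grayscale(bytes(range(200)), width=64)
print(img.shape)   # (4, 64)
```

The resulting 2-D array can be fed to a CNN exactly like any other single-channel image.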
Data Augmentation Based Malware Detection using Convolutional Neural Networks
Cyber-attacks have recently become widespread owing to the relentless growth of malware, causing irreversible damage not only to end users but also to corporate computer systems. Ransomware attacks such as WannaCry and Petya specifically target critical infrastructures such as airports and render their operational processes inoperable. Malware has therefore attracted increasing attention in terms of volume, versatility, and intricacy. The defining feature of this type of malware is that it changes shape as it propagates from one computer to another, so standard signature-based detection software fails to identify it: the malware exhibits different characteristics on each contaminated computer. This paper presents image-augmentation-enhanced deep convolutional neural network (CNN) models for detecting malware families in a metamorphic malware environment. The proposed model consists of three components: generating images from malware samples, augmenting those images, and classifying the malware families with a convolutional neural network. In the first component, the collected malware samples are converted from their binary representation to 3-channel images using a windowing technique. The second component creates augmented versions of the images, and the last component builds the classification model. Five different deep convolutional neural network models are used for malware family detection.
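The first component, binary-to-image conversion with a windowing technique, can be sketched as follows. The exact windowing scheme is not specified in the abstract, so the three channels below (raw byte, window mean, window max) are an illustrative assumption rather than the paper's construction:

```python
import numpy as np

def bytes_to_rgb(blob: bytes, width: int = 32, win: int = 3) -> np.ndarray:
    """Map a binary to a 3-channel image via a sliding window.

    For each byte position, channel 0 is the byte itself, channel 1
    the mean and channel 2 the max of a `win`-byte window starting
    there (an assumed scheme; the paper's windowing is not given).
    """
    buf = np.frombuffer(blob, dtype=np.uint8).astype(np.float32)
    n = len(buf)
    rows = -(-n // width)                    # ceiling division
    padded = np.zeros(rows * width, dtype=np.float32)
    padded[:n] = buf
    # Pad the tail so every position has a full window.
    ext = np.concatenate([padded, np.zeros(win - 1, dtype=np.float32)])
    wins = np.lib.stride_tricks.sliding_window_view(ext, win)[:rows * width]
    chans = np.stack([padded, wins.mean(axis=1), wins.max(axis=1)], axis=-1)
    return chans.reshape(rows, width, 3).astype(np.uint8)

img = bytes_to_rgb(bytes(range(96)), width=32)
print(img.shape)   # (3, 32, 3)
```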
MDEA: malware detection with evolutionary adversarial learning
Many applications use machine learning to detect malware: raw or processed binary data is fed to neural network models that classify files as benign or malicious. Although this approach has proved effective against dynamic changes such as encryption, obfuscation and packing, it is vulnerable to evasion attacks in which small changes to the input data cause misclassification at test time. In this paper, I propose MDEA, an adversarial malware detection model that combines a neural network with evolutionary optimization of attack samples to make the network robust against evasion attacks. Retraining the model on the evolved malware samples improves network performance by a large margin.
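The mutate-and-select loop at the heart of such evolutionary evasion can be illustrated with a toy stand-in: a fixed linear scorer replaces MDEA's neural network, and a simple (1+1)-style bit-flip hill climb replaces its evolutionary optimizer. Every specific here (feature count, mutation scheme, scorer) is assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained detector: a fixed linear scorer over 20
# binary features; score > 0 means "classified malicious". (MDEA pairs
# a real neural network with a richer evolutionary search; this only
# illustrates the mutate-select loop.)
w = rng.normal(size=20)

def score(x: np.ndarray) -> float:
    return float(w @ x)

def evolve_evasion(x: np.ndarray, generations: int = 300) -> np.ndarray:
    """(1+1)-style hill climb: flip one feature bit per generation and
    keep the child only if it lowers the malicious score."""
    x = x.copy()
    for _ in range(generations):
        child = x.copy()
        child[rng.integers(len(x))] ^= 1     # mutate: flip one bit
        if score(child) < score(x):          # select: keep improvements
            x = child
        if score(x) <= 0:                    # sample now evades
            break
    return x

sample = (w > 0).astype(np.int64)            # scores clearly "malicious"
evaded = evolve_evasion(sample)
```

Retraining the detector on `evaded`-style samples (and repeating) is the hardening step the abstract describes.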
A STATE OF THE ART SURVEY ON POLYMORPHIC MALWARE ANALYSIS AND DETECTION TECHNIQUES
Nowadays, systems face serious security threats from malicious software, commonly known as malware. Such malware is created with sophisticated, advanced techniques that make it hard to analyse and detect, and it therefore causes a great deal of damage. Polymorphism is one of these advanced techniques, by which malware changes its identity each time it attacks. This paper presents a detailed, systematic and critical review of the available literature, and outlines the research efforts that have been made in polymorphic malware analysis and detection.
A neural network module for protecting information systems from the generation of malicious information
This work is published in accordance with Order No. 311/од of the Rector of НАУ dated 27.05.2021, "On placing qualification works of higher-education students in the university repository". Project supervisor: Professor Казмірчук С.В. Despite the active efforts of antivirus software vendors, computer viruses continue to successfully penetrate users' computer systems around the world and to carry out malicious actions that destroy or steal information. Information systems change every year, in both hardware and software; with them change the ways computer viruses get in, and the malicious programs themselves are modified. The traditional malware detection methods in use today cannot reliably protect computer systems from computer viruses.
Artificial intelligence methods make it possible to create fundamentally new malware detection algorithms, which can significantly raise the security level of computer systems. A neural network automates the process of virus detection and is well suited to classification tasks and to analysing the code of malicious files. This markedly reduces resource consumption while increasing the effectiveness of detecting malicious content. An advantage of this method is that neural networks can give good results with fairly low error rates.
Cyber resilience in supply chain system security using machine learning for threat predictions
Purpose
Cyber resilience in cyber supply chain (CSC) systems security has become essential as attacks, risks and vulnerabilities increase in real-time critical infrastructure systems that leave little time for system failures. Cyber resilience approaches ensure the ability of a supply chain system to prepare for, absorb, recover from and adapt to adverse effects in the complex cyber-physical system (CPS) environment. However, threats within the CSC context can severely disrupt overall business continuity. The paper aims to use machine learning (ML) techniques to predict threats to cyber supply chain systems, improve cyber resilience with a focus on critical assets, and reduce the attack surface.
Design/methodology/approach
The approach follows two main cyber resilience design principles: focus on common critical assets and reduce the attack surface. ML techniques are applied with various classification algorithms to learn a dataset, producing performance accuracies and threat predictions based on the CSC resilience design principles. The critical assets include Cyber Digital, Cyber Physical and physical elements. We combine Logistic Regression, Decision Tree, Naïve Bayes and Random Forest classifiers in a majority vote to predict the results. Finally, we mapped the predicted threats to known attacks to draw inferences for improving resilience of the critical assets.
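The majority-voting step can be sketched directly: collect each classifier's label predictions and flag a sample when at least half of the classifiers vote "threat". The label arrays below are made-up stand-ins for the outputs of the four trained classifiers named in the abstract:

```python
import numpy as np

# Hard majority voting over per-classifier predictions on five test
# samples (1 = threat, 0 = benign). In the real pipeline these rows
# would come from the trained models; here they are illustrative.
votes = np.array([
    [1, 0, 1, 1, 0],   # Logistic Regression
    [1, 0, 0, 1, 0],   # Decision Tree
    [1, 1, 1, 0, 0],   # Naive Bayes
    [0, 0, 1, 1, 1],   # Random Forest
])

# Flag a sample when at least half the classifiers vote "threat"
# (ties among the four classifiers therefore count as a threat).
majority = (votes.sum(axis=0) >= votes.shape[0] / 2).astype(int)
print(majority)   # [1 0 1 1 0]
```

Breaking ties toward "threat" is a conservative choice for a security setting; a different tie-break would simply change the comparison operator.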
Findings
The paper contributes to CSC system resilience through the understanding and prediction of threats. The results show 70% performance accuracy for threat prediction under cyber resilience design principles that focus on critical assets and controls, reducing the threat.
Research limitations/implications
There is a need to understand and predict threats so that appropriate control actions can ensure system resilience. However, owing to the stealthy and dynamic nature of cyber attacks, controls and attributions are limited. This has serious implications for cyber supply chain systems and their cascading impacts.
Practical implications
ML techniques are used on a dataset to analyse and predict the threats based on the CSC resilience design principles.
Social implications
There are no direct social implications; rather, the work has serious implications for organizations and third-party vendors.
Originality/value
The originality of the paper lies in applying cyber resilience design principles that focus on common critical assets, including Cyber Digital, Cyber Physical and physical elements, to determine the attack surface, and in applying ML techniques with various classification algorithms to learn a dataset for performance accuracies and threat predictions that reduce the attack surface.
A Hierarchical Temporal Memory Sequence Classifier for Streaming Data
Real-world data streams often contain concept drift and noise, and by their very nature they frequently include temporal dependencies between data points. Classifying data streams with one or more of these characteristics is exceptionally challenging, and stream classification is currently a primary focus of research in many fields (e.g., intrusion detection, data mining, machine learning). Hierarchical Temporal Memory (HTM) is a type of sequence memory that exhibits some of the predictive and anomaly-detection properties of the neocortex. HTM algorithms train through exposure to a stream of sensory data and are thus suited to continuous online learning. This research developed an HTM sequence classifier aimed at classifying streaming data containing concept drift, noise, and temporal dependencies. The classifier was fed both artificial and real-world data streams and evaluated with the prequential evaluation method. Cost measures for accuracy, CPU time, and RAM usage were calculated for each data stream and compared against a variety of modern classifiers (e.g., Accuracy Weighted Ensemble, Adaptive Random Forest, Dynamic Weighted Majority, Leverage Bagging, Online Boosting ensemble, and Very Fast Decision Tree). The HTM sequence classifier performed well when the data streams contained concept drift, noise, and temporal dependencies, but was not the most suitable of the compared classifiers when the streams lacked temporal dependencies. Finally, this research explored the suitability of the HTM sequence classifier for detecting stalling code within evasive malware. The results were promising: the HTM sequence classifier could predict coding sequences of an executable file by learning the sequence patterns of the x86 EFLAGS register.
The HTM classifier plotted these predictions in a cardiogram-like graph for quick analysis by malware reverse engineers. This research highlights the potential of HTM technology for online classification problems and the detection of evasive malware.
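The prequential (interleaved test-then-train) evaluation used in stream-classification studies like this one can be sketched with a trivial incremental learner; the majority-class baseline below is only a stand-in for the HTM sequence classifier:

```python
from collections import Counter

def prequential_accuracy(stream, classifier):
    """Interleaved test-then-train: predict each item first, then
    learn from it; accuracy accumulates over the whole stream."""
    correct = 0
    for i, (x, y) in enumerate(stream):
        if classifier.predict(x) == y:
            correct += 1
        classifier.learn(x, y)
    return correct / (i + 1)

class MajorityClass:
    """Trivial incremental learner: always predicts the most frequent
    label seen so far (a common baseline in stream mining)."""
    def __init__(self):
        self.counts = Counter()
    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None
    def learn(self, x, y):
        self.counts[y] += 1

# Toy stream: 8 benign items followed by 2 malware items.
stream = [(None, "benign")] * 8 + [(None, "malware")] * 2
acc = prequential_accuracy(stream, MajorityClass())
print(acc)   # 0.7  (first prediction is a miss, as are the last two)
```

Because every item is tested before it is learned, prequential accuracy penalizes slow adaptation to drift, which is exactly what makes it suitable for comparing stream classifiers.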