
    GUIDE FOR THE COLLECTION OF INTRUSION DATA FOR MALWARE ANALYSIS AND DETECTION IN THE BUILD AND DEPLOYMENT PHASE

    During the COVID-19 pandemic, when most businesses were not equipped for remote work and cloud computing, there was a significant surge in ransomware attacks. This study aims to utilize machine learning and artificial intelligence to prevent known and unknown malware threats from being exploited by threat actors when developers build and deploy applications to the cloud. The study follows an experimental quantitative research design built around Aqua, with a Docker image as the experimental sample. Aqua checked the Docker image for malware, sensitive data, Critical/High vulnerabilities, misconfiguration, and OSS licenses. Our analysis of the experiment demonstrated how unapproved images were prevented from running anywhere in our environment based on known vulnerabilities, embedded secrets, OSS licensing, dynamic threat analysis, and secure image configuration. In addition, the forensic data collected in the build and deployment phase comprise exploitable vulnerabilities, Critical/High vulnerability scores, misconfigurations, sensitive data, and root (super user) access. Since Aqua generates a detailed audit record for every event during risk assessment and runtime, we examined two events on the Audit page for our experiment: one raised an alert due to two failed controls (Vulnerability Score, Super User), while the other succeeded, meaning the image was secure to deploy to the production environment. The primary finding of the study is the forensic data associated with these two events on the Audit page, against which Aqua validated our security controls and runtime policies. Finally, the study's conclusions should reduce the likelihood that organizations fall victim to ransomware by preventing and limiting the damage caused by a malware attack.
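The pass/fail gating described in this abstract can be sketched as a simple admission policy over scan findings. The control names, field names, and thresholds below are illustrative assumptions, not Aqua's actual API; a real deployment would consume a scanner's JSON report.

```python
# Illustrative image-admission gate modeled on the controls described above
# (vulnerability score, super user, sensitive data). All field names and
# thresholds are hypothetical stand-ins for a real scanner's report schema.

def evaluate_image(scan):
    """Return (approved, failed_controls) for a scan-result dict."""
    failed = []
    if scan.get("max_vuln_score", 0.0) >= 7.0:  # Critical/High CVSS threshold
        failed.append("Vulnerability Score")
    if scan.get("runs_as_root", False):         # image configured to run as root
        failed.append("Super User")
    if scan.get("secrets_found", 0) > 0:        # embedded credentials or keys
        failed.append("Sensitive Data")
    return (len(failed) == 0, failed)

# Two events mirroring the audit log in the study: one alerted, one approved.
alerted = evaluate_image({"max_vuln_score": 9.8, "runs_as_root": True})
approved = evaluate_image({"max_vuln_score": 3.1, "runs_as_root": False})
```

A gate like this would run in the CI pipeline between image build and deployment, blocking the push when any control fails.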

    MalBoT-DRL: Malware botnet detection using deep reinforcement learning in IoT networks

    In the dynamic landscape of cyber threats, multi-stage malware botnets have surfaced as significant threats of concern. These sophisticated threats can exploit Internet of Things (IoT) devices to undertake an array of cyberattacks, ranging from basic infections to complex operations such as phishing, cryptojacking, and distributed denial of service (DDoS) attacks. Existing machine learning solutions are often constrained by their limited generalizability across various datasets and their inability to adapt to the mutable patterns of malware attacks in real-world environments, a challenge known as model drift. This limitation highlights the pressing need for adaptive Intrusion Detection Systems (IDS) capable of adjusting to evolving threat patterns and new or unseen attacks. This paper introduces MalBoT-DRL, a robust malware botnet detector using deep reinforcement learning. Designed to detect botnets throughout their entire lifecycle, MalBoT-DRL has better generalizability and offers a resilient solution to model drift. This model integrates damped incremental statistics with an attention rewards mechanism, a combination that has not been extensively explored in the literature. This integration enables MalBoT-DRL to dynamically adapt to the ever-changing malware patterns within IoT environments. The performance of MalBoT-DRL has been validated via trace-driven experiments using two representative datasets, MedBIoT and N-BaIoT, resulting in exceptional average detection rates of 99.80% and 99.40% in the early and late detection phases, respectively. To the best of our knowledge, this work introduces one of the first studies to investigate the efficacy of reinforcement learning in enhancing the generalizability of IDS.
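The "damped incremental statistics" mentioned above refer to stream summaries whose past contributions decay over time, so recent traffic dominates the estimate. The sketch below shows the general idea with an exponential half-life; the decay factor and update rule are illustrative choices, not MalBoT-DRL's exact formulation.

```python
import math

# Time-damped running mean/variance of a feature stream. Old observations are
# exponentially down-weighted, so the summary tracks recent traffic behavior.

class DampedStat:
    def __init__(self, half_life=10.0):
        self.decay = math.log(2) / half_life  # per-time-unit decay rate
        self.w = 0.0   # decayed weight (effective sample count)
        self.s = 0.0   # decayed sum
        self.ss = 0.0  # decayed sum of squares
        self.t = 0.0   # timestamp of the last update

    def update(self, x, t):
        d = math.exp(-self.decay * (t - self.t))  # damp prior contributions
        self.w = d * self.w + 1.0
        self.s = d * self.s + x
        self.ss = d * self.ss + x * x
        self.t = t

    def mean(self):
        return self.s / self.w

    def var(self):
        m = self.mean()
        return max(self.ss / self.w - m * m, 0.0)
```

Feeding packet sizes or inter-arrival times through such a summary yields drift-tolerant features without storing the raw stream.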

    Quantifying the need for supervised machine learning in conducting live forensic analysis of emergent configurations (ECO) in IoT environments

    Machine learning has been shown to be a promising approach to mining larger datasets, such as those comprising data from a broad range of Internet of Things devices across complex environments, to solve different problems. This paper surveys existing literature on the potential of using supervised classical machine learning techniques, such as the K-Nearest Neighbour, Support Vector Machine, Naive Bayes, and Random Forest algorithms, in performing live digital forensics for different IoT configurations. There are also a number of challenges associated with the use of machine learning techniques, as discussed in this paper.
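Of the classifiers this survey covers, k-nearest neighbour is the simplest to show end to end. The sketch below is a minimal standard-library illustration of the technique, not code from the paper; the feature vectors and labels are invented.

```python
import math
from collections import Counter

# Minimal k-nearest-neighbour classifier: predict the majority label among the
# k training points closest (Euclidean distance) to the query point.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns a label."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D traffic features labeled benign/attack.
train = [
    ((0.0, 0.0), "benign"), ((0.0, 1.0), "benign"),
    ((5.0, 5.0), "attack"), ((5.0, 6.0), "attack"), ((6.0, 5.0), "attack"),
]
```

In a live-forensics setting the cost of `knn_predict` grows with the training set, which is one reason the survey weighs these classifiers against IoT resource constraints.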

    Navigating the IoT landscape: Unraveling forensics, security issues, applications, research challenges, and future

    Given the exponential expansion of the internet, the possibilities of security attacks and cybercrimes have increased accordingly. However, poorly implemented security mechanisms in Internet of Things (IoT) devices make them susceptible to cyberattacks, which can directly affect users. IoT forensics is thus needed for investigating and mitigating such attacks. While many works have examined IoT applications and challenges, only a few have focused on both the forensic and security issues in IoT. Therefore, this paper reviews forensic and security issues associated with IoT in different fields. Future prospects and challenges in IoT research and development are also highlighted. As demonstrated in the literature, most IoT devices are vulnerable to attacks due to a lack of standardized security measures. Unauthorized users could gain access, compromise data, and even take control of critical infrastructure. To fulfil the security-conscious needs of consumers, IoT can be used to develop a smart home system by designing a FLIP-based system that is highly scalable and adaptable. Utilizing a blockchain-based authentication mechanism with a multi-chain structure can provide additional security protection between different trust domains. Deep learning can be utilized to develop a network forensics framework with a high-performing system for detecting and tracking cyberattack incidents. Moreover, researchers should consider limiting the amount of data created and delivered when using big data to develop IoT-based smart systems. The findings of this review will stimulate academics to seek potential solutions for the identified issues, thereby advancing the IoT field.

    IoT Dataset Validation Using Machine Learning Techniques for Traffic Anomaly Detection

    This article belongs to the Special Issue Sensor Network Technologies and Applications with Wireless Sensor Devices. With advancements in engineering and science, the application of smart systems is increasing, generating faster growth of IoT network traffic. The limitations of IoT's restricted power and computing devices also raise concerns about security vulnerabilities. Machine learning-based techniques have recently gained credibility through successful application to the detection of network anomalies, including in IoT networks. However, machine learning techniques cannot work without representative data. Given the scarcity of IoT datasets, the DAD dataset emerged as an instrument for understanding the behavior of dedicated IoT-MQTT networks. This paper aims to validate the DAD dataset by applying Logistic Regression, Naive Bayes, Random Forest, AdaBoost, and Support Vector Machine models to detect traffic anomalies in IoT. To obtain the best results, techniques for handling unbalanced data, feature selection, and grid search for hyperparameter optimization have been used. The experimental results show that the proposed dataset can achieve a high detection rate in all the experiments, providing a best mean accuracy of 0.99 for the tree-based models with a low false-positive rate, ensuring effective anomaly detection. This project was funded by the Accreditation, Structuring, and Improvement of Consolidated Research Units and Singular Centers programme (ED431G/01), funded by Vocational Training of the Xunta de Galicia, endowed with EU FEDER funds, and by the Spanish Ministry of Science and Innovation via the project PID2019-111388GB-I00.
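One of the unbalanced-data remedies mentioned above can be sketched as random oversampling: duplicating minority-class samples until every class is the same size. This is an illustrative standard-library variant, not the paper's exact pipeline (which might instead use SMOTE or class weighting).

```python
import random

# Randomly oversample minority classes so that each class reaches the size of
# the largest class. Duplication is with replacement and seeded for repeatability.

def oversample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_label.values())
    out_x, out_y = [], []
    for y, xs in by_label.items():
        padded = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(padded)
        out_y.extend([y] * target)
    return out_x, out_y
```

Rebalancing is applied to the training split only; evaluating on oversampled data would inflate the reported detection rate.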

    AIS for Malware Detection in a Realistic IoT System: Challenges and Opportunities

    With the expansion of the digital world, the number of Internet of Things (IoT) devices is growing dramatically. IoT devices have limited computational power and small memory. Consequently, existing and complex security methods are not suitable for detecting unknown malware attacks in IoT networks. This has become a major concern in the advent of increasingly unpredictable and innovative cyberattacks. In this context, artificial immune systems (AISs) have emerged as an effective malware detection mechanism with low requirements for computation and memory. In this research, we first validate the malware detection results of a recent AIS solution using multiple datasets with different types of malware attacks. Next, we examine the potential gains and limitations of promising AIS solutions under realistic implementation scenarios. We design a realistic IoT framework mimicking real-life IoT system architectures. The objective is to evaluate the AIS solutions' performance with regard to the system constraints. We demonstrate that AIS solutions succeed in detecting unknown malware in the most challenging conditions. Furthermore, the systemic results with different system architectures reveal the AIS solutions' ability to transfer learning between IoT devices. Transfer learning is a pivotal feature in the presence of highly constrained devices in the network. More importantly, this work highlights that previously published AIS performance results, which were obtained in a simulation environment, cannot be taken at face value. In reality, the AIS's malware detection accuracy for IoT systems is 91% in the most restricted designed system, compared to the 99% accuracy rate reported in the simulation experiment.
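A core AIS mechanism is negative selection: random detectors are kept only if they do not match normal ("self") traffic, and any sample a surviving detector matches is flagged as anomalous. The bit-string encoding, Hamming matching rule, and radius below are illustrative choices for a toy demonstration, not the specific AIS solution the paper evaluates.

```python
import random

# Toy negative-selection algorithm over fixed-length bit strings.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def train_detectors(self_set, n_bits=8, n_detectors=20, radius=1, seed=1):
    """Generate random detectors, discarding any within `radius` of self."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = tuple(rng.randint(0, 1) for _ in range(n_bits))
        if all(hamming(d, s) > radius for s in self_set):  # must avoid self
            detectors.append(d)
    return detectors

def is_anomalous(sample, detectors, radius=1):
    """A sample matched by any detector is treated as non-self (anomalous)."""
    return any(hamming(sample, d) <= radius for d in detectors)
```

The appeal for IoT is that both training and matching need only bit operations and a small detector table, keeping computation and memory low.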

    Performance Evaluation of an Intelligent and Optimized Machine Learning Framework for Attack Detection

    In recent decades, the size and complexity of network traffic data have risen significantly, which increases the likelihood of network penetration. One of today's most serious advanced security concerns is the botnet. Botnets are the mechanism behind several online assaults, including Distributed Denial of Service (DDoS), spam, rebate fraud, phishing, and malware attacks. Several methodologies have been created over time to address these issues. Existing intrusion detection techniques have trouble processing data from high-speed networks and are unable to identify recently launched assaults. Network traffic categorization has also been hampered by repetitive and irrelevant features. Identifying the critical attributes and removing the unimportant ones with a feature selection approach can reduce the dimensionality of the feature space and resolve the problem. Therefore, this article develops an innovative network attack recognition model combining an optimization strategy with a machine learning framework, namely the Grey Wolf with Artificial Bee Colony optimization-based Support Vector Machine (GWABC-SVM) model. Efficient attribute selection is accomplished using a novel Grey Wolf with Artificial Bee Colony optimization approach, and botnet DDoS attack detection is then accomplished through a Support Vector Machine. This article conducted an experimental assessment of the machine learning approaches on the UNBS-NB 15 and KDD99 databases for botnet DDoS attack identification. The proposed optimized machine learning (ML) based network attack detection framework is evaluated in the final phase for its effectiveness in detecting possible threats. The main advantage of employing SVM is that it offers a wide range of possibilities for intrusion detection program development in difficult, complicated situations such as cloud computing. In comparison to conventional ML-based models, the proposed technique achieves a better detection rate of 99.62% while being less time-consuming and more robust.
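The role the Grey Wolf / Artificial Bee Colony search plays above is wrapper-style feature selection: candidate feature subsets are scored with a classifier and the search keeps what improves the score. The sketch below substitutes a simple greedy search for the metaheuristic and a caller-supplied scoring function for the SVM, purely to illustrate the wrapper loop.

```python
# Greedy wrapper feature selection: repeatedly add the feature that most
# improves the subset score, stopping when no addition helps. This is a
# stand-in for the GWABC metaheuristic; `score` stands in for SVM accuracy.

def greedy_select(n_features, score):
    """score(subset_tuple) -> float; returns (chosen_features, best_score)."""
    chosen, best = [], score(())
    improved = True
    while improved:
        improved = False
        for f in range(n_features):
            if f in chosen:
                continue
            cand = tuple(sorted(chosen + [f]))
            s = score(cand)
            if s > best:  # keep the feature only if it raises the score
                chosen, best, improved = list(cand), s, True
    return sorted(chosen), best

# Hypothetical scorer: features 0 and 2 are informative; every extra feature
# carries a small cost, mimicking a penalty on subset size.
toy_score = lambda sub: sum(1.0 for f in sub if f in (0, 2)) - 0.1 * len(sub)
```

Metaheuristics such as GWO and ABC explore subsets more globally than this greedy loop, which is why they are preferred when features interact.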

    Advancing IoT Security with Tsetlin Machines: A Resource-Efficient Anomaly Detection Approach

    The number of IoT devices is rapidly increasing, and the nature of these devices leaves them vulnerable to attacks. As of today, there is no general security solution that meets the requirements of running with limited resources on devices with a large variety of use cases. Traditional AI models are able to classify and distinguish between benign and malignant network traffic. However, they require more resources than IoT devices can provide and cannot train on-chip once deployed. This thesis introduces the Tsetlin Machine as a potential solution to this problem. As a binary, propositional-logic model, the Tsetlin Machine is compatible with hardware and can perform predictions in near real time on limited resources, making it a suitable candidate for intrusion detection in IoT devices. To assess the viability of the Tsetlin Machine as an IDS, we developed custom data loaders for the benchmark datasets CIC-IDS2017, KDD99, NSL-KDD, UNSW-NB15, and UNSW-Bot-IoT. We ran hyperparameter searches and numerous experiments to determine the performance of the Tsetlin Machine on each dataset. We discovered that preprocessing the data by converting each data value to a 32-bit binary number and imposing an upper bound on class sizes proved to be an effective strategy. Furthermore, we compared the performance of the Tsetlin Machine against various classifiers from the scikit-learn library and Lazy Predict. The results show that the Tsetlin Machine's performance was on par with, if not superior to, other machine learning models, indicating its potential as a reliable method for anomaly detection in IoT devices. However, future work is required to determine its viability in a real-life setting, running on limited resources and classifying real-time data.
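The preprocessing step described above, converting each data value to a 32-bit binary number, can be sketched as follows. The MSB-first bit order and the flattening of rows are illustrative assumptions about how the Boolean literals are laid out, not the thesis's exact loader code.

```python
# Expand integer feature values into 32-bit Boolean vectors, the input form a
# Tsetlin Machine operates on (each bit becomes a literal and its negation).

def to_bits32(value):
    """Expand a non-negative integer into a list of 32 bits, MSB first."""
    return [(value >> (31 - i)) & 1 for i in range(32)]

def binarize_row(row):
    """Flatten a row of integer features into one Boolean feature vector."""
    bits = []
    for v in row:
        bits.extend(to_bits32(v))
    return bits
```

Continuous features would first be scaled to non-negative integers; after binarization, a row of n features becomes a vector of 32n Booleans.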

    Assessing and augmenting SCADA cyber security: a survey of techniques

    SCADA systems monitor and control critical infrastructures of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation and declining costs of internet connectivity have transformed these systems from strictly isolated to highly interconnected networks. The connectivity provides immense benefits such as reliability, scalability and remote access, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems thus necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure the protection of vital assets is to adopt a viewpoint similar to an attacker's to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before they are exploited. This paper surveys tools and techniques to uncover SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided along with their applicability.
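The attacker-viewpoint assessment described above typically begins with connectivity probing: checking which services answer on a host. The sketch below is a minimal, generic TCP probe for illustration only; real SCADA assessments favour passive, purpose-built tooling, because actively probing live control systems can disrupt them and must never be done without authorisation.

```python
import socket

# Minimal TCP reachability probe: attempt a connection with a short timeout
# and report whether the port accepted it. connect_ex returns 0 on success.

def port_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

An assessor would run such probes only against an isolated test replica of the SCADA network, comparing the answering services against an approved baseline.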