129 research outputs found
APT Adversarial Defence Mechanism for Industrial IoT Enabled Cyber-Physical System
The objective of Advanced Persistent Threat (APT) attacks is to exploit Cyber-Physical Systems (CPSs) in combination with the Industrial Internet of Things (I-IoT) by using fast attack methods. Machine learning (ML) techniques have shown potential in identifying APT attacks in autonomous and malware detection systems. However, detecting hidden APT attacks in the I-IoT-enabled CPS domain and achieving real-time detection accuracy remain significant challenges for these techniques. To overcome these issues, a new approach is proposed based on the Graph Attention Network (GAN), a multi-dimensional algorithm that captures behavioral features along with contextual information that other methods do not deliver. This approach uses masked self-attentional layers to address the limitations of prior Deep Learning (DL) methods that rely on convolutions. Two datasets, the DAPT2020 malware dataset and the Edge I-IoT dataset, are used to evaluate the approach, which attains detection accuracies of 96.97% and 95.97%, with prediction times of 20.56 seconds and 21.65 seconds, respectively. The GAN approach is compared with conventional ML algorithms, and simulation results demonstrate a significant performance improvement over these algorithms in the I-IoT-enabled CPS realm.
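The masked self-attention the abstract refers to can be illustrated with a minimal from-scratch sketch (not the paper's implementation; the graph, weights, and dimensions below are toy values chosen for illustration). Attention coefficients are computed only over each node's graph neighbours, which is the "mask":

```python
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def leaky_relu(z, slope=0.2):
    return z if z > 0 else slope * z

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def gat_layer(features, adj, W, a):
    """One masked self-attentional (graph attention) layer.

    features : list of node feature vectors
    adj      : dict {node index: set of neighbour indices}
    W        : shared projection matrix (out_dim x in_dim)
    a        : attention vector of length 2 * out_dim
    """
    h = [matvec(W, x) for x in features]          # shared linear projection
    out = []
    for i in range(len(h)):
        nbrs = sorted(adj[i] | {i})               # mask: neighbours + self-loop
        # unnormalised score per neighbour: a . [h_i || h_j]
        scores = [leaky_relu(sum(ak * hk for ak, hk in zip(a, h[i] + h[j])))
                  for j in nbrs]
        alpha = softmax(scores)                   # normalised over neighbours only
        out.append([sum(al * h[j][k] for al, j in zip(alpha, nbrs))
                    for k in range(len(h[i]))])
    return out

# Toy 3-node graph with 2-dimensional features
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: {1}, 1: {0, 2}, 2: {1}}
W = [[0.5, 0.1], [0.2, 0.4]]                      # 2x2 projection
a = [0.3, -0.2, 0.1, 0.05]                        # length 2 * out_dim
embeddings = gat_layer(feats, adj, W, a)
print(len(embeddings), len(embeddings[0]))        # 3 2
```

In contrast to convolution-based layers, the weighting of each neighbour here is learned per edge rather than fixed by a kernel, which is the limitation of prior DL methods that the abstract mentions.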
A Comparative Analysis of Machine Learning Techniques for IoT Intrusion Detection
The digital transformation faces tremendous security challenges. In particular, the growing number of cyber-attacks targeting Internet of Things (IoT) systems underlines the need for reliable detection of malicious network activity. This paper presents a comparative analysis of supervised, unsupervised, and reinforcement learning techniques on nine malware captures of the IoT-23 dataset, considering both binary and multi-class classification scenarios. The developed models consisted of Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Isolation Forest (iForest), Local Outlier Factor (LOF), and a Deep Reinforcement Learning (DRL) model based on a Double Deep Q-Network (DDQN), adapted to the intrusion detection context. The most reliable performance was achieved by LightGBM. Nonetheless, iForest displayed good anomaly detection results, and the DRL model demonstrated the possible benefits of employing this methodology to continuously improve detection. Overall, the obtained results indicate that the analyzed techniques are well suited for IoT intrusion detection.

The present work was done and funded in the scope of the European Union's Horizon 2020 research and innovation program, under project SeCoIIA (grant agreement no. 871967). This work has also received funding from UIDP/00760/2020.
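As an illustration of the kind of unsupervised anomaly detection evaluated above, here is a minimal isolation-forest sketch in plain Python (not the paper's code; the toy data and tree parameters are invented for illustration). Points that isolate in fewer random splits are scored as more anomalous:

```python
import random

def build_itree(X, depth=0, max_depth=8):
    """Grow one isolation tree by recursive random axis-aligned splits."""
    if len(X) <= 1 or depth >= max_depth:
        return ('leaf', len(X))
    dim = random.randrange(len(X[0]))
    lo = min(x[dim] for x in X)
    hi = max(x[dim] for x in X)
    if lo == hi:                                   # cannot split this subset
        return ('leaf', len(X))
    split = random.uniform(lo, hi)
    left = [x for x in X if x[dim] < split]
    right = [x for x in X if x[dim] >= split]
    return ('node', dim, split,
            build_itree(left, depth + 1, max_depth),
            build_itree(right, depth + 1, max_depth))

def path_length(tree, x, depth=0):
    """Number of splits needed to isolate x in one tree."""
    if tree[0] == 'leaf':
        return depth
    _, dim, split, left, right = tree
    return path_length(left if x[dim] < split else right, x, depth + 1)

def anomaly_score(forest, x):
    """Shorter average isolation path -> more anomalous."""
    return sum(path_length(t, x) for t in forest) / len(forest)

random.seed(0)
normal = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
forest = [build_itree(random.sample(normal, 64)) for _ in range(25)]
center = anomaly_score(forest, [0.0, 0.0])
outlier = anomaly_score(forest, [8.0, 8.0])
print(center, outlier)   # the far outlier should isolate in fewer splits
```

Production iForest implementations additionally normalise the path length by the expected depth for the subsample size; this sketch keeps only the core isolation idea.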
Anomaly Detection in Industrial Machinery using IoT Devices and Machine Learning: a Systematic Mapping
Anomaly detection is critical in the smart industry for preventing equipment
failure, reducing downtime, and improving safety. Internet of Things (IoT) has
enabled the collection of large volumes of data from industrial machinery,
providing a rich source of information for Anomaly Detection. However, the
volume and complexity of data generated by the Internet of Things ecosystems
make it difficult for humans to detect anomalies manually. Machine learning
(ML) algorithms can automate anomaly detection in industrial machinery by
analyzing the generated data. Moreover, each technique has specific strengths
and weaknesses depending on the nature of the data and the corresponding
systems. However, the
current systematic mapping studies on Anomaly Detection primarily focus on
addressing network and cybersecurity-related problems, with limited attention
given to the industrial sector. Additionally, these studies do not cover the
challenges involved in using ML for Anomaly Detection in industrial machinery
within the context of the IoT ecosystems. This paper presents a systematic
mapping study on Anomaly Detection for industrial machinery using IoT devices
and ML algorithms to address this gap. The study comprehensively evaluates 84
relevant studies spanning from 2016 to 2023, providing an extensive review of
Anomaly Detection research. Our findings identify the most commonly used
algorithms, preprocessing techniques, and sensor types. Additionally, this
review identifies application areas and points to future challenges and
research opportunities.
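The core idea behind many of the surveyed ML approaches for machinery data can be shown with a simple rolling z-score detector over a sensor stream (a generic illustration, not taken from any surveyed study; the window size, threshold, and readings are arbitrary):

```python
import math
from collections import deque

def zscore_detector(stream, window=30, threshold=3.0):
    """Flag readings more than `threshold` std-devs from the rolling mean."""
    buf = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(buf) == buf.maxlen:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var) or 1e-9       # guard against zero variance
            flags.append(abs(x - mean) / std > threshold)
        else:
            flags.append(False)                # not enough history yet
        buf.append(x)
    return flags

# steady vibration reading with a single spike at index 50
readings = [1.0] * 50 + [9.0] + [1.0] * 10
flags = zscore_detector(readings)
print(flags.index(True))   # 50
```

This is the simplest end of the spectrum; the studies mapped above mostly apply learned models (autoencoders, forests, SVMs) where a fixed statistical threshold is too rigid.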
Five Facets of 6G: Research Challenges and Opportunities
Whilst the fifth-generation (5G) systems are being rolled out across the
globe, researchers have turned their attention to the exploration of radical
next-generation solutions. At this early evolutionary stage we survey five main
research facets of this field, namely {\em Facet~1: next-generation
architectures, spectrum and services, Facet~2: next-generation networking,
Facet~3: Internet of Things (IoT), Facet~4: wireless positioning and sensing,
as well as Facet~5: applications of deep learning in 6G networks.} In this
paper, we have provided a critical appraisal of the literature of promising
techniques ranging from the associated architectures, networking, applications
as well as designs. We have portrayed a plethora of heterogeneous architectures
relying on cooperative hybrid networks supported by diverse access and
transmission mechanisms. The vulnerabilities of these techniques are also
addressed and carefully considered to highlight the most promising
future research directions. Additionally, we have listed a rich suite of
learning-driven optimization techniques. We conclude by observing the
evolutionary paradigm-shift that has taken place from pure single-component
bandwidth-efficiency, power-efficiency or delay-optimization towards
multi-component designs, as exemplified by the twin-component ultra-reliable
low-latency mode of the 5G system. We advocate a further evolutionary step
towards multi-component Pareto optimization, which requires the exploration of
the entire Pareto front of all optimal solutions, where none of the components
of the objective function may be improved without degrading at least one of the
other components.
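The multi-component Pareto optimization advocated above can be made concrete with a short sketch that extracts the Pareto front from a set of hypothetical operating points (the (power, delay) cost tuples are invented for illustration):

```python
def pareto_front(points):
    """Return the points not dominated by any other point (minimisation).

    A point p dominates q if p is <= q in every component and strictly
    better in at least one -- exactly the multi-component trade-off
    (e.g. power-cost vs. delay-cost) discussed above: no front member
    can improve one component without degrading another.
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q)) and
                any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (power, delay) operating points of a link
designs = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5)]
print(pareto_front(designs))   # [(1, 9), (2, 7), (4, 4), (6, 3)]
```

Here (3, 8) and (7, 5) fall off the front because cheaper-and-faster alternatives exist, while the remaining points form the trade-off curve a system designer must choose from.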
Application of Machine Learning Approaches in Intrusion Detection System
The rapid development of technology, while making life more straightforward, reveals several safety concerns. The growth of the Internet over the years has increased the number of attacks carried out over it. An Intrusion Detection System (IDS) is one supporting layer for data protection: it helps maintain a healthy network environment and guards against malicious activity. Recently, IDSs have used Machine Learning (ML) to recognize and distinguish security threats. This paper presents a comparative analysis of different ML algorithms used in IDSs and aims to identify intrusions with SVM, J48, and Naive Bayes; detected intrusions are also classified. The experiments use the KDD-CUP dataset, and the algorithms' performance was checked with the Weka software. The comparison of J48, SVM, and Naive Bayes showed that J48 achieved the highest accuracy (99.96%).
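As a rough illustration of one of the compared classifiers, here is a from-scratch Gaussian Naive Bayes sketch on toy traffic features (not the paper's Weka setup; the feature values and labels are invented for illustration):

```python
import math
from collections import defaultdict

def train_gnb(X, y):
    """Fit per-class feature means/variances and log-priors."""
    groups = defaultdict(list)
    for x, label in zip(X, y):
        groups[label].append(x)
    model = {}
    for label, rows in groups.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[label] = (math.log(n / len(X)), means, vars_)
    return model

def predict_gnb(model, x):
    """Pick the class with the highest Gaussian log-likelihood + prior."""
    def loglik(label):
        prior, means, vars_ = model[label]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * s) - (v - m) ** 2 / (2 * s)
            for v, m, s in zip(x, means, vars_))
    return max(model, key=loglik)

# toy traffic features: (packet rate, mean packet size)
X = [(10, 500), (12, 480), (11, 520),     # normal flows
     (90, 60),  (95, 80),  (85, 70)]      # attack flows
y = ["normal"] * 3 + ["attack"] * 3
model = train_gnb(X, y)
print(predict_gnb(model, (11, 505)), predict_gnb(model, (88, 75)))
# normal attack
```

The "naive" independence assumption (each feature modelled separately per class) is what keeps training and prediction cheap, at the cost of ignoring feature correlations; tree learners like J48 capture such interactions, which is consistent with its higher reported accuracy.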
An anomaly-based intrusion detection system for internet of medical things networks
Over the past few years, the healthcare sector has been transformed by the rise of the Internet of Things (IoT) and the introduction of Internet of Medical Things (IoMT) technology, whose purpose is the improvement of the patient's quality of life. Nevertheless, the heterogeneous and resource-constrained characteristics of IoMT networks make them vulnerable to a wide range of threats. Thus, novel security mechanisms, such as accurate and efficient anomaly-based intrusion detection systems (AIDSs) that consider the inherent limitations of IoMT networks, need to be developed before IoMT networks reach their full potential in the market. Towards this direction, in this paper, we propose an efficient and effective anomaly-based intrusion detection system (AIDS) for IoMT networks. The proposed AIDS leverages host-based and network-based techniques to reliably collect log files from the IoMT devices and the gateway, as well as traffic from the IoMT edge network, while taking the computational cost into consideration. It relies on machine learning (ML) techniques to detect abnormalities in the collected data and thus identify malicious incidents in the IoMT network. A set of six popular ML algorithms was tested and evaluated for anomaly detection in the proposed AIDS, and the evaluation results showed which of them are the most suitable.
Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G
The next wave of wireless technologies is proliferating in connecting things
among themselves as well as to humans. In the era of the Internet of things
(IoT), billions of sensors, machines, vehicles, drones, and robots will be
connected, making the world around us smarter. The IoT will encompass devices
that must wirelessly communicate a diverse set of data gathered from the
environment for myriad new applications. The ultimate goal is to extract
insights from this data and develop solutions that improve quality of life and
generate new revenue. Providing large-scale, long-lasting, reliable, and near
real-time connectivity is the major challenge in enabling a smart connected
world. This paper provides a comprehensive survey on existing and emerging
communication solutions for serving IoT applications in the context of
cellular, wide-area, as well as non-terrestrial networks. Specifically,
wireless technology enhancements for providing IoT access in fifth-generation
(5G) and beyond cellular networks, and communication networks over the
unlicensed spectrum are presented. Aligned with the main key performance
indicators of 5G and beyond 5G networks, we investigate solutions and standards
that enable energy efficiency, reliability, low latency, and scalability
(connection density) of current and future IoT networks. The solutions include
grant-free access and channel coding for short-packet communications,
non-orthogonal multiple access, and on-device intelligence. Further, a vision
of new paradigm shifts in communication networks in the 2030s is provided, and
the integration of the associated new technologies like artificial
intelligence, non-terrestrial networks, and new spectra is elaborated. Finally,
future research directions toward beyond 5G IoT networks are pointed out.
Addressing Pragmatic Challenges in Utilizing AI for Security of Industrial IoT
Industrial control systems (ICSs) are an essential part of every nation's critical infrastructure and have long been utilized to supervise industrial machines and processes. Today's ICSs are substantially different from the information technology (IT) devices of a decade ago. The integration of Internet of Things (IoT) technology has made them more efficient and optimized, improved automation, and increased quality and compliance. Now, they are a sub-domain (and arguably the most critical part) of the IoT, called the industrial IoT (IIoT).
In the past, to secure ICSs from malicious outside attacks, these systems were isolated from the outside world. However, recent advances, increased connectivity with corporate networks, and the use of internet communications to transmit information more conveniently have introduced the possibility of cyber-attacks against these systems. Due to the sensitive nature of industrial applications, security is the foremost concern.
We discuss why, despite the exceptional performance of artificial intelligence (AI) and machine learning (ML), industry leaders still have a hard time utilizing these models in practice as standalone units. The goal of this dissertation is to address some of these challenges and help pave the way for smarter and more modern security solutions in these systems. Specifically, we focus on three challenges: the data scarcity of the AI, the black-box nature of the AI, and the high computational load of the AI.
Industrial companies almost never release their network data, because they are obligated to follow confidentiality laws and user privacy restrictions. Hence, real-world IIoT datasets are not available to the security research community, and we face a data scarcity challenge in IIoT security research. In this domain, researchers usually have to resort to commercial or public datasets that are not specific to IIoT. In our work, we have developed a real-world testbed that resembles an actual industrial plant, emulating a popular industrial system used in water treatment processes, so that we could collect datasets containing realistic traffic to conduct our research.
Several characteristics of IIoT networks are unique to them. We have provided an extensive study to identify these characteristics and incorporate them in the design. We have gathered information on relevant cyber-attacks in IIoT systems and run them against the testbed to gather realistic datasets containing both normal and attack traffic, analogous to real industrial network traffic. IIoT systems also use their own specialized communication protocols; we have implemented one of the most popular of these in our dataset. Another attribute that distinguishes the security of these systems from others is imbalanced data: the number of attack samples is significantly lower than the enormous volume of normal traffic that flows through the system daily. We have made sure our datasets comply with all of these IIoT-specific attributes.
Another challenge that we address here is the black-box nature of learning models, which creates hurdles in generating adequate trust in their decisions; thus, they are seldom utilized as standalone units in high-risk IIoT applications. Explainable AI (XAI) has gained increasing interest in recent years to help with this problem. However, most of the research done so far focuses on image applications or is very slow. For applications such as IIoT security, we deal with numerical data, and low latency is of utmost importance. In this dissertation, we propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST). TRUST is model-agnostic, high-performing, and suitable for numerical applications. We demonstrate its superiority over another popular XAI model in terms of speed and its ability to successfully explain the AI's behavior.
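The abstract does not give TRUST's internals, but the general flavor of a fast, statistics-based explanation for numerical data can be sketched as follows (a hypothetical illustration, not the TRUST algorithm; the feature names and class statistics are invented). Features are ranked by how far the flagged instance deviates from the reference class's statistics:

```python
def explain(instance, class_stats, top_k=2):
    """Rank features by standardised deviation from a reference class.

    instance    : {feature_name: observed value}
    class_stats : {feature_name: (mean, std)} for the reference class.
    A large |z| marks the feature as a main driver of the decision.
    """
    z = {f: abs(instance[f] - m) / (s or 1e-9)
         for f, (m, s) in class_stats.items()}
    return sorted(z.items(), key=lambda kv: -kv[1])[:top_k]

# hypothetical statistics of the "normal traffic" class
normal_stats = {"packet_rate": (11.0, 1.0), "payload_len": (500.0, 20.0)}
suspicious = {"packet_rate": 90.0, "payload_len": 495.0}
print(explain(suspicious, normal_stats))
# [('packet_rate', 79.0), ('payload_len', 0.25)]
```

Because the explanation is a handful of arithmetic operations per feature rather than thousands of model evaluations (as in perturbation-based XAI methods), this style of reasoning meets the low-latency requirement the dissertation emphasizes.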
When dealing with IoT technology, especially industrial IoT, we face a massive amount of data streaming to and from the IoT devices. In addition, the availability and reliability constraints of industrial systems require them to operate at a fast pace and avoid creating any bottleneck in the system. The high computational load of complex AI models can become a burden when they must handle a large volume of data yet cannot produce results as fast as required. In this dissertation, we utilize distributed computing in the form of an edge/cloud structure to address these problems. We propose Anomaly Detection using Distributed AI (ADDAI), which can easily span out geographically to cover a large number of IoT sources. Due to its distributed nature, it guarantees critical IIoT requirements such as high speed, robustness against a single point of failure, low communication overhead, privacy, and scalability. We formulate the communication cost, which is minimized, and quantify the improvement in performance.
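The edge/cloud split behind such a design can be sketched as follows (a generic illustration, not the actual ADDAI formulation; the node baselines, batches, and alert threshold are invented). Each edge node scores its own traffic and ships only a compact summary upward, which is what keeps communication overhead low and raw data private:

```python
import statistics

class EdgeNode:
    """Scores traffic locally; only compact summaries leave the edge."""
    def __init__(self, baseline):
        self.mean = statistics.mean(baseline)
        self.std = statistics.pstdev(baseline) or 1e-9
    def score(self, batch):
        return [abs(x - self.mean) / self.std for x in batch]
    def summary(self, batch):
        scores = self.score(batch)
        return {"n": len(batch), "max_score": max(scores)}

def cloud_alert(summaries, threshold=4.0):
    """Cloud tier sees only summaries, never raw traffic."""
    return [i for i, s in enumerate(summaries) if s["max_score"] > threshold]

# two edge nodes with different traffic baselines
edges = [EdgeNode([10, 11, 9, 10, 10]), EdgeNode([50, 52, 48, 51, 49])]
batches = [[10, 12, 40], [50, 49, 51]]     # node 0 sees a spike
summaries = [e.summary(b) for e, b in zip(edges, batches)]
print(cloud_alert(summaries))   # [0]
```

A node failure affects only its own summaries, and adding a node adds a fixed, small upload per batch, illustrating the robustness and scalability properties claimed above.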