Towards a robust, effective and resource efficient machine learning technique for IoT security monitoring.
The application of Deep Neural Networks (DNNs) for monitoring cyberattacks in Internet of Things (IoT) systems has gained significant attention in recent years. However, achieving optimal detection performance through DNN training has posed challenges due to computational intensity and vulnerability to adversarial samples. To address these issues, this paper introduces an optimization method that combines regularization and simulated micro-batching. This approach enables the training of DNNs in a robust, efficient, and resource-friendly manner for IoT security monitoring. Experimental results demonstrate that the proposed DNN model, including its performance in Federated Learning (FL) settings, exhibits improved attack detection and resistance to adversarial perturbations compared to benchmark baseline models and conventional Machine Learning (ML) methods typically employed in IoT security monitoring. Notably, the proposed method achieves significant reductions of 79.54% and 21.91% in memory and time usage, respectively, when compared to the benchmark baseline in simulated virtual worker environments. Moreover, in realistic testbed scenarios, the proposed method reduces memory footprint by 6.05% and execution time by 15.84%, while maintaining accuracy levels that are superior or comparable to state-of-the-art methods. These findings validate the feasibility and effectiveness of the proposed optimization method for enhancing the efficiency and robustness of DNN-based IoT security monitoring.
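The abstract does not spell out the algorithm, but the core idea of simulated micro-batching can be sketched as gradient accumulation: the full batch is processed in small micro-batches so that only one micro-batch of activations is live at a time, and the accumulated gradient (plus an L2 regularization term) drives a single weight update. The sketch below uses a plain logistic-regression model on toy data; all names and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_microbatched(X, y, micro_batch=8, lr=0.1, l2=1e-3, epochs=50):
    """One optimizer step per full batch, with gradients accumulated
    micro-batch by micro-batch (so only one micro-batch of activations
    is live at a time), plus an L2 regularization term."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for start in range(0, n, micro_batch):
            xb = X[start:start + micro_batch]
            yb = y[start:start + micro_batch]
            p = sigmoid(xb @ w)
            grad += xb.T @ (p - yb)      # accumulate; no update yet
        w -= lr * (grad / n + l2 * w)    # single update: avg grad + L2
    return w

# Toy stand-in for IoT traffic features: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
w = train_microbatched(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

The memory saving comes from the inner loop: peak activation memory scales with `micro_batch`, not with the full batch size, while the update itself is mathematically the same full-batch step.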
Machine Learning-Enabled IoT Security: Open Issues and Challenges Under Advanced Persistent Threats
Despite its technological benefits, the Internet of Things (IoT) has cyber weaknesses due to vulnerabilities in the wireless medium. Machine learning (ML)-based methods are widely used against cyber threats in IoT networks with promising performance. The advanced persistent threat (APT) is a prominent means for cybercriminals to compromise networks, and it is notable for its long-term and harmful characteristics. However, it is difficult for ML-based approaches to achieve promising detection performance on APT attacks, because such attacks form an extremely small percentage of normal traffic. Few surveys fully investigate APT attacks in IoT networks, owing to the lack of public datasets covering all types of APT attacks. It is therefore worthwhile to bridge the state of the art in network attack detection with APT attack detection in a comprehensive review article. This survey article reviews the security challenges in IoT networks and presents the well-known attacks, APT attacks, and threat models in IoT systems. Signature-based, anomaly-based, and hybrid intrusion detection systems for IoT networks are also summarized. The article highlights statistical insights regarding frequently applied ML-based methods against network intrusion, alongside the number of attack types detected. Finally, open issues and challenges for common network intrusion and APT attacks are presented for future research.

Comment: ACM Computing Surveys, 2022, 35 pages, 10 Figures, 8 Tables
Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges
Artificial General Intelligence (AGI), possessing the capacity to comprehend,
learn, and execute tasks with human cognitive abilities, engenders significant
anticipation and intrigue across scientific, commercial, and societal arenas.
This fascination extends particularly to the Internet of Things (IoT), a
landscape characterized by the interconnection of countless devices, sensors,
and systems, collectively gathering and sharing data to enable intelligent
decision-making and automation. This research embarks on an exploration of the
opportunities and challenges towards achieving AGI in the context of the IoT.
Specifically, it starts by outlining the fundamental principles of IoT and the
critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it
delves into AGI fundamentals, culminating in the formulation of a conceptual
framework for AGI's seamless integration within IoT. The application spectrum
for AGI-infused IoT is broad, encompassing domains ranging from smart grids,
residential environments, manufacturing, and transportation to environmental
monitoring, agriculture, healthcare, and education. However, adapting AGI to
resource-constrained IoT settings necessitates dedicated research efforts.
Furthermore, the paper addresses constraints imposed by limited computing resources, intricacies associated with large-scale IoT communication, as well as the critical concerns pertaining to security and privacy.
SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment
In recent years, malware detection has become an active research topic in the
area of Internet of Things (IoT) security. The principle is to exploit
knowledge from large quantities of continuously generated malware. Existing algorithms rely on available malware features of IoT devices and lack real-time prediction capabilities. More research is thus required on malware detection to cope with real-time misclassification of input IoT data.
Motivated by this, in this paper we propose an adversarial self-supervised
architecture for detecting malware in IoT networks, SETTI, considering samples
of IoT network traffic that may not be labeled. In the SETTI architecture, we
design three self-supervised attack techniques, namely Self-MDS, GSelf-MDS and
ASelf-MDS. The Self-MDS method considers the IoT input data and the adversarial
sample generation in real-time. The GSelf-MDS builds a generative adversarial
network model to generate adversarial samples in the self-supervised structure.
Finally, ASelf-MDS utilizes three well-known perturbation sample techniques to develop adversarial malware and inject it into the self-supervised architecture. We also apply a defence method, namely adversarial self-supervised training, to protect the malware detection architecture against the injection of malicious samples. To validate the attack
and defence algorithms, we conduct experiments on two recent IoT datasets:
IoT23 and NBIoT. Comparison of the results shows that in the IoT23 dataset, the
Self-MDS method has the most damaging consequences from the attacker's point of
view by reducing the accuracy rate from 98% to 74%. In the NBIoT dataset, the
ASelf-MDS method is the most devastating algorithm that can plunge the accuracy
rate from 98% to 77%.

Comment: 20 pages, 6 figures, 2 Tables, Submitted to ACM Transactions on Multimedia Computing, Communications, and Applications
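The abstract names SETTI's perturbation techniques only at a high level. As one concrete illustration of perturbation-based adversarial sample generation (in the spirit of Self-MDS/ASelf-MDS, but not the paper's exact method), a single FGSM-style step against a simple linear traffic classifier can be sketched as follows; the weights and sample values are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps=0.5):
    """One FGSM step: shift each feature in the direction that
    increases the classifier's loss (sign of the input gradient)."""
    p = sigmoid(x @ w)
    grad_x = (p - y) * w           # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# Hypothetical trained weights for a 4-feature traffic classifier.
w = np.array([1.5, -0.8, 0.6, 2.0])
x = np.array([0.9, -0.4, 0.2, 1.1])   # a sample labelled malicious (y = 1)
x_adv = fgsm_perturb(x, 1.0, w)

clean_score = sigmoid(x @ w)     # confidence on the clean sample
adv_score = sigmoid(x_adv @ w)   # confidence drops after the perturbation
```

Adversarial self-supervised training, the defence the paper applies, would then mix such perturbed samples back into the training stream so the detector learns to score them correctly.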
Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey
Ubiquitous in-home health monitoring systems have become popular in recent
years due to the rise of digital health technologies and the growing demand for
remote health monitoring. These systems enable individuals to increase their
independence by allowing them to monitor their health from the home and by
allowing more control over their well-being. In this study, we perform a
comprehensive survey on this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing
technologies, communication technologies, intelligent and computing systems,
and application areas. Specifically, we provide an overview of in-home health
monitoring systems and identify their main components. We then present each
component and discuss its role within in-home health monitoring systems. In
addition, we provide an overview of the practical use of ubiquitous
technologies in the home for health monitoring. Finally, we identify the main
challenges and limitations based on the existing literature and provide eight
recommendations for potential future research directions toward the development
of in-home health monitoring systems. We conclude that despite extensive research on the various components needed for effective in-home health monitoring systems, their development still requires further investigation.

Comment: 35 pages, 5 figures
Role of Artificial Intelligence in the Internet of Things (IoT) Cybersecurity
In recent years, the use of the Internet of Things (IoT) has increased exponentially, and cybersecurity concerns have increased along with it. On the cutting edge of cybersecurity is Artificial Intelligence (AI), which is used for the development of complex algorithms to protect networks and systems, including IoT systems. However, cyber-attackers have figured out how to exploit AI and have even begun to use adversarial AI in order to carry out cybersecurity attacks. This review paper compiles information from several other surveys and research papers regarding IoT, AI, and attacks with and against AI, and explores the relationship between these three topics with the purpose of comprehensively presenting and summarizing relevant literature in these fields.
Robust Learning Enabled Intelligence for the Internet-of-Things: A Survey From the Perspectives of Noisy Data and Adversarial Examples
This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record.

The Internet-of-Things (IoT) has been widely adopted in a range of verticals, e.g., automation, health, energy and manufacturing. Many of the applications in these sectors, such as self-driving cars and remote surgery, are critical and high-stakes applications, calling for advanced machine learning (ML) models for data analytics. Essentially, the training and testing data collected by massive numbers of IoT devices may contain noise (e.g., abnormal data, incorrect labels and incomplete information) and adversarial examples. This requires high robustness of ML models to make reliable decisions for IoT applications. Research on robust ML has received tremendous attention from both academia and industry in recent years. This paper will investigate the state-of-the-art and representative works on robust ML models that can enable high resilience and reliability of IoT intelligence. Two aspects of robustness will be focused on: when the training data of ML models contains noise, and when it contains adversarial examples, both of which typically arise in real-world IoT scenarios. In addition, the reliability of both neural networks and reinforcement learning frameworks will be investigated; both of these machine learning paradigms have been widely used in handling data in IoT scenarios. Potential research challenges and open issues will be discussed to provide future research directions.

Funding: Engineering and Physical Sciences Research Council (EPSRC)
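One standard mitigation for noisy labels in this line of work is label smoothing, which softens one-hot targets so a model cannot become arbitrarily confident about possibly mislabeled samples. This is a generic technique, not necessarily the one the survey highlights; the sketch below is a minimal illustration.

```python
import numpy as np

def smoothed_targets(y, num_classes, eps=0.1):
    """Label smoothing: replace one-hot targets with a mixture of the
    one-hot vector and the uniform distribution, so the loss tolerates
    a fraction of mislabeled IoT training samples."""
    onehot = np.eye(num_classes)[y]
    return onehot * (1.0 - eps) + eps / num_classes

# Two samples with (possibly noisy) labels 0 and 2 over 3 classes.
t = smoothed_targets(np.array([0, 2]), num_classes=3)
```

Each smoothed row still sums to 1, but the correct class gets `1 - eps + eps / num_classes` rather than 1, which bounds the gradient magnitude a single wrong label can induce.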
MalBoT-DRL: Malware botnet detection using deep reinforcement learning in IoT networks
In the dynamic landscape of cyber threats, multi-stage malware botnets have surfaced as significant threats of concern. These sophisticated threats can exploit Internet of Things (IoT) devices to undertake an array of cyberattacks, ranging from basic infections to complex operations such as phishing, cryptojacking, and distributed denial of service (DDoS) attacks. Existing machine learning solutions are often constrained by their limited generalizability across various datasets and their inability to adapt to the mutable patterns of malware attacks in real-world environments, a challenge known as model drift. This limitation highlights the pressing need for adaptive Intrusion Detection Systems (IDS), capable of adjusting to evolving threat patterns and new or unseen attacks. This paper introduces MalBoT-DRL, a robust malware botnet detector using deep reinforcement learning. Designed to detect botnets throughout their entire lifecycle, MalBoT-DRL has better generalizability and offers a resilient solution to model drift. This model integrates damped incremental statistics with an attention rewards mechanism, a combination that has not been extensively explored in the literature. This integration enables MalBoT-DRL to dynamically adapt to the ever-changing malware patterns within IoT environments. The performance of MalBoT-DRL has been validated via trace-driven experiments using two representative datasets, MedBIoT and N-BaIoT, resulting in exceptional average detection rates of 99.80% and 99.40% in the early and late detection phases, respectively. To the best of our knowledge, this work introduces one of the first studies to investigate the efficacy of reinforcement learning in enhancing the generalizability of IDS.
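The abstract describes damped incremental statistics only at a high level. A common construction for such features (seen, e.g., in stream-based IoT feature extractors) maintains decayed running sums so the mean and variance track recent traffic; older observations fade with factor 2^(-λ·Δt). The class below is a sketch of that generic construction, not MalBoT-DRL's exact formulation, and all parameter values are illustrative.

```python
class DampedIncStat:
    """Exponentially damped incremental statistics over a packet stream.
    Each update first decays the accumulated weight, sum and sum of
    squares by 2 ** (-lam * dt), then folds in the new observation, so
    mean() and var() emphasize recent traffic."""

    def __init__(self, lam=0.1):
        self.lam = lam
        self.w = 0.0       # damped observation count
        self.s1 = 0.0      # damped sum of values
        self.s2 = 0.0      # damped sum of squared values
        self.t_last = None

    def update(self, x, t):
        if self.t_last is not None:
            decay = 2.0 ** (-self.lam * (t - self.t_last))
            self.w *= decay
            self.s1 *= decay
            self.s2 *= decay
        self.t_last = t
        self.w += 1.0
        self.s1 += x
        self.s2 += x * x

    def mean(self):
        return self.s1 / self.w

    def var(self):
        return max(self.s2 / self.w - self.mean() ** 2, 0.0)

# Three normal-sized packets, then a burst: the damped mean is pulled
# strongly toward the most recent (large) observation.
stat = DampedIncStat(lam=1.0)
for t, size in enumerate([100.0, 100.0, 100.0, 1500.0]):
    stat.update(size, float(t))
```

The appeal for an IDS is that these statistics are O(1) in both memory and update time per packet, which is what makes per-flow features feasible on constrained IoT gateways.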