    Design of Multi-View Based Email Classification for IoT Systems via Semi-Supervised Learning

    Suspicious emails are a major threat to Internet of Things (IoT) security: they aim to induce users to click links that redirect them to phishing webpages. To protect IoT systems, email classification is an essential mechanism for separating spam from legitimate emails. In the literature, most email classification approaches adopt supervised learning algorithms that require a large amount of labeled data for classifier training. However, data labeling is very time consuming and expensive, so only a very small labeled set is typically available in practice, which greatly degrades the effectiveness of email classification. To mitigate this problem, we develop an email classification approach based on multi-view disagreement-based semi-supervised learning. The idea is that multiple views offer richer information for classification, which is often ignored in the literature, while semi-supervised learning helps leverage both labeled and unlabeled data. In the evaluation, we investigate the performance of our approach on datasets and in real network environments. Experimental results demonstrate that multi-view classification achieves better performance than single-view classification, and that our approach outperforms existing similar algorithms.
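    As a rough illustration of the disagreement-based multi-view idea, the sketch below implements plain two-view co-training in Python: two classifiers, each trained on its own feature view (e.g., email header features vs. body features), pseudo-label confident unlabeled emails for each other. The classifier choice, confidence threshold, and labeling rule are illustrative assumptions, not the authors' exact algorithm.

        # Minimal two-view co-training sketch (illustrative, not the paper's exact method).
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def co_train(X1, X2, y, labeled, pool, rounds=10, conf=0.95):
            """X1, X2: the two feature views; y: label array, updated in place."""
            clf1, clf2 = GaussianNB(), GaussianNB()
            labeled, pool = list(labeled), list(pool)
            for _ in range(rounds):
                clf1.fit(X1[labeled], y[labeled])
                clf2.fit(X2[labeled], y[labeled])
                classes = clf1.classes_      # both views see the same label set
                if not pool:
                    break
                p1 = clf1.predict_proba(X1[pool])
                p2 = clf2.predict_proba(X2[pool])
                newly = []
                for j, i in enumerate(pool):
                    # Adopt a pseudo-label when either view is confident;
                    # the more confident view teaches the other.
                    if max(p1[j].max(), p2[j].max()) >= conf:
                        src = p1[j] if p1[j].max() >= p2[j].max() else p2[j]
                        y[i] = classes[src.argmax()]
                        newly.append(i)
                if not newly:
                    break
                labeled += newly
                pool = [i for i in pool if i not in newly]
            return clf1, clf2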

    Machine Learning-Enabled IoT Security: Open Issues and Challenges Under Advanced Persistent Threats

    Despite its technological benefits, the Internet of Things (IoT) has cyber weaknesses due to vulnerabilities in the wireless medium. Machine learning (ML)-based methods are widely used against cyber threats in IoT networks, with promising performance. The advanced persistent threat (APT) is a prominent means for cybercriminals to compromise networks, and it is notable for its long-term and harmful characteristics. However, it is difficult for ML-based approaches to achieve promising detection performance on APT attacks, because such attacks make up an extremely small percentage of traffic compared to normal traffic. Few surveys fully investigate APT attacks in IoT networks, owing to the lack of public datasets covering all types of APT attacks. It is therefore worthwhile to bridge the state of the art in network attack detection with APT attack detection in a comprehensive review article. This survey reviews the security challenges in IoT networks and presents the well-known attacks, APT attacks, and threat models in IoT systems. Signature-based, anomaly-based, and hybrid intrusion detection systems for IoT networks are summarized. The article highlights statistical insights on frequently applied ML-based methods against network intrusion, alongside the number of attack types detected. Finally, open issues and challenges for common network intrusion and APT attacks are presented for future research.
    Comment: ACM Computing Surveys, 2022, 35 pages, 10 figures, 8 tables

    A Regularized Cross-Layer Ladder Network for Intrusion Detection in Industrial Internet-of-Things

    As part of the Big Data trend, the ubiquitous use of the Internet of Things (IoT) in industrial environments has generated a significant amount of network traffic. In this type of industrial IoT (IIoT) network, where equipment is highly heterogeneous, security is a fundamental issue, so it is very important to detect likely intrusion behaviors. Furthermore, since the proportion of labeled data records in IoT environments is small, it is challenging to detect the various attacks and intrusions accurately. This investigation builds a semi-supervised ladder network model for intrusion detection in IIoT. The model considers the manifold distribution of high-dimensional data and incorporates a manifold regularization constraint in the decoder of the ladder network. Meanwhile, feature propagation between layers is strengthened by adding more cross-layer connections to the model. On this basis, a random attention-based data fusion approach generates global features for intrusion detection. Experiments on CIC-IDS2018 show that the proposed approach recognizes intrusions with a lower false alarm rate, while model training remains time-efficient.
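    The manifold regularization constraint described above can be sketched as a graph-Laplacian penalty added to the decoder loss: embeddings of inputs that are close on the data manifold are pushed to stay close. A minimal PyTorch version follows; the kNN graph construction, Gaussian weighting, and loss combination are assumptions for illustration, not the paper's exact formulation.

        # Graph-Laplacian manifold penalty (illustrative sketch).
        import torch

        def manifold_penalty(z, x, k=5, sigma=1.0):
            """Penalize embeddings z that break the neighborhood structure of inputs x."""
            d = torch.cdist(x, x)                        # pairwise input distances
            w = torch.exp(-d.pow(2) / (2 * sigma ** 2))  # Gaussian affinities
            nbrs = d.topk(k + 1, largest=False).indices[:, 1:]  # k nearest, excluding self
            mask = torch.zeros_like(w).scatter_(1, nbrs, 1.0)
            w = w * mask                                 # keep only kNN edges
            dz = torch.cdist(z, z).pow(2)                # pairwise embedding distances
            return (w * dz).sum() / w.sum().clamp_min(1e-8)

        # total_loss = supervised_loss + reconstruction_loss + lam * manifold_penalty(z, x)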

    Metaverse-IDS: deep learning-based intrusion detection system for Metaverse-IoT networks

    Combining the metaverse and the Internet of Things (IoT) will lead to the development of diverse, virtual, and more advanced networks in the future. The integration of IoT networks with the metaverse will enable more meaningful connections between the 'real' and 'virtual' worlds, allowing for real-time data analysis, access, and processing. However, these metaverse-IoT networks will face numerous security and privacy threats. Intrusion Detection Systems (IDS) offer an effective means of early detection for such attacks. Nevertheless, the metaverse generates substantial volumes of data due to its interactive nature and the multitude of user interactions within virtual environments, posing a computational challenge for building an intrusion detection system. To address this challenge, this paper introduces an innovative intrusion detection system model based on deep learning. The model aims to detect most attacks targeting metaverse-IoT communications and combines two techniques: Kernel Principal Component Analysis (KPCA) for attack feature extraction and Convolutional Neural Networks (CNN) for attack recognition and classification. The efficiency of the proposed IDS model is assessed using two widely recognized benchmark datasets, BoT-IoT and ToN-IoT, which contain various attacks potentially targeting IoT communications. Experimental results confirm the effectiveness of the proposed IDS model in identifying 12 classes of attacks relevant to metaverse-IoT, achieving a remarkable accuracy of 99.8% and a False Negative Rate (FNR) below 0.2. Furthermore, when compared with other models in the literature, the IDS model demonstrates superior performance in attack detection accuracy.
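    A compact sketch of the two-stage pipeline the abstract describes, KPCA for feature extraction followed by a small 1D CNN classifier, is given below in Python. The component count, kernel choice, and layer sizes are illustrative assumptions, not the paper's reported configuration.

        # KPCA feature extraction + 1D CNN classifier (illustrative sketch).
        import torch.nn as nn
        from sklearn.decomposition import KernelPCA

        def kpca_features(X_train, X_test, n_components=20):
            # Fit the kernel map on training traffic only, then project both sets.
            kpca = KernelPCA(n_components=n_components, kernel="rbf")
            return kpca.fit_transform(X_train), kpca.transform(X_test)

        class AttackCNN(nn.Module):
            def __init__(self, n_features, n_classes):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Flatten(),
                    nn.Linear(32 * (n_features // 2), n_classes),
                )

            def forward(self, x):                 # x: (batch, n_features)
                return self.net(x.unsqueeze(1))   # add a channel dim for Conv1d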

    CLASSIFICATION BASED ON SEMI-SUPERVISED LEARNING: A REVIEW

    Semi-supervised learning is the class of machine learning that combines supervised and unsupervised learning in the learning process; conceptually, it sits between learning from labeled data and learning from unlabeled data. In many cases, it enables the large amounts of unlabeled data that are available to be utilized alongside the usually limited collections of labeled data. In standard classification methods in machine learning, only a labeled collection is used to train the classifier. Moreover, labeled instances are difficult to acquire, since they require the effort of human annotators to assign each instance its label. Unlabeled data, by contrast, is fairly easy to collect, but on its own offers few safe ways to be exploited. By utilizing a large number of unsupervised inputs along with the supervised inputs, semi-supervised learning addresses this issue and creates a good training sample. Since semi-supervised learning requires less human effort and allows greater precision, both in theory and in practice, it is of critical interest.
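    For concreteness, one of the canonical semi-supervised schemes such a review covers is self-training: train on the labeled set, pseudo-label confident unlabeled examples, and retrain. The minimal Python sketch below uses an assumed confidence threshold and base classifier purely for illustration.

        # Minimal self-training loop (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def self_train(X_l, y_l, X_u, threshold=0.9, max_rounds=10):
            clf = LogisticRegression(max_iter=1000)
            for _ in range(max_rounds):
                clf.fit(X_l, y_l)
                if len(X_u) == 0:
                    break
                proba = clf.predict_proba(X_u)
                confident = proba.max(axis=1) >= threshold
                if not confident.any():
                    break  # nothing is confident enough; stop absorbing examples
                # Move confidently pseudo-labeled examples into the labeled set.
                X_l = np.vstack([X_l, X_u[confident]])
                y_l = np.concatenate([y_l, clf.classes_[proba[confident].argmax(axis=1)]])
                X_u = X_u[~confident]
            return clf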

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shopfloor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above-listed challenges, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing. The chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, and automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    SkipGateNet: A Lightweight CNN-LSTM Hybrid Model with Learnable Skip Connections for Efficient Botnet Attack Detection in IoT

    The rise of the Internet of Things (IoT) has led to increased security risks, particularly from botnet attacks that exploit IoT device vulnerabilities. This situation necessitates effective Intrusion Detection Systems (IDS) that are accurate, lightweight, and fast (i.e., with low inference time), designed particularly to detect botnet attacks on resource-constrained IoT devices. This paper proposes SkipGateNet, a novel deep learning model designed for detecting Mirai and Bashlite botnet attacks in resource-constrained IoT and fog computing environments. SkipGateNet is a lightweight, fast model combining 1D Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) layers. The novelty of the model lies in the integration of 'learnable skip connections'. These connections feature gating mechanisms that enhance detection by focusing on relevant features and ignoring irrelevant ones. They add adaptability to the architecture, performing feature selection and propagating only essential features to deeper layers. Tested on the N-BaIoT dataset, SkipGateNet efficiently detects ten types of botnet attacks, with a remarkable test accuracy of 99.91%. It is also compact (2596.87 KB) and demonstrates a quick inference time of 8.0 milliseconds, suitable for real-time deployment in resource-limited settings. The evaluation considers precision, recall, accuracy, and F1 score, along with statistical reliability measures such as Cohen's Kappa Coefficient and Matthews Correlation Coefficient, which highlight its reliability and effectiveness for IoT security challenges. The paper also compares SkipGateNet to existing models and four other deep learning architectures, including two sequential CNN architectures, a simple CNN+LSTM architecture, and a CNN+LSTM with standard skip connections. SkipGateNet surpasses all of them in accuracy and inference time, demonstrating its superiority in addressing IoT security issues.
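    The 'learnable skip connection' idea can be sketched as a sigmoid gate that decides, per feature, how much of the skipped activation to propagate forward. The PyTorch module below is a plausible reading of the abstract, with the gate form and combination rule as assumptions rather than SkipGateNet's exact design.

        # Gated (learnable) skip connection (illustrative sketch).
        import torch.nn as nn

        class GatedSkip(nn.Module):
            def __init__(self, channels):
                super().__init__()
                # 1x1 convolution + sigmoid produces a per-feature gate in [0, 1].
                self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())

            def forward(self, skip, deep):   # both: (batch, channels, length)
                g = self.gate(skip)
                return deep + g * skip       # propagate only the gated skip features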

    Trustworthy Edge Machine Learning: A Survey

    The convergence of Edge Computing (EC) and Machine Learning (ML), known as Edge Machine Learning (EML), has become a highly regarded research area that utilizes distributed network resources to perform joint training and inference in a cooperative manner. However, EML faces various challenges due to resource constraints, heterogeneous network environments, and the diverse service requirements of different applications, which together affect the trustworthiness of EML in the eyes of its stakeholders. This survey provides a comprehensive summary of definitions, attributes, frameworks, techniques, and solutions for trustworthy EML. Specifically, we first emphasize the importance of trustworthy EML within the context of Sixth-Generation (6G) networks. We then discuss the necessity of trustworthiness from the perspective of challenges encountered during deployment and real-world application scenarios. Subsequently, we provide a preliminary definition of trustworthy EML and explore its key attributes. Following this, we introduce fundamental frameworks and enabling technologies for trustworthy EML systems, and provide an in-depth literature review of the latest solutions to enhance the trustworthiness of EML. Finally, we discuss the corresponding research challenges and open issues.
    Comment: 27 pages, 7 figures, 10 tables