21 research outputs found

    A Self-Attention-Based Deep Convolutional Neural Networks for IIoT Networks Intrusion Detection

    The Industrial Internet of Things (IIoT) comprises a variety of systems, smart devices, and an extensive range of communication protocols. Hence, these systems are susceptible to privacy and security challenges, making them prime targets for malicious attacks that can harm the overall system. Privacy breaches are a notable concern within the realm of IIoT. Various intrusion detection systems based on machine learning (ML) and deep learning (DL) have been introduced to detect malicious activities within these networks and identify attacks. Existing ML- and DL-based models face challenges when confronted with highly imbalanced training data. Repetitive data in network datasets inflates model performance, as the model has already encountered much of the test set during training. Moreover, these models lose performance when confronted with datasets that repeat similar data across various classes, where only the class labels differ. To overcome the challenges inherent in existing systems, this paper presents a self-attention-based deep convolutional neural network (SA-DCNN) model designed to monitor IIoT networks and detect malicious activities. Additionally, a two-step cleaning method has been implemented to eliminate redundancy within the training data, considering both intra-class and cross-class samples. The performance of the SA-DCNN model is assessed using the IoTID20 and Edge-IIoTset datasets. Furthermore, the proposed study is demonstrated through a comprehensive comparison with other ML and DL models, as well as against relevant studies, showcasing the superior performance and efficacy of the proposed model.
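
    The two-step cleaning described above can be sketched as follows: first drop exact duplicate samples within each class, then drop samples whose feature vectors also occur under a different label. This is a minimal illustration with hypothetical column names, not the paper's actual implementation.

```python
import pandas as pd

def two_step_clean(df: pd.DataFrame, label_col: str = "label") -> pd.DataFrame:
    # Step 1 (intra-class): drop exact duplicate rows (same features AND label).
    df = df.drop_duplicates()
    # Step 2 (cross-class): after step 1, any rows still sharing identical
    # features must differ only in their label; drop all of them.
    feature_cols = [c for c in df.columns if c != label_col]
    return df[~df.duplicated(subset=feature_cols, keep=False)]

# Toy example: one intra-class duplicate, one cross-class conflict.
df = pd.DataFrame({"f1": [1, 1, 2, 2, 3],
                   "f2": [0, 0, 5, 5, 7],
                   "label": ["a", "a", "a", "b", "a"]})
cleaned = two_step_clean(df)
```

    In the toy example the duplicated (1, 0, "a") row collapses to one copy, and both (2, 5, *) rows are removed because the same features carry conflicting labels.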

    Interactive Effect of Learning Rate and Batch Size to Implement Transfer Learning for Brain Tumor Classification

    For classifying brain tumors with small datasets, the knowledge-based transfer learning (KBTL) approach has performed very well in attaining an optimized classification model. However, its successful implementation is typically affected by different hyperparameters, specifically the learning rate (LR), the batch size (BS), and their joint influence. In general, most existing research could not achieve the desired performance because it addressed only one hyperparameter at a time. This study adopted a Cartesian product matrix-based approach to interpret the effect of both hyperparameters and their interaction on model performance. To evaluate their impact, 56 two-tuple hyperparameters from the Cartesian product matrix were used as inputs to perform an extensive exercise, comprising 504 simulations for three cutting-edge architecture-based pre-trained deep learning (DL) models: ResNet18, ResNet50, and ResNet101. Additionally, the impact was assessed using three well-known optimizers (solvers): SGDM, Adam, and RMSProp. The performance assessment showed that the framework is efficient in attaining optimal values of the two important hyperparameters (LR and BS) and, consequently, an optimized model with an accuracy of 99.56%. Our results also showed that both hyperparameters have a significant impact individually as well as interactively, with a trade-off in between. Finally, the evaluation space was extended with a statistical ANOVA analysis to validate the main findings. The F-test returned p < 0.05, confirming that both hyperparameters not only have a significant independent impact on model performance, but also interact across combinations of their levels.
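
    The arithmetic of the experiment design is easy to reproduce: 56 (LR, BS) pairs crossed with 3 models and 3 optimizers gives the 504 simulations reported above. The specific LR and BS levels below are assumptions for illustration; only the pair count of 56 is stated in the abstract.

```python
from itertools import product

# Hypothetical grid levels: 8 learning rates x 7 batch sizes = 56 two-tuples.
learning_rates = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 3e-3, 3e-4]
batch_sizes = [4, 8, 16, 32, 64, 128, 256]
models = ["ResNet18", "ResNet50", "ResNet101"]
solvers = ["SGDM", "Adam", "RMSProp"]

hyper_pairs = list(product(learning_rates, batch_sizes))   # 8 * 7 = 56 pairs
simulations = list(product(hyper_pairs, models, solvers))  # 56 * 3 * 3 = 504 runs
```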

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable by more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.

    An Empirical Study on Customer Churn Behaviours Prediction Using Arabic Twitter Mining Approach

    With the rising growth of the telecommunication industry, the customer churn problem has grown in significance as well. One of the most critical challenges in the data and voice telecommunication service industry is retaining customers, thus reducing customer churn by increasing customer satisfaction. Telecom companies have depended on historical customer data to measure customer churn. However, historical data does not reveal current customer satisfaction or the future likelihood of switching between telecom companies. The related research reveals that many studies have focused on developing churn prediction models based on historical data. These models face delay issues and lack the timeliness needed to target customers in real time. In addition, they lack the ability to tap into Arabic-language social media for real-time analysis. As a result, a customer churn model based on real-time analytics is needed. Therefore, this study offers a new approach that uses social media mining to predict customer churn in the telecommunication field. This represents the first work using Arabic Twitter mining to predict churn in Saudi telecom companies. The newly proposed method proved its efficiency based on various standard metrics and on a comparison with ground-truth outcomes provided by a telecom company.
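
    The real-time signal the study mines can be pictured as flagging tweets that express switching intent. The cue list and rule below are purely illustrative (the actual study mines Arabic tweets with its own feature set and classifier, not disclosed in the abstract).

```python
# Hypothetical churn cues; a real system would use learned features, not keywords.
CHURN_CUES = {"cancel", "switch", "leaving", "worst service"}

def flags_churn(tweet: str) -> bool:
    """Return True if the tweet contains any illustrative churn cue."""
    text = tweet.lower()
    return any(cue in text for cue in CHURN_CUES)
```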

    Modified Equilibrium Optimization Algorithm With Deep Learning-Based DDoS Attack Classification in 5G Networks

    5G networks offer high-speed, low-latency communication for various applications. As 5G networks introduce new capabilities and support a wide range of services, they also become more vulnerable to different kinds of cyberattacks, particularly Distributed Denial of Service (DDoS) attacks. Effective DDoS attack classification in 5G networks is a critical aspect of ensuring the security, availability, and performance of these advanced communication infrastructures. Recently, machine learning (ML) and deep learning (DL) models have been employed for accurate DDoS attack detection. In this aspect, this study designs a Modified Equilibrium Optimization Algorithm with Deep Learning based DDoS Attack Classification (MEOADL-ADC) method in 5G networks. The goal of the MEOADL-ADC technique is the automated classification of DDoS attacks in the 5G network. The MEOADL-ADC technique follows a three-stage process: feature selection, classification, and hyperparameter tuning. Primarily, the MEOADL-ADC technique employs a MEOA-based feature selection approach. Next, the MEOADL-ADC technique utilizes the long short-term memory (LSTM) model for the classification of DDoS attacks. Finally, the tunicate swarm algorithm (TSA) is exploited to adjust the hyperparameters of the LSTM model. The design of MEOA-based feature selection and TSA-based hyperparameter tuning shows the novelty of the work. The experimental outcome of the MEOADL-ADC method was tested on a benchmark dataset, and the outcomes indicate the superiority of the MEOADL-ADC algorithm over current methods, with a maximum accuracy of 97.60%.
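
    The first stage, metaheuristic feature selection, amounts to searching over binary feature masks scored by a fitness function. The sketch below uses a generic random search as a stand-in (NOT the actual MEOA update rules) and a placeholder fitness; in the paper, fitness would come from the downstream LSTM's validation performance.

```python
import random

def select_features(n_features, fitness, iters=50, seed=0):
    """Search binary feature masks, keeping the highest-fitness mask found."""
    rng = random.Random(seed)
    best_mask, best_fit = None, float("-inf")
    for _ in range(iters):
        mask = [rng.random() < 0.5 for _ in range(n_features)]
        f = fitness(mask)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask

# Placeholder fitness that simply prefers smaller feature subsets.
mask = select_features(10, fitness=lambda m: -sum(m))
```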

    Modeling of Botnet Detection Using Barnacles Mating Optimizer with Machine Learning Model for Internet of Things Environment

    Owing to the development and expansion of energy-aware sensing devices and autonomous and intelligent systems, the Internet of Things (IoT) has gained remarkable growth and found uses in several day-to-day applications. However, IoT devices are highly prone to botnet attacks. To mitigate this threat, a lightweight and anomaly-based detection mechanism that can create profiles for malicious and normal actions on IoT networks could be developed. Additionally, the massive volume of data generated by IoT devices could be analyzed by machine learning (ML) methods. Recently, several deep learning (DL)-related mechanisms have been modeled to detect attacks on the IoT. This article designs a botnet detection model using the barnacles mating optimizer with machine learning (BND-BMOML) for the IoT environment. The presented BND-BMOML model focuses on the identification and recognition of botnets in the IoT environment. To accomplish this, the BND-BMOML model initially follows a data standardization approach. In the presented BND-BMOML model, the BMO algorithm is employed to select a useful set of features. For botnet detection, the BND-BMOML model employs an Elman neural network (ENN) model. Finally, the presented BND-BMOML model uses a chicken swarm optimization (CSO) algorithm for the parameter tuning process, demonstrating the novelty of the work. The BND-BMOML method was experimentally validated using a benchmark dataset, and the outcomes indicated significant improvements in performance over existing methods.
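
    The Elman network used as the classifier here is a simple recurrent network whose context units feed the previous hidden state back into the hidden layer. A minimal forward pass, with random weights for illustration (the paper tunes the real parameters with CSO):

```python
import numpy as np

def elman_forward(xs, W_xh, W_hh, W_hy):
    """Run a sequence of input vectors through an Elman RNN; return per-step outputs."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for x in xs:
        # Context units: the previous hidden state h is fed back in via W_hh.
        h = np.tanh(W_xh @ x + W_hh @ h)
        outputs.append(W_hy @ h)
    return outputs

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2
W_xh = rng.normal(size=(n_hidden, n_in))
W_hh = rng.normal(size=(n_hidden, n_hidden))
W_hy = rng.normal(size=(n_out, n_hidden))
ys = elman_forward([rng.normal(size=n_in) for _ in range(5)], W_xh, W_hh, W_hy)
```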

    Development of a Model for Trust Management in the Social Internet of Things

    The Internet of Things (IoT) has evolved at a revolutionary pace in the last two decades of computer science. It is becoming increasingly fashionable for the IoT to be rebranded as the “Social Internet of Things” (SIoT), and this is drawing the attention of the scientific community. Smart items in the IoT ecosystem can locate relevant services based on the social ties between neighbors. As a result, the SIoT frames the interplay between various items as a problem in the context of the social IoT ecosystem. Navigating such a network can be difficult because of the number of friends and the complexity of social ties. Truthful friend computing (TFC) is a new paradigm that identifies difficulties in standard SIoT devices’ interaction with social objects and traces them by utilising a relationship management component to improve network navigability. The concept of trust management can be useful as a strategy during collaborations among social IoT nodes. The trustor can use a variety of measures to evaluate a smart object’s trustworthiness. Hence, this article demonstrates the need for the trustor to evaluate the extent to which a given metric has contributed to the overall trust score, and illustrates profitability when engaging in a transaction with other nodes. With the help of the SIoT, this paper used a unified fuzzy-based computational technique and a multiple-criteria decision-making approach to evaluate the trust weights. The statistical findings show that the computation of “truthful friends” is the biggest challenge for successful SIoT implementation at the initial level.
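
    The contribution of each metric to the overall trust score can be pictured as a weighted aggregation: each metric's share is its (renormalized) weight times its value. The metric names and weights below are hypothetical; the paper derives its weights with a fuzzy MCDM technique rather than fixing them by hand.

```python
def trust_score(metrics: dict, weights: dict) -> float:
    """Weighted sum of trust metrics in [0, 1]; weights are renormalized to sum to 1."""
    total_w = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in metrics) / total_w

# Illustrative metrics for one smart object, with illustrative weights.
score = trust_score(
    {"honesty": 0.9, "cooperativeness": 0.7, "community_interest": 0.8},
    {"honesty": 0.5, "cooperativeness": 0.3, "community_interest": 0.2},
)
```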

    Artificial Intelligence-Based Secure Communication and Classification for Drone-Enabled Emergency Monitoring Systems

    Unmanned Aerial Vehicles (UAVs), or drones, equipped with camera sensors enable improved situational awareness for several emergency response and disaster management applications, as they can operate in remote and hard-to-access regions. UAVs can be utilized in several application areas that handle sensitive data, which necessitates secure processing using image encryption approaches. At the same time, UAVs can be combined with the latest technologies and deep learning (DL) models for monitoring disaster areas such as floods, collapsed buildings, or fires, for faster mitigation of their impacts on the environment and human population. This study develops an Artificial Intelligence-based Secure Communication and Classification for Drone-Enabled Emergency Monitoring Systems (AISCC-DE2MS). The proposed AISCC-DE2MS technique mainly employs encryption and classification models for emergency disaster monitoring situations. The AISCC-DE2MS model follows a two-stage process: encryption and image classification. At the initial stage, the AISCC-DE2MS model employs an artificial gorilla troops optimizer (AGTO) algorithm with an ECC-based ElGamal encryption technique to accomplish security. For emergency situation classification, the AISCC-DE2MS model encompasses densely connected network (DenseNet) feature extraction, penguin search optimization (PESO)-based hyperparameter tuning, and long short-term memory (LSTM)-based classification. The design of AGTO-based optimal key generation and PESO-based hyperparameter tuning demonstrates the novelty of the work. The simulation analysis of the AISCC-DE2MS model was tested using the AIDER dataset, and the results demonstrate the improved performance of the AISCC-DE2MS model in terms of different measures.
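
    To illustrate the encryption stage, here is classic ElGamal over the multiplicative group mod p. Note the assumptions: the paper uses the elliptic-curve variant with AGTO-optimized key generation, whereas this sketch uses the simpler modular form with a toy prime, and is NOT secure for real use.

```python
import random

p, g = 65537, 3  # small demo prime and generator (toy parameters, NOT secure)

x = random.randrange(2, p - 1)  # private key
y = pow(g, x, p)                # public key

def encrypt(m, y):
    """Encrypt integer m < p under public key y; returns ciphertext pair (c1, c2)."""
    k = random.randrange(2, p - 1)           # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(c1, c2, x):
    """Recover m from (c1, c2) with private key x."""
    s = pow(c1, x, p)                        # shared secret g^(kx)
    return (c2 * pow(s, p - 2, p)) % p       # divide by s via Fermat inverse

c1, c2 = encrypt(1234, y)
recovered = decrypt(c1, c2, x)
```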