21 research outputs found

    A survey on MAC-based physical layer security over wireless sensor network

    Get PDF
    Physical layer security for wireless sensor networks (WSNs) is a demanding and highly critical issue. Wireless sensor networks are of great importance in both civil and military applications, yet securing data transmitted over the wireless medium remains a challenge, especially as transmission rates continue to increase. At the physical layer, data transferred between source and destination is not confidential, raising privacy concerns for users; improving the security of wireless sensor networks is therefore a prime concern, since a loss of physical security poses a great threat to the network. Various physical-layer phenomena, such as interference, noise, and channel fading, can be exploited to address these issues. In this paper we survey the different parameters of a security design model to highlight its vulnerabilities. We then discuss the various attacks on the different layers of the TCP/IP model along with their mitigation techniques, and elaborate on applications of WSNs in healthcare, military information integration, and oil and gas. Finally, we propose a solution to enhance the security of WSNs by adopting the alpha method and a handshake mechanism with encryption and decryption.

    Boundaries and Future Trends of ChatGPT Based on AI and Security Perspectives

    Get PDF
    In recent decades, technology and artificial intelligence have significantly impacted many aspects of life. One noteworthy development is ChatGPT, an AI-based model that has created a revolution and attracted attention from researchers, academia, and organizations in a short period of time. Experts predict that ChatGPT will continue advancing, bringing about a leap in artificial intelligence. It is believed that this technology holds the potential to address cybersecurity concerns, protect against threats and attacks, and overcome challenges associated with our increasing reliance on technology and the internet. This technology may change our lives in productive and helpful ways, from interaction with other AI technologies, to the potential for enhanced personalization and customization, to the continuing improvement of language model performance. While these new developments have the potential to enhance our lives, it is our responsibility as a society to thoroughly examine and confront their ethical and societal impacts. This research delves into the state of ChatGPT and its developments in the fields of artificial intelligence and security. It also explores the challenges ChatGPT faces regarding privacy, data security, and potential misuse, highlights emerging trends that could influence the direction of its progress, offers insights into the implications of using ChatGPT in security contexts, and provides recommendations for addressing these issues. The goal is to leverage the capabilities of AI-powered conversational systems while mitigating any risks. Doi: 10.28991/HIJ-2024-05-01-010

    A Proactive Explainable Artificial Neural Network Model for the Early Diagnosis of Thyroid Cancer

    No full text
    Early diagnosis of thyroid cancer can reduce mortality and decrease the risk of recurrence, side effects, or the need for lengthy surgery. In this study, an explainable artificial neural network (EANN) model was developed to distinguish between malignant and benign nodules and to understand the factors that are predictive of malignancy. The study was conducted using the records of 724 patients who were admitted to Shengjing Hospital of China Medical University. The dataset contained the patients’ demographic information, nodule characteristics, blood test findings, and thyroid characteristics. The performance of the model was evaluated using the metrics of accuracy, sensitivity, specificity, F1 score, and area under the curve (AUC). The SMOTEENN combined sampling method was used to correct for a significant imbalance between malignant and benign nodules in the dataset. The proposed model outperformed a baseline study, with an accuracy of 0.99 and an AUC of 0.99. The proposed EANN model can assist health care professionals by enabling them to make effective early cancer diagnoses.
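    The SMOTEENN step combines SMOTE oversampling with Edited Nearest Neighbours cleaning. The oversampling half can be sketched in plain Python: each synthetic minority sample is an interpolation between a minority point and one of its nearest minority neighbours. This is an illustrative sketch only, not the study's implementation, and the toy data below is invented:

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating each
    chosen base sample toward one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of base within the minority class
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        neighbour = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, neighbour)))
    return synthetic

# Toy imbalanced data: 3 malignant (minority) samples vs. a majority of 10.
malignant = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.1)]
new_points = smote_oversample(malignant, n_new=7)
print(len(malignant) + len(new_points))  # 10: minority class now matches
```

Because each synthetic point is a convex combination of two real minority points, it always lies inside the minority class's bounding region.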

    An Anomaly Detection Model for Oil and Gas Pipelines Using Machine Learning

    No full text
    Detection of minor leaks in oil or gas pipelines is a critical and persistent problem in the oil and gas industry. Many organisations have long relied on fixed hardware or manual assessments to monitor leaks. With rapid industrialisation and technological advancements, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. Herein, machine learning-based anomaly detection models are proposed to solve the problem of oil and gas pipeline leakage. Five machine learning algorithms, namely, random forest, support vector machine, k-nearest neighbour, gradient boosting, and decision tree, were used to develop detection models for pipeline leaks. The support vector machine algorithm, with an accuracy of 97.4%, outperformed the other algorithms in detecting pipeline leakage and thus proved its efficiency as an accurate model for detecting leakage in oil and gas pipelines.
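    The paper's models are trained classifiers, but the underlying idea of flagging sensor readings that deviate from normal behaviour can be illustrated with a much simpler rolling z-score detector. This is a hypothetical stand-in for illustration, not the SVM model from the study, and the pressure trace is invented:

```python
import statistics

def zscore_anomalies(readings, window=10, threshold=3.0):
    """Flag indices where a reading deviates from the mean of the
    preceding window by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(readings)):
        ref = readings[i - window:i]
        mu = statistics.fmean(ref)
        sigma = statistics.pstdev(ref)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady pipeline pressure with a sudden drop at index 15 (simulated leak).
pressure = [100.0, 100.2, 99.8, 100.1, 99.9, 100.0, 100.3, 99.7,
            100.1, 100.0, 99.9, 100.2, 100.0, 99.8, 100.1, 92.0]
print(zscore_anomalies(pressure))  # [15]
```

A learned classifier improves on this baseline by combining many features instead of thresholding a single signal.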

    Modified Red Fox Optimizer With Deep Learning Enabled False Data Injection Attack Detection

    No full text
    Recently, power systems have developed rapidly and shifted towards cyber-physical power systems (CPPS). A CPPS involves numerous sensor devices that generate enormous quantities of information. The data gathered from each sensing component must be reliable, yet it is highly prone to attacks. Amongst the various kinds of attacks, the false data injection attack (FDIA) can seriously affect the energy efficiency of a CPPS. Current data-driven approaches for FDIA detection frequently depend on specific environmental conditions and assumptions, making them unrealistic and ineffective. In this paper, we present a modified Red Fox Optimizer with Deep Learning enabled FDIA detection (MRFODL-FDIAD) in the CPPS environment. The presented MRFODL-FDIAD technique detects and classifies FDIAs in the CPPS environment. It encompasses a three-stage process, namely pre-processing, detection, and parameter tuning. For FDIA detection, the MRFODL-FDIAD technique uses a multihead attention-based long short term memory (MBALSTM) technique. To improve the detection performance of the MBALSTM model, the MRFO technique is exploited. The experimental evaluation of the MRFODL-FDIAD approach was performed on a standard IEEE bus system. An extensive set of experiments highlighted the superiority of the MRFODL-FDIAD technique.
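    For context on why learned detectors such as the proposed MBALSTM model are needed: classical state estimation screens bad data with a residual test, and a stealthy FDIA crafted as a = Hc leaves that residual unchanged. This is a well-known baseline result, not the MRFODL-FDIAD method itself; the toy measurement matrix below is invented:

```python
import numpy as np

def residual_norm(H, z):
    """Least-squares state estimate and measurement residual ||z - H x||."""
    x, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x)

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 2))     # toy measurement matrix: 6 meters, 2 states
x_true = np.array([1.0, -2.0])
z = H @ x_true + 0.01 * rng.standard_normal(6)

naive_attack = z + np.array([5.0, 0, 0, 0, 0, 0])  # crude single-meter injection
c = np.array([3.0, 1.0])
stealthy_attack = z + H @ c                        # a = H c: residual unchanged

r0 = residual_norm(H, z)
print(residual_norm(H, naive_attack) > 10 * r0)            # True: test fires
print(abs(residual_norm(H, stealthy_attack) - r0) < 1e-9)  # True: evades the test
```

Because the stealthy injection lies entirely in the column space of H, the residual test cannot see it; detecting such attacks requires models that learn temporal and cross-sensor patterns.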

    Machine Learning-Based Model to Predict the Disease Severity and Outcome in COVID-19 Patients

    No full text
    The novel coronavirus (COVID-19) outbreak produced devastating effects on the global economy and the health of entire communities. Although the COVID-19 survival rate is high, the number of severe cases that result in death is increasing daily. A timely prediction of at-risk COVID-19 patients, combined with precautionary measures, is expected to increase the survival rate of patients and reduce the fatality rate. This research provides a prediction method for the early identification of COVID-19 patients’ outcomes based on characteristics monitored at home while in quarantine. The study was performed using 287 COVID-19 patient samples from the King Fahad University Hospital, Saudi Arabia. The data were analyzed using three classification algorithms, namely, logistic regression (LR), random forest (RF), and extreme gradient boosting (XGB). Initially, the data were preprocessed using several preprocessing techniques. Furthermore, 10-fold cross-validation was applied for data partitioning and SMOTE for alleviating the data imbalance. Experiments were performed using twenty clinical features identified as significant for distinguishing surviving from deceased COVID-19 patients. The results showed that RF outperformed the other classifiers with an accuracy of 0.95 and an area under the curve (AUC) of 0.99. The proposed model can assist decision-making by health care professionals through effective early identification of at-risk COVID-19 patients.
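    The 10-fold cross-validation step partitions the 287 samples so that every record is held out for testing exactly once. A minimal sketch of that partitioning (illustrative only; the study additionally applied preprocessing and SMOTE):

```python
def kfold_indices(n, k=10):
    """Split indices 0..n-1 into k folds; return (train, test) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]  # stride-based folds
    splits = []
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        splits.append((sorted(train), test))
    return splits

splits = kfold_indices(287, k=10)
# Every one of the 287 samples appears in exactly one test fold.
tested = sorted(i for _, test in splits for i in test)
print(tested == list(range(287)))  # True
```

Each classifier is then trained on the 9 training folds and scored on the held-out fold, and the 10 scores are averaged.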

    Modified Equilibrium Optimization Algorithm With Deep Learning-Based DDoS Attack Classification in 5G Networks

    No full text
    5G networks offer high-speed, low-latency communication for various applications. As 5G networks introduce new capabilities and support a wide range of services, they also become more vulnerable to different kinds of cyberattacks, particularly Distributed Denial of Service (DDoS) attacks. Effective DDoS attack classification in 5G networks is a critical aspect of ensuring the security, availability, and performance of these advanced communication infrastructures. Recently, machine learning (ML) and deep learning (DL) models have been employed for accurate DDoS attack detection. In this context, this study designs a Modified Equilibrium Optimization Algorithm with Deep Learning based DDoS Attack Classification (MEOADL-ADC) method for 5G networks. The goal of the MEOADL-ADC technique is the automated classification of DDoS attacks in the 5G network. The MEOADL-ADC technique follows a three-stage process of feature selection, classification, and hyperparameter tuning. Primarily, the MEOADL-ADC technique employs an MEOA-based feature selection approach. Next, it utilizes the long short-term memory (LSTM) model for the classification of DDoS attacks. Finally, the tunicate swarm algorithm (TSA) is exploited to adjust the hyperparameters of the LSTM model. The design of MEOA-based feature selection and TSA-based hyperparameter tuning shows the novelty of the work. The MEOADL-ADC method was tested on a benchmark dataset, and the outcomes indicate that it outperforms current methods, with a maximum accuracy of 97.60%.
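    Metaheuristic feature selection like the MEOA stage typically operates on binary masks: each candidate solution switches features on or off and is scored by a fitness balancing accuracy against the number of selected features. A hypothetical sketch of that mask-and-fitness loop, with a simple hill climber standing in for the actual equilibrium optimizer and an invented toy scoring function:

```python
import random

def fitness(mask, score_fn, alpha=0.9):
    """Higher is better: weighted accuracy minus a penalty for mask size."""
    if not any(mask):
        return 0.0
    return alpha * score_fn(mask) + (1 - alpha) * (1 - sum(mask) / len(mask))

def hill_climb(n_features, score_fn, iters=100, seed=1):
    """Flip one feature bit at a time, keeping moves that don't hurt fitness."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best = fitness(mask, score_fn)
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= True  # flip one feature bit
        f = fitness(cand, score_fn)
        if f >= best:
            mask, best = cand, f
    return mask

# Toy scorer: features 0 and 3 are "informative", the rest are noise.
score = lambda m: (m[0] + m[3]) / 2
selected = hill_climb(6, score)
print([i for i, on in enumerate(selected) if on])
```

Real implementations replace the toy scorer with a classifier's cross-validated accuracy on the masked feature set, which is what makes wrapper-based selection expensive.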

    Chaotic image encryption algorithm with improved bonobo optimizer and DNA coding for enhanced security

    No full text
    Image encryption applies cryptographic approaches to convert the content of an image into an illegible format, ensuring that unauthorized users cannot interpret or access the actual visual details. Commonly employed models comprise symmetric key algorithms for encrypting the image data, necessitating a secret key for decryption. This study introduces a new Chaotic Image Encryption Algorithm with an Improved Bonobo Optimizer and DNA Coding (CIEAIBO-DNAC) for enhanced security. The presented CIEAIBO-DNAC technique involves several processes: initial value generation, substitution, diffusion, and decryption. Primarily, the key is derived from the input image pixel values via the MD5 hash function, and the resulting hash value is utilized as the initial value of the chaotic model to boost key sensitivity. Besides, the CIEAIBO-DNAC technique uses the Improved Bonobo Optimizer (IBO) algorithm to scramble pixel positions within each block, and a scrambling process among the blocks also takes place. Moreover, in the diffusion stage, DNA encoding, obfuscation, and decoding are carried out to obtain the encrypted image. Extensive experimental evaluations and security analyses are conducted to assess the outcome of the CIEAIBO-DNAC technique. The simulation outcome demonstrates excellent security properties, including resistance against several attacks, ensuring it can be applied to real-time image encryption scenarios.
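    Two of the building blocks are easy to illustrate: deriving a key-sensitive chaotic initial value from the MD5 hash of the image, and DNA coding of pixel bytes at 2 bits per base. The sketch below uses one common DNA rule of the eight and an invented digest-to-seed mapping; the actual rule selection and diffusion steps in the paper are more involved:

```python
import hashlib

BASES = "ACGT"  # one common rule: 00->A, 01->C, 10->G, 11->T

def dna_encode(byte):
    """Encode one pixel byte as four DNA bases, 2 bits each, MSB first."""
    return "".join(BASES[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_decode(bases):
    """Invert dna_encode: four bases back to one byte."""
    value = 0
    for b in bases:
        value = (value << 2) | BASES.index(b)
    return value

def chaotic_seed(image_bytes):
    """Map the image's MD5 digest to an initial value in (0, 1)."""
    digest = hashlib.md5(image_bytes).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2 ** 64 + 2)

pixel = 0b11000110                # 198
code = dna_encode(pixel)
print(code)                       # TACG
print(dna_decode(code) == pixel)  # True: coding is lossless

# A one-byte change in the image yields a very different initial value,
# which is the key-sensitivity property the MD5 step provides.
x0 = chaotic_seed(b"image-data")
x1 = chaotic_seed(b"image-datb")
print(0 < x0 < 1, x0 != x1)
```

The seed then drives a chaotic map (e.g. iterating a logistic map) whose trajectory controls scrambling and diffusion.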

    Implementation of a Clustering-Based LDDoS Detection Method

    No full text
    With the rapid advancement and transformation of technology, information and communication technologies (ICT), in particular, have attracted everyone’s attention. Attackers have taken advantage of this and can cause serious problems, such as malware, ransomware, and SQL injection attacks. One of the dominant attacks, known as distributed denial-of-service (DDoS), has been observed as a main vehicle for information hacking. In this paper, we propose a secure detection technique for low-rate distributed denial-of-service (LDDoS) attacks, to measure attack penetration and secure the communication flow. A two-step clustering method was adopted: network traffic is first clustered using discrete characteristics of TCP traffic, and suspicious clusters are then detected through abnormality analysis. Based on NS-2 simulations, this method proved reliable and efficient for LDDoS attack detection compared to the exponentially weighted moving average (EWMA) technique, which has comparatively very high false-positive rates; analyzing the abnormal test pieces helps reduce false positives. The proposed methodology was implemented using Python for scripting and the NS-2 simulator for topology, and two public benchmark datasets, i.e., Web of Information for Development (WIDE) and Lawrence Berkeley National Laboratory (LBNL), were selected for the experiments. The experiments were analyzed and the results evaluated using Wireshark. The proposed LDDoS detection approach achieved good results compared to previous techniques.
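    The two-step idea of clustering traffic first and then testing clusters for abnormality can be sketched with a tiny 1-D k-means over per-flow packet rates. This is illustrative only; the paper clusters discrete TCP traffic characteristics, and the flow rates below are invented:

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain 1-D k-means; returns (centroids, labels)."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Step 1 of each round: assign every value to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c])) for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Step 1: cluster per-flow packet rates.
rates = [5, 6, 5, 7, 6, 5, 48, 52, 50]   # three flows bursting at ~50 pkt/s
centroids, labels = kmeans_1d(rates)
# Step 2 (abnormality analysis, simplified): flag the minority cluster.
suspicious = min(range(len(centroids)), key=labels.count)
print([i for i, l in enumerate(labels) if l == suspicious])  # [6, 7, 8]
```

The real pipeline replaces the simplified step 2 with a statistical analysis of each cluster's traffic behaviour, which is what keeps false positives below EWMA's.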

    Modeling of Botnet Detection Using Chaotic Binary Pelican Optimization Algorithm With Deep Learning on Internet of Things Environment

    No full text
    Nowadays, a vast number of Internet of Things (IoT) devices are interconnected across networks, and with technological improvement, cyberattacks and security threats such as botnets are rapidly evolving and emerging as high-risk attacks. A botnet is a network of compromised devices controlled by cyber attackers, frequently employed to perform different cyberattacks. Such attacks hinder IoT evolution by disrupting services and networks for IoT devices. Detecting botnets in an IoT environment involves finding abnormal patterns or behaviors that might indicate the existence of these malicious networks. Several researchers have proposed deep learning (DL) and machine learning (ML) approaches for identifying and categorizing botnet attacks on the IoT platform. Accordingly, this study introduces a Botnet Detection using the Chaotic Binary Pelican Optimization Algorithm with Deep Learning (BNT-CBPOADL) technique for the IoT environment. The main aim of the BNT-CBPOADL method is the correct detection and categorization of botnet attacks in the IoT environment. In the BNT-CBPOADL method, Z-score normalization is applied for pre-processing. Besides, the CBPOA technique is derived for feature selection. The convolutional variational autoencoder (CVAE) method is applied for botnet detection. At last, the arithmetical optimization algorithm (AOA) is employed for the optimal hyperparameter tuning of the CVAE algorithm. The experimental evaluation of the BNT-CBPOADL technique was performed on a Bot-IoT database. The experimental outcomes confirmed the superiority of the BNT-CBPOADL method over other existing techniques, with a maximum accuracy of 99.50%.
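    The Z-score normalization used for pre-processing rescales each feature column to zero mean and unit variance, a step simple enough to sketch directly (the packet-size column below is an invented example):

```python
import statistics

def zscore(column):
    """Rescale a feature column to mean 0 and (population) std 1."""
    mu = statistics.fmean(column)
    sigma = statistics.pstdev(column)
    if sigma == 0:
        return [0.0] * len(column)  # a constant feature carries no signal
    return [(x - mu) / sigma for x in column]

packet_sizes = [60.0, 1500.0, 60.0, 540.0, 1500.0]
z = zscore(packet_sizes)
print(statistics.fmean(z), statistics.pstdev(z))  # ~0.0 and ~1.0
```

Normalizing each feature this way keeps large-valued features (like byte counts) from dominating small-valued ones during training.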