14 research outputs found

    5G-SRNG: 5G Spectrogram-based Random Number Generation for Devices with Low Entropy Sources

    Random number generation (RNG) is a crucial element in security protocols, and its performance and reliability are critical for the safety and integrity of digital systems. This is especially true in 5G networks, which connect many devices with low entropy sources. This paper proposes 5G-SRNG, an end-to-end random number generation solution for devices with low entropy sources in 5G networks. Unlike traditional RNG methods, which rely on hardware or software random number generators, 5G-SRNG uses 5G spectral information, such as that obtained from spectrum sensing or a spectrum-aware feedback mechanism, as a source of entropy. The proposed algorithm is experimentally verified, and its performance is analysed by simulating a realistic 5G network environment. Results show that 5G-SRNG outperforms existing RNG methods in all aspects, including randomness, partial correlation and power, making it suitable for 5G network deployments.
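
    A minimal sketch of the general idea (not the paper's exact pipeline, whose details are not given above): noisy spectral magnitudes, e.g. from spectrum sensing, are quantized, their least significant bits are kept, and the result is conditioned with a cryptographic hash to produce random bytes. The function name and constants are illustrative.

        # Hypothetical entropy-extraction sketch: spectrogram magnitudes -> random bytes.
        import hashlib
        import numpy as np

        def spectrogram_to_random_bytes(spectrogram: np.ndarray, out_len: int = 32) -> bytes:
            """Derive out_len bytes from a 2-D array of spectral magnitudes."""
            flat = np.asarray(spectrogram, dtype=np.float64).ravel()
            # Keep only the noisiest (least significant) bits of each quantized magnitude.
            lsb = ((np.abs(flat) * 1e6).astype(np.uint64) & 0xFF).astype(np.uint8)
            raw = lsb.tobytes()
            # Cryptographic conditioning: hash fixed-size chunks and concatenate digests.
            digest, chunk = b"", 1024
            for start in range(0, len(raw), chunk):
                digest += hashlib.sha256(raw[start:start + chunk]).digest()
                if len(digest) >= out_len:
                    break
            return digest[:out_len]

        # Simulated spectrogram standing in for real 5G spectrum-sensing data.
        print(spectrogram_to_random_bytes(np.random.rand(128, 64)).hex())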

    Defensive Distillation-Based Adversarial Attack Mitigation Method for Channel Estimation Using Deep Learning Models in Next-Generation Wireless Networks

    Future wireless networks (5G and beyond), also known as Next Generation or NextG, are the vision of forthcoming cellular systems, connecting billions of devices and people together. In the last decades, cellular networks have grown dramatically with advanced telecommunication technologies for high-speed data transmission, high cell capacity, and low latency. The main goal of those technologies is to support a wide range of new applications, such as virtual reality, the metaverse, telehealth, online education, autonomous and flying vehicles, smart cities, smart grids, advanced manufacturing, and many more. The key motivation of NextG networks is to meet the high demand for those applications by improving and optimizing network functions. Artificial Intelligence (AI) has a high potential to achieve these requirements by being integrated into applications throughout all network layers. However, the security concerns associated with AI-based models in NextG network functions, e.g., model poisoning, have not been investigated deeply. It is crucial to protect next-generation cellular networks against cybersecurity threats, especially adversarial attacks. Therefore, efficient mitigation techniques and secure solutions need to be designed for NextG networks that use AI-based methods. This paper presents a comprehensive vulnerability analysis of deep learning (DL)-based channel estimation models, trained with a dataset obtained from MATLAB's 5G Toolbox, against adversarial attacks, together with defensive distillation-based mitigation methods. The adversarial attacks produce faulty results by manipulating trained DL-based channel estimation models in NextG networks, while the mitigation methods make the models more robust against such attacks. This paper also presents the performance of the proposed defensive distillation mitigation method for each adversarial attack. The results indicate that the proposed mitigation method can defend the DL-based channel estimation models against adversarial attacks in NextG networks.
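
    As a rough illustration of the mitigation named above, the following sketch shows defensive distillation in its usual classification form (the paper applies it to DL-based channel estimation; the toy data, network, and temperature below are assumptions, not the paper's setup).

        # Defensive distillation sketch in PyTorch: teacher trained at temperature T,
        # student trained on the teacher's temperature-softened labels.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        T = 20.0  # distillation temperature

        def make_net(in_dim=20, n_classes=4):
            return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

        def train(model, X, hard=None, soft=None, epochs=50):
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            for _ in range(epochs):
                opt.zero_grad()
                logits = model(X) / T                      # temperature-scaled logits
                if hard is not None:
                    loss = F.cross_entropy(logits, hard)   # teacher: hard labels
                else:                                      # student: teacher's soft labels
                    loss = F.kl_div(F.log_softmax(logits, dim=1), soft, reduction="batchmean")
                loss.backward()
                opt.step()

        X = torch.rand(1000, 20)
        y = torch.randint(0, 4, (1000,))

        teacher = make_net()
        train(teacher, X, hard=y)                          # 1) train teacher at temperature T
        with torch.no_grad():
            soft = F.softmax(teacher(X) / T, dim=1)        # 2) soft labels at the same T
        student = make_net()
        train(student, X, soft=soft)                       # 3) distill student on soft labels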

    Security Hardening of Intelligent Reflecting Surfaces Against Adversarial Machine Learning Attacks

    Next-generation communication networks, also known as NextG or 5G and beyond, are the future data transmission systems that aim to connect a large number of Internet of Things (IoT) devices, systems, applications, and consumers at high data rates and low latency. Fortunately, NextG networks can achieve these goals with the advanced telecommunication, computing, and Artificial Intelligence (AI) technologies developed over the last decades, and can support a wide range of new applications. Among these technologies, AI makes a significant and unique contribution to achieving these goals in beamforming, channel estimation, and Intelligent Reflecting Surface (IRS) applications of 5G and beyond networks. However, the security threats to, and mitigations for, AI-powered applications in NextG networks have not been investigated deeply in academia and industry, because they are new and more complex. This paper focuses on an AI-powered IRS implementation in NextG networks and its vulnerability to adversarial machine learning attacks. It also proposes the defensive distillation mitigation method to defend the AI-powered IRS model and improve its robustness, i.e., reduce its vulnerability. The results indicate that the defensive distillation mitigation method can significantly improve the robustness of AI-powered models and their performance under adversarial attack.
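
    For concreteness, one widely used adversarial machine learning attack of the kind discussed above is the fast gradient sign method (FGSM); the sketch below applies it to a stand-in regression model (the model shape, feature sizes, and epsilon are illustrative assumptions, not the paper's IRS model).

        # FGSM sketch: perturb the input in the direction that increases the model's loss.
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))  # stand-in DL model
        loss_fn = nn.MSELoss()

        def fgsm_attack(model, x, y, epsilon=0.05):
            x_adv = x.clone().detach().requires_grad_(True)
            loss_fn(model(x_adv), y).backward()
            return (x_adv + epsilon * x_adv.grad.sign()).detach()

        x = torch.rand(32, 16)   # e.g. channel/IRS input features
        y = torch.rand(32, 8)    # e.g. target reflection coefficients
        x_adv = fgsm_attack(model, x, y)
        print("clean loss:", loss_fn(model(x), y).item(),
              "| adversarial loss:", loss_fn(model(x_adv), y).item())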

    BFV-Based Homomorphic Encryption for Privacy-Preserving CNN Models

    Medical data is frequently highly sensitive in terms of data privacy and security. Federated learning, a distributed machine learning technique, has been used to increase the privacy and security of medical data. In federated learning, the training data is distributed across numerous machines and the learning process is collaborative. However, there are numerous privacy attacks on deep learning (DL) models that attackers can use to obtain sensitive information. As a result, the DL model should be safeguarded from adversarial attacks, particularly in medical data applications. Securing the model with homomorphic encryption against an adversarial collaborator is one answer to this challenge. Using homomorphic encryption, this research presents a privacy-preserving federated learning system for medical data. The proposed technique employs a secure multi-party computation protocol to safeguard the deep learning model from adversaries. The proposed approach is evaluated in terms of model performance using a real-world medical dataset.
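
    A minimal sketch of BFV-based aggregation, assuming the TenSEAL library's BFV interface (parameter values and the quantized updates are illustrative; the paper's protocol details are not reproduced here): clients encrypt integer-quantized model updates, the aggregator adds ciphertexts without seeing plaintexts, and only the key holder decrypts the sum.

        # Homomorphic aggregation of two clients' quantized updates with BFV (TenSEAL).
        import tenseal as ts

        context = ts.context(ts.SCHEME_TYPE.BFV,
                             poly_modulus_degree=4096,
                             plain_modulus=1032193)

        update_a = ts.bfv_vector(context, [12, 3, 7, 0, 5])   # client A's quantized update
        update_b = ts.bfv_vector(context, [4, 9, 1, 2, 6])    # client B's quantized update

        aggregate = update_a + update_b   # server adds ciphertexts directly
        print(aggregate.decrypt())        # key holder recovers [16, 12, 8, 2, 11]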

    Data Augmentation Based Malware Detection using Convolutional Neural Networks

    Recently, cyber-attacks have been seen extensively due to the ever-increasing amount of malware in the cyber world. These attacks cause irreversible damage not only to end-users but also to corporate computer systems. Ransomware attacks such as WannaCry and Petya specifically target critical infrastructures such as airports, rendering their operational processes inoperable. Hence, this type of malware has attracted increasing attention in terms of volume, versatility, and intricacy. Its most important feature is that it changes shape as it propagates from one computer to another, so standard signature-based detection software fails to identify it because it has different characteristics on each contaminated computer. This paper provides image-augmentation-enhanced deep convolutional neural network (CNN) models for the detection of malware families in a metamorphic malware environment. The proposed model structure consists of three components: image generation from malware samples, image augmentation, and classification of malware families using a convolutional neural network model. In the first component, the collected malware samples are converted from binary representation to 3-channel images using a windowing technique. The second component creates augmented versions of the images, and the last component builds the classification model. In this study, five different deep convolutional neural network models are used for malware family detection.
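
    A minimal sketch of the first pipeline stage, reading a binary and reshaping its bytes into a 3-channel image (the row width and padding scheme are illustrative; the paper's windowing parameters are not specified above).

        # Convert a (malware) binary's bytes into an RGB image for CNN input.
        import numpy as np
        from PIL import Image

        def binary_to_rgb_image(path: str, width: int = 256) -> Image.Image:
            data = np.fromfile(path, dtype=np.uint8)
            row_bytes = width * 3                             # one row = width RGB pixels
            pad = (-len(data)) % row_bytes                    # pad to fill complete rows
            data = np.concatenate([data, np.zeros(pad, dtype=np.uint8)])
            return Image.fromarray(data.reshape(-1, width, 3), mode="RGB")

        # The images can then be augmented (flips, shifts, noise) and fed to the CNN:
        # binary_to_rgb_image("sample.exe").save("sample.png")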

    Cybersecurity and Digital Privacy Aspects of V2X in the EV Charging Structure

    With the advancement of green energy technology and rising public and political acceptance, electric vehicles (EVs) have grown in popularity. Electric motors, batteries, and charging systems are considered major components of EVs. The electric power infrastructure has been designed to accommodate the needs of EVs, with an emphasis on bidirectional power flow to facilitate power exchange. Furthermore, the communication infrastructure has been enhanced to enable vehicles to communicate and exchange information with one another, a capability known as Vehicle-to-Everything (V2X) technology. V2X is positioned to become a bigger and smarter system in the future of transportation, thanks to upcoming digital technologies like Artificial Intelligence (AI), Distributed Ledger Technology, and the Internet of Things. However, as with any technology that involves data collection and sharing, there are digital privacy and cybersecurity issues. This paper addresses these concerns by creating a multi-layer Cyber-Physical-Social Systems (CPSS) architecture to investigate possible privacy and cybersecurity risks associated with V2X. Using the CPSS paradigm, this research explores the interplay between the EV charging infrastructure, a critical part of the V2X ecosystem, digital privacy, and cybersecurity concerns.

    Robust Ensemble Classifier Combination Based on Noise Removal with One-Class SVM

    In the machine learning area, as the number of labeled input samples becomes very large, it is very difficult to build a classification model because the input data set does not fit in memory during the training phase of the algorithm; therefore, it is necessary to use data partitioning to handle the overall data set. Bagging- and boosting-based data partitioning methods have been broadly used in data mining and pattern recognition. Both of these methods have shown great potential for improving classification model performance. This study is concerned with the analysis of data set partitioning with noise removal and its impact on the performance of multiple classifier models. We propose noise-filtering preprocessing at each data set partition to increase classifier model performance, and apply the Gini impurity approach to find the best split percentage for the noise filter ratio. Each filtered sub data set is then used to train an individual ensemble model.
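
    A minimal sketch of the partition-then-filter idea (partition count, nu, and the base learner are illustrative choices, not the study's exact configuration): each partition is cleaned with a One-Class SVM and an ensemble member is trained on the retained inliers.

        # Noise filtering per partition with One-Class SVM before ensemble training.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import OneClassSVM
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
        partitions = np.array_split(np.arange(len(X)), 5)      # bagging-style partitions

        members = []
        for i, idx in enumerate(partitions):
            Xp, yp = X[idx], y[idx]
            keep = OneClassSVM(nu=0.1, gamma="scale").fit_predict(Xp) == 1  # inliers only
            members.append(DecisionTreeClassifier(random_state=i).fit(Xp[keep], yp[keep]))

        # Simple majority vote over the individually trained members (binary labels).
        votes = np.stack([clf.predict(X) for clf in members])
        ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
        print("ensemble training accuracy:", (ensemble_pred == y).mean())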

    A MapReduce-based distributed SVM algorithm for binary classification

    Although the support vector machine (SVM) algorithm generalizes well to unseen examples after the training phase and achieves a small loss value, it does not scale to real-life classification and regression problems: SVMs cannot be trained on hundreds of thousands of examples in a single training dataset. In previous studies on distributed machine-learning algorithms, the SVM was trained in a costly and preconfigured computer environment. In this research, we present a MapReduce-based distributed parallel SVM training algorithm for binary classification problems. This work shows how to distribute optimization problems over cloud computing systems with the MapReduce technique. In the second step of this work, we use statistical learning theory to find the predictive hypothesis that minimizes the empirical risk over the hypothesis spaces created by the Reduce function of MapReduce. The results of this research are important for training SVM-based classifiers on big datasets. We train the split dataset iteratively with the MapReduce technique; the accuracy of the classifier function converges to the accuracy of the globally optimal classifier within a finite number of iterations. The algorithm's performance was measured on samples from the letter recognition and pen-based recognition of handwritten digits datasets.
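
    A minimal single-machine sketch of the map/reduce scheme described above, using scikit-learn (splitting strategy, iteration count, and kernel are illustrative assumptions): the map step trains an SVM per data split and emits its support vectors, the reduce step pools them, and the next iteration retrains on the pooled set.

        # Iterative map/reduce-style SVM training on data splits.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        def map_train(split):
            Xs, ys = split
            model = SVC(kernel="linear", C=1.0).fit(Xs, ys)
            return Xs[model.support_], ys[model.support_]      # emit support vectors

        def reduce_merge(mapped):
            return (np.vstack([m[0] for m in mapped]),
                    np.concatenate([m[1] for m in mapped]))    # pool all support vectors

        X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
        Xsv, ysv = X, y
        for _ in range(3):                                     # a few map/reduce iterations
            perm = np.random.permutation(len(Xsv))
            splits = [(Xsv[i], ysv[i]) for i in np.array_split(perm, 4)]
            Xsv, ysv = reduce_merge([map_train(s) for s in splits])

        final = SVC(kernel="linear", C=1.0).fit(Xsv, ysv)      # global model on pooled SVs
        print("pooled samples:", len(Xsv), "| final support vectors:", len(final.support_))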