
    Uncertainty-Aware Prediction Validator in Deep Learning Models for Cyber-Physical System Data

    The use of deep learning in Cyber-Physical Systems (CPSs) is gaining popularity due to its ability to bring intelligence to CPS behaviors. However, both CPSs and deep learning have inherent uncertainty. Such uncertainty, if not handled adequately, can lead to unsafe CPS behavior. The first step toward addressing such uncertainty in deep learning is to quantify it. Hence, we propose a novel method called NIRVANA (uNcertaInty pRediction ValidAtor iN Ai) for prediction validation based on uncertainty metrics. To this end, we first employ prediction-time dropout-based neural networks to quantify uncertainty in deep learning models applied to CPS data. Second, the quantified uncertainty is taken as the input to a support vector machine that predicts wrong labels, with the aim of building a highly discriminating prediction validator model based on uncertainty values. In addition, we investigated the relationship between uncertainty quantification and prediction performance and conducted experiments to obtain optimal dropout ratios. We conducted all the experiments with four real-world CPS datasets. Results show that uncertainty quantification is negatively correlated with the prediction performance of a deep learning model on CPS data. Also, our dropout ratio adjustment approach is effective in reducing the uncertainty of correct predictions while increasing the uncertainty of wrong predictions.
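    A minimal sketch of the two-stage idea, assuming a Keras classifier with prediction-time dropout and scikit-learn's SVC as the validator; the architecture, dropout ratio, uncertainty features, and toy data below are illustrative, not the paper's exact configuration:

```python
# Stage 1: MC-dropout uncertainty features. Stage 2: SVM prediction validator.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

def build_model(n_features, n_classes, dropout=0.3):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dropout(dropout),            # kept active at prediction time
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def mc_dropout_predict(model, x, T=30):
    """Run T stochastic forward passes and return mean probs plus uncertainty features."""
    probs = np.stack([model(x, training=True).numpy() for _ in range(T)])
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)    # predictive entropy
    variance = probs.var(axis=0).mean(axis=1)               # average class variance
    return mean, np.column_stack([entropy, variance])

# Toy data standing in for a CPS dataset.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)).astype("float32"), rng.integers(0, 3, 500)
model = build_model(10, 3)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

mean_probs, unc = mc_dropout_predict(model, X)
wrong = (mean_probs.argmax(axis=1) != y).astype(int)        # 1 = wrong prediction
validator = SVC(kernel="rbf").fit(unc, wrong)               # prediction validator
```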

    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

    Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most of the current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty obtained from Monte-Carlo Dropout sampling for adversarial attack purposes, by which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the difficulty the target model has in discriminating between samples drawn from the original and shifted versions of the training data distribution, by utilizing the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 85.14%, 82.96% to 90.13%, and 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
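    A rough sketch of the hybrid idea, assuming a Keras model with dropout layers: take one FGSM-like step on a weighted sum of the loss and the MC-dropout epistemic uncertainty. The toy model, weighting, step size, and number of stochastic passes are placeholders, not the attack as published:

```python
import numpy as np
import tensorflow as tf

# Toy classifier with dropout so that MC-dropout epistemic uncertainty is available.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),
])

def hybrid_attack_step(x, y, eps=0.05, alpha=0.5, T=10):
    """One FGSM-like step on a weighted sum of loss and epistemic uncertainty."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        probs = tf.stack([model(x, training=True) for _ in range(T)])   # dropout kept on
        epistemic = tf.reduce_mean(tf.math.reduce_variance(probs, axis=0))
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, tf.reduce_mean(probs, axis=0)))
        objective = alpha * loss + (1 - alpha) * epistemic
    grad = tape.gradient(objective, x)
    return x + eps * tf.sign(grad)   # move toward high-loss, high-uncertainty (shifted) regions

x_adv = hybrid_attack_step(np.random.rand(8, 10).astype("float32"),
                           np.zeros(8, dtype="int64"))
```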

    A Practical Implementation of Medical Privacy-Preserving Federated Learning Using Multi-Key Homomorphic Encryption and Flower Framework

    The digitization of healthcare data has presented a pressing need to address privacy concerns within the realm of machine learning for healthcare institutions. One promising solution is federated learning, which enables collaborative training of deep machine learning models among medical institutions by sharing model parameters instead of raw data. This study focuses on enhancing an existing privacy-preserving federated learning algorithm for medical data through the utilization of homomorphic encryption, building upon prior research. In contrast to the previous work by Wibawa, which uses a single key for homomorphic encryption (HE), our proposed solution is a practical implementation of a multi-key homomorphic encryption scheme (xMK-CKKS) proposed in a preprint. For this, our work first involves modifying a simple ring learning with errors (RLWE) scheme. We then fork a popular federated learning framework for Python, integrate our own communication process with protocol buffers, and locate and modify the library's existing training loop to further enhance the security of model updates with the multi-key homomorphic encryption scheme. Our experimental evaluations validate that, despite these modifications, our proposed framework maintains robust model performance, as demonstrated by consistent metrics including validation accuracy, precision, F1-score, and recall.
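    A conceptual sketch of where encryption hooks into a Flower client. The `xmk_ckks_encrypt` helper is a hypothetical placeholder (an identity stand-in here so the sketch runs); the actual work modifies Flower's protocol buffers and its internal training loop rather than the stock NumPyClient interface shown below:

```python
import flwr as fl
import numpy as np
import tensorflow as tf

def xmk_ckks_encrypt(weight_tensor, secret_key):
    # Hypothetical placeholder for xMK-CKKS encryption of one weight tensor;
    # identity stand-in only, so the structure of the client stays executable.
    return weight_tensor

class EncryptedClient(fl.client.NumPyClient):
    """Medical-institution client that encrypts model updates before sending them."""

    def __init__(self, model, x_train, y_train, secret_key):
        self.model, self.x, self.y, self.sk = model, x_train, y_train, secret_key

    def get_parameters(self, config):
        return self.model.get_weights()

    def fit(self, parameters, config):
        self.model.set_weights(parameters)
        self.model.fit(self.x, self.y, epochs=1, verbose=0)
        # Encrypt each weight tensor before it leaves the institution.
        encrypted = [xmk_ckks_encrypt(w, self.sk) for w in self.model.get_weights()]
        return encrypted, len(self.x), {}

    def evaluate(self, parameters, config):
        self.model.set_weights(parameters)
        loss, acc = self.model.evaluate(self.x, self.y, verbose=0)
        return loss, len(self.x), {"accuracy": acc}
```

In a real deployment the ciphertexts would be serialized through the modified protocol buffers and aggregated homomorphically on the server before any decryption.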

    Security Hardening of Intelligent Reflecting Surfaces Against Adversarial Machine Learning Attacks

    Next-generation communication networks, also known as NextG or 5G and beyond, are the future data transmission systems that aim to connect a large number of Internet of Things (IoT) devices, systems, applications, and consumers with high-speed data transmission and low latency. Fortunately, NextG networks can achieve these goals with the advanced telecommunication, computing, and Artificial Intelligence (AI) technologies developed over the last decades and can support a wide range of new applications. Among these advanced technologies, AI makes a significant and unique contribution to achieving these goals for beamforming, channel estimation, and Intelligent Reflecting Surface (IRS) applications of 5G and beyond networks. However, the security threats to AI-powered applications in NextG networks, and their mitigation, have not been investigated deeply in academia and industry because these applications are new and more complicated. This paper focuses on an AI-powered IRS implementation in NextG networks along with its vulnerability to adversarial machine learning attacks. It also proposes the defensive distillation mitigation method to defend and improve the robustness of the AI-powered IRS model, i.e., reduce its vulnerability. The results indicate that the defensive distillation mitigation method can significantly improve the robustness of AI-powered models and their performance under adversarial attack.
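    A minimal defensive-distillation sketch (teacher trained at a high softmax temperature, soft labels, then a student trained on those soft labels), with a generic dense model standing in for the AI-powered IRS model; the architecture, temperature, and random data are illustrative only:

```python
import numpy as np
import tensorflow as tf

T = 20.0  # distillation temperature

def make_model(n_in, n_out):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_in,)),
        tf.keras.layers.Dense(n_out),                        # raw logits
    ])

def soft_loss(y_soft, logits):
    # Cross-entropy against temperature-softened probabilities.
    return tf.keras.losses.categorical_crossentropy(y_soft, tf.nn.softmax(logits / T))

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 16)).astype("float32")
y = tf.keras.utils.to_categorical(rng.integers(0, 4, 1000), 4)

teacher = make_model(16, 4)
teacher.compile("adam", loss=soft_loss)
teacher.fit(X, y, epochs=5, verbose=0)

# Soft labels from the teacher at temperature T smooth the decision surface.
soft_labels = tf.nn.softmax(teacher.predict(X, verbose=0) / T).numpy()

student = make_model(16, 4)
student.compile("adam", loss=soft_loss)
student.fit(X, soft_labels, epochs=5, verbose=0)             # hardened (distilled) model
```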

    Defensive Distillation-based Adversarial Attack Mitigation Method for Channel Estimation using Deep Learning Models in Next-Generation Wireless Networks

    Future wireless networks (5G and beyond), also known as Next Generation or NextG, are the vision of forthcoming cellular systems, connecting billions of devices and people. Over the last decades, cellular networks have grown dramatically with advanced telecommunication technologies for high-speed data transmission, high cell capacity, and low latency. The main goal of those technologies is to support a wide range of new applications, such as virtual reality, the metaverse, telehealth, online education, autonomous and flying vehicles, smart cities, smart grids, advanced manufacturing, and many more. The key motivation of NextG networks is to meet the high demand for those applications by improving and optimizing network functions. Artificial Intelligence (AI) has a high potential to achieve these requirements by being integrated into applications throughout all network layers. However, security concerns about AI-based network functions in NextG, e.g., model poisoning, have not been investigated deeply. It is crucial to protect next-generation cellular networks against cybersecurity threats, especially adversarial attacks, so efficient mitigation techniques and secure solutions need to be designed for NextG networks that use AI-based methods. This paper presents a comprehensive vulnerability analysis of deep learning (DL)-based channel estimation models, trained with a dataset obtained from MATLAB's 5G Toolbox, under adversarial attacks, together with defensive distillation-based mitigation methods. Adversarial attacks produce faulty results by manipulating trained DL-based channel estimation models in NextG networks, while mitigation methods can make the models more robust against such attacks. This paper also presents the performance of the proposed defensive distillation mitigation method for each adversarial attack. The results indicate that the proposed mitigation method can defend the DL-based channel estimation models against adversarial attacks in NextG networks.
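    A sketch of the attack side, assuming a generic regression-type channel estimator: an FGSM-style step along the sign of the gradient of the estimation loss pushes the input toward faulty channel estimates. The dense model, random data, and epsilon below stand in for the DL models trained on MATLAB 5G Toolbox data:

```python
import numpy as np
import tensorflow as tf

estimator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(64),                 # estimated channel coefficients
])
estimator.compile("adam", loss="mse")

def fgsm(x, h_true, eps=0.01):
    """Perturb the received pilot signal to maximize the channel-estimation error."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.reduce_mean(tf.keras.losses.mean_squared_error(h_true, estimator(x)))
    return x + eps * tf.sign(tape.gradient(loss, x))

x = np.random.rand(4, 64).astype("float32")    # stand-in for received pilot symbols
h = np.random.rand(4, 64).astype("float32")    # stand-in for true channel responses
estimator.fit(x, h, epochs=1, verbose=0)
x_adv = fgsm(x, h)                             # input that yields faulty estimates
```

The defensive-distillation counterpart follows the teacher/student pattern sketched for the IRS model above.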

    Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty

    Although state-of-the-art deep neural network models are known to be robust to random perturbations, it has been verified that these architectures are quite vulnerable to deliberately crafted perturbations, albeit quasi-imperceptible ones. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many research studies have been conducted to develop new attack methods and to come up with new defense techniques that enable more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks such as DeepFool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion), and CIFAR-10 datasets. In our experiments, we showed that our proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy.
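    An illustrative sketch of one way an uncertainty-based "reversal" step could look: nudge a suspected adversarial input in the direction that lowers MC-dropout uncertainty, pulling samples that sit near the decision boundary (as DeepFool outputs do) back toward a confident region before classification. The toy model, step size, and iteration count are placeholders, not the paper's exact procedure:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),
])

def reverse(x, step=0.02, iters=5, T=10):
    """Iteratively descend the MC-dropout uncertainty with respect to the input."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    for _ in range(iters):
        with tf.GradientTape() as tape:
            tape.watch(x)
            probs = tf.stack([model(x, training=True) for _ in range(T)])
            uncertainty = tf.reduce_mean(tf.math.reduce_variance(probs, axis=0))
        x = x - step * tf.sign(tape.gradient(uncertainty, x))   # move away from the boundary
    return x

x_restored = reverse(np.random.rand(8, 10).astype("float32"))
```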

    On the Performance of Energy Criterion Method in Wi-Fi Transient Signal Detection

    In the development of radio frequency fingerprinting (RFF), one of the major challenges is to extract subtle and robust features from the transmitted signals of wireless devices so that possible threats to the wireless network can be identified accurately. To overcome this challenge, using the transient region of the transmitted signals is one of the best options. For efficient transient-based RFF, it is also necessary to estimate the transient region of the signal accurately and precisely. Here, the most important difficulty lies in detecting the transient starting point, and several methods have been developed in the literature to detect the transient start. Among them, the energy criterion method based on instantaneous amplitude characteristics (EC-a) was shown to be superior in a recent study. That study reported the performance of the EC-a method for a set of Wi-Fi signals captured from a particular Wi-Fi device brand. However, since the transient pattern varies according to the type of wireless device, device diversity needs to be increased to achieve more reliable results. Therefore, this study is aimed at assessing, for the first time, the efficiency of the EC-a method across a large set of Wi-Fi signals captured from various Wi-Fi devices. To this end, Wi-Fi signals were first captured from smartphones of five brands over a wide range of signal-to-noise ratio (SNR) values defined as low (−3 to 5 dB), medium (5 to 15 dB), and high (15 to 30 dB). Then, the performance of the EC-a method and well-known methods was comparatively assessed, and the efficiency of the EC-a method was verified in terms of detection accuracy.
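    A sketch in the spirit of an energy-criterion detector on the instantaneous amplitude: build a cumulative-energy trajectory with a negative trend term and take its minimum as the transient starting point. The trend weight and the synthetic burst below are illustrative, and this common formulation is not guaranteed to match the EC-a variant exactly:

```python
import numpy as np
from scipy.signal import hilbert

def energy_criterion_start(signal, delta=0.5):
    """Estimate the transient starting index from the instantaneous amplitude."""
    amp = np.abs(hilbert(signal))                   # instantaneous amplitude
    energy = np.cumsum(amp ** 2)                    # partial energy trajectory
    n = np.arange(1, len(signal) + 1)
    trend = n * (energy[-1] / len(signal)) * delta  # negative-trend term
    return int(np.argmin(energy - trend))           # minimum marks the transient start

# Low-level noise followed by a burst, standing in for a captured Wi-Fi frame.
rng = np.random.default_rng(2)
sig = np.concatenate([0.05 * rng.normal(size=2000), rng.normal(size=1000)])
print(energy_criterion_start(sig))                  # expected close to sample 2000
```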

    Deployment and Implementation Aspects of Radio Frequency Fingerprinting in Cybersecurity of Smart Grids

    Smart grids incorporate diverse power equipment used for energy optimization in intelligent cities. This equipment may use Internet of Things (IoT) devices and services in the future. To ensure the stable operation of smart grids, the cybersecurity of IoT is paramount. To this end, cryptographic security methods are prevalent in existing IoT. Non-cryptographic methods such as radio frequency fingerprinting (RFF) have been on the horizon for a few decades but have been limited to academic research or military interest. RFF is a physical-layer security feature that leverages hardware impairments in the radios of IoT devices for classification and rogue device detection. This article discusses the potential of RFF in the wireless communication of IoT devices to augment the cybersecurity of smart grids. The characteristics of a deep learning (DL)-aided RFF system are presented. Subsequently, a deployment framework of RFF for smart grids is presented along with implementation and regulatory aspects. The article culminates with a discussion of existing challenges and potential research directions for the maturation of RFF.
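    A minimal sketch of what a DL-aided RFF classifier can look like: a small 1-D CNN over raw I/Q samples that maps device-specific hardware impairments to device identities. The architecture, window length, and number of devices are assumptions for illustration, not the article's design:

```python
import tensorflow as tf

n_devices, window = 5, 1024
rff_model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 7, activation="relu", input_shape=(window, 2)),  # I and Q channels
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_devices, activation="softmax"),     # known-device classes
])
rff_model.compile("adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# A rogue device could be flagged when all class probabilities stay below a chosen threshold.
```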

    Towards robust autonomous driving systems through adversarial test set generation

    Correct environmental perception of objects on the road is vital for the safety of autonomous driving. The ability of the autonomous driving algorithm to make appropriate decisions can be hindered by data perturbations and, more recently, by adversarial attacks. We propose an adversarial test input generation approach based on uncertainty to make the machine learning (ML) model more robust against data perturbations and adversarial attacks. Adversarial attacks and uncertain inputs can affect the ML model's performance, which can have severe consequences such as the misclassification of objects on the road by autonomous vehicles, leading to incorrect decision-making. We show that we can obtain more robust ML models for autonomous driving by building a re-training dataset that includes highly uncertain adversarial test inputs. We demonstrate an improvement in the accuracy of the robust model of more than 12%, with a notable drop in the uncertainty of the decisions returned by the model. We believe our approach will assist in further developing risk-aware autonomous systems.
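    A sketch of the re-training idea, assuming a Keras model with dropout and an adversarial set `x_adv` produced by any attack: keep only the most uncertain adversarial inputs and append them to the training data. The uncertainty measure, selection fraction, and helper names are placeholders, not the paper's exact pipeline:

```python
import numpy as np
import tensorflow as tf

def predictive_entropy(model, x, T=20):
    """MC-dropout predictive entropy per sample (x as float32 array)."""
    probs = np.stack([model(x, training=True).numpy() for _ in range(T)]).mean(axis=0)
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def augment_with_uncertain_adversarials(model, x, y, x_adv, top_fraction=0.2):
    """Keep the highest-uncertainty adversarial inputs and append them for re-training."""
    entropy = predictive_entropy(model, x_adv)
    k = max(1, int(top_fraction * len(x_adv)))
    idx = np.argsort(entropy)[-k:]                      # most uncertain adversarial samples
    return np.concatenate([x, x_adv[idx]]), np.concatenate([y, y[idx]])

# Re-training on the augmented set is then a plain model.fit on the returned arrays.
```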