
    Attacking Spectrum Sensing With Adversarial Deep Learning in Cognitive Radio-Enabled Internet of Things

    Cognitive radio-based Internet of Things (CR-IoT) networks allow IoT devices to utilize spectrum resources efficiently. Spectrum sensing is a critical problem in CR-IoT networks and has been investigated extensively with deep learning (DL). Despite the unique advantages of DL for spectrum sensing, the black-box, unexplained nature of deep neural networks introduces many security risks. This article considers the fusion of traditional interference methods with data poisoning, an attack that manipulates the training data of a machine learning model. We propose a new adversarial attack that reduces the sensing accuracy of DL-based spectrum sensing systems, introducing a novel jamming waveform whose interference capability is reinforced by data poisoning. Simulation results show that significant performance enhancement and higher mobility can be achieved compared with traditional white-box attack methods.
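
    The abstract does not give the attack's details, so the following is only a minimal sketch of the data-poisoning ingredient of such an attack, assuming a hypothetical NumPy training set of real-valued sensing feature vectors with binary occupied/idle labels; the paper's jamming-waveform design is not reproduced here.

```python
import numpy as np

def poison_sensing_dataset(X, y, rate=0.1, noise_power=0.01, seed=0):
    """Flip labels and perturb a fraction of spectrum-sensing training
    samples (hypothetical illustration of data poisoning).

    X : (N, L) array of real-valued sensing feature vectors
    y : (N,) array of binary labels (1 = channel occupied, 0 = idle)
    """
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_p[idx] = 1 - y_p[idx]                      # flip the sensing decision
    # Perturb the matching feature vectors so the flipped labels look
    # plausible to the model during training.
    X_p[idx] += np.sqrt(noise_power) * rng.standard_normal(X_p[idx].shape)
    return X_p, y_p
```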

    Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors

    Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting cyberattacks targeting data managed by resource-constrained spectrum sensors. However, the amount of data needed to train models and the privacy concerns of such scenarios limit the applicability of centralized ML/DL-based approaches. Federated learning (FL) addresses these limitations by creating federated and privacy-preserving models. However, FL is vulnerable to malicious participants, and the impact of adversarial attacks on federated models that detect spectrum sensing data falsification (SSDF) attacks on spectrum sensors has not been studied. To address this challenge, the first contribution of this work is a novel dataset, suitable for FL, modeling the behavior (usage of CPU, memory, or file system, among others) of resource-constrained spectrum sensors affected by different SSDF attacks. The second contribution is a pool of experiments analyzing and comparing the robustness of federated models according to i) three families of spectrum sensors, ii) eight SSDF attacks, iii) four scenarios dealing with unsupervised (anomaly detection) and supervised (binary classification) federated models, iv) up to 33% of malicious participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms to increase the models' robustness.
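
    A minimal sketch of the threat model behind these experiments, assuming plain federated averaging over flat NumPy parameter vectors and an illustrative sign-flip-and-scale model-poisoning rule; the paper's dataset, models, and exact attack implementations are not reproduced here.

```python
import numpy as np

def fedavg_round(client_updates, n_malicious=0, scale=10.0):
    """Aggregate one round of updates with vanilla federated averaging,
    after replacing the first `n_malicious` clients' updates with
    sign-flipped, amplified (poisoned) versions. Illustrative only."""
    updates = [u.copy() for u in client_updates]
    for i in range(n_malicious):
        updates[i] = -scale * updates[i]          # model-poisoning update
    return np.mean(updates, axis=0)

# Example: 12 participants, 4 of them malicious (~33%, as in the experiments).
rng = np.random.default_rng(1)
honest = [rng.standard_normal(100) for _ in range(12)]
clean_aggregate = fedavg_round(honest)
poisoned_aggregate = fedavg_round(honest, n_malicious=4)
```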

    Attacking Modulation Recognition With Adversarial Federated Learning in Cognitive Radio-Enabled IoT

    The Internet of Things (IoT) based on cognitive radio (CR) exhibits strong dynamic sensing and intelligent decision-making capabilities by effectively utilizing spectrum resources. Modulation recognition (MR) based on a federated learning (FL) framework is an essential component, but its reliance on uninterpretable deep learning (DL) introduces security risks. This paper combines traditional signal-interference methods with data poisoning in FL to propose a new adversarial attack approach. A poisoning attack in a distributed framework manipulates the global model by controlling malicious users, which is not only covert but also highly impactful, and carefully designed pseudo-noise in MR is extremely difficult to detect; combining the two techniques poses an even greater security threat. We introduce a new adversarial attack method called the "Chaotic Poisoning Attack" to reduce the recognition accuracy of the FL-based MR system. We establish effective attack conditions, and simulation results demonstrate that our method causes a decrease of approximately 80% in the accuracy of the local model under weak perturbations and a decrease of around 20% in the accuracy of the global model. Compared with white-box attack methods, our method exhibits superior performance and transferability.
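
    The abstract does not describe how the Chaotic Poisoning Attack is constructed, so the following is only an illustrative sketch of the general idea of chaotic pseudo-noise: a logistic-map sequence added to complex IQ training samples at a power well below the signal power. All names and parameters are assumptions, not the paper's method.

```python
import numpy as np

def logistic_map_sequence(length, x0=0.37, r=3.99):
    """Deterministic, noise-like sequence from the logistic map
    x_{n+1} = r * x_n * (1 - x_n), shifted to zero mean."""
    x = np.empty(length)
    x[0] = x0
    for n in range(length - 1):
        x[n + 1] = r * x[n] * (1 - x[n])
    return x - x.mean()

def add_chaotic_perturbation(iq, snr_db=20.0):
    """Add weak chaotic pseudo-noise to a complex IQ vector, scaled so
    the perturbation stays `snr_db` dB below the signal power."""
    sig_power = np.mean(np.abs(iq) ** 2)
    noise = (logistic_map_sequence(len(iq), x0=0.37)
             + 1j * logistic_map_sequence(len(iq), x0=0.41))
    noise_power = np.mean(np.abs(noise) ** 2)
    gain = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return iq + gain * noise
```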

    Studying the Robustness of Anti-Adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors

    Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting spectrum sensing data falsification (SSDF) attacks. However, the amount of data needed to train models and the privacy concerns of such scenarios limit the applicability of centralized ML/DL. Federated learning (FL) addresses these drawbacks but is vulnerable to adversarial participants and attacks. The literature has proposed countermeasures, but more effort is required to evaluate the performance of FL in detecting SSDF attacks and its robustness against adversaries. Thus, the first contribution of this work is an FL-oriented dataset modeling the behavior of resource-constrained spectrum sensors affected by SSDF attacks. The second contribution is a pool of experiments analyzing the robustness of FL models according to i) three families of sensors, ii) eight SSDF attacks, iii) four FL scenarios dealing with anomaly detection and binary classification, iv) up to 33% of participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms. In conclusion, FL achieves promising performance when detecting SSDF attacks. Without anti-adversarial mechanisms, FL models are particularly vulnerable with >16% of adversaries. Coordinate-wise median is the best mitigation for anomaly detection, but binary classifiers are still affected with >33% of adversaries.
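
    A minimal sketch of the coordinate-wise-median aggregation highlighted in the conclusion, assuming client updates arrive as flat NumPy vectors; real FL frameworks apply it per layer or per tensor.

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Robust aggregation: take the median of every parameter coordinate
    across clients instead of the mean, bounding the influence that a
    minority of poisoned updates can have on the aggregate."""
    stacked = np.stack(client_updates, axis=0)   # (n_clients, n_params)
    return np.median(stacked, axis=0)
```

    Swapping this function in for the mean in the federated-averaging sketch above illustrates why a small fraction of extreme, poisoned updates stops shifting the aggregate.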

    Spectral Signatures in Backdoor Attacks

    A recent line of work has uncovered a new form of data poisoning: so-called \emph{backdoor} attacks. These attacks are particularly dangerous because they do not affect a network's behavior on typical, benign data. Rather, the network only deviates from its expected output when triggered by a perturbation planted by an adversary. In this paper, we identify a new property of all known backdoor attacks, which we call \emph{spectral signatures}. This property allows us to utilize tools from robust statistics to thwart the attacks. We demonstrate the efficacy of these signatures in detecting and removing poisoned examples on real image sets and state-of-the-art neural network architectures. We believe that understanding spectral signatures is a crucial first step towards designing ML systems secure against such backdoor attacks. Comment: 16 pages, accepted to NIPS 2018
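
    A minimal sketch of the spectral-signature recipe, assuming a NumPy matrix of penultimate-layer representations for the examples of one class: scores are squared projections onto the top singular vector of the centered matrix, and the highest-scoring examples are flagged for removal. This follows the paper's general outline, not the authors' reference implementation.

```python
import numpy as np

def spectral_signature_scores(reps):
    """Outlier scores for one class's feature representations.

    reps : (N, d) array of penultimate-layer activations.
    Returns the squared projection of each centered representation onto
    the top right singular vector of the centered matrix."""
    centered = reps - reps.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2

def flag_suspects(reps, eps=0.05):
    """Flag the 1.5 * eps fraction of examples with the highest scores,
    mirroring the removal step suggested by the paper (eps is an assumed
    upper bound on the poisoning rate)."""
    scores = spectral_signature_scores(reps)
    k = int(1.5 * eps * len(scores))
    return np.argsort(scores)[::-1][:k]
```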