When Attackers Meet AI: Learning-empowered Attacks in Cooperative Spectrum Sensing
Defense strategies have been well studied to combat Byzantine attacks that
aim to disrupt cooperative spectrum sensing by sending falsified versions of
spectrum sensing data to a fusion center. However, existing studies usually treat the network or the attackers as passive entities, e.g., they assume that prior knowledge of the attacks is available and fixed. In practice, attackers can actively adopt arbitrary behaviors and evade the pre-assumed patterns or assumptions used by
defense strategies. In this paper, we revisit this security vulnerability as an
adversarial machine learning problem and propose a novel learning-empowered
attack framework named Learning-Evaluation-Beating (LEB) to mislead the fusion
center. Exploiting the black-box nature of the fusion center in cooperative spectrum sensing, our new perspective is to make adversarial use of machine learning to construct a surrogate model of the fusion center's decision model.
We propose a generic algorithm to create malicious sensing data using this
surrogate model. Our real-world experiments show that the LEB attack effectively defeats a wide range of existing defense strategies, with a success ratio of up to 82%. Given the gap between the proposed LEB attack and existing defenses, we introduce a non-invasive method named influence-limiting defense, which can coexist with existing defenses to counter the LEB attack and similar attacks. We show that this defense is highly effective, reducing the overall disruption ratio of the LEB attack by up to 80%.
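The Learning-Evaluation-Beating loop described above can be sketched in a few lines. Everything below is illustrative rather than the paper's actual algorithm: the majority-vote fusion rule, the logistic-regression surrogate, and the set of attacker-controlled nodes are all assumptions. The sketch only captures the shape of the attack: probe the black-box fusion center, fit a surrogate to its decisions, then craft falsified reports that flip the decision the surrogate predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10        # number of sensing nodes (assumed)
THRESH = 0.5  # per-node energy threshold inside the black-box rule (assumed)

def fusion_center(reports):
    """Black-box decision rule, unknown to the attacker: majority vote
    over per-node threshold tests (a stand-in for real fusion rules)."""
    votes = (reports > THRESH).astype(int)
    return int(votes.sum() > N / 2)

# -- Learning: probe the fusion center and record its decisions --------
X = rng.uniform(0.0, 1.0, size=(500, N))
y = np.array([fusion_center(x) for x in X])

# -- Evaluation: fit a logistic-regression surrogate by gradient descent
w, b = np.zeros(N), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean()

def surrogate(x):
    return int(1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5)

# -- Beating: attacker-controlled nodes report values chosen to flip
#    the decision that the surrogate predicts for the honest reports --
malicious = [0, 1, 2]  # node indices controlled by the attacker (assumed)

def attack(honest):
    target = 1 - surrogate(honest)  # push toward the opposite decision
    crafted = honest.copy()
    crafted[malicious] = 1.0 if target == 1 else 0.0
    return crafted

trials = [rng.uniform(0.0, 1.0, N) for _ in range(200)]
flips = sum(fusion_center(attack(x)) != fusion_center(x) for x in trials)
print(f"disruption ratio: {flips / len(trials):.2f}")
```

Note that the attacker queries only its own surrogate when crafting reports; the real fusion center is consulted here only to measure how often the decision actually flipped.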
Deep reinforcement learning for attacking wireless sensor networks
Recent advances in Deep Reinforcement Learning allow solving increasingly complex problems. In this work, we show how current defense mechanisms in Wireless Sensor Networks are vulnerable to attacks that exploit these advances. We use a Deep Reinforcement Learning attacker architecture in which one or more attacking agents learn to attack using only partial observations. We then subject our architecture to a test bench consisting of two defense mechanisms, against a distributed spectrum sensing attack and a backoff attack. Our simulations show that our attacker learns to exploit these systems without a priori information about the defense mechanism used or its concrete parameters. Since our attacker requires minimal hyper-parameter tuning, scales with the number of attackers, and learns only by interacting with the defense mechanism, it poses a significant threat to current defense procedures.
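As a minimal sketch of this kind of learning-by-interaction attacker, the toy below replaces deep RL with tabular Q-learning against a hypothetical backoff-attack setting; the action set, detection probabilities, and rewards are invented for illustration and are not the paper's environment. The agent has no prior knowledge of the defense and improves purely from the rewards it observes.

```python
import random

random.seed(1)

# Hypothetical backoff game: each slot the attacker picks a backoff
# "aggressiveness" class 0..4. The defense flags overly aggressive
# stations with some probability; flagged traffic earns a penalty.
# All numbers below are invented for illustration.
DETECT_P = [0.0, 0.05, 0.3, 0.7, 0.9]  # detection probability per class
GAIN     = [0.1, 0.3, 0.5, 0.8, 1.0]   # channel share gained per class

def step(action):
    """One interaction with the (unknown) defense mechanism."""
    detected = random.random() < DETECT_P[action]
    return -1.0 if detected else GAIN[action]

# Tabular Q-learning, a tiny stand-in for the paper's deep RL agent:
Q = [0.0] * 5
EPS, ALPHA = 0.2, 0.1
for _ in range(5000):
    if random.random() < EPS:
        a = random.randrange(5)               # explore
    else:
        a = max(range(5), key=Q.__getitem__)  # exploit
    Q[a] += ALPHA * (step(a) - Q[a])          # one-step value update

best = max(range(5), key=Q.__getitem__)
print("learned backoff class:", best)
```

Even this tiny agent converges on a moderately aggressive class that gains throughput while staying mostly below the detection threshold, which is the qualitative behavior the paper reports for its deep RL attacker.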
Few-shot Multi-domain Knowledge Rearming for Context-aware Defence against Advanced Persistent Threats
Advanced persistent threats (APTs) have novel features such as multi-stage
penetration, highly tailored intentions, and evasive tactics. Defending against APTs requires fusing multi-dimensional cyber threat intelligence data to identify attack intentions and conducting efficient knowledge discovery by data-driven machine learning to recognize entity relationships. However,
data-driven machine learning lacks generalization ability on fresh or unknown
samples, reducing the accuracy and practicality of the defense model. Besides,
the private deployment of these APT defense models on heterogeneous
environments and various network devices requires significant investment in
context awareness (such as known attack entities, continuous network states,
and current security strategies). In this paper, we propose a few-shot
multi-domain knowledge rearming (FMKR) scheme for context-aware defense against
APTs. By completing multiple small tasks that are generated from different
network domains with meta-learning, the FMKR scheme first trains a model with good
discrimination and generalization ability for fresh and unknown APT attacks. In
each FMKR task, both threat intelligence and local entities are fused into the
support/query sets in meta-learning to identify possible attack stages.
Second, to rearm current security strategies, a fine-tuning-based deployment mechanism is proposed to transfer learned knowledge into the student model,
while minimizing the defense cost. Compared to multiple model replacement strategies, the FMKR scheme responds faster to attack behaviors while incurring a lower scheduling cost. Based on feedback from multiple real users of the Industrial Internet of Things (IIoT) collected over 2 months, we demonstrate that the proposed scheme improves the defense satisfaction rate.

Comment: It has been accepted by IEEE SmartNet.