FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems
Deep learning (DL) methods have been widely applied to anomaly-based network intrusion detection systems (NIDSs) to detect malicious traffic. To expand the usage scenarios of DL-based methods, the federated learning (FL) framework allows multiple users to train a global model while respecting individual data privacy. However, how robust FL-based NIDSs are against existing privacy attacks under existing defenses has not yet been systematically evaluated. To address this issue, we propose two privacy evaluation metrics designed for FL-based NIDSs: (1) a privacy score that evaluates the similarity between the original and the recovered traffic features using reconstruction attacks, and (2) an evasion rate against NIDSs using a Generative Adversarial Network-based adversarial attack with the reconstructed benign traffic. Our experiments show that existing defenses provide little protection, and the resulting adversarial traffic can even evade the state-of-the-art NIDS Kitsune. To defend against such attacks and build a more robust FL-based
NIDS, we further propose FedDef, a novel optimization-based input perturbation defense strategy with a theoretical guarantee. It achieves both high utility, by minimizing the gradient distance, and strong privacy protection, by maximizing the input distance. We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection, with a privacy score up to 7 times higher, while keeping the model accuracy loss within 3% under the optimal parameter combination.
Comment: 14 pages, 9 figures, submitted to TIF
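The defense is only described at a high level above. As a rough sketch of the general idea, one could perturb each training batch so that its gradient stays close to the gradient of the true batch (preserving utility) while the inputs themselves are pushed away from the originals (protecting privacy). The function name, loss weights, optimizer, and step counts below are illustrative assumptions, not the authors' implementation:

```python
# Rough sketch (not the authors' code) of an optimization-based input
# perturbation: keep the gradient of the perturbed batch close to the
# gradient of the true batch (utility) while pushing the perturbed inputs
# away from the originals (privacy).
import torch
import torch.nn as nn

def perturb_inputs(model, x, y, alpha=1.0, beta=0.1, steps=50, lr=0.01):
    criterion = nn.CrossEntropyLoss()

    # Reference gradient on the true inputs (what honest training would share).
    ref_grads = torch.autograd.grad(criterion(model(x), y), model.parameters())
    ref_grads = [g.detach() for g in ref_grads]

    x_adv = (x + 0.01 * torch.randn_like(x)).detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)

    for _ in range(steps):
        grads = torch.autograd.grad(criterion(model(x_adv), y),
                                    model.parameters(), create_graph=True)
        grad_dist = sum((g - r).pow(2).sum() for g, r in zip(grads, ref_grads))
        input_dist = (x_adv - x).pow(2).mean()
        # Minimize gradient distance, maximize input distance.
        objective = alpha * grad_dist - beta * input_dist
        opt.zero_grad()
        objective.backward()
        opt.step()

    return x_adv.detach()
```

Under these assumptions, a client would then compute its FL update on the perturbed batch instead of the real one, so a gradient-inversion attacker reconstructs the perturbed inputs rather than the real traffic features.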
A Multi-Class Intrusion Detection System Based on Continual Learning
With the proliferation of smart devices, network security has become crucial to protect systems and data. In order to identify and categorise different network threats, this study introduces a flow-based Network Intrusion Detection System (NIDS) based on continual learning with a CNN backbone. Using the LYCOS-IDS2017 dataset, the study explores several continual learning techniques for identifying threats, including denial-of-service and SQL injection. Unlike previous approaches, this work treats intrusion detection as a multi-class classification problem rather than anomaly detection. The findings show that continually learning models can identify network intrusions with high recall and accuracy while generating few false alarms. This study contributes to the development of an adaptive NIDS that can handle attack classification simultaneously with detection, and that can be trained online without periodic offline retraining. Additionally, utilising the improved version of the dataset adds value to the research on LYCOS-IDS2017 by presenting results for previously untested models.
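As a minimal illustration of the multi-class, flow-based setup described above, the sketch below trains a small 1-D CNN over flow features and updates it online, batch by batch. The feature and class counts, architecture, and hyperparameters are assumptions for illustration rather than the paper's configuration, and no specific continual learning technique (e.g. replay or regularization) is applied here:

```python
# Minimal sketch (not the paper's code) of a 1-D CNN flow classifier that is
# updated online, batch by batch, as new labelled flows arrive.
import torch
import torch.nn as nn

NUM_FEATURES = 78   # assumed flow-feature count; depends on the dataset
NUM_CLASSES = 8     # assumed number of attack classes plus benign

class FlowCNN(nn.Module):
    def __init__(self, n_features=NUM_FEATURES, n_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                # x: (batch, n_features)
        return self.net(x.unsqueeze(1))  # add channel dim -> (batch, 1, n_features)

model = FlowCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def update_on_batch(features, labels):
    """One online update step on a freshly observed batch of flows."""
    logits = model(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```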
Enhancing Cyber-Resiliency of DER-based SmartGrid: A Survey
The rapid development of information and communications technology has
enabled the use of digital-controlled and software-driven distributed energy
resources (DERs) to improve the flexibility and efficiency of power supply, and
support grid operations. However, this evolution also exposes
geographically-dispersed DERs to cyber threats, including hardware and software
vulnerabilities, communication issues, and personnel errors. Therefore,
enhancing the cyber-resiliency of DER-based smart grid - the ability to survive
successful cyber intrusions - is becoming increasingly vital and has garnered
significant attention from both industry and academia. In this survey, we aim
to provide a systematic and comprehensive review of the
cyber-resiliency enhancement (CRE) of DER-based smart grid. Firstly, an
integrated threat modeling method is tailored for the hierarchical DER-based
smart grid with special emphasis on vulnerability identification and impact
analysis. Then, the defense-in-depth strategies encompassing prevention,
detection, mitigation, and recovery are comprehensively surveyed,
systematically classified, and rigorously compared. A CRE framework is
subsequently proposed to incorporate the five key resiliency enablers. Finally,
challenges and future directions are discussed in detail. The overall aim of this survey is to demonstrate the development trend of CRE methods and motivate further efforts to improve the cyber-resiliency of the DER-based smart grid.
Comment: Submitted to IEEE Transactions on Smart Grid for Publication Consideration
Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems
The incremental diffusion of machine learning algorithms in supporting
cybersecurity is creating novel defensive opportunities but also new types of
risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks that create tiny perturbations aimed at decreasing the effectiveness of detecting threats. We observe that the existing literature assumes threat models that are inappropriate for realistic cybersecurity scenarios, because they consider opponents who have complete knowledge of the cyber detector or who can freely interact with the target systems.
By focusing on Network Intrusion Detection Systems based on machine learning,
we identify and model the real capabilities and circumstances required by
attackers to carry out feasible and successful adversarial attacks. We then
apply our model to several adversarial attacks proposed in the literature and highlight the limits and merits that determine whether they can result in actual adversarial attacks. The contributions of this paper can help harden defensive systems by letting cyber defenders address the most critical and real issues, and can benefit researchers by allowing them to devise novel forms of adversarial attacks based on realistic threat models.
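As a loose illustration of making such threat-model assumptions explicit, one could encode the attacker's knowledge and access as a simple structure and reason over it. The dimensions and the realism check below are hypothetical and only illustrative, not the paper's taxonomy:

```python
# Hypothetical encoding of threat-model assumptions for an attack against an
# ML-based NIDS; the dimensions and names are illustrative, not the paper's.
from dataclasses import dataclass

@dataclass
class ThreatModel:
    knows_model_architecture: bool   # white-box vs. black-box knowledge
    knows_training_data: bool        # access to (a surrogate of) the training set
    knows_feature_extractor: bool    # can the attacker compute the same features?
    can_query_detector: bool         # free interaction with the deployed detector
    can_modify_live_traffic: bool    # perturbations must remain valid network flows

def is_realistic(tm: ThreatModel) -> bool:
    """Illustrative rule: realistic attackers have neither full white-box
    knowledge nor unlimited interaction with the deployed detector."""
    return not tm.knows_model_architecture and not tm.can_query_detector

# Example: a black-box attacker who can only craft traffic on the wire.
example = ThreatModel(False, False, True, False, True)
print(is_realistic(example))  # True
```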
The Threat of Offensive AI to Organizations
AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI to enhance their attacks and expand their campaigns.
Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today and what will be their impact on the future?
In this study, we explore the threat of offensive AI to organizations. First, we present the background and discuss how AI changes the adversary’s methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 32 offensive AI capabilities that adversaries can use to enhance their attacks. Finally, through a panel survey spanning industry, government, and academia, we rank the AI threats and provide insights on the adversaries.
A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
The generation of feasible adversarial examples is necessary for properly assessing models that work in a constrained feature space. However, it remains a challenging task to incorporate such constraints into attacks that were designed for computer vision. We propose a unified framework to generate feasible
adversarial examples that satisfy given domain constraints. Our framework can
handle both linear and non-linear constraints. We instantiate our framework
into two algorithms: a gradient-based attack that introduces constraints into the loss function it maximizes, and a multi-objective search algorithm that aims for misclassification, perturbation minimization, and constraint satisfaction. We
show that our approach is effective in four different domains, with a success
rate of up to 100%, where state-of-the-art attacks fail to generate a single
feasible example. In addition to adversarial retraining, we propose to
introduce engineered non-convex constraints to improve model adversarial
robustness. We demonstrate that this new defense is as effective as adversarial
retraining. Our framework forms the starting point for research on constrained
adversarial attacks and provides relevant baselines and datasets that future
research can exploit.
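As a generic sketch of the gradient-based variant, domain constraints can enter as differentiable penalty terms in the objective the attack optimizes, so the search is steered toward adversarial examples that remain feasible. The penalty form, perturbation budget, and hyperparameters below are illustrative assumptions rather than the authors' exact formulation:

```python
# Illustrative sketch (not the authors' code) of a gradient-based attack whose
# objective penalizes violated domain constraints, steering the search toward
# adversarial examples that stay feasible.
import torch
import torch.nn as nn

def constrained_attack(model, x, y, constraint_fns, eps=0.1, steps=100,
                       lr=0.01, penalty_weight=10.0):
    """constraint_fns: callables c(x) whose outputs should be <= 0 when satisfied."""
    criterion = nn.CrossEntropyLoss()
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)

    for _ in range(steps):
        # Maximize classification loss, minimize constraint violations.
        attack_loss = -criterion(model(x_adv), y)
        violation = sum(torch.relu(c(x_adv)).sum() for c in constraint_fns)
        objective = attack_loss + penalty_weight * violation
        opt.zero_grad()
        objective.backward()
        opt.step()
        # Keep the perturbation within an L-infinity budget of the original.
        with torch.no_grad():
            delta = (x_adv - x).clamp_(-eps, eps)
            x_adv.copy_(x + delta)

    return x_adv.detach()
```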