Towards an Uncertainty-Aware Adaptive Decision Engine for Self-Protecting Software: a POMDP-based Approach
The threats posed by evolving cyberattacks have led to increased research
related to software systems that can self-protect. One topic in this domain is
Moving Target Defense (MTD), which changes software characteristics in the
protected system to make it harder for attackers to exploit vulnerabilities.
However, MTD implementation and deployment are often impacted by run-time
uncertainties, and existing MTD decision-making solutions have neglected
uncertainty in model parameters and lack self-adaptation. This paper aims to
address this gap by proposing an approach for an uncertainty-aware and
self-adaptive MTD decision engine based on Partially Observable Markov Decision
Process and Bayesian Learning techniques. The proposed approach considers
uncertainty in both state and model parameters; thus, it has the potential to
better capture environmental variability and improve defense strategies. A
preliminary study is presented to highlight the potential effectiveness and
challenges of the proposed approach.
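The belief-state machinery such a POMDP-based engine rests on can be illustrated with a minimal sketch. Everything below — the two hidden attacker states, the "shuffle" MTD action, and all probabilities — is invented for illustration and is not the paper's model:

```python
# Minimal sketch (not the paper's engine): a discrete Bayesian belief update
# over hidden attacker states, the core step of a POMDP-based decision loop.
# State names, actions, and all probabilities are illustrative assumptions.

def belief_update(belief, action, observation, T, O):
    """One POMDP belief update: b'(s') ∝ O[s'][a][o] * Σ_s T[s][a][s'] * b(s)."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        predicted = sum(T[s][action][s2] * belief[s] for s in states)
        new_belief[s2] = O[s2][action][observation] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Two hidden attacker states: "probing" vs. "exploiting".
T = {  # T[s][a][s']: transition model under the defender's MTD action
    "probing":    {"shuffle": {"probing": 0.9, "exploiting": 0.1}},
    "exploiting": {"shuffle": {"probing": 0.6, "exploiting": 0.4}},
}
O = {  # O[s'][a][o]: observation model (IDS alert vs. quiet)
    "probing":    {"shuffle": {"alert": 0.2, "quiet": 0.8}},
    "exploiting": {"shuffle": {"alert": 0.7, "quiet": 0.3}},
}

b = {"probing": 0.5, "exploiting": 0.5}
b = belief_update(b, "shuffle", "alert", T, O)  # alert shifts mass to "exploiting"
```

Bayesian learning of the model parameters themselves (the entries of T and O) would sit on top of this loop; here they are fixed for brevity.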
QFlip: An Adaptive Reinforcement Learning Strategy for the FlipIt Security Game
A rise in Advanced Persistent Threats (APTs) has introduced a need for
robustness against long-running, stealthy attacks which circumvent existing
cryptographic security guarantees. FlipIt is a security game that models
attacker-defender interactions in advanced scenarios such as APTs. Previous
work extensively analyzed non-adaptive strategies in FlipIt, but adaptive
strategies arise naturally in practical interactions as players receive feedback
during the game. We model the FlipIt game as a Markov Decision Process and
introduce QFlip, an adaptive strategy for FlipIt based on temporal difference
reinforcement learning. We prove theoretical results on the convergence of our
new strategy against an opponent playing with a Periodic strategy. We confirm
our analysis experimentally by extensive evaluation of QFlip against specific
opponents. QFlip converges to the optimal adaptive strategy for Periodic and
Exponential opponents using associated state spaces. Finally, we introduce a
generalized QFlip strategy with composite state space that outperforms a Greedy
strategy for several distributions including Periodic and Uniform, without
prior knowledge of the opponent's strategy. We also release an OpenAI Gym
environment for FlipIt to facilitate future research.
Comment: Outstanding Student Paper award.
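The temporal-difference approach behind QFlip can be sketched on a toy FlipIt variant. The game below is a deliberate simplification — a periodic attacker, an observable phase, and invented costs — not the released QFlip implementation or Gym environment:

```python
import random
from collections import defaultdict

# Toy Q-learning sketch in the spirit of QFlip. A periodic attacker flips
# every PERIOD ticks; the defender earns 1 per tick of control, pays
# FLIP_COST per flip, and learns from (phase, in_control) states when a
# flip is worth it. All parameters are illustrative assumptions.
random.seed(0)
PERIOD, FLIP_COST, HORIZON = 5, 0.4, 60
ACTIONS = ("wait", "flip")
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def run_episode(eps=0.2, alpha=0.2, gamma=0.9):
    in_control = True
    for t in range(HORIZON):
        phase = t % PERIOD
        if phase == 0:
            in_control = False                  # attacker's periodic flip
        state = (phase, in_control)
        if random.random() < eps:               # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(state, x)])
        if a == "flip":
            in_control = True
        reward = (1.0 if in_control else 0.0) - (FLIP_COST if a == "flip" else 0.0)
        nxt = ((t + 1) % PERIOD, in_control and (t + 1) % PERIOD != 0)
        target = reward + gamma * max(Q[(nxt, x)] for x in ACTIONS)
        Q[(state, a)] += alpha * (target - Q[(state, a)])  # TD update

for _ in range(2000):
    run_episode()

policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)])
          for s in {(p, c) for p in range(PERIOD) for c in (False, True)}}
```

With these parameters the learned greedy policy flips immediately after the attacker's move and waits while in control — the shape of the optimal strategy against a Periodic opponent.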
Research Priorities for Robust and Beneficial Artificial Intelligence
Success in the quest for artificial intelligence has the potential to bring
unprecedented benefits to humanity, and it is therefore worthwhile to
investigate how to maximize these benefits while avoiding potential pitfalls.
This article gives numerous examples (which should by no means be construed as
an exhaustive list) of such worthwhile research aimed at ensuring that AI
remains robust and beneficial.
Comment: This article gives examples of the type of research advocated by the
open letter for robust & beneficial AI at
http://futureoflife.org/ai-open-lette
Game Theory in Distributed Systems Security: Foundations, Challenges, and Future Directions
Many of our critical infrastructure systems and personal computing systems
are structured as distributed computing systems. The incentives to attack them
have been growing rapidly, as has their attack surface, due to increasing
levels of connectedness. We therefore feel it is time to bring in rigorous reasoning
to secure such systems. The distributed system security and the game theory
technical communities can come together to effectively address this challenge.
In this article, we lay out the foundations from each that we can build upon to
achieve our goals. Next, we describe a set of research challenges for the
community, organized into three categories -- analytical, systems, and
integration challenges, each with "short term" (2-3 years) and "long term"
(5-10 years) items. This article was conceived through a
community discussion at the 2022 NSF SaTC PI meeting.
Comment: 11 pages in IEEE Computer Society magazine format, including
references and author bios. There is 1 figure.
Intrusion Prevention through Optimal Stopping
We study automated intrusion prevention using reinforcement learning.
Following a novel approach, we formulate the problem of intrusion prevention as
an (optimal) multiple stopping problem. This formulation gives us insight into
the structure of optimal policies, which we show to have threshold properties.
For most practical cases, it is not feasible to obtain an optimal defender
policy using dynamic programming. We therefore develop a reinforcement learning
approach to approximate an optimal threshold policy. We introduce T-SPSA, an
efficient reinforcement learning algorithm that learns threshold policies
through stochastic approximation. We show that T-SPSA outperforms
state-of-the-art algorithms for our use case. Our overall method for learning
and validating policies includes two systems: a simulation system where
defender policies are incrementally learned and an emulation system where
statistics are produced that drive simulation runs and where learned policies
are evaluated. We show that this approach can produce effective defender
policies for a practical IT infrastructure.
Comment: Preprint; submitted to IEEE for review. Major revision 1/4 2022.
arXiv admin note: substantial text overlap with arXiv:2106.0716
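The stochastic-approximation idea behind T-SPSA can be shown on a one-dimensional toy problem. The synthetic objective below stands in for the episode-estimated defender reward; the single scalar threshold and all gain constants are illustrative assumptions, not the paper's multi-threshold setup:

```python
import random

# Toy SPSA (simultaneous perturbation stochastic approximation) sketch: the
# gradient-free update that threshold-policy search such as T-SPSA builds on.
# J is a synthetic noisy objective with its maximum at theta = 3; in the
# paper, J would be estimated from simulated intrusion-prevention episodes.
random.seed(1)

def J(theta):
    """Noisy evaluation of the (unknown) objective, maximized at theta = 3."""
    return -(theta - 3.0) ** 2 + random.gauss(0, 0.1)

theta = 0.0  # initial threshold guess
for k in range(1, 501):
    a_k = 0.5 / k ** 0.602            # step-size gain (standard SPSA exponent)
    c_k = 0.2 / k ** 0.101            # perturbation gain
    delta = random.choice([-1.0, 1.0])  # simultaneous ±1 perturbation
    # Two noisy evaluations give a gradient estimate in one "direction":
    grad = (J(theta + c_k * delta) - J(theta - c_k * delta)) / (2 * c_k * delta)
    theta += a_k * grad               # ascent step toward higher J
```

After 500 iterations theta settles near the optimum at 3, despite J being available only through noisy evaluations — the property that makes SPSA attractive when the objective must be simulated.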
Testing Deception with a Commercial Tool Simulating Cyberspace
Deception methods have been applied in the traditional domains of war (air, land, sea, and space). In the newest domain, cyber, deception can be studied to see how it can best be used. Cyberspace operations are an essential warfighting domain within the Department of Defense (DOD). Many training exercises and courses have been developed to aid leadership with planning and executing cyberspace effects that support operations. However, only a few simulations train cyber operators in how to respond to cyberspace threats. This work tested a commercial product from Soar Technologies (Soar Tech) that simulates conflict in cyberspace. The Cyberspace Course of Action Tool (CCAT) is a decision-support tool that evaluates defensive deception in a wargame simulating a local-area network being attacked. Results showed that defensive deception methods of decoys and bait could be effective in cyberspace. This could help military cyber defenses, since their digital infrastructure is threatened daily with cyberattacks.
Marine Forces Cyberspace Command. Chief Petty Officer, United States Navy. Approved for public release. Distribution is unlimited.
Network Intrusion Detection System: A Systematic Study of Machine Learning and Deep Learning Approaches
The rapid advances in the internet and communication fields have resulted in a huge increase in the network size and the corresponding data. As a result, many novel attacks are being generated and have posed challenges for network security to accurately detect intrusions. Furthermore, the presence of intruders with the aim to launch various attacks within the network cannot be ignored. An intrusion detection system (IDS) is one such tool that protects the network from possible intrusions by inspecting the network traffic, to ensure its confidentiality, integrity, and availability. Despite enormous efforts by researchers, IDS still faces challenges in improving detection accuracy while reducing false alarm rates and in detecting novel intrusions. Recently, machine learning (ML) and deep learning (DL)-based IDS systems are being deployed as potential solutions to detect intrusions across the network in an efficient manner. This article first clarifies the concept of IDS and then provides a taxonomy based on the notable ML and DL techniques adopted in designing network-based IDS (NIDS) systems. A comprehensive review of recent NIDS-based articles is provided by discussing the strengths and limitations of the proposed solutions. Then, recent trends and advancements of ML- and DL-based NIDS are presented in terms of the proposed methodology, evaluation metrics, and dataset selection. Using the shortcomings of the proposed methods, we highlight various research challenges and provide the future scope for research in improving ML- and DL-based NIDS.
Federated learning for distributed intrusion detection systems in public networks
The rapid integration of technologies such as IoT devices, cloud, and edge computing has led to a progressively interconnected network of intelligent environments, services, and public infrastructures. This evolution highlights the critical need for sophisticated and self-governing Intrusion Detection Systems (IDS) to enhance trust and ensure the security and integrity of these interconnected environments. Furthermore, the advancement of AI-based Intrusion Detection Systems hinges on the effective utilization of high-quality data for model training. A considerable number of datasets created in controlled lab environments have recently been released, which has significantly facilitated researchers in developing and evaluating resilient Machine Learning models. However, a substantial portion of the architectures and datasets available are now considered outdated. As a result, the principal aim of this thesis is to contribute to the enhancement of knowledge concerning the creation of contemporary testbed architectures specifically designed for defense systems. The main objective of this study is to propose an innovative testbed infrastructure design, capitalizing on the broad connectivity of the panOULU public network, to facilitate the analysis and evaluation of AI-based security applications within a public network setting. The testbed incorporates a variety of distributed computing paradigms, including edge, fog, and cloud computing. It simplifies the adoption of technologies like Software-Defined Networking, Network Function Virtualization, and Service Orchestration by leveraging the capabilities of the VMware vSphere platform. In the learning phase, a custom-developed application uses information from the attackers to automatically classify incoming data as either normal or malicious. This labeled data is then used for training machine learning models within a federated learning framework (FED-ML).
The trained models are validated using previously unseen network data (test data). The entire procedure, from collecting network traffic to labeling data, and from training models within the federated architecture, operates autonomously, removing the necessity for human involvement. The development and implementation of FED-ML models in this thesis may contribute towards laying the groundwork for future-forward, AI-oriented cybersecurity measures. The dataset and testbed configuration showcased in this research could improve our understanding of the challenges associated with safeguarding public networks, especially those with heterogeneous environments comprising various technologies.
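The federated-averaging step at the heart of such a FED-ML pipeline can be sketched in a few lines. The model (a single least-squares weight), the clients, and their data below are all invented for illustration; the thesis's pipeline, models, and traffic data are far richer:

```python
# Minimal FedAvg sketch (illustrative, not the thesis's FED-ML system).
# Each "client" fits a tiny model on its local data; the server averages
# the resulting weights, weighted by each client's sample count, so raw
# traffic never leaves the client.

def local_step(w, data, lr=0.1, epochs=50):
    """Least-squares gradient descent for y ≈ w*x on one client's data."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets):
    """One federated round: local training, then sample-weighted averaging."""
    updates = [(local_step(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

clients = [                                   # synthetic local datasets, y ≈ 2x
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.0, 1.9), (3.0, 6.2), (2.0, 4.0)],
]
w = 0.0
for rnd in range(5):                          # five federated rounds
    w = fed_avg(w, clients)
```

The same round structure carries over unchanged when the scalar weight is replaced by the parameter vector of an intrusion-detection model.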
A Machine Learning Approach for Intrusion Detection
Master's thesis in Information and Communication Technology (IKT590).
Securing networks and their confidentiality from intrusions is crucial, and for this reason, Intrusion Detection Systems have to be employed. The main goal of this thesis is to achieve proper detection performance of a Network Intrusion Detection System (NIDS). In this thesis, we have examined the detection efficiency of machine learning algorithms such as Neural Network, Convolutional Neural Network, Random Forest, and Long Short-Term Memory. We have constructed our models so that they can detect different types of attacks utilizing the CICIDS2017 dataset. We have worked on identifying the 15 various attacks present in CICIDS2017, instead of merely identifying normal-abnormal traffic. We have also discussed why precisely this dataset is used, and why one should classify by attack to enhance detection. Previous works based on benchmark datasets such as NSL-KDD and KDD99 are discussed, along with how to address and solve their issues. The thesis also shows how the results are affected by using different machine learning algorithms. As the research demonstrates, the Neural Network, Convolutional Neural Network, Random Forest, and Long Short-Term Memory are evaluated by conducting cross-validation; the average score across five folds of each model is 92.30%, 87.73%, 94.42%, and 87.94%, respectively. Nevertheless, the confusion matrix was also a crucial measurement for evaluating the models. Keywords: Information security, NIDS, Machine Learning, Neural Network, Convolutional Neural Network, Random Forest, Long Short-Term Memory, CICIDS2017