18,839 research outputs found

    Cyber Deception for Critical Infrastructure Resiliency

    The high connectivity of modern cyber networks and devices has brought many improvements to the functionality and efficiency of networked systems. Unfortunately, these benefits have come with many new entry points for attackers, making systems much more vulnerable to intrusion. It is therefore critically important to protect cyber infrastructure against cyber attacks. The static nature of cyber infrastructure allows adversaries to perform reconnaissance and identify potential attack vectors. Threats related to software vulnerabilities can be mitigated by discovering the vulnerability, then developing and releasing a patch that removes it. Unfortunately, the period between discovering a vulnerability and applying a patch is long, often lasting five months or more, and these delays pose significant risks to organizations whose networks must remain operational. This concern necessitates an active defense system capable of thwarting cyber reconnaissance missions and slowing the attacker's progression through the network. This research therefore investigates how to develop an efficient defense system that addresses these challenges. First, we propose a framework showing how the defender can use a network of decoys alongside the real network to introduce mistrust. A further problem remains, however: in a resource-constrained system, should the defender save resources or spend more of them (i.e., deploy more decoys)? We developed a Dynamic Deception System (DDS) that assesses attacker types based on the attacker's knowledge, aggression, and stealthiness to decide whether the defender should spend or save resources. In our DDS, we leverage Software Defined Networking (SDN) to differentiate malicious traffic from benign traffic, deter the cyber reconnaissance mission, and redirect malicious traffic to the deception server.
Experiments conducted on a prototype implementation of our DDS confirmed that the defender could decide whether to spend or save resources based on the attacker type, and that the system thwarted the cyber reconnaissance mission. Next, we addressed the challenge of efficiently placing network decoys by predicting the most likely attack path in Multi-Stage Attacks (MSAs). MSAs are cyber security threats in which the attack campaign is carried out in several stages, with adversarial lateral movement being one of the critical stages: adversaries can move laterally through the network without raising an alert. To prevent lateral movement, we proposed an approach that combines reactive (graph analysis) and proactive (cyber deception) defenses, realized in two phases. The first phase predicts the most likely attack path from Intrusion Detection System (IDS) alerts and network traces, using the transition probabilities of a Hidden Markov Model. The second phase uses the predicted path to determine the optimal placement of decoy nodes. Evaluation results show that our approach can predict the most likely attack paths and thwart adversarial lateral movement.
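The path-prediction phase described in this abstract can be sketched as Viterbi decoding over HMM transition probabilities. The hosts, transition/emission probabilities, and alert names below are invented toy values for illustration, not the paper's actual data or algorithm details.

```python
# Toy sketch: most likely attack path from IDS alerts, via Viterbi decoding
# over hypothetical HMM transition and emission probabilities.

def most_likely_path(states, trans, emit, alerts, start):
    """Return the state sequence that best explains the observed alerts."""
    # prob[s]: probability of the best path ending in s; path[s]: that path
    prob = {s: start[s] * emit[s][alerts[0]] for s in states}
    path = {s: [s] for s in states}
    for alert in alerts[1:]:
        new_prob, new_path = {}, {}
        for s in states:
            best_prev = max(states, key=lambda prev: prob[prev] * trans[prev][s])
            new_prob[s] = prob[best_prev] * trans[best_prev][s] * emit[s][alert]
            new_path[s] = path[best_prev] + [s]
        prob, path = new_prob, new_path
    return path[max(states, key=lambda s: prob[s])]

# Hypothetical three-host network: web server, database, admin host
states = ["web", "db", "admin"]
trans = {"web":   {"web": 0.2, "db": 0.6, "admin": 0.2},
         "db":    {"web": 0.1, "db": 0.3, "admin": 0.6},
         "admin": {"web": 0.1, "db": 0.2, "admin": 0.7}}
emit = {"web":   {"scan": 0.7, "priv_esc": 0.3},
        "db":    {"scan": 0.4, "priv_esc": 0.6},
        "admin": {"scan": 0.2, "priv_esc": 0.8}}
start = {"web": 0.8, "db": 0.1, "admin": 0.1}

print(most_likely_path(states, trans, emit, ["scan", "priv_esc", "priv_esc"], start))
# → ['web', 'db', 'admin']
```

Decoys would then be placed along the returned path; the real system would learn the probabilities from IDS alerts and network traces rather than hard-coding them.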

    Attacker Capability Based Dynamic Deception Model for Large-Scale Networks

    Modern cyber networks need continuous monitoring to remain secure and available to legitimate users. Cyber attackers use reconnaissance missions to collect critical network information and then use that information to build an advanced cyber-attack plan. To thwart the reconnaissance mission and the ensuing attack plan, the cyber defender needs a state-of-the-art defense strategy. In this paper, we model a Dynamic Deception System (DDS) that not only thwarts the reconnaissance mission but also steers the attacker towards a fake network to achieve a fake goal state. Our model also captures the attacker's capability using a belief matrix, a joint probability distribution over the security states and attacker types. Experiments conducted on a prototype implementation of our DDS confirm that the defender can decide whether to spend more resources or save them based on attacker type, and can thwart the reconnaissance mission.
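The belief matrix described here — a joint distribution over security states and attacker types — can be illustrated with a simple Bayesian update. All states, attacker types, and likelihood values below are assumptions chosen for the sketch, not the paper's actual model.

```python
# Minimal sketch of a belief matrix over (security state, attacker type),
# updated by Bayes' rule after an observed attacker action.

security_states = ["recon", "exploit"]
attacker_types = ["stealthy", "aggressive"]

# belief[state][type]: uniform prior over the joint space
belief = {s: {t: 0.25 for t in attacker_types} for s in security_states}

# Hypothetical likelihood of observing a burst of scans given (state, type)
likelihood = {("recon", "stealthy"): 0.2, ("recon", "aggressive"): 0.9,
              ("exploit", "stealthy"): 0.1, ("exploit", "aggressive"): 0.4}

def update(belief, likelihood):
    """Bayes' rule: posterior = likelihood * prior, normalised over the joint."""
    post = {s: {t: likelihood[(s, t)] * belief[s][t] for t in belief[s]}
            for s in belief}
    z = sum(v for row in post.values() for v in row.values())
    return {s: {t: v / z for t, v in row.items()} for s, row in post.items()}

belief = update(belief, likelihood)
# Mass now concentrates on ("recon", "aggressive"): 0.225 / 0.4 = 0.5625
print(belief["recon"]["aggressive"])
```

Conditioning this posterior on attacker type is what lets the defender choose between spending resources (more decoys) and saving them.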

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the last applies to exa-scale data dimensions. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
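The sample-starved regime the abstract describes can be demonstrated with a toy simulation: with n fixed and p large, even fully independent variables produce many large spurious sample correlations. The choices n = 4, p = 200, and the 0.9 threshold are illustrative assumptions, not values from the paper.

```python
# Toy illustration of the fixed-n, growing-p regime in correlation mining:
# independent Gaussian variables still yield many large sample correlations.
import math
import random

random.seed(0)
n, p = 4, 200  # n samples per variable, p variables, p >> n
data = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]

def corr(x, y):
    """Plain Pearson sample correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

# Count variable pairs whose sample correlation exceeds 0.9 in magnitude,
# even though every variable was generated independently of all others.
spurious = sum(1 for i in range(p) for j in range(i + 1, p)
               if abs(corr(data[i], data[j])) > 0.9)
print(f"{spurious} of {p * (p - 1) // 2} pairs look strongly correlated")
```

For n = 4 the sample correlation of independent Gaussians is in fact uniform on [-1, 1], so roughly 10% of the ~20,000 pairs clear the 0.9 threshold by chance — exactly the false-discovery pressure that sample-complexity analysis in this regime must control.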

    SiMAMT: A Framework for Strategy-Based Multi-Agent Multi-Team Systems

    Multi-agent multi-team systems are commonly seen in environments where hierarchical layers of goals are at play, for example in theater-wide combat scenarios, where multiple levels of command and control are required to carry goals from the general down to the foot soldier. Similar structures appear in game environments, where agents work together as teams to compete with other teams. The different agents within the same team must, while maintaining their own ‘personality’, work together and coordinate with each other to achieve a common team goal. This research develops strategy-based multi-agent multi-team systems, where strategy is framed as an instrument at the team level for coordinating the multiple agents of a team in a cohesive way. A formal specification of strategy and of strategy-based multi-agent multi-team systems is provided, and a framework called SiMAMT (strategy-based multi-agent multi-team systems) is developed. The components of the framework, including strategy simulation, strategy inference, strategy evaluation, and strategy selection, are described. A graph-matching approximation algorithm is also developed to support effective and efficient strategy inference. Examples and experimental results are given throughout to illustrate the proposed framework, including each of its composite elements and its overall efficacy. This research makes several contributions to the field of multi-agent multi-team systems: a specification for strategy and strategy-based systems, and a framework for implementing them in real-world, interactive-time scenarios; a robust simulation space for such complex and intricate interaction; an approximation algorithm that allows for strategy inference within these systems in interactive time; and experimental results that verify the various sub-elements, along with a full-scale integration experiment showing the efficacy of the proposed framework.
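The strategy-inference idea — matching an observed action graph against a library of candidate strategy graphs — can be sketched with a crude edge-overlap score. This is a stand-in for the paper's graph-matching approximation algorithm, and the strategy names and graphs below are invented examples.

```python
# Hedged sketch of strategy inference by approximate graph matching: each
# candidate strategy is a small directed graph of actions (a set of edges),
# and observed behaviour is scored against each by edge overlap.

def edge_overlap(observed, strategy):
    """Fraction of the strategy's edges also present in the observed graph."""
    return len(observed & strategy) / len(strategy)

def infer_strategy(observed, library):
    """Return the library strategy whose graph best matches the observations."""
    return max(library, key=lambda name: edge_overlap(observed, library[name]))

# Hypothetical strategy library: directed edges between team actions
library = {
    "flank": {("advance", "split"), ("split", "attack_left"),
              ("split", "attack_right")},
    "rush":  {("advance", "attack_center"), ("attack_center", "regroup")},
}
# Partial observation of the opposing team's actions so far
observed = {("advance", "split"), ("split", "attack_left")}

print(infer_strategy(observed, library))  # → flank
```

Exact graph matching is expensive in general, which is why an approximation scoring scheme like this (run in interactive time as observations accumulate) is the kind of trade-off the framework's inference component targets.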