2,597 research outputs found

    Issues in Defense Economics

    Get PDF

    Cyber security research frameworks for coevolutionary network defense

    Get PDF
    Cyber security is increasingly a challenge for organizations everywhere. Defense systems that require less expert knowledge and can adapt quickly to threats are urgently needed to combat the rise of cyber attacks. Computational intelligence techniques can rapidly explore potential solutions while searching in a way that is unaffected by human bias. Several architectures have been created for developing and testing systems used in network security, but most are meant to provide a platform for running cyber security experiments rather than automating experiment processes. In the first paper, we propose a framework termed Distributed Cyber Security Automation Framework for Experiments (DCAFE) that enables experiment automation and control in a distributed environment. Predictive analysis of adversaries is another thorny issue in cyber security. Game theory can be used to mathematically analyze adversary models, but its scalability limitations restrict its use; computational game theory allows us to scale classical game theory to larger, more complex systems. In the second paper, we propose a framework termed Coevolutionary Agent-based Network Defense Lightweight Event System (CANDLES) that can coevolve attacker and defender agent strategies and capabilities and evaluate potential solutions with a custom network defense simulation. The third paper is a continuation of the CANDLES project in which we rewrote key parts of the framework: attackers and defenders were redesigned to evolve pure strategies, and a new network security simulation was devised that specifies network architecture and adds a temporal aspect. We also added a hill climber algorithm to evaluate the search space and justify the use of a coevolutionary algorithm. --Abstract, page iv
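    The competitive coevolution this abstract describes can be illustrated in a few lines. The sketch below is hypothetical, not the CANDLES implementation: the bit-vector strategy encoding, the simulate() payoff, and all parameters are stand-ins for the paper's custom network defense simulation, but the loop shows the core idea of ranking each population against samples drawn from the other.

```python
import random

# Minimal competitive-coevolution sketch. Strategies are bit vectors; an
# attacker's fitness is its mean payoff against a sample of defenders, and a
# defender's fitness is the negated payoff it concedes. simulate() is a toy
# stand-in for the paper's custom network defense simulation.
STRATEGY_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 16, 20, 50, 0.05

def random_strategy():
    return [random.randint(0, 1) for _ in range(STRATEGY_LEN)]

def simulate(attacker, defender):
    # Toy payoff: the attacker scores wherever the defender leaves a gap.
    return sum(1 for a, d in zip(attacker, defender) if a and not d)

def fitness(individual, opponents, attacking):
    payoffs = [simulate(individual, o) if attacking else -simulate(o, individual)
               for o in opponents]
    return sum(payoffs) / len(payoffs)

def mutate(strategy):
    return [1 - b if random.random() < MUTATION_RATE else b for b in strategy]

def step(population, opponent_sample, attacking):
    ranked = sorted(population, reverse=True,
                    key=lambda s: fitness(s, opponent_sample, attacking))
    elite = ranked[:POP_SIZE // 2]  # truncation selection
    return elite + [mutate(random.choice(elite)) for _ in range(POP_SIZE - len(elite))]

attackers = [random_strategy() for _ in range(POP_SIZE)]
defenders = [random_strategy() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    attackers = step(attackers, random.sample(defenders, 5), attacking=True)
    defenders = step(defenders, random.sample(attackers, 5), attacking=False)
```

    A baseline hill climber like the one the third paper mentions would mutate a single strategy and keep improvements, which is useful for judging whether the coevolutionary search is buying anything over local search.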

    Cyber Analogies

    Get PDF
    This anthology of cyber analogies will resonate with readers whose duties call for them to set strategies to protect the virtual domain and determine the policies that govern it. Our belief is that learning is most effective when concepts under consideration can be aligned with already-existing understanding or knowledge. Cyber issues are inherently tough to explain in layman's terms. The future is always open and undetermined, and the number of actors and the complexity of their relations are too great to give definitive guidance about future developments. In this respect, historical analogies, carefully developed and properly applied, help indicate a direction for action by reducing complexity and making the future at least cognitively manageable. (US Cyber Command)
    Contents: Introduction: Emily O. Goldman & John Arquilla; The Cyber Pearl Harbor: James J. Wirtz; Applying the Historical Lessons of Surprise Attack to the Cyber Domain: The Example of the United Kingdom: Dr. Michael S. Goodman; The Cyber Pearl Harbor Analogy: An Attacker's Perspective: Emily O. Goldman, John Surdu, & Michael Warner; "When the Urgency of Time and Circumstances Clearly Does Not Permit...": Redelegation in Nuclear and Cyber Scenarios: Peter Feaver & Kenneth Geers; Comparing Airpower and Cyberpower: Dr. Gregory Rattray; Active Cyber Defense: Applying Air Defense to the Cyber Domain: Dorothy E. Denning & Bradley J. Strawser; The Strategy of Economic Warfare: A Historical Case Study and Possible Analogy to Contemporary Cyber Warfare: Nicholas A. Lambert; Silicon Valley: Metaphor for Cybersecurity, Key to Understanding Innovation War: John Kao; The Offense-Defense Balance and Cyber Warfare: Keir Lieber; A Repertory of Cyber Analogies: Robert Axelrod

    A Game Theoretic Model for the Optimal Disposition of Integrated Air Defense System Assets

    Get PDF
    We examine the optimal allocation of Integrated Air Defense System (IADS) resources to protect a country's assets, formulated as a Defender-Attacker-Defender three-stage sequential, perfect information, zero-sum game between two opponents. We formulate a trilevel nonlinear integer program for this Defender-Attacker-Defender model and seek a subgame perfect Nash equilibrium, at which neither the defender nor the attacker has an incentive to deviate from their respective strategies. Such a trilevel formulation is not solvable via conventional optimization software, and an exhaustive enumeration of the game tree over the discrete set of strategies is intractable for large problem sizes. As such, we test and evaluate variants of a tree pruning algorithm and a customized heuristic, which we benchmark against an exhaustive enumeration. Our tests demonstrate that the pruning strategy does not scale to larger problems. We then demonstrate the scalability of the heuristic to show that the model can be applied to realistically sized problems.
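    To make the three-stage structure concrete, here is a toy backward-induction enumeration of a Defender-Attacker-Defender game. Everything in it (asset values, interceptor counts, the damage function) is a hypothetical stand-in rather than the paper's trilevel nonlinear integer program; it illustrates the exhaustive game tree enumeration the authors benchmark against, and why it blows up combinatorially as the strategy sets grow.

```python
from itertools import combinations

# Toy Defender-Attacker-Defender game (hypothetical, not the paper's model):
# stage 1, the defender protects N_DEFEND assets; stage 2, the attacker
# strikes N_ATTACK assets; stage 3, the defender may move one interceptor.
# Backward induction over the full game tree yields the subgame perfect value.
ASSET_VALUES = [5, 3, 2, 1]   # value lost if an asset is destroyed unprotected
N_DEFEND, N_ATTACK = 2, 2
ASSETS = range(len(ASSET_VALUES))

def damage(defended, attacked, moved):
    # Apply the stage-3 move (src, dst), if any, then total unprotected losses.
    protected = ((set(defended) - {moved[0]}) | {moved[1]}) if moved else set(defended)
    return sum(ASSET_VALUES[a] for a in attacked if a not in protected)

def best_final_defense(defended, attacked):
    # Stage 3: defender minimizes damage over "stand pat" plus every single move.
    moves = [None] + [(src, dst) for src in defended for dst in ASSETS]
    return min(damage(defended, attacked, m) for m in moves)

def best_attack(defended):
    # Stage 2: attacker maximizes the damage that survives the best response.
    return max(best_final_defense(defended, set(atk))
               for atk in combinations(ASSETS, N_ATTACK))

# Stage 1: defender minimizes the attacker-optimal damage.
equilibrium = min(
    ((best_attack(set(d)), tuple(d)) for d in combinations(ASSETS, N_DEFEND)),
    key=lambda t: t[0],
)
print("equilibrium damage:", equilibrium[0], "initial defense:", equilibrium[1])
```

    Even in this toy, the tree has on the order of C(n,k) choices at each level, which is exactly the exponential growth that motivates the paper's pruning algorithm and heuristic.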

    Learning About Simulated Adversaries from Human Defenders using Interactive Cyber-Defense Games

    Full text link
    Given the increase in cybercrime, cybersecurity analysts (i.e., defenders) are in high demand. Defenders must monitor an organization's network to evaluate threats and potential breaches of the network. Adversary simulation is commonly used to test defenders' performance against known threats to organizations. However, it is unclear how effective this training process is in preparing defenders for this highly demanding job. In this paper, we demonstrate how to use adversarial algorithms to investigate defenders' learning of defense strategies, using interactive cyber defense games. Our Interactive Defense Game (IDG) represents a cyber defense scenario that requires constant monitoring of incoming network alerts and allows a defender to analyze, remove, and restore services based on the events observed in a network. The participants in our study faced one of two types of simulated adversaries: a Beeline adversary, a fast, targeted, and informed attacker, or a Meander adversary, a slow attacker that wanders the network until it finds the right target to exploit. Our results suggest that although human defenders initially had more difficulty stopping the Beeline adversary, they were able to learn to stop it by taking advantage of its attack strategy. Participants who played against the Beeline adversary learned to anticipate the adversary and take more proactive actions, while decreasing their reactive actions. These findings have implications for understanding how to help cybersecurity analysts speed up their training. Comment: Submitted to Journal of Cybersecurity
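    As a rough sketch of the contrast between the two adversary styles (hypothetical code, not the study's IDG implementation), the following models Beeline as an informed shortest-path attacker and Meander as an uninformed random walker on a small host graph; the network layout and step limit are assumptions for illustration.

```python
import random

# Two simulated adversary styles on a toy host graph (assumed topology):
# Beeline knows the layout and heads straight for the target; Meander
# wanders until it stumbles onto the target or runs out of steps.
NETWORK = {            # adjacency list: host -> reachable hosts
    "user1": ["ent1"], "ent1": ["ent2"], "ent2": ["op1"],
    "op1": ["op2", "server"], "op2": ["op1"], "server": [],
}

def beeline(start, target):
    """Informed attacker: BFS shortest path straight to the target."""
    frontier, paths = [start], {start: [start]}
    while frontier:
        host = frontier.pop(0)
        if host == target:
            return paths[host]
        for nxt in NETWORK[host]:
            if nxt not in paths:
                paths[nxt] = paths[host] + [nxt]
                frontier.append(nxt)
    return None  # target unreachable

def meander(start, target, max_steps=50):
    """Uninformed attacker: random walk until it finds the target."""
    path, host = [start], start
    while host != target and len(path) < max_steps:
        neighbors = NETWORK[host] or [start]  # dead end: restart at foothold
        host = random.choice(neighbors)
        path.append(host)
    return path

print("beeline:", beeline("user1", "server"))
print("meander:", meander("user1", "server"))
```

    The Beeline path is short and repeatable, which is consistent with the paper's observation that defenders could learn to anticipate it, while Meander's paths vary from run to run.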