
    Design of Dynamic and Personalized Deception: A Research Framework and New Insights

    Deceptive defense techniques (e.g., intrusion detection, firewalls, honeypots, honeynets) are commonly used to prevent cyberattacks. However, most current defense techniques are generic and static, and are often learned and exploited by attackers. It is important to advance from static to dynamic forms of defense that can actively adapt the defense strategy to the actions taken by an individual attacker during an active attack. Our novel research approach relies on cognitive models and experimental games: cognitive models aim to replicate an attacker's behavior, allowing the creation of personalized, dynamic deceptive defense strategies; experimental games help study human actions, calibrate cognitive models, and validate deceptive strategies. In this paper we offer the following contributions: (i) a general research framework for the design of dynamic, adaptive, and personalized deception strategies for cyberdefense; (ii) a summary of major insights from experiments and cognitive models developed for security games of increasing complexity; and (iii) a taxonomy of potential deception strategies derived from our research program so far.
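    The abstract pairs cognitive models of individual attackers with experimental games used to calibrate and validate them. As a rough illustration of what a "dynamic, personalized" deception loop could look like in such a game, the sketch below (Python; the game, payoffs, and function names are hypothetical and not taken from the paper) has a defender reposition a honeypot each round based on the observed choices of the current attacker.

```python
import random
from collections import Counter

def run_deception_game(attacker_choose, n_targets=4, rounds=50):
    """Toy repeated security game (a hypothetical setup, not the paper's task):
    each round the attacker picks a target, and the defender disguises the
    target this attacker has favored most so far as a honeypot."""
    history = Counter()        # per-attacker observation history
    defender_score = 0
    for _ in range(rounds):
        # Personalized, dynamic defense: place the deception where this
        # particular attacker has tended to go so far.
        honeypot = history.most_common(1)[0][0] if history else random.randrange(n_targets)
        target = attacker_choose(n_targets)
        history[target] += 1
        defender_score += 1 if target == honeypot else -1
    return defender_score

# A simple attacker policy biased toward one "high-value" target (index 0).
biased_attacker = lambda n: 0 if random.random() < 0.6 else random.randrange(1, n)
print(run_deception_game(biased_attacker))
```

    A static defense would fix the honeypot location in advance; the adaptive defender above exploits the attacker's own history, which is the kind of per-attacker adaptation the framework argues for.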

    Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games

    Designing cyber defense systems to account for cognitive biases in human decision making has shown significant success in improving performance against human attackers. However, much of the attention in this area has focused on relatively simple accounts of biases in human attackers, and little is known about adversarial behavior or how defenses could be improved by disrupting an attacker's behavior. In this work, we present a novel model of human decision making inspired by Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning. The model learns from both roles in a security scenario, defender and attacker, and predicts the opponent's beliefs, intentions, and actions. The proposed model defends better against attacks from a wide range of opponents than alternatives that attempt to perform optimally without accounting for human biases. Additionally, it performs better against a range of human-like behavior by explicitly modeling human transfer of learning, which has not previously been applied to cyber defense scenarios. Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles, and how these insights could be used in real-world cybersecurity.
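    The decision-making core named here, Instance-Based Learning Theory (IBLT), chooses actions from memories of past (action, outcome) instances weighted by recency-based activation. The Python sketch below is a minimal, generic IBL agent under standard IBLT assumptions (activation with decay and noise, blended values); the class and parameter names are illustrative, and it omits the paper's Theory of Mind and Transfer of Learning components.

```python
import math
import random

class IBLAgent:
    """Minimal Instance-Based Learning agent: stores (action, outcome)
    instances and chooses the action with the highest blended value."""

    def __init__(self, actions, decay=0.5, noise=0.25, default_utility=10.0):
        self.actions = list(actions)
        self.decay = decay                       # memory decay parameter d
        self.noise = noise                       # activation noise sigma
        self.t = 0
        # memory: action -> {outcome value: [timestamps of occurrences]}
        # Prepopulated with an optimistic default instance to drive exploration.
        self.memory = {a: {default_utility: [0]} for a in self.actions}

    def _activation(self, timestamps):
        # Base-level activation: ln(sum_j (t - t_j)^-d) plus logistic noise.
        base = math.log(sum((self.t - tj + 1e-9) ** (-self.decay) for tj in timestamps))
        u = min(max(random.random(), 1e-9), 1 - 1e-9)
        return base + self.noise * math.log((1 - u) / u)

    def _blended_value(self, action):
        acts = {o: self._activation(ts) for o, ts in self.memory[action].items()}
        tau = self.noise * math.sqrt(2)
        weights = {o: math.exp(a / tau) for o, a in acts.items()}
        z = sum(weights.values())
        # Blended value: retrieval-probability-weighted mean of stored outcomes.
        return sum(o * w / z for o, w in weights.items())

    def choose(self):
        self.t += 1
        return max(self.actions, key=self._blended_value)

    def observe(self, action, outcome):
        # Record the experienced outcome as a new instance at the current time.
        self.memory[action].setdefault(outcome, []).append(self.t)
```

    In the setting described in the abstract, agents of this kind would be trained in both the defender and attacker roles, with an additional layer predicting the opponent's beliefs and actions; the sketch above covers only the instance-based choice mechanism.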