
    Post-incident audits on cyber insurance discounts

    We introduce a game-theoretic model to investigate the strategic interaction between a cyber insurance policyholder whose premium depends on her self-reported security level and an insurer with the power to audit the security level upon receiving an indemnity claim. Audits can reveal fraudulent (or simply careless) policyholders not following reported security procedures, in which case the insurer can refuse to indemnify the policyholder. However, the insurer has to bear an audit cost even when the policyholders have followed the prescribed security procedures. As audits can be expensive, a key problem insurers face is to devise an auditing strategy that deters policyholders from misrepresenting their security levels to gain a premium discount. This decision-making problem was motivated by interviews with underwriters and a review of regulatory filings in the US; we discovered that premiums are determined by security posture, yet this is often self-reported, and insurers are concerned about whether security procedures are practised as reported by the policyholders. To address this problem, we model this interaction as a Bayesian game of incomplete information and devise optimal auditing strategies for insurers, considering the possibility that the policyholder may misrepresent her security level. To the best of our knowledge, this work is the first theoretical consideration of post-incident claims management in cyber security. Our model captures the trade-off between the incentive to exaggerate security posture during the application process and the possibility of punishment for non-compliance with reported security policies. Simulations demonstrate that common-sense techniques are not as efficient at providing effective cyber insurance audit decisions as those computed using game theory.
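    The audit/misreport tension described above resembles a classic "inspection game". The following is a minimal illustrative sketch, not the paper's Bayesian model: all numeric parameters are assumptions, and the equilibrium is the standard mixed strategy in which each player randomises so the other is indifferent between their two actions.

```python
# Illustrative inspection-game sketch (NOT the paper's model).
# Insurer chooses whether to audit a claim; policyholder chooses
# whether to misreport her security level. All numbers are assumptions.

premium_discount = 2.0   # saving gained by over-reporting security level
indemnity        = 10.0  # payout denied if a misreport is caught
audit_cost       = 1.0   # insurer's cost of performing an audit

# Mixed equilibrium via indifference conditions:
# the audit probability makes the policyholder indifferent between
# complying and misreporting (discount == p_audit * denied indemnity).
p_audit = premium_discount / indemnity        # 0.2

# The misreporting probability makes the insurer indifferent between
# auditing and not (q_misreport * indemnity == audit_cost).
q_misreport = audit_cost / indemnity          # 0.1

print(p_audit, q_misreport)
```

    Under these illustrative numbers the insurer audits 20% of claims and the policyholder misreports 10% of the time; raising the audit cost raises the equilibrium rate of misreporting, which is the deterrence trade-off the abstract describes.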

    HoneyCar: a framework to configure honeypot vulnerabilities on the internet of vehicles

    The Internet of Vehicles (IoV), whereby interconnected vehicles communicate with each other and with road infrastructure on a common network, has promising socio-economic benefits but also poses new cyber-physical threats. To protect these entities and learn about adversaries, data on attackers can be realistically gathered using decoy systems like honeypots. Admittedly, honeypots introduce a trade-off between the level of honeypot-attacker interaction and the overheads and costs incurred in implementing and monitoring these systems. Deception through honeypots can be achieved by strategically configuring the honeypots to represent components of the IoV, engage attackers, and collect cyber threat intelligence. Here, we present HoneyCar, a novel decision support framework for honeypot deception in the IoV. HoneyCar benefits from the repository of known vulnerabilities of autonomous and connected vehicles found in the Common Vulnerabilities and Exposures (CVE) database to compute optimal honeypot configuration strategies. The adversarial interaction is modelled as a repeated imperfect-information zero-sum game in which the IoV network administrator strategically chooses a set of vulnerabilities to offer in a honeypot and a strategic attacker chooses a vulnerability to exploit under uncertainty. Our investigation examines two different versions of the game, with and without the re-configuration cost, to empower the network administrator to determine optimal honeypot investment strategies given a budget. We show the feasibility of this approach in a case study that consists of vulnerabilities in autonomous and connected vehicles gathered from the CVE database and data extracted from the Common Vulnerability Scoring System (CVSS).
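    A zero-sum game of the kind described above can be solved numerically in several ways. The following sketch uses fictitious play on a small illustrative payoff matrix; the matrix entries, dimensions, and iteration count are assumptions for illustration and do not come from the HoneyCar case study.

```python
# Hedged sketch: fictitious play on an illustrative zero-sum game.
# Rows: defender honeypot configurations; columns: the attacker's
# choice of vulnerability to exploit. Entries are the defender's
# payoff (the attacker receives the negative). Values are invented.
payoff = [
    [ 1.0, -1.0,  0.5],
    [-0.5,  1.0, -1.0],
    [ 0.0, -0.5,  1.0],
]
rows, cols = len(payoff), len(payoff[0])

row_counts = [0] * rows   # how often each defender action was played
col_counts = [0] * cols   # how often each attacker action was played
row_play, col_play = 0, 0
for _ in range(5000):
    row_counts[row_play] += 1
    col_counts[col_play] += 1
    # Defender best-responds to the attacker's empirical mixture.
    row_play = max(range(rows),
                   key=lambda i: sum(payoff[i][j] * col_counts[j]
                                     for j in range(cols)))
    # Attacker best-responds by minimising the defender's payoff.
    col_play = min(range(cols),
                   key=lambda j: sum(payoff[i][j] * row_counts[i]
                                     for i in range(rows)))

total = sum(row_counts)
defender_mix = [c / total for c in row_counts]
attacker_mix = [c / sum(col_counts) for c in col_counts]
print(defender_mix, attacker_mix)
```

    By Robinson's theorem, the empirical mixtures of fictitious play converge to a minimax equilibrium in zero-sum games, so `defender_mix` approximates an optimal randomised honeypot configuration for this toy matrix. Adding a re-configuration cost, as the paper's second game version does, would change the payoff entries but not this solution approach.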
