Post-incident audits on cyber insurance discounts
We introduce a game-theoretic model to investigate the strategic interaction between a cyber insurance policyholder whose premium depends on her self-reported security level and an insurer with the power to audit the security level upon receiving an indemnity claim. Audits can reveal fraudulent (or simply careless) policyholders not following reported security procedures, in which case the insurer can refuse to indemnify the policyholder. However, the insurer has to bear an audit cost even when the policyholders have followed the prescribed security procedures. As audits can be expensive, a key problem insurers face is to devise an auditing strategy to deter policyholders from misrepresenting their security levels to gain a premium discount. This decision-making problem was motivated by interviews with underwriters and a review of regulatory filings in the US, which revealed that premiums are determined by security posture, yet this posture is often self-reported and insurers are concerned about whether security procedures are practised as reported by the policyholders.
To address this problem, we model this interaction as a Bayesian game of incomplete information and devise optimal auditing strategies for the insurers considering the possibility that the policyholder may misrepresent her security level. To the best of our knowledge, this work is the first theoretical consideration of post-incident claims management in cyber security. Our model captures the trade-off between the incentive to exaggerate security posture during the application process and the possibility of punishment for non-compliance with reported security policies. Simulations demonstrate that common-sense techniques are less efficient at producing effective cyber insurance audit decisions than those computed using game theory.
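The deterrence logic described above can be illustrated with a minimal sketch. This is not the paper's Bayesian game; it is a simplified risk-neutral calculation, with the function name and parameters (`discount`, `incident_prob`, `indemnity`) chosen here for illustration, showing the smallest audit probability that makes misreporting unprofitable:

```python
def min_audit_probability(discount, incident_prob, indemnity):
    """Smallest audit probability that deters misreporting, assuming a
    risk-neutral policyholder who forfeits the indemnity when an audit
    after a claim reveals non-compliance.

    Misreporting pays off when the premium discount exceeds the expected
    forfeited indemnity: discount > p * incident_prob * indemnity.
    Deterrence therefore requires p >= discount / (incident_prob * indemnity).
    """
    p = discount / (incident_prob * indemnity)
    return min(p, 1.0)  # probabilities are capped at 1
```

For example, a 100-unit discount, a 10% incident probability, and a 10,000-unit indemnity yield a deterrence threshold of auditing 10% of claims; the full model additionally weighs the insurer's audit cost against this benefit.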
MITRE ATT&CK-driven cyber risk assessment
Assessing the risk posed by Advanced Persistent Threats (APTs) is challenging without understanding the methods and tactics adversaries use to attack an organisation. The MITRE ATT&CK framework provides information on the motivation, capabilities, interests, and tactics, techniques and procedures (TTPs) used by threat actors. In this paper, we leverage these characteristics of threat actors to support informed cyber risk characterisation and assessment. In particular, we utilise the MITRE repository of known adversarial TTPs along with attack graphs to determine the attack probability as well as the likelihood of success of an attack. We further identify attack paths with the highest likelihood of success considering the techniques and procedures of a threat actor. The assessment is supported by a case study of a health care organisation to identify the level of risk against two adversary groups: Lazarus and menuPass.
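The path-ranking idea can be sketched as follows. This is a simplified stand-in for the paper's attack-graph analysis: it assumes each ATT&CK technique on a path succeeds independently with a known probability, so a path's likelihood is the product of its per-technique probabilities. The technique IDs and probabilities below are illustrative, not taken from the case study:

```python
from math import prod

def path_success_probability(path, technique_prob):
    """Likelihood an adversary completes a path of ATT&CK techniques,
    assuming independent per-technique success probabilities."""
    return prod(technique_prob[t] for t in path)

def most_likely_path(paths, technique_prob):
    """Return the attack path with the highest likelihood of success."""
    return max(paths, key=lambda p: path_success_probability(p, technique_prob))
```

Ranking candidate paths this way lets a defender prioritise mitigations along the path an adversary like Lazarus or menuPass is most likely to complete.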
Cyber Risk Assessment and Optimization: A Small Business Case Study
Assessing and controlling cyber risk is the cornerstone of information security management, but also a formidable challenge for organisations due to the uncertainties associated with attacks, the resulting risk exposure, and the availability of scarce resources for investment in mitigation measures. In this paper, we propose a cybersecurity decision-support framework, called CENSOR, for optimal cyber security investment. CENSOR accounts for the serial nature of a cyber attack, the uncertainty in the time required to exploit a vulnerability, and the optimisation of mitigation measures in the presence of a limited budget. First, we evaluate the cost that an organisation incurs due to a cyber security breach that progresses in stages and derive an analytical expression for the distribution of the present value of the cost. Second, we adopt a Set Covering and a Knapsack formulation to derive and compare optimal strategies for investment in mitigation measures. Third, we validate CENSOR via a case study of a small business (SB) based on: (i) the 2020 Common Weakness Enumeration (CWE) top 25 most dangerous software weaknesses; and (ii) the Center for Internet Security (CIS) Controls. Specifically, we demonstrate how the Knapsack formulation provides solutions that are both more affordable and entail lower risk compared to those of the Set Covering formulation. Interestingly, our results confirm that investing more in cybersecurity does not necessarily lead to a commensurate reduction in cyber risk: risk reduction decelerates beyond a certain level of security investment intensity.
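The Knapsack formulation mentioned above can be sketched with a standard 0/1 knapsack dynamic program. This is a generic illustration, not CENSOR's actual model: the measure names, costs, and risk-reduction values are hypothetical, and real inputs would come from sources like the CIS Controls mapped against the CWE top 25:

```python
def select_mitigations(measures, budget):
    """0/1 knapsack: choose the set of mitigation measures that
    maximises total risk reduction within a fixed integer budget.
    `measures` is a list of (name, cost, risk_reduction) triples."""
    # dp[b] = (best risk reduction, chosen measures) achievable with budget b
    dp = [(0.0, [])] * (budget + 1)
    for name, cost, reduction in measures:
        # iterate budgets downwards so each measure is used at most once
        for b in range(budget, cost - 1, -1):
            candidate = dp[b - cost][0] + reduction
            if candidate > dp[b][0]:
                dp[b] = (candidate, dp[b - cost][1] + [name])
    return dp[budget]
```

A Set Covering formulation would instead require every weakness to be covered by some measure, which is why it can force more expensive selections than the budget-constrained knapsack.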
HoneyCar: a framework to configure honeypot vulnerabilities on the internet of vehicles
The Internet of Vehicles (IoV), whereby interconnected vehicles communicate with each other and with road infrastructure over a common network, has promising socio-economic benefits but also poses new cyber-physical threats. To protect these entities and learn about adversaries, data on attackers can be realistically gathered using decoy systems like honeypots. Admittedly, honeypots introduce a trade-off between the level of honeypot-attacker interactions and the incurred overheads and costs of implementing and monitoring these systems. Deception through honeypots can be achieved by strategically configuring the honeypots to represent components of the IoV to engage attackers and collect cyber threat intelligence. Here, we present HoneyCar, a novel decision support framework for honeypot deception in IoV. HoneyCar benefits from the repository of known vulnerabilities of autonomous and connected vehicles found in the Common Vulnerabilities and Exposures (CVE) database to compute optimal honeypot configuration strategies. The adversarial interaction is modelled as a repeated imperfect-information zero-sum game where the IoV network administrator strategically chooses a set of vulnerabilities to offer in a honeypot and a strategic attacker chooses a vulnerability to exploit under uncertainty. Our investigation examines two different versions of the game, with and without the re-configuration cost, to empower the network administrator to determine optimal honeypot investment strategies given a budget. We show the feasibility of this approach in a case study that consists of the vulnerabilities in autonomous and connected vehicles gathered from the CVE database and data extracted from the Common Vulnerability Scoring System (CVSS).
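The defender's problem above can be illustrated with a deliberately simplified sketch. Solving the full zero-sum game requires mixing over configurations (e.g. via linear programming); the toy version below, with an illustrative payoff matrix not drawn from the paper, only picks the single pure honeypot configuration with the best worst-case payoff:

```python
def maximin_configuration(payoff):
    """Worst-case (maximin) choice over pure honeypot configurations.

    payoff[i][j] is the defender's payoff when honeypot configuration i
    faces an attacker exploiting vulnerability j. A full game-theoretic
    analysis would compute a mixed strategy; here we simply return the
    configuration whose worst case is best, with its guaranteed value."""
    best_i = max(range(len(payoff)), key=lambda i: min(payoff[i]))
    return best_i, min(payoff[best_i])
```

Adding a re-configuration cost, as the paper's second game version does, would penalise switching between configurations across repetitions of the game.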
INCHAIN: a cyber insurance architecture with smart contracts and self-sovereign identity on top of blockchain
Despite the rapid growth of the cyber insurance market in recent years, insurance companies in this area face several challenges, such as a lack of data, a shortage of automated tasks, increased fraudulent claims from legitimate policyholders, attackers masquerading as legitimate policyholders, and insurance companies becoming targets of cybersecurity attacks due to the abundance of data they store. On top of that, there is a lack of Know Your Customer procedures. To address these challenges, in this article, we present INCHAIN, an innovative architecture that utilizes Blockchain technology to provide data transparency and traceability. The backbone of the architecture is complemented by Smart Contracts, which automate cyber insurance processes, and Self-Sovereign Identity for robust identification. The effectiveness of INCHAIN's architecture is compared with the literature against the challenges the cyber insurance industry faces. In a nutshell, our approach presents a significant advancement in the field of cyber insurance, as it effectively combats the issue of fraudulent claims and ensures proper customer identification and authentication. Overall, this research demonstrates a novel and effective solution to the complex problem of managing cyber insurance, providing a solid foundation for future developments in the field.