
    Towards an Organizationally-Relevant Quantification of Cyber Resilience

    Given the difficulty of fully securing complex cyber systems, there is growing interest in making cyber systems resilient to cyber threats. However, quantifying the resilience of a system in an organizationally-relevant manner remains a challenge. This paper describes initial research into a novel metric for quantifying the resilience of a system to cyber threats, called the Resilience Index (RI). We calculate the RI via an effects-based discrete event stochastic simulation that runs a large number of trials over a designated mission timeline. During the trials, adverse cyber events (ACEs) occur against cyber assets in a target system. We consider a trial a failure if an ACE causes the performance of any of the target system’s mission essential functions (MEFs) to fall below its assigned threshold level. Once all trials have completed, the simulator computes the ratio of successful trials to the total number of trials, yielding the RI. The linkage of ACEs to MEFs provides the organizational tie
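
    A minimal sketch of the RI computation described above, written as a Monte Carlo simulation. The mission timeline, ACE arrival rate, MEF names, degradation amounts, and threshold below are illustrative assumptions, not the paper's actual simulator.

        import random

        def run_trial(mission_hours=72, ace_rate_per_hour=0.05, mef_threshold=0.7):
            """One simulated mission: ACEs arrive at random and degrade MEF performance.

            Returns True (success) if every mission essential function stays at or
            above its threshold for the whole timeline, False (failure) otherwise.
            """
            mef_performance = {"navigation": 1.0, "comms": 1.0}  # hypothetical MEFs
            for _ in range(mission_hours):
                if random.random() < ace_rate_per_hour:  # an adverse cyber event occurs
                    target = random.choice(list(mef_performance))
                    mef_performance[target] -= random.uniform(0.1, 0.4)  # availability/integrity effect
                if any(p < mef_threshold for p in mef_performance.values()):
                    return False
            return True

        def resilience_index(n_trials=10_000):
            """RI = successful trials / total trials, as described in the abstract."""
            return sum(run_trial() for _ in range(n_trials)) / n_trials

        print(f"Estimated RI: {resilience_index():.3f}")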

    Increasing Test Coverage via Mediated Activation of Adverse Cyber Events in Software-Intensive Systems

    This paper describes an approach for more comprehensively and systematically evaluating the effect of adverse cyber events (ACEs) on system performance of software-intensive systems as compared to conventional testing approaches. Traditional operationally-oriented testing, such as the use of cyber red teams, typically only explores a small portion of the system attack surface subject to ACEs, including malicious adversary action. Our approach involves making automated, minimally intrusive, and fully reversible modifications to a software system to be tested. The modifications introduce “operational test points” that allow a test manager to induce availability and integrity effects at runtime. During testing, observers can monitor system, user, and defender performance as the effects of ACEs unfold; such information provides insights into the resilience of the system to ACE effects. As a complement to traditional cyber-related testing, we estimate via a model that the approach allows for more comprehensive operational testing of a system over a full range of ACEs.
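
    A minimal sketch of the “operational test point” idea: a reversible wrapper that a test manager can toggle at runtime to induce an availability effect (delay) or an integrity effect (output perturbation) on an existing function. All names, effect types, and magnitudes here are illustrative assumptions.

        import functools
        import random
        import time

        # Switchboard a test manager could toggle at runtime (illustrative).
        ACTIVE_EFFECTS = {"availability": False, "integrity": False}

        def operational_test_point(func):
            """Reversible wrapper: behaves normally unless an ACE effect is activated."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if ACTIVE_EFFECTS["availability"]:
                    time.sleep(2.0)  # induce a delay (availability effect)
                result = func(*args, **kwargs)
                if ACTIVE_EFFECTS["integrity"] and isinstance(result, (int, float)):
                    result *= random.uniform(0.5, 1.5)  # perturb the output (integrity effect)
                return result
            return wrapper

        @operational_test_point
        def read_sensor():
            return 42.0  # stand-in for a real system function

        ACTIVE_EFFECTS["integrity"] = True   # test manager activates an integrity ACE
        print(read_sensor())                 # observers watch how users and defenders respond
        ACTIVE_EFFECTS["integrity"] = False  # fully reversible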

    Estimating Software Vulnerability Counts in the Context of Cyber Risk Assessments

    Stakeholders often conduct cyber risk assessments as a first step towards understanding and managing their risks due to cyber use. Many risk assessment methods in use today include some form of vulnerability analysis. Building on prior research and combining data from several sources, this paper develops a metric to estimate the proportion of latent vulnerabilities to total vulnerabilities in a software system and applies it to five scenarios involving software on the scale of operating systems. The findings suggest caution in interpreting the results of cyber risk methodologies that depend on enumerating known software vulnerabilities, because the number of unknown vulnerabilities in large-scale software tends to exceed known vulnerabilities.
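
    As a rough illustration of the kind of metric described, latent vulnerabilities can be estimated from discovery-rate data. The exponential discovery model and all numbers below are assumptions for illustration, not the paper's metric or data.

        import math

        def estimate_latent_fraction(known_vulns, discovery_rate, years_fielded):
            """Estimate the fraction of total vulnerabilities that remain latent (unknown).

            Assumes an exponential discovery model: after t years a fraction
            1 - exp(-discovery_rate * t) of all vulnerabilities have been found.
            """
            found_fraction = 1 - math.exp(-discovery_rate * years_fielded)
            total_estimate = known_vulns / found_fraction
            return (total_estimate - known_vulns) / total_estimate

        # Hypothetical OS-scale example: 2,000 known vulnerabilities, ~10%/year discovery, 5 years fielded.
        print(f"Estimated latent fraction: {estimate_latent_fraction(2000, 0.10, 5):.2f}")

    Under these assumed parameters, roughly 61% of the vulnerabilities would still be latent, consistent with the caution the abstract raises about enumerating only known vulnerabilities.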

    Multi-Criteria Selection of Capability-Based Cybersecurity Solutions

    Given the increasing frequency and severity of cyber attacks on information systems of all kinds, there is interest in rationalized approaches for selecting the “best” set of cybersecurity mitigations. However, what is best for one target environment is not necessarily best for another. This paper examines an approach to this selection that uses a set of weighted criteria, where the security engineer sets the weights based on organizational priorities and constraints. The approach is based on a capability-based representation for defensive solutions. The paper discusses a group of artifacts that compose the approach through the lens of Design Science research and reports performance results of an instantiation artifact.
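
    A minimal sketch of the weighted-criteria selection idea: each candidate mitigation is scored per criterion and ranked by a weighted sum, with weights set by the security engineer. Criteria, weights, candidate names, and scores are illustrative assumptions, not the paper's artifacts.

        # Criterion weights reflecting organizational priorities and constraints (illustrative).
        WEIGHTS = {"coverage": 0.4, "cost": 0.3, "operational_impact": 0.3}

        # Candidate capability-based mitigations scored 0-1 per criterion (higher is better).
        CANDIDATES = {
            "application allow-listing":   {"coverage": 0.8, "cost": 0.6, "operational_impact": 0.4},
            "network segmentation":        {"coverage": 0.7, "cost": 0.4, "operational_impact": 0.7},
            "endpoint detection/response": {"coverage": 0.9, "cost": 0.3, "operational_impact": 0.6},
        }

        def weighted_score(scores, weights):
            """Simple weighted-sum aggregation across criteria."""
            return sum(weights[c] * scores[c] for c in weights)

        for name, scores in sorted(CANDIDATES.items(),
                                   key=lambda kv: weighted_score(kv[1], WEIGHTS),
                                   reverse=True):
            print(f"{name}: {weighted_score(scores, WEIGHTS):.2f}")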

    Rigorous Validation of Systems Security Engineering Analytics

    In response to the asymmetric advantage that attackers enjoy over defenders in cyber systems, the cyber community has generated a steady stream of cybersecurity-related frameworks, methodologies, analytics, and “best practices” lists. However, these artifacts almost never undergo rigorous validation of their efficacy but instead tend to be accepted on faith, to our collective detriment, we suggest, given the evidence of continued attacker success. But what would rigorous validation look like, and can we afford it? This paper describes the design and estimates the cost of a controlled experiment whose goal is to determine the effectiveness of an exemplar systems security analytic. Given the significant role that humans play in cyber systems (e.g., their design, use, attack, and defense), any such experiment must necessarily take into account and control for variable human behavior. Thus, the paper reinforces the argument that cybersecurity can be understood as a hybrid discipline with strong technical and human dimensions.
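
    To make the cost question concrete, one way to size such a controlled experiment is a standard power calculation over defender teams, treating variable human performance as noise to be averaged out. The effect size, significance level, power, and per-team cost below are illustrative assumptions, not the paper's estimates.

        import math
        from statistics import NormalDist

        def teams_per_group(effect_size_d, alpha=0.05, power=0.80):
            """Approximate sample size per arm for a two-group comparison
            (normal approximation): n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
            z = NormalDist()
            z_alpha = z.inv_cdf(1 - alpha / 2)
            z_beta = z.inv_cdf(power)
            return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2)

        # Illustrative: detecting a medium effect (d = 0.5) of the analytic on defender outcomes.
        n = teams_per_group(0.5)   # about 63 teams per arm under this approximation
        cost_per_team = 20_000     # hypothetical cost to staff and run one team
        print(f"Teams per arm: {n}; rough experiment cost: ${2 * n * cost_per_team:,}")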