
    A decision support system for corporations' cyber security risk management

    This thesis presents a decision-aiding system named C3-SEC (Context-aware Corporative Cyber Security), developed in the context of a master's program at the Polytechnic Institute of Leiria, Portugal. The research dimension and the corresponding software development process that followed are presented and validated with an application scenario and case study performed at Universidad de las Fuerzas Armadas ESPE, Ecuador. C3-SEC is decision-aiding software intended to support the analysis of cyber risks and cyber threats to a corporative information and communications technology infrastructure. The resulting software product will help corporations' Chief Information Security Officers (CISOs) with cyber security risk analysis, decision-making, and prevention measures for the protection of infrastructure and information assets. The work is initially focused on the evaluation of the most popular and relevant tools available for risk assessment and decision making in the cyber security domain. Their properties, metrics, and strategies are studied, and their support for cyber security risk analysis, decision-making, and prevention is assessed with respect to the protection of an organization's information assets. A contribution to cyber security experts' decision support is then proposed by means of the reuse and integration of existing tools with the C3-SEC software. C3-SEC extends existing tools' features from the data collection and data analysis (perception) level to a full context-aware reference model. The software developed makes use of semantic-level, ontology-based knowledge representation and inference supported by widely adopted standards, including cyber security standards (CVE, CPE, CVSS, etc.) and cyber security information data sources made available by international authorities, to share and exchange information in this domain. C3-SEC development follows a context-aware systems reference model addressing the perception, comprehension, projection, and decision/action layers to create corporative-scale cyber security situation awareness.
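    The abstract names CVE, CPE, and CVSS as the standards C3-SEC uses to exchange machine-readable security information. As a minimal, hedged illustration of working with one of them (not taken from C3-SEC itself), the sketch below parses a CVSS v3.1 vector string into its metric components; the vector value and function name are invented for the example.

```python
# Minimal sketch: parsing a CVSS v3.1 vector string into its metrics.
# The vector value and helper name are illustrative, not from C3-SEC.

def parse_cvss_vector(vector: str) -> dict:
    """Split a vector like 'CVSS:3.1/AV:N/AC:L/...' into {metric: value}."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    metrics = {}
    for part in parts[1:]:
        key, _, value = part.partition(":")
        metrics[key] = value
    return metrics

if __name__ == "__main__":
    vec = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"  # example vector
    print(parse_cvss_vector(vec))  # {'AV': 'N', 'AC': 'L', 'PR': 'N', ...}
```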

    An approach towards standardising vulnerability categories

    Computer vulnerabilities are design flaws, implementation errors, or configuration errors that provide a means of exploiting a system or network that would not be available otherwise. The recent growth in the number of vulnerability scanning (VS) tools and independent vulnerability databases points to an apparent need for further means to protect computer systems from compromise. It is important for these tools and databases to interpret, correlate, and exchange a large amount of information about computer vulnerabilities in order to use them effectively. However, this goal is hard to achieve because current VS products differ extensively both in the way they detect vulnerabilities and in the number of vulnerabilities they can detect. Each tool or database represents, identifies, and classifies vulnerabilities in its own way, thus making them difficult to study and compare. Although the list of Common Vulnerabilities and Exposures (CVE) provides a means of resolving the disparity in vulnerability names used by different VS products, it does not standardise vulnerability categories. This dissertation highlights the importance of having a standard vulnerability category set and outlines an approach towards achieving this goal by categorising the CVE repository using a data-clustering algorithm. Prototypes are presented to verify the concept of standardising vulnerability categories and to show how this can be used as the basis for comparing VS products and improving scan reports. Dissertation (MSc (Computer Science)), University of Pretoria, 2008.
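    The dissertation categorises the CVE repository with a data-clustering algorithm, but the abstract does not name which one. The sketch below is one plausible instantiation, not the dissertation's method: TF-IDF features over CVE descriptions clustered with k-means via scikit-learn. The sample descriptions are invented.

```python
# Sketch: clustering CVE descriptions into candidate vulnerability categories.
# k-means over TF-IDF features is an assumption; the abstract does not name
# the clustering algorithm, and the sample descriptions are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

cve_descriptions = [
    "Buffer overflow in the FTP server allows remote code execution",
    "SQL injection in the login form allows database access",
    "Stack-based buffer overflow in the parser allows arbitrary code",
    "Improper input validation in the search field enables SQL injection",
]

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(cve_descriptions)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
for desc, label in zip(cve_descriptions, kmeans.labels_):
    print(label, desc[:55])  # cluster id = candidate category
```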

    Attack graph approach to dynamic network vulnerability analysis and countermeasures

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    It is widely accepted that modern computer networks (often presented as a heterogeneous collection of functioning organisations, applications, software, and hardware) contain vulnerabilities. This research proposes a new methodology to compute a dynamic severity cost for each state. Here a state refers to the behaviour of a system during an attack; an example of a state is one in which an attacker could influence the information in an application in order to alter credentials. This is performed by utilising a modified variant of the Common Vulnerability Scoring System (CVSS), referred to as the Dynamic Vulnerability Scoring System (DVSS). This calculates scores for intrinsic, time-based, and ecological metrics by combining related sub-scores and modelling the problem's parameters in a mathematical framework to develop a unique severity cost. The static nature of CVSS affects the scoring value, so the author has devised a novel model to produce a DVSS metric that is more precise and efficient. In this approach, the final scores are computed from a number of parameters, including the network architecture, device settings, and the impact of vulnerability interactions.

    An attack graph (AG) is a security model representing the chains of vulnerability exploits in a network. A number of researchers have acknowledged the visual complexity of attack graphs and the lack of in-depth understanding they afford. Current attack graph tools are constrained to limited attributes or even rely on hand-generated input. The automatic formation of vulnerability information has been troublesome, and vulnerability descriptions are frequently created by hand or based on limited data. The network architectures and configurations, along with the interactions between individual vulnerabilities, are considered when computing the cost using the DVSS within a dynamic cost-centric framework. A new methodology was developed to present an attack graph with a dynamic cost metric based on DVSS, together with a novel methodology to estimate and represent the cost-centric approach for each host's states. The framework is demonstrated on a test network, using the Nessus scanner to detect known vulnerabilities, to incorporate these results, and to build and represent the dynamic cost-centric attack graph using ranking algorithms (in a fashion similar to Mehta et al., 2006 and Kijsanayothin, 2010). However, instead of ranking vulnerabilities for each host, a CostRank Markov model has been developed utilising the novel cost-centric approach, thereby reducing the complexity of the attack graph and the problem of visibility.

    An analogous parallel algorithm is developed to implement CostRank. The motivation for a parallel CostRank algorithm is to expedite the state-ranking calculations as the number of hosts and/or vulnerabilities grows. Likewise, the author intends to secure large-scale networks that require fast and reliable computing to calculate the ranking of enormous graphs with thousands of vertices (states) and millions of arcs (each representing an action that moves the system from one state to another). The proposed approach focuses on a parallel CostRank computational architecture in order to appraise the improvement in CostRank calculations and the scalability of the algorithm. In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can enhance the performance and scalability of parallel CostRank computations. A practical model of the analogous parallel CostRank calculation is undertaken, resulting in a substantial decrease in communication overhead and iteration time. The results are presented analytically in terms of scalability, efficiency, memory usage, speed-up, and input/output rates.

    Finally, a countermeasures model is developed to protect against network attacks using a Dynamic Countermeasures Attack Tree (DCAT). The DCAT is built using the following scheme: (i) use the scalable parallel CostRank algorithm to determine the critical assets that system administrators need to protect; (ii) run the Nessus scanner to determine the vulnerabilities associated with each asset, using the dynamic cost-centric framework and DVSS; (iii) check all published mitigations for all vulnerabilities; (iv) assess how well each security solution mitigates those risks; (v) assess the DCAT algorithm in terms of effective security cost, probability, and cost/benefit analysis to reduce the total impact of a specific vulnerability.
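    The abstract names a CostRank Markov model but does not spell out its update rule. As one plausible reading, the sketch below runs a PageRank-style power iteration over attack-graph states, biasing transitions toward states with higher severity cost; the toy graph, the cost vector, and the damping factor are all assumptions.

```python
# Sketch: a PageRank-style "CostRank" over attack-graph states.
# The update rule, damping factor, and toy graph are assumptions; the thesis
# abstract names a CostRank Markov model but does not give the formula.
import numpy as np

# Adjacency: arcs[i][j] = 1 if an action moves the system from state i to j.
arcs = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=float)

cost = np.array([0.2, 0.5, 0.7, 1.0])  # per-state severity cost (e.g. DVSS-based)
n = len(cost)

# Row-stochastic transition matrix, weighting successors by their cost.
weighted = arcs * cost                   # bias transitions toward costly states
row_sums = weighted.sum(axis=1, keepdims=True)
P = np.divide(weighted, row_sums,
              out=np.full_like(weighted, 1.0 / n),
              where=row_sums > 0)        # dangling states jump uniformly

d = 0.85                                 # damping factor (assumed)
rank = np.full(n, 1.0 / n)
for _ in range(100):                     # power iteration
    rank = (1 - d) / n + d * rank @ P
print(rank)                              # higher rank = more critical state
```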

    Machine Learning and Probabilistic Methods for Network Security Assessment

    Ph.D. Thesis.

    Computer networks comprised of many hosts are vulnerable to cyber attacks. One attack can take the form of the exploitation of multiple vulnerabilities in the network along with lateral movement between hosts. In order to analyse the security of a network, it is common practice to run a vulnerability scan that reports the presence of vulnerabilities in the network and prioritises them with an importance score. The scoring mechanism used primarily in the literature and in industry ignores how multiple vulnerabilities could be used in conjunction with one another to achieve a goal that previously was not possible. Attack graphs are a common solution to this problem, where a scan along with the topology of the network is turned into a graph that models how hosts and vulnerabilities can be connected. For a large network these attack graphs can be thousands of nodes in size, so in order to gain insight from them in an automated way, they can be turned into Bayesian attack graphs (BAGs) to model the security of the network probabilistically. The aim of this thesis is to work towards the automation of gathering insight from vulnerability scans of a network, primarily through the generation of BAGs. The main contributions of this thesis are as follows:

    1. Creation of a unified formalism for the structure of BAGs and for how other graphs can be translated into this formalism.
    2. Classification of vulnerabilities using neural networks.
    3. Design and evaluation of a novel technique for approximating the computation of access probabilities in BAGs (referred to in the literature as the static analysis of BAGs), with no requirement for the base graph to be acyclic.
    4. Implementation and comparison of three stochastic simulation techniques for inference on BAGs with evidence (referred to in the literature as the dynamic analysis of BAGs), enabling security measure evaluation and sensitivity analysis.
    5. Demonstration of a sensitivity analysis for BAG priors, and a novel method for quick computation of sensitivities that is more readily analysed than the traditional technique.
    6. Development and demonstration of a fully containerised pipeline to automatically process vulnerability scans and generate the corresponding attack graph.

    With a single formalism for attack graphs, alongside an open-source attack graph generation pipeline, our work serves to enable future progress and collaboration in the field of processing vulnerability scans using attack graphs, by simplifying the process of generating the graphs and providing a mathematical basis for their evaluation. We design, implement, and evaluate various techniques for calculations on BAGs. For the computation of access probabilities we provide an algorithm that requires no processing or trimming of the initial graph, and for inference on BAGs we recommend likelihood weighting as the best-performing of the three sampling techniques we implement. We also show how inference techniques can be applied to sensitivity analysis on BAGs, and provide a new method that allows for more efficient and interpretable sensitivity analysis, enabling more productive research into the area in future. This research was originally undertaken in collaboration with XQ Cyber.
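    The thesis recommends likelihood weighting for inference on BAGs with evidence. The sketch below runs likelihood weighting on a toy three-node Bayesian attack graph with noisy-OR conditionals; the graph structure, probabilities, and evidence are invented for illustration and are not the thesis's implementation.

```python
# Sketch: likelihood weighting on a toy Bayesian attack graph (BAG).
# Graph, probabilities, and evidence are invented illustrations.
import random

# P(compromise) for the entry node, and per-edge exploit success probabilities.
p_entry = 0.3
p_edge = {("entry", "web"): 0.6, ("web", "db"): 0.5, ("entry", "db"): 0.2}
parents = {"entry": [], "web": ["entry"], "db": ["entry", "web"]}

def prob_true(node, state):
    """Noisy-OR: node is compromised if any compromised parent's exploit works."""
    if node == "entry":
        return p_entry
    p_fail = 1.0
    for par in parents[node]:
        if state[par]:
            p_fail *= 1.0 - p_edge[(par, node)]
    return 1.0 - p_fail

def likelihood_weighting(evidence, query, n=100_000):
    total = weighted = 0.0
    for _ in range(n):
        state, w = {}, 1.0
        for node in ["entry", "web", "db"]:    # topological order
            p = prob_true(node, state)
            if node in evidence:               # fix evidence, accumulate weight
                state[node] = evidence[node]
                w *= p if evidence[node] else 1.0 - p
            else:
                state[node] = random.random() < p
        total += w
        if state[query]:
            weighted += w
    return weighted / total

random.seed(0)
# Estimate P(entry compromised | database observed compromised).
print(likelihood_weighting({"db": True}, "entry"))
```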

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are the focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible public data sources into personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy offers significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the normalized Discounted Cumulative Gain (nDCG). Our results show an average improvement of 71.5% to 91.3% in the identification of vulnerabilities likely to be targeted and exploited by cyber threat actors. The ROI of patching using our policies resulted in savings of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs to link large data sets, facilitating semantic queries and data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
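    Ranking policy performance is measured with nDCG. For reference, the sketch below computes nDCG for a toy vulnerability ranking, where relevance might encode whether a vulnerability is known to be exploited; the relevance values are invented and the metric here uses the common rel / log2(rank + 1) gain, which may differ from the exact formulation used in the research.

```python
# Sketch: normalized Discounted Cumulative Gain (nDCG) for a vulnerability
# ranking. Relevance values are invented (e.g. 1 = known-exploited, 0 = not).
import math

def dcg(relevances):
    # Position i is rank i+1, so the discount is log2(rank + 1) = log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = sorted(relevances, reverse=True)
    return dcg(relevances) / dcg(ideal) if any(relevances) else 0.0

# Relevance of vulnerabilities in the order a ranking policy returned them.
policy_ranking = [1, 0, 1, 0, 0, 1]
print(f"nDCG = {ndcg(policy_ranking):.3f}")
```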

    AVOIDIT IRS: An Issue Resolution System To Resolve Cyber Attacks

    Cyber attacks have greatly increased over the years, and attackers have progressively improved in devising attacks against specific targets. Cyber attacks are considered malicious activity launched against networks to gain unauthorized access, causing modification, destruction, or even deletion of data. This dissertation highlights the need to assist defenders with identifying and defending against cyber attacks. In this dissertation, an attack issue resolution system called AVOIDIT IRS (AIRS) is developed. AVOIDIT IRS is based on the attack taxonomy AVOIDIT (Attack Vector, Operational Impact, Defense, Information Impact, and Target). Attacks are collected by AIRS and classified into their respective categories using AVOIDIT. Accordingly, an organizational cyber attack ontology was developed using feedback from security professionals to improve communication and reusability amongst cyber security stakeholders. AIRS is developed as a semi-autonomous application that extracts unstructured external and internal attack data to classify attacks in sequential form. In doing so, we designed and implemented a frequent pattern and sequential classification algorithm associated with the five classifications in AVOIDIT. The issue resolution approach uses inference to educate the defender on plausible cyber attacks. AIRS can work in conjunction with an intrusion detection system (IDS) to provide a heuristic for responding to cyber security breaches within an organization. AVOIDIT provides a framework for classifying appropriate attack information, which is fundamental in devising defense strategies against such cyber attacks. AIRS is further used as a knowledge base in a game-inspired defense architecture to promote game model selection upon attack identification. Future work will incorporate honeypot attack information to improve attack identification, classification, and defense propagation. In this dissertation, 1,025 common vulnerabilities and exposures (CVEs) and over 5,000 log file instances were captured in AIRS for analysis. Security experts were consulted to create rules to extract pertinent information and algorithms to correlate identified data for notification. AIRS was developed using the CodeIgniter [74] framework to provide a seamless visualization tool for data mining regarding potential cyber attacks relative to web applications. Testing of the AVOIDIT IRS revealed a recall of 88%, precision of 93%, and a 66% correlation metric.
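    AIRS classifies collected attack data into AVOIDIT's classification dimensions using expert-derived rules. As a minimal, hedged illustration of that idea (not the dissertation's algorithm), the sketch below applies keyword rules to map an attack record onto AVOIDIT dimensions; all rules, keywords, and the sample log line are invented, and the Defense dimension is omitted for brevity.

```python
# Sketch: keyword-rule classification of attack text into AVOIDIT dimensions.
# Rules, keywords, and the sample record are invented illustrations; the
# "Defense" dimension of AVOIDIT is omitted here for brevity.
AVOIDIT_RULES = {
    "Attack Vector": {"phishing": "Social engineering", "sqlmap": "Injection"},
    "Operational Impact": {"ddos": "Denial of service", "root": "Privilege escalation"},
    "Information Impact": {"exfiltrat": "Disclosure", "delete": "Destruction"},
    "Target": {"web server": "Server", "workstation": "Host"},
}

def classify(record: str) -> dict:
    record = record.lower()
    labels = {}
    for dimension, rules in AVOIDIT_RULES.items():
        for keyword, label in rules.items():
            if keyword in record:       # first matching rule wins
                labels[dimension] = label
                break
    return labels

log_line = "sqlmap probe against web server, attempted data exfiltration"
print(classify(log_line))
# {'Attack Vector': 'Injection', 'Information Impact': 'Disclosure',
#  'Target': 'Server'}
```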

    Identifying and Mitigating Security Risks in Multi-Level Systems-of-Systems Environments

    In recent years, organisations, governments, and cities have taken advantage of the many benefits and automated processes Information and Communication Technology (ICT) offers, evolving their existing systems and infrastructures into highly connected and complex Systems-of-Systems (SoS). These infrastructures endeavour to increase robustness and offer some resilience against single points of failure. The Internet, Wireless Sensor Networks, the Internet of Things, critical infrastructures, the human body, etc., can all be broadly categorised as SoS, as they encompass a wide range of differing systems that collaborate to fulfil objectives that the distinct systems could not fulfil on their own. ICT-constructed SoS face the same dangers, limitations, and challenges as traditional cyber-based networks, and while monitoring the security of small networks can be difficult, the dynamic nature, size, and complexity of SoS make securing these infrastructures more taxing. Solutions that attempt to identify risks and vulnerabilities and to model the topologies of SoS have failed to evolve at the same pace as SoS adoption. This has resulted in attacks against these infrastructures gaining prevalence, as unidentified vulnerabilities and exploits provide unguarded opportunities for attackers. In addition, the new collaborative relations introduce new cyber interdependencies and unforeseen cascading failures, and increase complexity.

    This thesis presents an innovative approach to identifying and mitigating risks and securing SoS environments. Our security framework incorporates a number of novel techniques, which allow us to calculate the security level of the entire SoS infrastructure using vulnerability analysis, node property aspects, topology data, and other factors, and to improve and mitigate risks without adding additional resources into the SoS infrastructure. Other risk factors we examine include risks associated with different properties and the likelihood of violating access control requirements. Extending the principles of the framework, we also apply the approach to multi-level SoS, in order to improve both SoS security and the overall robustness of the network. In addition, the identified risks, vulnerabilities, and interdependent links are modelled by extending network modelling and attack graph generation methods. The proposed SeCurity Risk Analysis and Mitigation Framework and its principal techniques have been researched, developed, implemented, and evaluated via numerous experiments and case studies. The subsequent results ascertain that the framework can successfully observe SoS and produce an accurate security level for the entire SoS in all instances, visualising identified vulnerabilities, interdependencies, high-risk nodes, data access violations, and security grades in a series of reports and undirected graphs. The framework's evolutionary approach to mitigating risks, and the robustness function which can determine the appropriateness of the SoS, revealed promising results, with the framework and its principal techniques identifying SoS topologies and quantifying their associated security levels, distinguishing SoS that are either optimally structured (in terms of communication security) or cannot be evolved because the applied processes would negatively impede the security and robustness of the SoS.

    Likewise, the framework is capable, via evolvement methods, of identifying SoS communication configurations that improve communication security and assure data as it traverses an unsecure and unencrypted SoS, reporting enhanced SoS configurations that mitigate risks in a series of undirected graphs and reports that visualise and detail the SoS topology and its vulnerabilities. These reported candidates and optimal solutions improve security and SoS robustness, and will support the maintenance of acceptable and tolerably low centrality factors, should these recommended configurations be applied to the evaluated SoS infrastructure.
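    The framework derives a security level from vulnerability, node-property, and topology data, and reports centrality factors over undirected graphs. As a hedged sketch of that style of calculation (not the thesis's actual formula), the example below combines per-node vulnerability scores with betweenness centrality into a risk score using networkx; the topology, scores, and the 50/50 weighting are invented.

```python
# Sketch: combining node vulnerability scores with topological centrality
# into a per-node risk score on an undirected SoS graph. The topology,
# scores, and the 50/50 weighting are invented assumptions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("gateway", "scada"), ("gateway", "web"),
                  ("web", "db"), ("scada", "plc"), ("web", "scada")])

# Normalised vulnerability scores per node (e.g. scaled CVSS), invented.
vuln = {"gateway": 0.8, "scada": 0.6, "web": 0.9, "db": 0.4, "plc": 0.7}

centrality = nx.betweenness_centrality(G)  # structural importance, in [0, 1]

risk = {n: 0.5 * vuln[n] + 0.5 * centrality[n] for n in G}
for node, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{node:8s} risk={score:.2f}")

# A crude SoS-wide security level: one minus the mean node risk.
print("security level:", round(1 - sum(risk.values()) / len(risk), 2))
```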

    Cyber Defense Remediation in Energy Delivery Systems

    The integration of Information Technology (IT) and Operational Technology (OT) in Cyber-Physical Systems (CPS) has resulted in increased efficiency and facilitated real-time information acquisition, processing, and decision making. However, the increase in automation technology and the use of the internet for connecting, remote controlling, and supervising systems and facilities has also increased the likelihood of cybersecurity threats that can impact the safety of humans and property. There is a need to assess cybersecurity risks in the power grid, nuclear plants, chemical factories, etc. to gain insight into the likelihood of safety hazards. Quantitative cybersecurity risk assessment will lead to informed cyber defense remediation and will ensure the presence of a mitigation plan to prevent safety hazards. In this dissertation, using Energy Delivery Systems (EDS) as a use case to contextualize a CPS, we address key research challenges in managing cyber risk for cyber defense remediation.

    First, we developed a platform for modeling and analyzing the effect of cyber threats and random system faults on EDS's safety that could lead to catastrophic damages. We developed a data-driven attack graph and fault graph-based model to characterize the exploitability and impact of threats in EDS. We created an operational impact assessment to quantify the damages. Finally, we developed a strategic response decision capability that presents optimal mitigation actions and policies that balance the tradeoff between operational resilience (tactical risk) and strategic risk.

    Next, we addressed the challenge of managing tactical risk based on a prioritized cyber defense remediation plan. A prioritized cyber defense remediation plan is critical for effective risk management in EDS. Due to EDS's complexity in terms of the heterogeneous nature of blending IT, OT, and Industrial Control Systems (ICS), its scale, and its critical process tasks, prioritized remediation should be applied gradually to protect critical assets. We proposed a methodology for prioritizing cyber risk remediation plans by detecting and evaluating paths through critical EDS nodes. We evaluated critical node characteristics based on nodes' architectural positions, measures of centrality based on nodes' connectivity and frequency of network traffic, and the amount of electrical power controlled. The model also examines the relationship between cost models of budget allocation for removing vulnerabilities on critical nodes and their impact on gradual readiness. The proposed cost models were empirically validated in an existing ICS network test-bed by computing node criticality. Two cost models were examined, and although they varied, we concluded that there was a lack of correlation between the type of cost model, the most damageable attack path, and critical node readiness.

    Finally, we proposed a time-varying dynamical model for cyber defense remediation in EDS. We utilize a stochastic evolutionary game model to simulate the dynamic adversary of cyber attack-defense. We leveraged the Logit Quantal Response Dynamics (LQRD) model to quantify real-world players' cognitive differences. We proposed an optimal decision-making approach that calculates the stable evolutionary equilibrium and balances defense costs and benefits. Case studies on EDS indicate that the proposed method can help the defender predict possible attack actions, select the related optimal defense strategy over time, and gain the maximum defense payoffs.

    We also leveraged software-defined networking (SDN) in EDS for dynamic cyber defense remediation. We presented an approach to aid the selection of security controls dynamically in an SDN-enabled EDS and achieve tradeoffs between providing security and Quality of Service (QoS). We modeled the security costs based on end-to-end packet delay and throughput. We proposed a non-dominated sorting based multi-objective optimization framework, which can be implemented within an SDN controller to address the joint problem of optimizing between security and QoS parameters, with time complexity O(MN²), where M is the number of objective functions and N is the population size for each generation. We presented simulation results that illustrate how data availability and data integrity can be achieved while maintaining QoS constraints.
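    The cited O(MN²) complexity is the signature of the fast non-dominated sort used in NSGA-II-style optimizers. The sketch below implements that sort over invented (security cost, end-to-end delay) objective pairs, both minimized; it illustrates the sorting step only, not the dissertation's full security/QoS framework.

```python
# Sketch: fast non-dominated sorting (as in NSGA-II), the O(M * N^2) step the
# abstract cites. Objective vectors (security cost, delay) are invented;
# both objectives are minimized.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(population):
    fronts = [[]]
    dominated_by = [[] for _ in population]   # solutions each one dominates
    domination_count = [0] * len(population)  # how many dominate each one
    for i, p in enumerate(population):
        for j, q in enumerate(population):
            if dominates(p, q):
                dominated_by[i].append(j)
            elif dominates(q, p):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# (security cost, end-to-end delay) per candidate configuration, invented.
pop = [(0.2, 9.0), (0.5, 4.0), (0.3, 7.0), (0.6, 5.0), (0.1, 10.0)]
print(fast_non_dominated_sort(pop))  # Pareto fronts as lists of indices
```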

    Security Analysis of System Behaviour - From "Security by Design" to "Security at Runtime" -

    The Internet today provides the environment for novel applications and processes which may evolve far beyond their pre-planned scope and purpose. Security analysis is growing in complexity with the increase in functionality, connectivity, and dynamics of current electronic business processes. Technical processes within critical infrastructures also have to cope with these developments. To tackle the complexity of security analysis, the application of models is becoming standard practice. However, model-based support for security analysis is needed not only in pre-operational phases but also during process execution, in order to provide situational security awareness at runtime. This cumulative thesis provides three major contributions to modelling methodology. Firstly, it provides an approach for model-based analysis and verification of security and safety properties, in order to support fault prevention and fault removal in system design or redesign. Furthermore, some construction principles for the design of well-behaved scalable systems are given. The second topic is the analysis of the exposure of vulnerabilities in the software components of networked systems to exploitation by internal or external threats. This kind of fault forecasting allows the security assessment of alternative system configurations and security policies. Validation and deployment of security policies that minimise the attack surface can then improve fault tolerance and mitigate the impact of successful attacks. Thirdly, the approach is extended to runtime applicability. An observing system monitors an event stream from the observed system with the aim of detecting faults, that is, deviations from the specified behaviour or security compliance violations, at runtime. Furthermore, knowledge about the expected behaviour given by an operational model is used to predict faults in the near future. Building on this, a holistic security management strategy is proposed. The architecture of the observing system is described, and the applicability of model-based security analysis at runtime is demonstrated utilising processes from several industrial scenarios. The results of this cumulative thesis are provided by 19 selected peer-reviewed papers.
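    The thesis's runtime contribution has an observing system check an event stream against an operational model to detect deviations from specified behaviour. The sketch below is a minimal monitor of that kind, validating events against a finite-state transition relation; the states, events, and stream are invented, and real operational models would be far richer.

```python
# Sketch: a runtime monitor checking an event stream against an operational
# model, here a simple finite-state transition relation. Model and event
# stream are invented illustrations.
ALLOWED = {  # state -> {event: next_state}
    "idle":          {"login": "authenticated"},
    "authenticated": {"read": "authenticated",
                      "write": "authenticated",
                      "logout": "idle"},
}

def monitor(events, state="idle"):
    for i, event in enumerate(events):
        nxt = ALLOWED.get(state, {}).get(event)
        if nxt is None:   # deviation from the specified behaviour
            print(f"fault at event {i}: '{event}' not allowed in '{state}'")
            return False
        state = nxt
    return True

stream = ["login", "read", "write", "logout", "write"]  # last event deviates
monitor(stream)  # -> fault at event 4: 'write' not allowed in 'idle'
```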