
    A Security, Privacy and Trust Methodology for IIoT

    The adoption of IoT and industrial IoT (IIoT) is increasingly becoming the consensus under Industry 4.0. Relevant data-driven methodologies typically concentrate on the scoring systems of CVE prioritization schemes, the scoring formulas of CVSS metrics, and other vulnerability impact factors. However, prioritized lists such as the CWE/SANS Top 25 suffer from a critical weakness: they fail to consider empirical evidence of exploits. Considering the distinct properties and specific risks of SCADA systems in IIoT, this paper overcomes the inherent limitation of IIoT empirical research, the small sample size of exploits, by collecting data manually. The study then develops an exploit-factor-embedded regression model to statistically assess the significant relationships between security-, privacy-, and trust-based vulnerability attributes. Through this data-driven empirical methodology, the study elucidates the interactions of security, privacy, and trust in IIoT with quantitative indicators, providing grounds for further related work. In addition to the security, privacy, and trust regression analysis, the study explores the impact of IoT and IIoT with a difference-in-differences (DID) approach, applying bootstrap standard errors with the kernel option and a quantile DID test to evaluate the robustness of the DID model. The empirical results indicate that: 1) the CVSS score of a vulnerability is unrelated to the disclosure of exploits, but is positively correlated with CWE density and CVE year; 2) among exploits by SCADA-related authors, the more identical CWEs that exist across these exploits, the higher the CVSS score of the exploited CVE, with CVE year exerting a negative moderating effect on this relationship; and 3) the CVSS scores of SCADA exploits have decreased significantly relative to non-SCADA exploits since the promulgation of Industry 4.0.
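The difference-in-differences test described in the abstract can be illustrated with a small sketch. The snippet below estimates the DID effect of Industry 4.0 on SCADA CVSS scores with an OLS interaction term and a nonparametric bootstrap standard error; the column names (cvss, scada, post_i40, cve_year), the specification, and the bootstrap details are assumptions for illustration, not the paper's actual variables, data, or full robustness procedure (which also includes a kernel option and a quantile DID test).

```python
# Hedged sketch of the difference-in-differences (DID) estimate described above.
# Column names (cvss, scada, post_i40, cve_year), the specification, and the
# bootstrap are illustrative assumptions, not the paper's actual variables or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame) -> float:
    """OLS DID: the scada x post_i40 interaction captures the change in SCADA
    CVSS scores relative to non-SCADA after Industry 4.0."""
    model = smf.ols("cvss ~ scada * post_i40 + cve_year", data=df).fit()
    return model.params["scada:post_i40"]

def bootstrap_se(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0) -> float:
    """Nonparametric bootstrap standard error for the DID coefficient."""
    rng = np.random.default_rng(seed)
    estimates = [
        did_estimate(df.sample(n=len(df), replace=True,
                               random_state=int(rng.integers(0, 2**32 - 1))))
        for _ in range(n_boot)
    ]
    return float(np.std(estimates, ddof=1))
```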

    Attack graph approach to dynamic network vulnerability analysis and countermeasures

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. It is widely accepted that modern computer networks (often presented as a heterogeneous collection of functioning organisations, applications, software, and hardware) contain vulnerabilities. This research proposes a new methodology to compute a dynamic severity cost for each state. Here a state refers to the behaviour of a system during an attack; an example is a state in which an attacker could manipulate the information in an application to alter credentials. The cost is computed using a modified variant of the Common Vulnerability Scoring System (CVSS), referred to as the Dynamic Vulnerability Scoring System (DVSS), which calculates intrinsic, time-based, and ecological metric scores by combining related sub-scores and modelling the problem's parameters in a mathematical framework to produce a unique severity cost. Because the static nature of CVSS limits its scoring value, the author has developed a novel model that produces a more precise and efficient DVSS metric. In this approach, the final scores are determined from a number of parameters, including the network architecture, device settings, and the impact of vulnerability interactions. An attack graph (AG) is a security model representing the chains of vulnerability exploits in a network. A number of researchers have acknowledged attack graphs' visual complexity and the lack of in-depth understanding they afford. Current attack graph tools are constrained to limited attributes or even rely on hand-generated input; the automatic generation of vulnerability information has been troublesome, and vulnerability descriptions are frequently created by hand or based on limited data. The network architectures and configurations, along with the interactions between individual vulnerabilities, are considered when computing the cost using the DVSS within a dynamic cost-centric framework. A new methodology is presented for building an attack graph with a dynamic cost metric based on DVSS, together with a novel methodology to estimate and represent the cost-centric approach for each host's states. The framework is applied to a test network, using the Nessus scanner to detect known vulnerabilities and using those results to build and represent the dynamic cost-centric attack graph with ranking algorithms (in a fashion standardised with Mehta et al., 2006 and Kijsanayothin, 2010). However, instead of using the vulnerabilities of each host directly, a CostRank Markov model is developed around the cost-centric approach, reducing the complexity of the attack graph and the problem of visibility. An analogous parallel algorithm is developed to implement CostRank. The motivation for a parallel CostRank algorithm is to expedite the state-ranking calculations as the number of hosts and/or vulnerabilities grows; likewise, the author aims to secure large-scale networks that require fast and reliable computation to rank enormous graphs with thousands of vertices (states) and millions of arcs (each representing an action that moves the system from one state to another). The proposed approach therefore focuses on a parallel CostRank computational architecture to appraise the improvement in CostRank calculations and the scalability of the algorithm. In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can enhance the performance and scalability of parallel CostRank computations. A practical model of the analogous parallel CostRank calculation is undertaken, resulting in a substantial decrease in communication overhead and iteration time. The results are presented analytically in terms of scalability, efficiency, memory usage, speed-up, and input/output rates. Finally, a countermeasures model is developed to protect against network attacks using a Dynamic Countermeasures Attack Tree (DCAT). The DCAT is built with the following scheme: (i) use the scalable parallel CostRank algorithm to determine the critical assets that system administrators need to protect; (ii) use the Nessus scanner to determine the vulnerabilities associated with each asset under the dynamic cost-centric framework and DVSS; (iii) collect all published mitigations for those vulnerabilities; (iv) assess how well each security solution mitigates the risks; and (v) evaluate the DCAT algorithm in terms of effective security cost, probability, and cost/benefit analysis to reduce the total impact of a specific vulnerability.
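As a rough illustration of the ranking step, the sketch below runs a cost-weighted, PageRank-style power iteration over attack-graph states, biasing transitions toward successors with higher DVSS-derived cost. The weighting scheme and damping factor are assumptions for illustration; the thesis's CostRank Markov model and its parallel implementation are not reproduced here.

```python
# Minimal sketch of a cost-weighted, PageRank-style ranking over attack-graph
# states, in the spirit of the CostRank idea described above. The weighting
# (transitions biased toward higher-cost successor states) and the damping
# factor are illustrative assumptions, not the thesis's exact model.
import numpy as np

def cost_rank(adjacency: np.ndarray, dvss_cost: np.ndarray,
              damping: float = 0.85, tol: float = 1e-9) -> np.ndarray:
    """Rank attack-graph states; adjacency[i, j] = 1 if state i can reach state j."""
    n = adjacency.shape[0]
    # Bias each outgoing transition by the DVSS cost of the successor state.
    weights = adjacency.astype(float) * dvss_cost[np.newaxis, :]
    row_sums = weights.sum(axis=1, keepdims=True)
    # States with no outgoing edges fall back to a uniform transition row.
    transition = np.divide(weights, row_sums,
                           out=np.full_like(weights, 1.0 / n),
                           where=row_sums > 0)
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * rank @ transition
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Toy chain of states: initial foothold -> privilege escalation -> critical asset.
adj = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
print(cost_rank(adj, dvss_cost=np.array([2.0, 6.5, 9.0])))
```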

    Analysis of Bulk Power System Resilience Using Vulnerability Graph

    Critical infrastructure such as a Bulk Power System (BPS) should have some quantifiable measure of resiliency and definite rule-sets for achieving a given resilience value. Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) networks are integral parts of a BPS. A BPS or ICS is not vulnerable in itself because of its proprietary technology, but when the control network and the corporate network must communicate for performance measurement and reporting, the ICS or BPS becomes vulnerable to cyber-attacks. Thus, a systematic way of quantifying resiliency and identifying crucial nodes in the network is critical to the cyber-resiliency measurement process and can help security analysts and power system operators in their decision-making. This thesis focuses on the resilience analysis of the BPS and proposes a ranking algorithm to identify critical nodes in the network. Although some ranking algorithms are already in place, they lack comprehensive inclusion of the factors that are critical in the cyber domain. This thesis analyzes a range of factors that are critical from the point of view of cyber-attacks and develops a MADM (Multi-Attribute Decision Making) based ranking method. The node ranking process not only helps improve resilience but also facilitates hardening the network against vulnerabilities and threats. The proposed method is called MVNRank, which stands for Multiple Vulnerability Node Rank. The MVNRank algorithm takes into account the asset value of the hosts and the exploitability and impact scores of vulnerabilities as quantified by CVSS (Common Vulnerability Scoring System). It also considers the total number of vulnerabilities and the severity level of each vulnerability, the degree centrality of the nodes in the vulnerability graph, and the attacker's distance from the target node. We use a multi-layered directed acyclic graph (DAG) model and rank the critical nodes in the corporate and control networks which fall on the paths to the target ICS. We don't rank the ICS nodes themselves but use them to calculate the potential power-loss capability of the control center nodes, based on the assumed ICS connectivity to the BPS. Unlike most prior work, we consider multiple vulnerabilities for each node in the network when generating the rank, using a weighted-average method. The resilience computation is highly time-consuming because it considers all possible attack paths from the source to the target node, whose number grows multiplicatively with the number of nodes and vulnerabilities. Thus, one of the goals of this thesis is to reduce the simulation time needed to compute resilience, which is achieved as illustrated in the simulation results.
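A minimal sketch of the weighted-average, multi-attribute idea behind MVNRank is given below. The attribute set, the normalization, and the weights are illustrative assumptions rather than the thesis's exact formula.

```python
# Hedged sketch of a multi-attribute, weighted-average node score in the spirit
# of MVNRank. Attributes, normalization, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    exploitability: float  # CVSS v3 exploitability sub-score (max ~3.9)
    impact: float          # CVSS v3 impact sub-score (max ~6.0)

@dataclass
class Node:
    asset_value: float        # relative importance of the host, assumed in [0, 1]
    degree_centrality: float  # normalized degree in the vulnerability graph
    distance_to_target: int   # hops from the attacker's entry point
    vulns: list

def node_score(node: Node, weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Weighted combination of node attributes; higher means more critical."""
    if not node.vulns:
        return 0.0
    # Average severity over all vulnerabilities on the node (weighted-average idea),
    # roughly normalized by the maximum exploitability x impact product.
    avg_severity = (sum(v.exploitability * v.impact for v in node.vulns)
                    / (len(node.vulns) * 3.9 * 6.0))
    proximity = 1.0 / (1 + node.distance_to_target)  # closer nodes rank higher
    w_asset, w_sev, w_deg, w_prox = weights
    return (w_asset * node.asset_value + w_sev * avg_severity
            + w_deg * node.degree_centrality + w_prox * proximity)

node = Node(asset_value=0.8, degree_centrality=0.5, distance_to_target=2,
            vulns=[Vulnerability(exploitability=3.9, impact=5.9)])
print(node_score(node))
```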

    RISK WEIGHTED VULNERABILITY ANALYSIS IN AUTOMATED RED TEAMING

    The Cyber Automated Red Team Tool (CARTT) automates red teaming tasks, such as conducting vulnerability analysis in DOD networks. The tool provides its users with recommendations either to mitigate cyber threats against identified vulnerabilities or to exploit those vulnerabilities using cyber-attack actions. Previous versions of CARTT, however, did not consider a risk weighting of identified vulnerabilities before the exploitation phase. This thesis focused on extending CARTT by implementing a risk-weighted framework that provides a risk-based analysis of identified vulnerabilities. The framework is based on the Host Exposure algorithm presented by the Naval Research Laboratory and was built into the existing CARTT server using the Python programming language. The resulting risk-based analysis of vulnerabilities is presented to the CARTT user in an easily readable table that provides more complete and actionable information. The implementation of this risk-weighted framework provides CARTT with enhanced analysis of the vulnerabilities that pose the greatest risk to a target network. Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.
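To make the risk-weighted output concrete, the sketch below assigns each finding a simple risk weight and prints a risk-ordered table. The weighting formula is a stand-in assumption; it is not the Naval Research Laboratory's Host Exposure algorithm, which the thesis actually implements.

```python
# Illustrative sketch of ranking identified vulnerabilities by a risk weight
# before the exploitation phase. The weighting formula and the asset
# criticality values are assumptions, not the Host Exposure algorithm itself.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss_base: float          # CVSS base score, 0.0 - 10.0
    asset_criticality: float  # analyst-assigned weight, 0.0 - 1.0

def risk_weight(f: Finding) -> float:
    """Simple product of normalized severity and asset criticality."""
    return (f.cvss_base / 10.0) * f.asset_criticality

def report(findings: list) -> None:
    """Print a readable, risk-ordered table for the operator."""
    ranked = sorted(findings, key=risk_weight, reverse=True)
    print(f"{'Host':<12}{'CVE':<18}{'CVSS':>6}{'Risk':>8}")
    for f in ranked:
        print(f"{f.host:<12}{f.cve:<18}{f.cvss_base:>6.1f}{risk_weight(f):>8.2f}")

report([Finding("10.0.0.5", "CVE-2021-44228", 10.0, 0.9),
        Finding("10.0.0.7", "CVE-2019-0708", 9.8, 0.4)])
```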

    Countering Cybersecurity Vulnerabilities in the Power System

    Security vulnerabilities in software pose an important threat to power grid security and can be exploited by attackers if not properly addressed. Every month many vulnerabilities are discovered, and all of them must be remediated in a timely manner to reduce the chance of being exploited. In current practice, security operators have to manually analyze each vulnerability present in their assets and determine the remediation actions within a short time period, which demands a tremendous amount of human resources from electric utilities. To solve this problem, we propose a machine learning-based automation framework that automates vulnerability analysis and determines remediation actions for electric utilities; the determined remediation actions are then applied to the system. However, not all vulnerabilities can be remediated quickly due to limited resources, and the order in which remediation actions are applied significantly affects the system's risk level, so it is important to schedule which vulnerabilities should be remediated first. We model this as a scheduling optimization problem that orders the remediation actions to minimize total risk, utilizing the vulnerabilities' impact and their probabilities of being exploited. In addition, an electric utility needs to know whether vulnerabilities have already been exploited in its own power system; an exploited vulnerability has to be addressed immediately. It is therefore important to identify whether vulnerabilities have already been taken advantage of by attackers to launch attacks, and different vulnerabilities may require different identification methods. In this dissertation, we explore identifying exploited vulnerabilities by detecting and localizing false data injection attacks, with a case study in the Automatic Generation Control (AGC) system, a key control system for keeping the power system in balance. Malicious measurements can be injected into exploited devices to mislead AGC into making false power generation adjustments that harm power system operations. We propose Long Short-Term Memory (LSTM) neural network-based methods and a Fourier transform-based method to detect and localize such false data injection attacks. Detecting and localizing these attacks provides further information to better prioritize vulnerability remediation actions.
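The scheduling idea can be sketched as follows: if each unpatched vulnerability accrues risk (impact times exploitation probability) until it is fixed, ordering fixes by risk rate per hour of remediation effort (Smith's rule for minimizing total weighted completion time) is a natural baseline. The fields and numbers below are illustrative assumptions, not the dissertation's actual optimization model.

```python
# Hedged sketch of the remediation-scheduling idea above: order fixes by
# risk-accrual rate per hour of work. Fields and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Remediation:
    cve: str
    impact: float     # consequence if exploited
    p_exploit: float  # probability of being exploited
    hours: float      # effort to apply the fix

def schedule(tasks: list) -> list:
    """Order remediations so total accumulated risk is minimized (Smith's rule)."""
    # Higher risk rate and shorter fix time go first.
    return sorted(tasks, key=lambda t: (t.impact * t.p_exploit) / t.hours,
                  reverse=True)

plan = schedule([Remediation("CVE-A", impact=9.0, p_exploit=0.7, hours=8),
                 Remediation("CVE-B", impact=6.0, p_exploit=0.9, hours=2),
                 Remediation("CVE-C", impact=9.8, p_exploit=0.2, hours=24)])
print([t.cve for t in plan])  # CVE-B first: highest risk rate per hour of work
```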

    Security risk assessment in cloud computing domains

    Cyber security is one of the primary concerns on any computing platform. While addressing these security risks, organizations cannot invest unlimited resources in mitigation measures, since they operate under budgetary constraints. Performing security risk assessment is therefore imperative to designing optimal mitigation measures, as it provides insight into the strengths and weaknesses of the different assets affiliated with a computing platform. The objective of the research presented in this dissertation is to improve upon existing risk assessment frameworks and guidelines associated with the key assets of cloud computing domains: infrastructure, applications, and users. The dissertation presents various informal approaches to performing security risk assessment that help identify the security risks confronting these assets, and uses the results to carry out the required cost-benefit tradeoff analyses. This benefits organizations by aiding them in better comprehending the security risks their assets are exposed to, and thereafter securing those assets by designing cost-optimal mitigation measures.
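As a toy illustration of the cost-benefit tradeoff analysis mentioned above, the sketch below keeps only mitigations whose expected risk reduction exceeds their cost and orders them by net benefit. The selection rule and figures are assumptions for illustration, not the dissertation's framework.

```python
# Hedged sketch of a cost-benefit tradeoff over candidate mitigations.
# Costs, risk-reduction values, and the selection rule are illustrative.
from dataclasses import dataclass

@dataclass
class Mitigation:
    name: str
    cost: float            # annual cost of the control
    risk_reduction: float  # expected annual loss avoided if deployed

def select_mitigations(candidates: list) -> list:
    """Return cost-effective controls, ordered by net benefit."""
    worthwhile = [m for m in candidates if m.risk_reduction > m.cost]
    return sorted(worthwhile, key=lambda m: m.risk_reduction - m.cost, reverse=True)

chosen = select_mitigations([
    Mitigation("MFA for tenant admins", cost=20_000, risk_reduction=150_000),
    Mitigation("Full traffic inspection", cost=90_000, risk_reduction=60_000),
])
print([m.name for m in chosen])  # only controls whose benefit exceeds their cost
```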

    Measuring network security using Bayesian Network-based attack graphs

    Given the increasing dependence of our societies on networked information systems, the overall security of such systems should be measured and improved. Recent research has explored the application of attack graphs and probabilistic security metrics to address this challenge. However, such work usually shares several limitations. First, the scores of individual vulnerabilities are usually assumed to be independent; this assumption does not hold in many realistic cases, where exploiting one vulnerability may change the scores of others. Second, the evolving nature of vulnerabilities and networks has generally been ignored: the scores of individual vulnerabilities change constantly as patches and exploits are released, which should be taken into account when measuring network security. To address these limitations, this thesis first proposes a Bayesian Network-based attack graph model for combining the scores of individual vulnerabilities into a global measurement of network security. The application of Bayesian Networks allows us to handle dependencies between scores and provides a sound theoretical foundation for network security metrics. We then extend the model using Dynamic Bayesian Networks in order to reason about patterns and trends in the changing scores of vulnerabilities. Finally, we implement and evaluate the proposed models through simulation studies.
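A simplified sketch of combining per-vulnerability scores into a global measurement is shown below: CVSS scores are mapped to exploit probabilities and propagated through an attack graph with a noisy-OR combination, a common simplification in this literature. The score/10 mapping and the propagation rule are assumptions; the thesis's Bayesian Network model handles dependencies more generally.

```python
# Minimal sketch of turning per-vulnerability CVSS-derived exploit probabilities
# into a global compromise probability over an attack graph. The noisy-OR
# combination and the CVSS-to-probability mapping are illustrative assumptions.
from typing import Dict, List

def compromise_probabilities(graph: Dict[str, List[str]],
                             cvss: Dict[str, float],
                             entry: List[str]) -> Dict[str, float]:
    """graph maps each node to its parents; entry nodes are attacker-reachable."""
    prob: Dict[str, float] = {}

    def p(node: str) -> float:
        if node in prob:
            return prob[node]
        p_exploit = cvss[node] / 10.0          # CVSS score mapped to [0, 1]
        if node in entry:
            p_reach = 1.0                      # directly reachable by the attacker
        else:
            # Noisy-OR: reachable if at least one parent is compromised.
            p_none = 1.0
            for parent in graph.get(node, []):
                p_none *= 1.0 - p(parent)
            p_reach = 1.0 - p_none
        prob[node] = p_reach * p_exploit
        return prob[node]

    for node in graph:
        p(node)
    return prob

# Example: web server (entry) -> app server -> database.
probs = compromise_probabilities(
    graph={"web": [], "app": ["web"], "db": ["app"]},
    cvss={"web": 7.5, "app": 9.8, "db": 6.5},
    entry=["web"])
print(probs)  # e.g. db compromise probability = 0.75 * 0.98 * 0.65
```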