
    Cyber Defense Remediation in Energy Delivery Systems

    The integration of Information Technology (IT) and Operational Technology (OT) in Cyber-Physical Systems (CPS) has resulted in increased efficiency and facilitated real-time information acquisition, processing, and decision making. However, the growth of automation technology and the use of the internet for connecting, remotely controlling, and supervising systems and facilities have also increased the likelihood of cybersecurity threats that can impact the safety of humans and property. There is a need to assess cybersecurity risks in the power grid, nuclear plants, chemical factories, etc. to gain insight into the likelihood of safety hazards. Quantitative cybersecurity risk assessment will lead to informed cyber defense remediation and will ensure the presence of a mitigation plan to prevent safety hazards. In this dissertation, using Energy Delivery Systems (EDS) as a use case to contextualize a CPS, we address key research challenges in managing cyber risk for cyber defense remediation. First, we developed a platform for modeling and analyzing the effect of cyber threats and random system faults on EDS safety that could lead to catastrophic damage. We developed a data-driven attack graph and fault graph-based model to characterize the exploitability and impact of threats in EDS. We created an operational impact assessment to quantify the damages. Finally, we developed a strategic response decision capability that presents optimal mitigation actions and policies balancing the tradeoff between operational resilience (tactical risk) and strategic risk. Next, we addressed the challenge of managing tactical risk through a prioritized cyber defense remediation plan. A prioritized cyber defense remediation plan is critical for effective risk management in EDS. Due to EDS's complexity in terms of the heterogeneous blend of IT, OT, and Industrial Control Systems (ICS), its scale, and its critical process tasks, prioritized remediation should be applied gradually to protect critical assets. We proposed a methodology for prioritizing cyber risk remediation plans by detecting and evaluating paths through critical EDS nodes. We evaluated node criticality based on architectural position, centrality measures derived from node connectivity and network traffic frequency, and the amount of electrical power controlled. The model also examines the relationship between cost models of budget allocation for removing vulnerabilities on critical nodes and their impact on gradual readiness. The proposed cost models were empirically validated by computing node criticality on an existing ICS network test-bed. Two cost models were examined; although they varied, we found no correlation between the type of cost model and either the most damaging attack path or critical node readiness. Finally, we proposed a time-varying dynamical model for cyber defense remediation in EDS. We use a stochastic evolutionary game model to simulate the dynamic interplay between cyber attack and defense. We leveraged the Logit Quantal Response Dynamics (LQRD) model to quantify real-world players' cognitive differences. We proposed an optimal decision-making approach that calculates the stable evolutionary equilibrium and balances defense costs and benefits. Case studies on EDS indicate that the proposed method can help the defender predict possible attack actions, select the corresponding optimal defense strategy over time, and obtain the maximum defense payoff.
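    As a rough illustration of the node-criticality ranking step described above, the sketch below scores nodes by a weighted combination of betweenness centrality, observed traffic frequency, and controlled electrical power. The topology, traffic counts, power values, and weights are illustrative assumptions, not data or parameters from the dissertation.

```python
import networkx as nx

# Minimal sketch: rank EDS nodes for prioritized remediation.
# All numbers below are made up for illustration.
G = nx.Graph()
G.add_edges_from([("hmi", "scada"), ("scada", "plc1"), ("scada", "plc2"),
                  ("plc1", "breaker1"), ("plc2", "breaker2")])

traffic = {"hmi": 120, "scada": 900, "plc1": 400, "plc2": 380,
           "breaker1": 60, "breaker2": 55}          # observed packets per minute
power_mw = {"hmi": 0, "scada": 0, "plc1": 35, "plc2": 40,
            "breaker1": 35, "breaker2": 40}         # electrical power controlled (MW)

betweenness = nx.betweenness_centrality(G)          # architectural position / connectivity

def normalise(d):
    hi = max(d.values()) or 1
    return {k: v / hi for k, v in d.items()}

t, p = normalise(traffic), normalise(power_mw)
criticality = {n: 0.4 * betweenness[n] + 0.3 * t[n] + 0.3 * p[n] for n in G}

# Remediate vulnerabilities on the highest-criticality nodes first.
for node, score in sorted(criticality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```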
    We also leveraged software-defined networking (SDN) in EDS for dynamic cyber defense remediation. We presented an approach that aids the dynamic selection of security controls in an SDN-enabled EDS and achieves a tradeoff between security and Quality of Service (QoS). We modeled security costs based on end-to-end packet delay and throughput. We proposed a non-dominated sorting based multi-objective optimization framework that can be implemented within an SDN controller to jointly optimize security and QoS parameters with a time complexity of O(MN²), where M is the number of objective functions and N is the population size per generation. We presented simulation results that illustrate how data availability and data integrity can be achieved while maintaining QoS constraints.
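    The non-dominated sorting step that gives the O(MN²) bound can be sketched as follows; the candidate control configurations and their objective values are invented for illustration, and all objectives are treated as minimised (throughput is negated).

```python
def non_dominated_sort(points):
    """Fast non-dominated sorting (NSGA-II style): O(M N^2) comparisons,
    where M is the number of objectives and N the population size.
    `points` is a list of objective tuples, all to be minimised."""
    N = len(points)
    dominated_by = [[] for _ in range(N)]   # indices that solution i dominates
    domination_count = [0] * N              # how many solutions dominate i
    fronts = [[]]

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)

    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]

# Hypothetical SDN security-control configurations scored on
# (security cost, end-to-end delay in ms, negated throughput in Mbps):
candidates = [(0.2, 12.0, -90.0), (0.5, 8.0, -95.0), (0.3, 15.0, -80.0)]
print(non_dominated_sort(candidates))   # -> [[0, 1], [2]]
```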

    Service Level Agreement-based GDPR Compliance and Security assurance in (multi)Cloud-based systems

    Compliance with the new European General Data Protection Regulation (Regulation (EU) 2016/679) and security assurance are currently two major challenges of Cloud-based systems. GDPR compliance implies the definition, enforcement and control of both privacy and security mechanisms, including evidence collection. This paper presents a novel DevOps framework aimed at supporting Cloud consumers in designing, deploying and operating (multi)Cloud systems that include the necessary privacy and security controls for ensuring transparency to end-users, third parties in service provision (if any) and law enforcement authorities. The framework relies on the risk-driven specification, at design time, of privacy and security level objectives in the system Service Level Agreement (SLA) and on their continuous monitoring and enforcement at runtime. The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 644429 and No 780351, the MUSA project and the ENACT project, respectively. We would also like to acknowledge all the members of the MUSA Consortium and ENACT Consortium for their valuable help.
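    A minimal sketch of the continuous-monitoring idea, assuming a hypothetical set of SLA objectives and metric names (none of these are taken from the MUSA or ENACT frameworks): security and privacy level objectives are declared with targets and checked against a runtime monitoring snapshot.

```python
# Illustrative SLA objectives: objective id -> (metric name, comparison, target).
SLA_SLOS = {
    "encryption-at-rest": ("pct_encrypted_volumes", ">=", 100.0),
    "breach-notification": ("notification_delay_hours", "<=", 72.0),
    "access-logging": ("pct_requests_logged", ">=", 99.9),
}

def evaluate_slo(measurement, comparison, target):
    return measurement >= target if comparison == ">=" else measurement <= target

def check_sla(measurements):
    """Return the list of violated objectives for one monitoring snapshot."""
    return [slo for slo, (metric, cmp_, target) in SLA_SLOS.items()
            if not evaluate_slo(measurements[metric], cmp_, target)]

snapshot = {"pct_encrypted_volumes": 100.0,
            "notification_delay_hours": 80.0,
            "pct_requests_logged": 99.95}
print(check_sla(snapshot))   # -> ['breach-notification']
```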

    Estimating Impact and Frequency of Risks to Safety and Mission Critical Systems Using CVSS

    Many safety and mission critical systems depend on the correct and secure operation of both supportive and core software systems. For example, both the safety of personnel and the effective execution of core missions on an oil platform depend on the correct recording, storing, transfer and interpretation of data, such as that for the Logging While Drilling (LWD) and Measurement While Drilling (MWD) subsystems. Here, data is recorded on site, packaged and then transferred to an on-shore operational centre. Today, the data is transferred on dedicated communication channels to ensure a secure and safe transfer, free from deliberate and accidental faults. However, as cost control becomes ever more important, some of this transfer will take place over remotely accessible infrastructure in the future. Thus, communication will be prone to known security vulnerabilities exploitable by outsiders. This paper presents a model that estimates the risk level of known vulnerabilities as a combination of frequency and impact estimates derived from the Common Vulnerability Scoring System (CVSS). The model is implemented as a Bayesian Belief Network (BBN).
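    A minimal sketch of the frequency/impact combination, not the paper's actual BBN: CVSS v2-style exploitability and impact subscores are discretised and combined through a hand-made table into a risk level. The thresholds and the table are illustrative assumptions.

```python
def level(score, thresholds=(3.9, 6.9)):
    """Discretise a 0-10 CVSS subscore into low / medium / high."""
    if score <= thresholds[0]:
        return "low"
    if score <= thresholds[1]:
        return "medium"
    return "high"

RISK_TABLE = {  # (frequency level, impact level) -> risk level (assumed mapping)
    ("low", "low"): "low",          ("low", "medium"): "low",
    ("low", "high"): "medium",      ("medium", "low"): "low",
    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",      ("high", "medium"): "high",
    ("high", "high"): "high",
}

def risk_level(exploitability_subscore, impact_subscore):
    freq = level(exploitability_subscore)   # frequency estimate from exploitability
    imp = level(impact_subscore)            # impact estimate from impact subscore
    return RISK_TABLE[(freq, imp)]

# Example: a vulnerability with exploitability 8.6 and impact 6.4
print(risk_level(8.6, 6.4))   # -> "high"
```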

    Run-time risk management in adaptive ICT systems

    We present results of the SERSCIS project related to risk management and mitigation strategies in adaptive multi-stakeholder ICT systems. The SERSCIS approach involves using semantic threat models to support automated design-time threat identification and mitigation analysis. The focus of this paper is the use of these models at run-time for automated threat detection and diagnosis. This is based on a combination of semantic reasoning and Bayesian inference applied to run-time system monitoring data. The resulting dynamic risk management approach is compared to a conventional ISO 27000 type approach, and validation test results are presented from an Airport Collaborative Decision Making (A-CDM) scenario involving data exchange between multiple airport service providers.
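    A toy Bayesian update in the spirit of the run-time diagnosis described above (the prior, likelihoods and observation values are assumptions, not SERSCIS parameters): each abnormal monitoring observation shifts the posterior probability that a threat is active.

```python
def posterior(prior, likelihood_given_threat, likelihood_given_benign):
    """P(threat | observation) from a single monitoring observation."""
    evidence = likelihood_given_threat * prior + likelihood_given_benign * (1 - prior)
    return likelihood_given_threat * prior / evidence

p = 0.02                                       # prior: 2% chance an exchange is under attack
for obs_t, obs_b in [(0.7, 0.1), (0.6, 0.2)]:  # two abnormal monitoring observations
    p = posterior(p, obs_t, obs_b)
print(round(p, 3))                             # posterior threat probability after both observations
```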

    Estimating ToE Risk Level using CVSS

    Security management is about calculated risk and requires continuous evaluation to ensure cost, time and resource effectiveness. Part of this is making future-oriented, cost-benefit investments in security. Security investments must adhere to healthy business principles where both security and financial aspects play an important role. Information on the current and potential risk level is essential to successfully trade off security and financial aspects. Risk level is the combination of the frequency and impact of a potential unwanted event, often referred to as a security threat or misuse. The paper presents a risk level estimation model that derives risk level as a conditional probability over frequency and impact estimates. The frequency and impact estimates are derived from a set of attributes specified in the Common Vulnerability Scoring System (CVSS). The model works at the level of vulnerabilities (just as CVSS does) and is able to compose vulnerabilities into service levels. The service levels define the potential risk levels and are modelled as a Markov process, which is then used to predict the risk level at a particular time.
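    A minimal sketch of the Markov-process prediction step, assuming an illustrative three-state service-level model and a made-up transition matrix (not values from the paper):

```python
import numpy as np

# Service levels as Markov states; row i gives the probability of moving from
# level i to each level per time step. The matrix is an illustrative assumption.
P = np.array([
    [0.90, 0.08, 0.02],   # from "normal"
    [0.30, 0.60, 0.10],   # from "degraded"
    [0.10, 0.40, 0.50],   # from "critical"
])

initial = np.array([1.0, 0.0, 0.0])       # start in the "normal" service level
t = 24                                    # predict the risk level 24 steps ahead
distribution = initial @ np.linalg.matrix_power(P, t)
print(dict(zip(["normal", "degraded", "critical"], distribution.round(3))))
```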

    Towards an automatic data value analysis method for relational databases

    Data is becoming one of the world’s most valuable resources and it is suggested that those who own the data will own the future. However, despite data being an important asset, data owners struggle to assess its value. Some recent pioneering works have led to an increased awareness of the necessity for measuring data value, and have put forward some simple but engaging survey-based methods to help with first-level data assessment in an organisation. However, these methods are manual and depend on the costly input of domain experts. In this paper, we propose to extend the manual survey-based approaches with additional metrics and dimensions derived from the evolving literature on data value dimensions and tailored specifically to our case study. We also developed an automatic, metric-based data value assessment approach that (i) automatically quantifies the business value of data in Relational Databases (RDB), and (ii) provides a scoring method that facilitates the ranking and extraction of the most valuable RDB tables. We evaluate our proposed approach on a real-world relational database from a small online retailer (MyVolts) and show in our experimental study that the data value assessments made by our automated system match those expressed by the domain expert approach.
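    A minimal sketch of the metric-based scoring and ranking idea, with invented tables, metrics and weights rather than the paper's actual dimensions:

```python
# Raw metrics gathered per table from the RDB and its query logs (illustrative).
TABLES = {
    "orders":    {"rows": 120_000, "query_hits": 9_400, "completeness": 0.98},
    "customers": {"rows": 35_000,  "query_hits": 7_100, "completeness": 0.95},
    "audit_log": {"rows": 900_000, "query_hits": 300,   "completeness": 0.80},
}
WEIGHTS = {"rows": 0.2, "query_hits": 0.6, "completeness": 0.2}

def normalise(metric):
    values = [t[metric] for t in TABLES.values()]
    lo, hi = min(values), max(values)
    return {name: (t[metric] - lo) / (hi - lo) if hi > lo else 0.0
            for name, t in TABLES.items()}

norm = {m: normalise(m) for m in WEIGHTS}
scores = {name: sum(WEIGHTS[m] * norm[m][name] for m in WEIGHTS) for name in TABLES}

# Rank tables by estimated data value, most valuable first.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```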

    Architecture of Environmental Risk Modelling: for a faster and more robust response to natural disasters

    Demands on the disaster response capacity of the European Union are likely to increase, as the impacts of disasters continue to grow both in size and frequency. This has resulted in intensive research on issues concerning spatially-explicit information and modelling and their multiple sources of uncertainty. Geospatial support is one of the forms of assistance frequently required by emergency response centres, along with hazard forecasting and event management assessment. Robust modelling of natural hazards requires dynamic simulations under an array of multiple inputs from different sources. Uncertainty is associated with the meteorological forecast and the calibration of model parameters. Software uncertainty also derives from the data transformation models (D-TM) needed for predicting hazard behaviour and its consequences. On the other hand, social contributions have recently been recognized as valuable in raw-data collection and mapping efforts traditionally dominated by professional organizations. Here an architecture overview is proposed for adaptive and robust modelling of natural hazards, following the Semantic Array Programming paradigm to also include the distributed array of social contributors, called Citizen Sensor, in a semantically-enhanced strategy for D-TM modelling. The modelling architecture proposes a multicriteria approach for assessing the array of potential impacts with qualitative rapid assessment methods based on a Partial Open Loop Feedback Control (POLFC) schema, complementing more traditional and accurate a-posteriori assessment. We discuss the computational aspects of environmental risk modelling using array-based parallel paradigms on High Performance Computing (HPC) platforms, in order to introduce the implications of urgency into the systems (Urgent-HPC).
    Comment: 12 pages, 1 figure, 1 text box, presented at the 3rd Conference of Computational Interdisciplinary Sciences (CCIS 2014), Asuncion, Paraguay
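    A minimal sketch of a weighted multicriteria rapid assessment over an array of candidate hazard scenarios; the criteria, weights and scores are illustrative assumptions, not part of the proposed architecture's specification.

```python
import numpy as np

# Normalised (0-1) impact scores per criterion for an ensemble of uncertain model runs.
criteria = ["population_exposed", "infrastructure_damage", "environmental_loss"]
weights = np.array([0.5, 0.3, 0.2])

scenarios = np.array([
    [0.8, 0.4, 0.3],
    [0.5, 0.7, 0.6],
    [0.2, 0.3, 0.9],
])

aggregate = scenarios @ weights          # one aggregate impact score per scenario
worst_case = aggregate.argmax()
print(aggregate.round(2), "worst case: scenario", worst_case)
```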