A PRISMA-driven systematic mapping study on system assurance weakeners
Context: An assurance case is a structured hierarchy of claims aiming to
demonstrate that a given mission-critical system satisfies specific
requirements (e.g., safety, security, privacy). The presence of assurance
weakeners (i.e., assurance deficits, logical fallacies) in assurance cases
reflects insufficient evidence or knowledge, or gaps in reasoning. These
weakeners can undermine confidence in assurance arguments, potentially
hindering the verification of mission-critical system capabilities.
Objectives: As a stepping stone for future research on assurance weakeners,
we aim to initiate the first comprehensive systematic mapping study on this
subject. Methods: We followed the well-established PRISMA 2020 and SEGRESS
guidelines to conduct our systematic mapping study. We searched for primary
studies in five digital libraries and focused on the 2012-2023 publication year
range. Our selection criteria focused on studies addressing assurance weakeners
at the modeling level, resulting in the inclusion of 39 primary studies in our
systematic review.
Results: Our systematic mapping study reports a taxonomy (map) that provides
a uniform categorization of assurance weakeners and approaches proposed to
manage them at the modeling level.
Conclusion: Our study findings suggest that the SACM (Structured Assurance
Case Metamodel) -- a standard specified by the OMG (Object Management Group) --
may be the best specification to capture structured arguments and reason about
their potential assurance weakeners.
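To make the idea of an assurance weakener concrete, here is a minimal, illustrative Python sketch of an assurance-case argument tree in which leaf claims lacking evidence are flagged as assurance deficits. The claim texts and the tree shape are invented for the example; this is a simplification for illustration, not the SACM metamodel itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """One node of a simplified assurance argument: a claim is either
    decomposed into sub-claims or backed directly by evidence items."""
    text: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)

def find_assurance_deficits(claim: Claim, path: str = "") -> List[str]:
    """Return the paths of leaf claims with no supporting evidence --
    one simple kind of assurance weakener."""
    here = f"{path}/{claim.text}" if path else claim.text
    if not claim.sub_claims:
        return [] if claim.evidence else [here]
    deficits = []
    for sub in claim.sub_claims:
        deficits.extend(find_assurance_deficits(sub, here))
    return deficits

top = Claim("System is acceptably safe", sub_claims=[
    Claim("Hazard H1 mitigated", evidence=["test report TR-7"]),
    Claim("Hazard H2 mitigated"),  # no evidence: an assurance deficit
])
print(find_assurance_deficits(top))
```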
A case study of agile software development for large-scale safety-critical systems projects
This study explores the introduction of agile software development within an avionics company engaged in safety-critical system engineering. There is increasing pressure throughout the software industry for development efforts to adopt agile software development in order to respond more rapidly to changing requirements and make more frequent deliveries of systems to customers for review and integration. This pressure is also being experienced in safety-critical industries, where release cycles on typically large and complex systems may run to several years on projects spanning decades. However, safety-critical system developments are normally highly regulated, which may constrain the adoption of agile software development or require adaptation of selected methods or practices. To investigate this potential conflict, we conducted a series of interviews with practitioners in the company, exploring their experiences of adopting agile software development and the challenges encountered. The study also explores the opportunities for altering the existing software process in the company to better fit agile software development to the constraints of software development for safety-critical systems. We conclude by identifying immediate future research directions to better align the tempo of software development for safety-critical systems and agile software development.
Quantitative dependability and interdependency models for large-scale cyber-physical systems
Cyber-physical systems link cyber infrastructure with physical processes through an integrated network of physical components, sensors, actuators, and computers that are interconnected by communication links. Modern critical infrastructures such as smart grids, intelligent water distribution networks, and intelligent transportation systems are prominent examples of cyber-physical systems. Developed countries are entirely reliant on these critical infrastructures, hence the need for rigorous assessment of the trustworthiness of these systems. The objective of this research is quantitative modeling of dependability attributes -- including reliability and survivability -- of cyber-physical systems, with domain-specific case studies on smart grids and intelligent water distribution networks. To this end, we make the following research contributions: i) quantifying, in terms of loss of reliability and survivability, the effect of introducing computing and communication technologies; and ii) identifying and quantifying interdependencies in cyber-physical systems and investigating their effect on fault propagation paths and degradation of dependability attributes.
Our proposed approach relies on observation of system behavior in response to disruptive events. We utilize a Markovian technique to formalize a unified reliability model. For survivability evaluation, we capture temporal changes to a service index chosen to represent the extent of functionality retained. In modeling of interdependency, we apply correlation and causation analyses to identify links and use graph-theoretical metrics for quantifying them. The metrics and models we propose can be instrumental in guiding investments in fortification of and failure mitigation for critical infrastructures. To verify the success of our proposed approach in meeting these goals, we introduce a failure prediction tool capable of identifying system components that are prone to failure as a result of a specific disruptive event. Our prediction tool can enable timely preventative actions and mitigate the consequences of accidental failures and malicious attacks.
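The Markovian reliability idea can be sketched in a few lines of Python. This is a toy discrete-time model, not the thesis's unified model: the three states, the transition probabilities, and the ten-step horizon are all invented for illustration.

```python
# State 0 = healthy, 1 = degraded, 2 = failed (absorbing).
# Transition probabilities are purely illustrative.
P = [
    [0.95, 0.04, 0.01],  # from healthy
    [0.00, 0.90, 0.10],  # from degraded
    [0.00, 0.00, 1.00],  # failed is absorbing
]

def step(dist, P):
    """One Markov transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0, 0.0]           # start in the healthy state
for _ in range(10):              # evolve for ten time steps
    dist = step(dist, P)

reliability = dist[0] + dist[1]  # probability the system still functions
print(round(reliability, 4))
```

Repeating the loop for longer horizons traces out the reliability curve; the survivability index described above would instead track a service-level measure over time after a disruptive event.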
Preliminary Interdependency Analysis: An Approach to Support Critical Infrastructure Risk Assessment
We present a methodology, Preliminary Interdependency Analysis (PIA), for analysing interdependencies between critical infrastructure (CI). Consisting of two phases – qualitative analysis followed by quantitative analysis – an application of PIA progresses from a relatively quick elicitation of CI-interdependencies to the building of representative CI models, and the subsequent estimation of any resilience, risk or criticality measures an assessor might be interested in. By design, stages in the methodology are both flexible and iterative, resulting in interacting CI models that are scalable and may vary significantly in complexity and fidelity, depending on the needs and requirements of an assessor. For model parameterisation, one relies on a combination of field data, sensitivity analysis and expert judgement. Facilitated by dedicated software tool support, we illustrate PIA by applying it to a complex case-study of interacting Power (distribution and transmission) and Telecommunications networks in the Rome area. A number of studies are carried out, including: 1) an investigation of how “strength of dependence” between the CIs’ components affects various measures of risk and uncertainty, 2) for resource allocation, an exploration of different, but related, notions of CI component importance, and 3) highlighting the impact of model fidelity on the estimated risk of cascades.
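The "strength of dependence" study above can be illustrated with a toy cascade simulation: a failed component propagates failure along each dependency edge with probability `coupling`. The component names, the dependency graph, and the coupling values are invented for the example and are not taken from the Rome case study.

```python
import random

# component -> components that depend on it (illustrative graph)
deps = {
    "substation": ["telecom_hub"],
    "telecom_hub": ["scada", "pump"],
    "scada": [],
    "pump": [],
}

def cascade_size(start, coupling, rng):
    """Fail `start`, propagate along dependency edges with probability
    `coupling`, and return the total number of failed components."""
    failed, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for dep in deps.get(node, []):
            if dep not in failed and rng.random() < coupling:
                failed.add(dep)
                frontier.append(dep)
    return len(failed)

rng = random.Random(42)
means = {}
for coupling in (0.2, 0.8):
    sizes = [cascade_size("substation", coupling, rng) for _ in range(2000)]
    means[coupling] = sum(sizes) / len(sizes)
print(means)
```

Even on this tiny graph, the mean cascade size grows sharply with the coupling strength, which is the qualitative effect the PIA studies quantify on realistic models.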
Quantitative Resilience Assessment of Critical Infrastructures using High-Performance Simulations
Assessing the resilience of large cyber-physical systems (LCPS) is essential for ensuring the continuity of operations and minimising the impact of disruptions caused by natural disasters, cyberattacks, and other stressful events. Recent empirical studies of LCPS have demonstrated the usefulness of modelling and simulation in assessing properties that emerge from component interactions, including resilience. However, the sheer complexity of these systems poses challenges for modellers:
1) Resilience assessment requires high-fidelity models that include a probabilistic model of the system and adverse events of interest, such as accidental failures or malicious activities, and a physics simulation model of LCPS processes, such as power/liquid/gas flows.
2) Assessing resilience with high statistical significance requires a systematic exploration of the space of possible adverse events and recovery from their effects. Exploring this space requires a significant amount of effort.
This work offers solutions intended to help modellers overcome these difficulties by using the recent advances in modelling LCPSs and high-performance computing:
i) It offers a new modelling methodology for building agent-based hybrid hierarchical stochastic models using a new domain-specific language. The new modelling approach allows easy integration of a) a variety of modelling formalisms used to model cyber-attacks on CI/LCPS; and b) a set of deterministic models, as needed by the chosen level of fidelity and specific for the modelled CI. However, the deterministic models are not the focus of this work. Such models are assumed to exist in software available from third-party vendors.
ii) It presents a set of tools to support this methodology: the visual modeller and an extensible Monte Carlo simulation engine designed to utilise high-performance and cloud computing capabilities. The engine and the editor utilise modern development practices and technologies to provide a state-of-the-art solution.
This thesis provides a survey of the relevant literature, summarises the progress with the modelling methodology, and presents the results published to date with case studies based on an extended Nordic32, a reference architecture of a power transmission network with the SCADA subsystem. The studies explore the effects caused by adversaries targeting IT infrastructure and demonstrate the application of a defence-in-depth approach to reduce the effects of these attacks.
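The Monte Carlo resilience assessment described above can be sketched as follows. This toy estimates resilience as the expected fraction of nominal service delivered over a mission window under random disruptions; the mission horizon, disruption timing, and outage durations are all invented for illustration, and a real study would replace `one_run` with the stochastic/physics co-simulation and distribute the runs over HPC resources.

```python
import random

def one_run(rng, horizon=100):
    """One simulated mission window with a single random disruption."""
    hit = rng.randrange(horizon)          # when the disruption strikes
    outage = rng.randint(5, 20)           # time needed to restore service
    lost = min(outage, horizon - hit)     # service-time lost inside window
    return 1.0 - lost / horizon           # fraction of service retained

rng = random.Random(7)
runs = [one_run(rng) for _ in range(10_000)]
resilience = sum(runs) / len(runs)        # Monte Carlo estimate
print(round(resilience, 3))
```

The systematic exploration of the adverse-event space mentioned in point 2) corresponds to drawing enough of these samples, stratified over event types, for the estimate to reach the desired statistical significance.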
Quantitative Evaluation of the Efficacy of Defence-in-Depth in Critical Infrastructures
This chapter reports on a model-based approach to assessing cyber-risks in a cyber-physical system (CPS), such as power-transmission systems. We demonstrate that quantitative cyber-risk assessment, despite its inherent difficulties, is feasible. In this regard: i) we give experimental evidence (using Monte-Carlo simulation) showing that the losses from a specific cyber-attack type can be established accurately using an abstract model of cyber-attacks – a model constructed without taking into account the details of the specific attack used in the study; ii) we establish the benefits from deploying defence-in-depth (DiD) against failures and cyber-attacks for two types of attackers: a) an attacker unaware of the nature of DiD, and b) an attacker who knows in detail the DiD they face in a particular deployment, and launches attacks sufficient to defeat DiD. This study provides some insight into the benefits of combining design-diversity – to harden some of the protection devices in a CPS – with periodic “proactive recovery” of protection devices. The results are discussed in the context of making evidence-based decisions about maximising the benefits from DiD in a particular CPS.
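The core arithmetic behind defence-in-depth is worth making explicit: an attack succeeds only if it penetrates every layer, so even individually weak layers combine multiplicatively. The per-layer penetration probabilities below are invented for illustration and are not measurements from the study; the multiplication also assumes the layers fail independently, which design diversity aims to approximate and which a knowledgeable attacker (case b above) tries to break.

```python
# Per-layer probability that an attack gets through (illustrative values).
layers = [0.30, 0.20, 0.10]

p_breach = 1.0
for p in layers:
    p_breach *= p   # independence assumption: diverse, unrelated layers

print(round(p_breach, 3))   # three weak layers -> a strong combined defence
```

Against the DiD-aware attacker, the effective per-layer probabilities rise toward 1 and the product loses its protective effect, which is why the chapter evaluates the two attacker models separately.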
An improved requirement change management model for agile software development
Business requirements for software development projects are volatile and continuously need improvement. Hence, the popularity of the Agile methodology has increased, as it welcomes requirement changes during Agile Software Development (ASD). However, existing models focus merely on changes to functional requirements, which is not adequate to achieve software sustainability and support requirement change processes. Therefore, this study proposes an improved Agile Requirement Change Management (ARCM) model that provides better support for non-functional requirement changes in ASD, with the aim of achieving software sustainability. The study was carried out in four phases. Phase one was a theoretical study that examined the important issues and practices of requirement change in ASD. In phase two, an exploratory study was conducted to investigate current practices of requirement change in ASD, involving 137 software practitioners from Pakistan. In phase three, the findings from the previous phases were used to construct the ARCM model. The model was built by adapting the Plan-Do-Check-Act (PDCA) method, which consists of four stages, each with well-defined aims, processes, activities, and practices. Finally, the model was evaluated using expert review and case study approaches: six experts verified the model, and two case studies involving two software companies from Pakistan validated the applicability of the proposed model. The ARCM model consists of three main components: sustainability characteristics for handling non-functional requirements, a sustainability analysis method for performing impact and risk analysis, and an assessment mechanism based on the Goal Question Metric (GQM) method. The evaluation results show that the ARCM model gained software practitioners’ satisfaction and can be executed in a real environment. From a theoretical perspective, this study contributes the ARCM model to the field of Agile requirement management, together with empirical findings on the current issues, challenges, and practices of requirement change management. Moreover, the ARCM model provides a solution for handling non-functional requirement changes in ASD. These findings can help Agile software practitioners and researchers ensure that software sustainability is fulfilled, empowering companies to improve their value delivery.
Model-connected safety cases
Regulatory authorities require justification that safety-critical systems exhibit acceptable levels of safety. Safety cases are traditionally documents which allow the exchange of information between stakeholders and communicate the rationale of how safety is achieved via a clear, convincing and comprehensive argument and its supporting evidence. In the automotive and aviation industries, safety cases have a critical role in the certification process and their maintenance is required throughout a system’s lifecycle. Safety-case-based certification is typically handled manually and the increase in scale and complexity of modern systems renders it impractical and error prone. Several contemporary safety standards have adopted a safety-related framework that revolves around a concept of generic safety requirements, known as Safety Integrity Levels (SILs). Following these guidelines, safety can be justified through satisfaction of SILs. Careful examination of these standards suggests that despite the noticeable differences, there are converging aspects. This thesis elicits the common elements found in safety standards and defines a pattern for the development of safety cases for cross-sector application. It also establishes a metamodel that connects parts of the safety case with the target system architecture and model-based safety analysis methods. This enables the semi-automatic construction and maintenance of safety arguments that help mitigate problems related to manual approaches. Specifically, the proposed metamodel incorporates system modelling, failure information, model-based safety analysis and optimisation techniques to allocate requirements in the form of SILs. The system architecture and the allocated requirements along with a user-defined safety argument pattern, which describes the target argument structure, enable the instantiation algorithm to automatically generate the corresponding safety argument.
The idea behind model-connected safety cases stemmed from a critical literature review on safety standards and practices related to safety cases. The thesis presents the method and the implemented framework in detail and showcases the different phases and outcomes via a simple example. It then applies the method to a case study based on the Boeing 787’s brake system and evaluates the resulting argument against certain criteria, such as scalability. Finally, contributions compared to traditional approaches are laid out.
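The instantiation step described above, where a user-defined argument pattern is expanded over the system model and its allocated SILs, can be sketched in miniature. The pattern string, component names, and SIL values are invented for the example; the thesis's actual instantiation algorithm operates over a metamodel, not flat strings.

```python
# A user-defined argument pattern: placeholders name system-model fields.
pattern = "{component} meets its allocated safety requirement (SIL {sil})"

# A stand-in for the system architecture with allocated SILs.
system_model = [
    {"component": "brake control unit", "sil": 3},
    {"component": "wheel speed sensor", "sil": 2},
]

def instantiate(pattern, model):
    """Generate one concrete safety claim per element of the system model."""
    return [pattern.format(**element) for element in model]

for claim in instantiate(pattern, system_model):
    print(claim)
```

Because the claims are generated from the model rather than written by hand, re-running the instantiation after an architecture or SIL-allocation change keeps the safety argument synchronised with the system, which is the maintenance benefit the thesis targets.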