8 research outputs found
Is current incremental safety assurance sound?
Incremental design is an essential part of engineering. Without it, engineering would likely be neither an economical nor an effective aid to economic progress. Further, engineering relies on this view of incrementality to retain the reliability attributes of the engineering method. When considering the assurance of safety for engineered artifacts, it is not surprising that the same economic and reliability arguments are deployed to justify an incremental approach to safety assurance. In a sense, it is possible to argue that, with engineering artifacts becoming more and more complex, it would be economically disastrous not to “do” safety incrementally. Indeed, many enterprises use such an incremental approach, reusing safety artifacts when assuring incremental design changes. In this work, we make some observations about the inadequacy of this trend and suggest that safety practices must be rethought if incremental safety approaches are ever going to be fit for purpose. We present some examples to justify our position and comment on what a more adequate approach to incremental safety assurance may look like.
A PRISMA-driven systematic mapping study on system assurance weakeners
Context: An assurance case is a structured hierarchy of claims aiming to demonstrate that a given mission-critical system supports specific requirements (e.g., safety, security, privacy). The presence of assurance weakeners (i.e., assurance deficits, logical fallacies) in assurance cases reflects insufficient evidence, knowledge, or gaps in reasoning. These weakeners can undermine confidence in assurance arguments, potentially hindering the verification of mission-critical system capabilities.
Objectives: As a stepping stone for future research on assurance weakeners, we aim to initiate the first comprehensive systematic mapping study on this subject.
Methods: We followed the well-established PRISMA 2020 and SEGRESS guidelines to conduct our systematic mapping study. We searched for primary studies in five digital libraries and focused on the 2012-2023 publication year range. Our selection criteria focused on studies addressing assurance weakeners at the modeling level, resulting in the inclusion of 39 primary studies in our systematic review.
Results: Our systematic mapping study reports a taxonomy (map) that provides a uniform categorization of assurance weakeners and of the approaches proposed to manage them at the modeling level.
Conclusion: Our study findings suggest that the SACM (Structured Assurance Case Metamodel), a standard specified by the OMG (Object Management Group), may be the best specification to capture structured arguments and reason about their potential assurance weakeners.
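The "structured hierarchy of claims" described above can be pictured with a small data-structure sketch. This is a toy illustration, not the SACM metamodel: the `Claim` class and the undeveloped-claim check below are hypothetical names, invented here to show how one kind of weakener (a claim left with neither evidence nor sub-claims, i.e. an assurance deficit) can be detected mechanically.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in an assurance argument, supported by sub-claims or evidence."""
    text: str
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

def find_undeveloped(claim, path=()):
    """Yield the path to every claim with neither evidence nor sub-claims
    (one simple kind of assurance weakener: an assurance deficit)."""
    path = path + (claim.text,)
    if not claim.evidence and not claim.subclaims:
        yield path
    for sub in claim.subclaims:
        yield from find_undeveloped(sub, path)

# A tiny hypothetical argument with one unsupported leaf claim.
top = Claim("System is acceptably safe", subclaims=[
    Claim("Hazards identified", evidence=["HAZOP report"]),
    Claim("Mitigations verified"),  # no support: an assurance deficit
])

for trail in find_undeveloped(top):
    print(" -> ".join(trail))
```

A fuller model in the spirit of SACM would also distinguish argument elements (claims, inferences, artifacts) and cover fallacious inference, not just missing support; this sketch only captures the simplest deficit case.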
Quantitative Resilience Assessment of Critical Infrastructures using High-Performance Simulations
Assessing the resilience of large cyber-physical systems (LCPS) is essential for ensuring the continuity of operations and minimising the impact of disruptions caused by natural disasters, cyberattacks, and other stressful events. Recent empirical studies of LCPS have demonstrated the usefulness of modelling and simulation in assessing properties that emerge from component interactions, including resilience. However, the sheer complexity of critical infrastructures (CIs) poses challenges for modellers:
1) Resilience assessment requires high-fidelity models that include a probabilistic model of the system and adverse events of interest, such as accidental failures or malicious activities, and a physics simulation model of LCPS processes, such as power/liquid/gas flows.
2) Assessing resilience with high statistical significance requires a systematic exploration of the space of possible adverse events and recovery from their effects. Exploring this space requires a significant amount of effort.
This work offers solutions intended to help modellers overcome these difficulties by using the recent advances in modelling LCPSs and high-performance computing:
i) It offers a new modelling methodology for building agent-based hybrid hierarchical stochastic models using a new domain-specific language. The new modelling approach allows easy integration of a) a variety of modelling formalisms used to model cyber-attacks on CI/LCPS; and b) a set of deterministic models, as needed by the chosen level of fidelity and specific to the modelled CI. However, the deterministic models are not the focus of this work; such models are assumed to exist in software available from third-party vendors.
ii) It presents a set of tools to support this methodology: the visual modeller and an extensible Monte Carlo simulation engine designed to utilise high-performance and cloud computing capabilities. The engine and the editor utilise modern development practices and technologies to provide a state-of-the-art solution.
This thesis provides a survey of the relevant literature, summarises the progress with the modelling methodology, and presents the results published to date with case studies based on an extended Nordic32, a reference architecture of a power transmission network with the SCADA subsystem. The studies explore the effects caused by adversaries targeting IT infrastructure and demonstrate the application of a defence-in-depth approach to reduce the effects of these attacks.
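The Monte Carlo style of resilience assessment described above can be illustrated with a deliberately simplified sketch. This is not the thesis's simulation engine or its models: the failure and repair probabilities, the "recovery time" measure, and all function names below are invented for illustration. The idea is simply to sample many random disruption-and-recovery trajectories and average a resilience measure over them.

```python
import random

def simulate_disruption(n_components=10, p_fail=0.3, repair_rate=0.25, horizon=40):
    """One trajectory: each component fails independently at t=0 with
    probability p_fail, then recovers each step with probability repair_rate.
    Returns the number of steps until all components are restored
    (capped at `horizon`)."""
    down = {i for i in range(n_components) if random.random() < p_fail}
    for t in range(horizon):
        if not down:
            return t
        down = {i for i in down if random.random() > repair_rate}
    return horizon

def mean_recovery_time(runs=5000, seed=1):
    """Monte Carlo estimate: average recovery time over many trajectories."""
    random.seed(seed)
    samples = [simulate_disruption() for _ in range(runs)]
    return sum(samples) / len(samples)

print(f"estimated mean recovery time: {mean_recovery_time():.2f} steps")
```

A high-fidelity version replaces the independent-failure assumption with a probabilistic system model plus a physics simulation of the underlying flows, which is exactly why the thesis turns to high-performance and cloud computing: the per-trajectory cost grows while statistical significance still demands many trajectories.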
A case study of agile software development for large-scale safety-critical systems projects
This study explores the introduction of agile software development within an avionics company engaged in safety-critical system engineering. There is increasing pressure throughout the software industry for development efforts to adopt agile software development in order to respond more rapidly to changing requirements and to make more frequent deliveries of systems to customers for review and integration. This pressure is also being experienced in safety-critical industries, where release cycles on typically large and complex systems may run to several years on projects spanning decades. However, safety-critical system developments are normally highly regulated, which may constrain the adoption of agile software development or require adaptation of selected methods or practices. To investigate this potential conflict, we conducted a series of interviews with practitioners in the company, exploring their experiences of adopting agile software development and the challenges encountered. The study also explores the opportunities for altering the existing software process in the company to better fit agile software development to the constraints of software development for safety-critical systems. We conclude by identifying immediate future research directions to better align the tempo of software development for safety-critical systems with that of agile software development.
Preliminary Interdependency Analysis: An Approach to Support Critical Infrastructure Risk Assessment
We present a methodology, Preliminary Interdependency Analysis (PIA), for analysing interdependencies between critical infrastructures (CIs). Consisting of two phases (qualitative analysis followed by quantitative analysis), an application of PIA progresses from a relatively quick elicitation of CI interdependencies to the building of representative CI models, and the subsequent estimation of any resilience, risk or criticality measures an assessor might be interested in. By design, stages in the methodology are both flexible and iterative, resulting in interacting CI models that are scalable and may vary significantly in complexity and fidelity, depending on the needs and requirements of an assessor. For model parameterisation, one relies on a combination of field data, sensitivity analysis and expert judgement. Facilitated by dedicated software tool support, we illustrate PIA by applying it to a complex case study of interacting power (distribution and transmission) and telecommunications networks in the Rome area. A number of studies are carried out, including: 1) an investigation of how “strength of dependence” between the CIs’ components affects various measures of risk and uncertainty, 2) for resource allocation, an exploration of different, but related, notions of CI component importance, and 3) highlighting the impact of model fidelity on the estimated risk of cascades.
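The kind of "strength of dependence" study listed above can be sketched with a toy model. This is not one of the PIA models: the chain topology, the single coupling parameter, and the function names below are all hypothetical, chosen only to make the qualitative point that stronger coupling between components drives larger expected cascades.

```python
import random

def cascade_size(n=20, strength=0.4, trials=10000, seed=7):
    """Monte Carlo estimate of mean cascade size in a chain of n components.
    Component 0 fails; each downstream component then fails with probability
    `strength` given that its predecessor failed (a toy rendering of
    'strength of dependence')."""
    random.seed(seed)
    total = 0
    for _ in range(trials):
        size = 1  # the initiating failure
        while size < n and random.random() < strength:
            size += 1
        total += size
    return total / trials

for s in (0.2, 0.5, 0.8):
    print(f"strength={s}: mean cascade size ~ {cascade_size(strength=s):.2f}")
```

In this toy chain the mean cascade size is close to 1/(1 - strength), so small increases in coupling near the top of the range produce disproportionately large cascades; a PIA-style assessment asks the analogous question over realistic CI topologies and measures.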
On the real world practice of Behaviour Driven Development
Surveys of industry practice over the last decade suggest that Behaviour Driven Development is a popular Agile practice. For example, 19% of respondents to the 14th State of Agile annual survey reported using BDD, placing it in the top 13 practices reported. As well as potential benefits, the adoption of BDD necessarily involves an additional cost of writing and maintaining Gherkin features and scenarios and (if used for acceptance testing) the associated step functions. Yet there is a lack of published literature exploring how BDD is used in practice and the challenges experienced by real world software development efforts. This gap is significant because without understanding current real world practice, it is hard to identify opportunities to address and mitigate challenges. In order to address this research gap concerning the challenges of using BDD, this thesis reports on a research project which explored: (a) the challenges of applying agile and undertaking requirements engineering in a real world context; (b) the challenges of applying BDD specifically; and (c) the application of BDD in open-source projects, to understand challenges in this different context.
For this purpose, we progressively conducted two case studies, two series of interviews, four iterations of action research, and an empirical study. The first case study was conducted in an avionics company to discover the challenges of using an agile process in a large-scale safety-critical project environment. Since requirements management was found to be one of the biggest challenges during the case study, we decided to investigate BDD because of its reputation for requirements management. The second case study was conducted in the company with the aim of discovering the challenges of using BDD in real life. The case study was complemented with an empirical study of the practice of BDD in open source projects, taking a study sample from the GitHub open source collaboration site.
As a result of this PhD research, we were able to discover: (i) the challenges of using an agile process in a large-scale safety-critical organisation, (ii) the current state of BDD in practice, (iii) technical limitations of Gherkin (i.e., the language for writing requirements in BDD), (iv) the challenges of using BDD in a real project, and (v) bad smells in the Gherkin specifications of open source projects on GitHub. We also present a brief comparison between the theoretical description of BDD and BDD in practice. This research therefore presents lessons learned from BDD in practice and serves as a guide for software practitioners planning to use BDD in their projects.
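The Gherkin-and-step-function pairing that this thesis studies can be shown in miniature. The snippet below is a toy interpreter, not behave or pytest-bdd: it binds Gherkin step text to Python functions via regular expressions, which is also where the maintenance cost mentioned above arises, since every change to a scenario's wording must stay in sync with a step pattern.

```python
import re

FEATURE = """\
Scenario: Withdraw cash
  Given an account with balance 100
  When the user withdraws 30
  Then the balance is 70
"""

state = {}

def given_account(m):
    state["balance"] = int(m.group(1))

def when_withdraw(m):
    state["balance"] -= int(m.group(1))

def then_balance(m):
    assert state["balance"] == int(m.group(1)), state["balance"]

# Step definitions: regex patterns bound to handlers, mirroring the
# step functions of BDD frameworks such as behave or pytest-bdd.
STEPS = [
    (re.compile(r"an account with balance (\d+)"), given_account),
    (re.compile(r"the user withdraws (\d+)"), when_withdraw),
    (re.compile(r"the balance is (\d+)"), then_balance),
]

def run(feature):
    """Execute each Gherkin step by dispatching to its matching handler."""
    for line in feature.splitlines():
        line = line.strip()
        for pattern, step in STEPS:
            m = pattern.search(line)
            if m:
                step(m)
                break

run(FEATURE)
print("scenario passed, final balance:", state["balance"])
```

Renaming "withdraws" to "takes out" in the scenario silently skips the `When` step in this sketch; real frameworks report an undefined step instead, but the synchronisation burden between natural-language scenarios and step code is the same, and it is one source of the Gherkin bad smells the thesis catalogues.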
Assessing and Improving Industrial Software Processes
Software process is a complex phenomenon involving a multitude of different artifacts, human actors with different roles, and activities to be performed in order to produce a software product. Even though the research community is devoting great effort to proposing solutions aimed at improving software processes, several issues remain open. In this thesis I propose different solutions for assessing and improving software processes carried out in real industrial contexts. In more detail, I propose a solution, based on ALM and MDE, for supporting gap analysis processes that assess whether a software process is carried out in accordance with a standard or evaluation framework. I then focus on a solution based on tool integration for the management of trace links among the artifacts involved in the software process. As another contribution, I propose a reverse engineering process and a tool, named EXACT, for supporting the analysis and comprehension of spreadsheet-based artifacts involved in software development processes. Finally, I present a semi-automatic approach, named AutoMative, for supporting the introduction of SPLs into real industrial software processes to manage the variability of the software products to be developed. Case studies conducted in real industrial settings showed the feasibility and the positive impact of the proposed solutions on real industrial software processes.