17,966 research outputs found

    A Systematic Approach to Justifying Sufficient Confidence in Software Safety Arguments

    Safety arguments typically contain weaknesses. To show that overall confidence in a safety argument is acceptable, the weaknesses associated with the individual aspects of the argument and its supporting evidence must be identified and managed. Confidence arguments are built to demonstrate that sufficient confidence exists in the developed safety arguments. In this paper, we propose an approach for systematically constructing confidence arguments and identifying the weaknesses of software safety arguments. The proposed approach is described and illustrated with a running example.
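
    To make the idea of identified weaknesses in an argument concrete, the sketch below represents a safety argument fragment as a small tree of claims and evidence with attached weaknesses, and walks it to list the weaknesses that still need managing. This is a minimal illustration under assumed, GSN-like naming (ArgumentElement, Weakness, and unmitigated_weaknesses are invented for this example), not the notation or method proposed in the paper.

    from dataclasses import dataclass, field

    # Hypothetical, GSN-like representation of a safety argument fragment.
    # All names here are illustrative, not the paper's notation.

    @dataclass
    class Weakness:
        description: str
        mitigated: bool = False

    @dataclass
    class ArgumentElement:
        identifier: str
        statement: str
        weaknesses: list = field(default_factory=list)  # assurance deficits found at this element
        children: list = field(default_factory=list)    # supporting claims or evidence

    def unmitigated_weaknesses(element):
        """Walk the argument tree and collect weaknesses that still need managing."""
        found = [(element.identifier, w) for w in element.weaknesses if not w.mitigated]
        for child in element.children:
            found.extend(unmitigated_weaknesses(child))
        return found

    # Example: a top-level safety claim supported by one item of test evidence.
    evidence = ArgumentElement(
        "E1", "Unit test results for module X",
        weaknesses=[Weakness("Tests do not cover boundary conditions")])
    claim = ArgumentElement(
        "G1", "Hazards of module X are acceptably mitigated", children=[evidence])

    for element_id, weakness in unmitigated_weaknesses(claim):
        print(f"{element_id}: {weakness.description}")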

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance Overview and Issues; Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.

    From Safety Cases to Security Cases

    Assurance cases are widely used in the safety domain, where they provide a way to justify the safety of a system and render that justification open to review. Assurance cases have not been widely used in security, but there is guidance available and there have been some promising experiments. There are a number of differences between safety and security which have implications for how we create security cases, but they do not appear to be insurmountable. It appears that the process of creating a security case is compatible with typical evaluation processes, and will have additional benefits in terms of training and corporate memory. In this paper we discuss some of the implications and challenges of applying the practice of assurance case construction from the safety domain to the security domain.

    The Last Decade in Review: Tracing the Evolution of Safety Assurance Cases through a Comprehensive Bibliometric Analysis

    Safety assurance is of paramount importance across various domains, including automotive, aerospace, and nuclear energy, where the reliability and acceptability of mission-critical systems are imperative. This assurance is effectively realized through the use of safety assurance cases, which allow the correctness of a system's capabilities to be verified and so help prevent system failure. Such failures may result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss. Still, the emergence of complex technologies such as cyber-physical systems (CPSs), characterized by their heterogeneity, autonomy, machine learning capabilities, and the uncertainty of their operational environments, poses significant challenges for safety assurance activities. Several papers have proposed solutions to tackle these challenges, but to the best of our knowledge, no secondary study investigates the trends, patterns, and relationships characterizing the safety case scientific literature. This makes it difficult to gain a holistic view of the safety case landscape and to identify the most promising future research directions. In this paper, we therefore rely on state-of-the-art bibliometric tools (e.g., VOSviewer) to conduct a bibliometric analysis that allows us to generate valuable insights, identify key authors and venues, and gain a bird's-eye view of the current state of research in the safety assurance area. By revealing knowledge gaps and highlighting potential avenues for future research, our analysis provides an essential foundation for researchers, corporate safety analysts, and regulators seeking to embrace or enhance safety practices that align with their specific needs and objectives.
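
    As a concrete illustration of the kind of computation such a bibliometric study builds on, the sketch below counts keyword occurrences and keyword co-occurrences over a handful of made-up paper records; a real analysis would start from a bibliographic database export and use a tool such as VOSviewer to build and visualize the resulting map. The records and keyword sets here are invented for this example and do not come from the paper.

    from collections import Counter
    from itertools import combinations

    # Toy keyword sets standing in for the author keywords of indexed papers.
    # These records are invented; a real study would load a database export
    # rather than hard-coded data.
    papers = [
        {"safety case", "assurance case", "GSN"},
        {"assurance case", "machine learning", "autonomous vehicles"},
        {"safety case", "machine learning", "certification"},
    ]

    occurrence = Counter()
    co_occurrence = Counter()
    for keywords in papers:
        occurrence.update(keywords)
        # Count each unordered keyword pair once per paper.
        for a, b in combinations(sorted(keywords), 2):
            co_occurrence[(a, b)] += 1

    print("Most frequent keywords:", occurrence.most_common(3))
    print("Strongest keyword links:", co_occurrence.most_common(3))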

    A Safety Case Pattern for Model-Based Development Approach

    In this paper, a safety case pattern is introduced to facilitate the presentation of a correctness argument for a system implemented using formal methods in the development process. We draw on our experience in constructing a safety case for the Patient Controlled Analgesic (PCA) infusion pump to define this safety case pattern. The proposed pattern is suitable for instantiation within safety cases constructed for systems developed using model-based approaches.

    Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems

    Machine Learnt (ML) components are now widely accepted for use in a range of applications, with results that are reported to exceed, under certain conditions, human performance. The adoption of ML components in safety-related domains is restricted, however, unless sufficient assurance can be demonstrated that the use of these components does not compromise safety. In this paper, we present patterns that can be used to develop assurance arguments for demonstrating the safety of ML components. The argument patterns provide reusable templates for the types of claims that must be made in a compelling argument. On their own, the patterns neither detail the assurance artefacts that must be generated to support the safety claims for a particular system, nor provide guidance on the activities required to generate these artefacts. We have therefore also developed a process for the engineering of ML components in which the assurance evidence can be generated at each stage of the ML lifecycle in order to instantiate the argument patterns and create the assurance case for ML components. The patterns and the process could provide a practical and clear basis for a justifiable deployment of ML components in safety-related systems.
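
    The notion of an argument pattern as a reusable claim template can be illustrated with a small sketch: the placeholder claims below are filled in for a particular ML component. The claim texts, placeholder names, and helper function are invented for this illustration and are not the patterns or process defined in the paper.

    from string import Template

    # Invented placeholder claims; not the paper's argument patterns.
    ML_ASSURANCE_PATTERN = [
        Template("The $component satisfies its safety requirements in $context"),
        Template("The data used to develop the $component is sufficient and representative of $context"),
        Template("Verification results show the $component meets its performance requirements in $context"),
    ]

    def instantiate(pattern, **bindings):
        """Fill a pattern's placeholders to produce concrete claims for one system."""
        return [template.substitute(**bindings) for template in pattern]

    claims = instantiate(
        ML_ASSURANCE_PATTERN,
        component="pedestrian detection model",
        context="urban driving scenarios",
    )
    for claim in claims:
        print("-", claim)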