1,271 research outputs found

    An Investigation of Proposed Techniques for Quantifying Confidence in Assurance Arguments

    The use of safety cases in certification raises the question of assurance argument sufficiency and the issue of confidence (or uncertainty) in the argument's claims. Some researchers propose to model confidence quantitatively and to calculate confidence in argument conclusions. We know of little evidence to suggest that any proposed technique would deliver trustworthy results when implemented by system safety practitioners. Proponents do not usually assess the efficacy of their techniques through controlled experiment or historical study. Instead, they present an illustrative example where the calculation delivers a plausible result. In this paper, we review current proposals, claims made about them, and evidence advanced in favor of them. We then show that proposed techniques can deliver implausible results in some cases. We conclude that quantitative confidence techniques require further validation before they should be recommended as part of the basis for deciding whether an assurance argument justifies fielding a critical system.
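
    The kind of implausible result the paper warns about can be illustrated with a toy calculation. The sketch below is not any specific proposal from the literature; it simply treats argument confidence as the product of independently assessed premise confidences and shows how adding more well-supported premises steadily drives the computed confidence down.

```python
# A toy conjunctive aggregation of premise confidences (illustrative only,
# not a technique from the surveyed literature).

def conjunctive_confidence(premise_confidences):
    """Treat argument confidence as the product of independent premise confidences."""
    result = 1.0
    for c in premise_confidences:
        result *= c
    return result

# Two well-supported premises already score lower than either premise alone...
print(conjunctive_confidence([0.9, 0.9]))   # 0.81
# ...and ten equally well-supported premises drive confidence down to ~0.35,
# even though every individual piece of evidence looks strong.
print(conjunctive_confidence([0.9] * 10))   # ~0.349
```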

    Uncertainty and Confidence in Safety Logic

    Reasoning about system safety requires reasoning about confidence in safety claims. For example, DO-178B requires developers to determine the correctness of the worst-case execution time of the software. It is not possible to do this beyond any doubt. Therefore, developers and assessors must consider the limitations of execution time evidence and their effect on the confidence that can be placed in execution time figures, timing analysis results, and claims to have met timing-related software safety requirements. In this paper, we survey and assess five existing concepts that might serve as means of describing and reasoning about confidence: safety integrity levels, probability distributions of failure rates, Bayesian Belief Networks, argument integrity levels, and Baconian probability. We define use cases for confidence in safety cases, prescriptive standards, certification of component-based systems, and the reuse of safety elements both in and out of context. From these use cases, we derive requirements for a confidence framework. We assess existing techniques by discussing what is known about how well each confidence metric meets these requirements. Our results show that no existing confidence metric is ideally suited for all uses. We conclude by discussing implications for future standards and for reuse of safety elements.
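
    One of the surveyed concepts, Bayesian reasoning, can be sketched with a single update step. The numbers below are purely hypothetical and the example is not taken from the paper: it computes confidence that a worst-case execution time claim holds, given an assumed prior and assumed pass rates for a timing analysis.

```python
# Bayesian updating with hypothetical numbers: confidence that a worst-case
# execution time (WCET) claim holds after a timing analysis reports a pass.

def posterior_confidence(prior, p_pass_given_true, p_pass_given_false):
    """Bayes' rule: P(claim true | analysis passes)."""
    p_pass = p_pass_given_true * prior + p_pass_given_false * (1.0 - prior)
    return p_pass_given_true * prior / p_pass

# Assumed values: prior belief in the claim, the chance the analysis passes a
# correct claim, and the chance it passes an incorrect one.
print(posterior_confidence(prior=0.8,
                           p_pass_given_true=0.95,
                           p_pass_given_false=0.2))   # 0.95
```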

    Guiding Complex Design Optimisation Using Bayesian Networks


    Evolution of security engineering artifacts: a state of the art survey

    Security is an important quality aspect of modern open software systems. However, it is challenging to keep such systems secure because of evolution. Security evolution can only be managed adequately if it is considered for all artifacts throughout the software development lifecycle. This article provides a state-of-the-art survey on the evolution of security engineering artifacts. It covers the evolution of security requirements, security architectures, secure code, security tests, security models, and security risks, as well as security monitoring. For each of these artifacts, the authors give an overview of its evolution and security aspects and discuss the state of the art on its security evolution in detail. Based on this comprehensive survey, they summarize key issues and discuss directions for future research.

    Generation of model-based safety arguments from automatically allocated safety integrity levels

    To certify safety-critical systems, assurance arguments linking evidence of safety to appropriate requirements must be constructed. However, modern safety-critical systems feature increasing complexity and integration, which render manual approaches impractical. This thesis addresses this problem by introducing a model-based method, with an exemplary application based on the aerospace domain. Previous work has partially addressed this problem for slightly different applications, including verification-based, COTS, product-line and process-based assurance. Each of these approaches is applicable to a specialised case and does not deliver a solution applicable to a generic system in a top-down process. This thesis argues that such a solution is feasible and can be achieved based on the automatic allocation of safety requirements onto a system’s architecture. This automatic allocation is a recent development which combines model-based safety analysis and optimisation techniques. The proposed approach emphasises the use of model-based safety analysis, such as HiP-HOPS, to maximise the benefits to the system development lifecycle. The thesis investigates the background and earlier work regarding the construction of safety arguments, safety requirements allocation, and optimisation. A method for addressing the problem of optimal safety requirements allocation is first introduced, using the Tabu Search optimisation metaheuristic. The method delivers satisfactory results that are further exploited for the construction of safety arguments. Using the produced requirements allocation, an instantiation algorithm is applied to a generic, standards-compliant safety argument pattern to automatically construct an argument establishing the claim that a system’s safety requirements have been met. This argument is hierarchically decomposed and shows how system and subsystem safety requirements are satisfied by architectures and analyses at low levels of decomposition. Evaluation on two abstract case studies demonstrates the feasibility and scalability of the method and indicates good performance of the proposed algorithms. Limitations and potential areas of further investigation are identified.
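
    To give a concrete flavour of metaheuristic allocation, the sketch below is a minimal Tabu Search over a hypothetical cost model; the component names, cut sets, and cost figures are invented for illustration and do not come from the thesis or from HiP-HOPS. It allocates integrity levels 0-4 to components so that each failure cut set meets its required level at minimum total cost.

```python
# A minimal Tabu Search sketch with a hypothetical cost model; components,
# cut sets, and costs are illustrative, not taken from the thesis.

COMPONENTS = ["sensor", "controller", "actuator", "monitor"]
# Each cut set lists components whose allocated levels must sum to at least
# the required integrity level (a simplification of decomposition rules).
CUT_SETS = [({"sensor", "controller"}, 4), ({"actuator", "monitor"}, 3)]
COST = [1, 3, 9, 27, 81]   # assumed development cost per integrity level 0-4


def cost(allocation):
    total = sum(COST[allocation[c]] for c in COMPONENTS)
    # Heavy penalty for any cut set that fails to meet its requirement.
    penalty = sum(1000 for cs, req in CUT_SETS
                  if sum(allocation[c] for c in cs) < req)
    return total + penalty


def tabu_search(iterations=200, tenure=5):
    current = {c: 4 for c in COMPONENTS}           # start fully conservative
    best, best_cost = dict(current), cost(current)
    tabu = []                                      # recently made moves
    for _ in range(iterations):
        moves = [(c, level) for c in COMPONENTS for level in range(5)
                 if level != current[c] and (c, level) not in tabu]
        # Take the cheapest admissible single-component change.
        c, level = min(moves, key=lambda m: cost({**current, m[0]: m[1]}))
        current[c] = level
        tabu.append((c, level))
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(current) < best_cost:
            best, best_cost = dict(current), cost(current)
    return best, best_cost


print(tabu_search())
```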

    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    This paper presents the current state of the art on attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, there exist more than 30 DAG-based methodologies, each having different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. This article also supports the selection of an adequate modeling technique depending on user requirements.
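
    As a simple illustration of the kind of quantifiable, hierarchical decomposition these formalisms offer, the sketch below builds a toy attack tree with AND/OR gates and computes a minimal attack cost bottom-up; the node names and costs are invented for illustration.

```python
# A toy attack tree with AND/OR gates and a bottom-up minimal-cost
# computation; node names and costs are illustrative only.

from functools import lru_cache

# Each node is a leaf with an attacker cost, or a gate over child nodes.
TREE = {
    "steal_data":        ("OR",  ["network_attack", "physical_access"]),
    "network_attack":    ("AND", ["phish_credentials", "exfiltrate"]),
    "phish_credentials": ("LEAF", 50),
    "exfiltrate":        ("LEAF", 30),
    "physical_access":   ("LEAF", 200),
}


@lru_cache(maxsize=None)
def min_cost(node):
    kind, value = TREE[node]
    if kind == "LEAF":
        return value
    child_costs = [min_cost(child) for child in value]
    # An OR gate needs only the cheapest child; an AND gate needs all children.
    return min(child_costs) if kind == "OR" else sum(child_costs)


print(min_cost("steal_data"))   # 80: phishing plus exfiltration beats physical access
```

    Bottom-up quantification of attributes such as cost, probability, or required skill is one of the features on which the surveyed formalisms differ.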

    Considerations in Assuring Safety of Increasingly Autonomous Systems

    Recent technological advances have accelerated the development and application of increasingly autonomous (IA) systems in civil and military aviation. IA systems can provide automation of complex mission tasks, ranging across reduced crew operations, air-traffic management, and unmanned autonomous aircraft, with most applications calling for collaboration and teaming among humans and IA agents. IA systems are expected to provide benefits in terms of safety, reliability, efficiency, affordability, and previously unattainable mission capability. There is also potential for improving safety by removing human error. There are, however, several challenges in the safety assurance of these systems due to their highly adaptive and non-deterministic behavior, and due to vulnerabilities arising from potential divergence of airplane state awareness between the IA system and humans. These systems must deal with external sensors and actuators, and they must respond in a time frame commensurate with the activities of the system in its environment. One of the main challenges is that safety assurance, which currently relies upon authority transfer from an autonomous function to a human to mitigate safety concerns, will need to address mitigation by automation in a collaborative, dynamic context. These challenges have a fundamental, multidimensional impact on the safety assurance methods, system architecture, and V&V capabilities to be employed. The goal of this report is to identify relevant issues to be addressed in these areas, the potential gaps in current safety assurance techniques, and critical questions that would need to be answered to assure the safety of IA systems. We focus on a reduced-crew-operation scenario in which an IA system reduces, changes, or eliminates a human's role in the transition from two-pilot operations.

    A Decentralized Architecture for Active Sensor Networks

    This thesis is concerned with the Distributed Information Gathering (DIG) problem, in which a Sensor Network is tasked with building a common representation of the environment. The problem is motivated by the advantages offered by distributed autonomous sensing systems and the challenges they present. The focus of this study is on Macro Sensor Networks, characterized by platform mobility, heterogeneous teams, and long mission duration. The system under consideration may consist of an arbitrary number of mobile autonomous robots, stationary sensor platforms, and human operators, all linked in a network. This work describes a comprehensive framework called Active Sensor Network (ASN), which addresses the tasks of information fusion, decision making, system configuration, and user interaction. The main design objectives are scalability with the number of robotic platforms, maximum flexibility in implementation and deployment, and robustness to component and communication failure. The framework is described from three complementary points of view: architecture, algorithms, and implementation. The main contribution of this thesis is the development of the ASN architecture. Its design follows three guiding principles: decentralization, modularity, and locality of interactions. These principles are applied to all aspects of the architecture and the framework in general. To achieve flexibility, the design approach emphasizes interactions between components rather than the definition of the components themselves. The architecture specifies a small set of interfaces sufficient to implement a wide range of information gathering systems. In the area of algorithms, this thesis builds on earlier work on Decentralized Data Fusion (DDF) and its extension to information-theoretic decision making. It presents the Bayesian Decentralized Data Fusion (BDDF) algorithm formulated for environment features represented by a general probability density function. Several specific representations are also considered: Gaussian, discrete, and the Certainty Grid map. Well-known algorithms for these representations are shown to implement various aspects of the Bayesian framework. As part of the ASN implementation, a practical indoor sensor network has been developed and tested. Two series of experiments were conducted, utilizing two types of environment representation: 1) point features with Gaussian position uncertainty and 2) Certainty Grid maps. The network was operational for several days at a time, with individual platforms coming on- and off-line. On several occasions, the network consisted of 39 software components. The lessons learned during the system's development may be applicable to other heterogeneous distributed systems with data-intensive algorithms.
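
    The core of decentralized data fusion for Gaussian estimates can be sketched in information (canonical) form, where independent contributions from different nodes are fused by simple addition; the numbers below are illustrative and this is not the thesis's BDDF implementation.

```python
# Decentralized fusion of Gaussian estimates in information (canonical) form:
# each node contributes an information matrix Y = P^-1 and vector y = P^-1 x,
# and independent contributions are fused by addition. Numbers are illustrative.

import numpy as np


def to_information(mean, cov):
    Y = np.linalg.inv(cov)
    return Y, Y @ mean


def fuse(contributions):
    """Sum independent information contributions and convert back to mean/covariance."""
    Y = sum(c[0] for c in contributions)
    y = sum(c[1] for c in contributions)
    cov = np.linalg.inv(Y)
    return cov @ y, cov


# Two platforms observe the same 2-D point feature with different certainty.
node_a = to_information(np.array([2.0, 1.0]), np.diag([0.5, 0.5]))
node_b = to_information(np.array([2.2, 0.8]), np.diag([0.2, 0.2]))
mean, cov = fuse([node_a, node_b])
print(mean)   # the fused estimate sits closer to the more certain platform
```

    This additive form assumes the contributions are independent; accounting for information shared across the network is a central concern in decentralized data fusion.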