Understanding and Evaluating Assurance Cases
Assurance cases are a method for providing assurance for a system by giving an argument to justify a claim about the system, based on evidence about its design, development, and tested behavior. In comparison with assurance based on guidelines or standards (which essentially specify only the evidence to be produced), the chief novelty in assurance cases is the provision of an explicit argument. In principle, this allows assurance cases to be more finely tuned to the specific circumstances of the system, and more agile than guidelines in adapting to new techniques and applications. The first part of this report (Sections 1-4) provides an introduction to assurance cases. Although this material should be accessible to all those with an interest in these topics, the examples focus on software for airborne systems, traditionally assured using the DO-178C guidelines and their predecessors. A brief survey of some existing assurance cases is provided in Section 5. The second part (Section 6) considers the criteria, methods, and tools that may be used to evaluate whether an assurance case provides sufficient confidence that a particular system or service is fit for its intended use. An assurance case cannot provide unequivocal "proof" for its claim, so much of the discussion focuses on the interpretation of such less-than-definitive arguments, and on methods to counteract confirmation bias and other fallibilities in human reasoning.
Assessing Confidence with Assurance 2.0
An assurance case is intended to provide justifiable confidence in the truth of its top claim, which typically concerns safety or security. A natural question is then "how much" confidence does the case provide?
In this report, we explore issues in assessing confidence for assurance cases developed using the rigorous approach we call Assurance 2.0. We argue that confidence cannot be reduced to a single attribute or measurement. Instead, we suggest it should be based on attributes that draw on three different perspectives: positive, negative, and residual doubts.
Positive Perspectives consider the extent to which the evidence and overall argument of the case combine to make a positive statement justifying belief in its claims. We set a high bar for justification, requiring it to be indefeasible. The primary positive measure for this is soundness, which interprets the argument as a logical proof and delivers a yes/no measurement. The interior steps of an Assurance 2.0 case can be evaluated as logical axioms, but the evidential steps at the leaves derive logical claims epistemically---from observations or measurements about the system and its environment---and must be treated as premises. Confidence in these can be expressed probabilistically and we use confirmation measures to ensure that the probabilistic "weight" of evidence crosses some threshold.
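The idea of a probabilistic "weight" of evidence crossing a threshold can be illustrated with a standard confirmation measure. The following sketch is illustrative only and not drawn from the report: it uses the log-likelihood-ratio measure, with hypothetical probabilities and an assumed decision threshold.

```python
import math

def confirmation_llr(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Log-likelihood-ratio confirmation measure: how strongly the
    observed evidence e supports hypothesis (claim) h versus not-h."""
    return math.log(p_e_given_h / p_e_given_not_h)

# Hypothetical: evidence 19x more likely if the claim holds than if it does not
weight = confirmation_llr(0.95, 0.05)

# Assumed threshold (illustrative): evidence must be "strong", i.e. a
# likelihood ratio of at least 10 in favor of the claim
threshold = math.log(10)

print(weight > threshold)  # True: the weight of evidence crosses the threshold
```

With these numbers the measure is log(19) ≈ 2.94, which exceeds log(10) ≈ 2.30, so the evidential step would be accepted; weaker evidence (say, 0.6 vs. 0.4) would fall below the threshold.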
In addition, probabilities can be aggregated from evidence through the steps of the argument using probability logics to yield what we call probabilistic valuations for the claims (in contrast to soundness, which is a logical valuation). The aggregated probability attached to the top claim can be interpreted as a numerical measure of confidence. We apply probabilistic valuations only to sound cases, and this avoids some of the difficulties that attend probabilistic methods that stand alone. The primary uses for probabilistic valuations are with less critical systems, where we trade assurance effort against confidence, and in assessing residual doubts.
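Aggregating probabilities from evidence through argument steps can be sketched for the simplest case, a conjunctive step whose claim requires all subclaims to hold. This is a hypothetical illustration, not the report's method: it contrasts a conservative Fréchet lower bound (no independence assumed) with the product rule (independence assumed).

```python
def aggregate_conjunction_frechet(probs):
    """Conservative (Fréchet) lower bound on P(c1 and ... and cn),
    valid without any independence assumption among subclaims."""
    return max(0.0, sum(probs) - (len(probs) - 1))

def aggregate_conjunction_independent(probs):
    """Product rule: valid only if subclaim probabilities are independent."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical probabilistic valuations for three subclaims of one step
subclaims = [0.99, 0.98, 0.97]

print(aggregate_conjunction_frechet(subclaims))      # 0.94 (worst case)
print(aggregate_conjunction_independent(subclaims))  # ~0.9411 (independence)
```

The gap between the two aggregations shows why the choice of probability logic matters: the valuation propagated to the top claim, and hence the numerical measure of confidence, depends on the dependence assumptions made at each step.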
Negative Perspectives record doubts and challenges to the case, typically expressed as defeaters, and their exploration and resolution. Assurance developers must guard against confirmation bias and should vigorously explore potential defeaters as they develop the case, and should record them and their resolution to avoid rework and to aid reviewers.
Residual Doubts: the world is uncertain, so not all potential defeaters can be resolved. For example, we may design a system to tolerate two faults and have good reasons and evidence to suppose that is sufficient to cover the exposure on any expected mission. But doubts remain: what if more than two faults arrive? Here we can explore consequences and likelihoods and thereby assess risk (their product). Some of these residual risks may be unacceptable and thereby prompt a review, but others may be considered acceptable or unavoidable. It is crucial, however, that these judgments are conscious ones and that they are recorded in the assurance case.
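The risk calculation described above (risk as the product of likelihood and consequence) can be made concrete with a small sketch. All numbers and the acceptability threshold here are hypothetical, chosen only to illustrate the arithmetic.

```python
def residual_risk(likelihood: float, consequence: float) -> float:
    """Risk of an unresolved defeater: likelihood of the doubted event
    times a numerical measure of its consequence severity."""
    return likelihood * consequence

# Hypothetical residual doubt: more than two faults on a single mission
likelihood_three_faults = 1e-6   # assumed per-mission probability
consequence_severity = 1000.0    # assumed severity on some cost scale

risk = residual_risk(likelihood_three_faults, consequence_severity)

acceptable_threshold = 1e-2      # illustrative acceptability criterion
print(risk <= acceptable_threshold)  # True: record as accepted residual risk
```

Whether such a risk is accepted or prompts a redesign is a judgment call; the point of the calculation is that the judgment is made consciously and recorded in the case.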
This report examines each of these three perspectives in detail and indicates how Clarissa, our prototype toolset for Assurance 2.0, assists in their evaluation.