Conservative reasoning about epistemic uncertainty for the probability of failure on demand of a 1-out-of-2 software-based system in which one channel is “possibly perfect”
In earlier work (Littlewood and Rushby 2012; henceforth LR), an analysis was presented of a 1-out-of-2 software-based system in which one channel was “possibly perfect”. It was shown that, at the aleatory level, the system pfd (probability of failure on demand) could be bounded above by the product of the pfd of channel A and the pnp (probability of non-perfection) of channel B. This result was presented as a way of avoiding the well-known difficulty that, for two certainly-fallible channels, failures of the two will be dependent, i.e. the system pfd cannot be expressed simply as a product of the channel pfds. A price paid in this new approach for avoiding the issue of failure dependence is that the result is conservative. Furthermore, a complete analysis requires that account be taken of epistemic uncertainty – here concerning the numeric values of the two parameters pfdA and pnpB. Unfortunately this introduces a different, difficult problem of dependence: estimating the dependence between an assessor’s beliefs about the parameters. The work reported here avoids this problem by obtaining results that require only an assessor’s marginal beliefs about the individual channels, i.e. they do not require knowledge of the dependence between these beliefs. The price paid is further conservatism in the results.
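The aleatory bound described in this abstract can be illustrated with a small numeric sketch. The parameter values below are invented for illustration only; they are not taken from the paper:

```python
# Illustrative sketch of the Littlewood-Rushby (LR) aleatory bound for a
# 1-out-of-2 system in which channel B is "possibly perfect":
#     P(system fails on demand) <= pfd_A * pnp_B
# All numeric values here are invented for illustration.

def lr_upper_bound(pfd_a: float, pnp_b: float) -> float:
    """Conservative upper bound on the system pfd."""
    return pfd_a * pnp_b

pfd_a = 1e-3   # assessed pfd of the certainly-fallible channel A
pnp_b = 1e-2   # assessed probability that channel B is NOT perfect

bound = lr_upper_bound(pfd_a, pnp_b)
print(f"system pfd <= {bound:.0e}")
```

Note that the bound is conservative: if channel B happens to be perfect, the true system pfd is zero, yet the bound still charges the full product.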
Conservative bounds for the pfd of a 1-out-of-2 software-based system based on an assessor’s subjective probability of “not worse than independence”
We consider the problem of assessing the reliability of a 1-out-of-2 software-based system, in which failures of the two channels cannot be assumed to be independent with certainty. An informal approach to this problem assesses the channel pfds (probabilities of failure on demand) conservatively and then multiplies these together in the hope that the conservatism will be sufficient to overcome any possible dependence between the channel failures. Our intention here is to place this kind of reasoning on a formal footing. We introduce a notion of “not worse than independence” and assume that an assessor has a prior belief about this, expressed as a probability. We obtain a conservative prior system pfd, and show how a conservative posterior system pfd can be obtained following the observation of a number of demands without system failure. We present some illustrative numerical examples, discuss some of the difficulties involved in this way of reasoning, and suggest some avenues for future research.
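One plausible shape for the conservative prior bound described here can be sketched as follows. This is our simplification, not the paper's exact derivation: with probability theta the channels are "not worse than independent" (so the product of channel pfds is conservative), and otherwise dependence is allowed to be arbitrarily bad, in which case a 1-out-of-2 system can still never be worse than its better channel. All numbers are invented for illustration:

```python
# Sketch (our simplification, not the paper's derivation) of a
# conservative prior system pfd given a subjective probability `theta`
# of "not worse than independence" between the two channels.

def conservative_system_pfd(pfd_a: float, pfd_b: float, theta: float) -> float:
    independent_or_better = pfd_a * pfd_b     # holds with probability theta
    worst_case = min(pfd_a, pfd_b)            # a 1oo2 system is never worse
    return theta * independent_or_better + (1.0 - theta) * worst_case

# Illustrative values: two channels assessed at 1e-3, 90% belief in
# "not worse than independence".
print(conservative_system_pfd(1e-3, 1e-3, 0.9))
```

Even a small residual probability of adverse dependence dominates the bound here (the worst-case term contributes 1e-4, the independence term only 9e-7), which illustrates why the paper's conservatism can be substantial.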
Recommended from our members
Conservative reasoning about epistemic uncertainty for the probability of failure on demand of a 1-out-of-2 software-based system in which one channel is "possibly perfect"
In earlier work (Littlewood and Rushby 2011; henceforth LR), an analysis was presented of a 1-out-of-2 system in which one channel was “possibly perfect”. It was shown that, at the aleatory level, the system pfd could be bounded above by the product of the pfd of channel A and the pnp (probability of non-perfection) of channel B. This was presented as a way of avoiding the well-known difficulty that for two certainly-fallible channels, system pfd cannot be expressed simply as a function of the channel pfds, and in particular not as a product of these. One price paid in this new approach is that the result is conservative – perhaps greatly so. Furthermore, a complete analysis requires that account be taken of epistemic uncertainty – here concerning the numeric values of the two parameters pfdA and pnpB. This introduces some difficulties, particularly concerning the estimation of dependence between an assessor’s beliefs about the parameters. The work reported here avoids these difficulties by obtaining results that require only an assessor’s marginal beliefs about the individual channels, i.e. they do not require knowledge of the dependence between these beliefs.
Human-machine diversity in the use of computerised advisory systems: a case study
Computer-based advisory systems form, with their users, composite human-machine systems. Redundancy and diversity between the human and the machine are often important for the dependability of such systems. We discuss the modelling approach we applied in a case study. The goal is to assess failure probabilities for the analysis of X-ray films for detecting cancer, performed by a person assisted by a computer-based tool. Unlike most approaches to human reliability assessment, we focus on the effects of failure diversity (or correlation) between humans and machines. We illustrate some of the modelling and prediction problems, especially those caused by the presence of the human component. We show two alternative models, with their pros and cons, and illustrate, via numerical examples and analytically, some interesting and non-intuitive answers to questions about reliability assessment and design choices for human-computer systems.
How to discriminate between computer-aided and computer-hindered decisions: a case study in mammography
Background. Computer aids can affect decisions in complex ways, potentially even making them worse; common assessment methods may miss these effects. We developed a method for estimating the quality of decisions, as well as how computer aids affect it, and applied it to computer-aided detection (CAD) of cancer, reanalyzing data from a published study where 50 professionals (“readers”) interpreted 180 mammograms, both with and without computer support.
Method. We used stepwise regression to estimate how CAD affected the probability of a reader making a correct screening decision on a patient with cancer (sensitivity), thereby taking into account the effects of the difficulty of the cancer (proportion of readers who missed it) and the reader’s discriminating ability (Youden’s determinant). Using regression estimates, we obtained thresholds for classifying a posteriori the cases (by difficulty) and the readers (by discriminating ability).
Results. Use of CAD was associated with a 0.016 increase in sensitivity (95% confidence interval [CI], 0.003–0.028) for the 44 least discriminating radiologists for 45 relatively easy, mostly CAD-detected cancers. However, for the 6 most discriminating radiologists, with CAD, sensitivity decreased by 0.145 (95% CI, 0.034–0.257) for the 15 relatively difficult cancers.
Conclusions. Our exploratory analysis method reveals unexpected effects. It indicates that, despite the original study detecting no significant average effect, CAD helped the less discriminating readers but hindered the more discriminating readers. Such differential effects, although subtle, may be clinically significant and important for improving both computer algorithms and protocols for their use. They should be assessed when evaluating CAD and similar warning systems.
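The per-reader statistics that this kind of analysis rests on are straightforward to compute. A minimal sketch follows, where the "Youden" statistic is taken as the standard Youden index J = sensitivity + specificity - 1 (our assumption about the intended measure), and the counts are invented:

```python
# Minimal sketch of per-reader screening statistics. Counts are invented
# for illustration; "Youden index" here is J = sensitivity + specificity - 1.

def sensitivity(tp: int, fn: int) -> float:
    """Probability of a correct decision on a patient with cancer."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Probability of a correct decision on a patient without cancer."""
    return tn / (tn + fp)

def youden_index(tp: int, fn: int, tn: int, fp: int) -> float:
    return sensitivity(tp, fn) + specificity(tn, fp) - 1.0

# A hypothetical reader: flags 80 of 100 cancers, clears 90 of 100 normals.
j = youden_index(tp=80, fn=20, tn=90, fp=10)
print(round(j, 3))
```

Classifying readers by such an index, and cases by the proportion of readers who missed them, is what allows the differential with-CAD/without-CAD effects described above to be separated out.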
Comparison of Empirical Data from Two Honeynets and a Distributed Honeypot Network
In this paper we present empirical results and speculative analysis based on observations collected over a two-month period from studies with two high-interaction honeynets, deployed in a corporate and an SME (small to medium enterprise) environment, and a distributed honeypot deployment. All three networks contain a mixture of Windows and Linux hosts. We detail the architecture of the deployment and results of comparing the observations from the three environments. We analyze in detail the times between attacks on different hosts, operating systems, networks or geographical locations. Even though results from honeynet deployments are often reported in the literature, this paper provides novel results analyzing traffic from three different types of networks and some initial exploratory models. This research aims to contribute to endeavours in the wider security research community to build methods, grounded on strong empirical work, for assessment of the robustness of computer-based systems in hostile environments.
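The core of the inter-attack-time analysis mentioned above is simple to sketch: take the ordered timestamps of attacks seen by one sensor and look at the gaps between successive events. The timestamps below are invented for illustration:

```python
# Sketch of inter-arrival (time-between-attacks) analysis for one
# honeypot sensor. Timestamps (seconds since start of observation)
# are invented for illustration.

attack_times = [3.0, 10.5, 11.0, 42.0, 50.0]

# Gaps between successive attacks.
gaps = [later - earlier for earlier, later in zip(attack_times, attack_times[1:])]
mean_gap = sum(gaps) / len(gaps)

print(gaps)
print(mean_gap)
```

In practice such gap samples, grouped per host, per operating system, or per network, are what exploratory models (e.g. of attack arrival processes) are fitted to.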
Diversity, Safety and Security in Embedded Systems: modelling adversary effort and supply chain risks
We present quantitative considerations for the design of redundancy and diversity in embedded systems with security requirements. The potential for malicious activity against these systems has complicated requirements and design choices. New design trade-offs have arisen besides those already familiar in this area: for instance, adding redundancy may increase the attack surface of a system and thus increase overall risk. Our case study concerns protecting redundant communications between a control system and its controlled physical system. We study the effects of using: (i) different encryption keys on replicated channels, and (ii) diverse encryption schemes and implementations. We consider two attack scenarios, with adversaries having access to (i) ways of reducing the search space in attacks using random searches for keys; or (ii) hidden major flaws in some crypto algorithm or implementation. Trade-offs between the requirements of integrity and confidentiality are found, but not in all cases. Simple models give useful design insights. In this system, we find that key diversity improves integrity without impairing confidentiality (no trade-offs arise between the two) and it can substantially increase adversary effort, but it will not remedy substantial weaknesses of the crypto system. Implementation diversity does involve design trade-offs between integrity and confidentiality, which we analyse, but turns out to be generally desirable for highly critical applications of the control system considered.
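The key-diversity effect on adversary effort admits a back-of-envelope sketch. This is our simplification, not the paper's model: against a random search over a (possibly reduced) key space of size N, one shared key falls after about N/2 trials in expectation and compromises both channels at once, whereas two independently chosen keys must each be found, roughly doubling expected effort:

```python
# Back-of-envelope model (our simplification, not the paper's full model)
# of expected random-search effort against replicated channels.
# A search over a key space of size N succeeds, in expectation, after
# about N/2 trials per independently chosen key.

def expected_effort(keyspace_size: int, n_distinct_keys: int) -> float:
    """Expected trials to compromise ALL channels."""
    return n_distinct_keys * keyspace_size / 2.0

N = 2 ** 20  # illustrative reduced search space after the assumed attack

print(expected_effort(N, 1))  # one shared key: both channels fall together
print(expected_effort(N, 2))  # diverse keys: each must be found separately
```

This doubling is a benefit only against search-based attacks; as the abstract notes, key diversity does nothing against a major hidden flaw in the crypto algorithm itself.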
Toward a Formalism for Conservative Claims about the Dependability of Software-Based Systems
In recent work, we have argued for a formal treatment of confidence about the claims made in dependability cases for software-based systems. The key idea underlying this work is "the inevitability of uncertainty": it is rarely possible to assert that a claim about safety or reliability is true with certainty. Much of this uncertainty is epistemic in nature, so it seems inevitable that expert judgment will continue to play an important role in dependability cases. Here, we consider a simple case where an expert makes a claim about the probability of failure on demand (pfd) of a subsystem of a wider system and is able to express his confidence about that claim probabilistically. An important, but difficult, problem then is how such subsystem (claim, confidence) pairs can be propagated through a dependability case for a wider system, of which the subsystems are components. An informal way forward is to justify, at high confidence, a strong claim, and then, conservatively, only claim something much weaker: "I'm 99 percent confident that the pfd is less than 10⁻⁵, so it's reasonable to be 100 percent confident that it is less than 10⁻³." These conservative pfds of subsystems can then be propagated simply through the dependability case of the wider system. In this paper, we provide formal support for such reasoning.
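The informal argument quoted in this abstract can be sketched numerically: each subsystem's high-confidence claim is weakened to a much more conservative bound that is then treated as (near-)certain, and the conservative bounds are combined through the system structure. The subsystem names, values, and weakening factor below are invented, and the series combination (a simple union bound) is our illustrative choice, not the paper's formalism:

```python
# Sketch of conservative (claim, confidence) weakening and propagation.
# Subsystem names, bounds, and the factor-of-100 weakening are invented
# for illustration; the series/union-bound combination is our assumption.

# subsystem -> (confidence in claim, claimed pfd bound)
claims = {
    "sensor_sw":   (0.99, 1e-5),
    "actuator_sw": (0.95, 1e-4),
}

def weaken(claimed_bound: float, factor: float = 100.0) -> float:
    """E.g. '99% confident pfd < 1e-5' is replaced by an outright
    claim that pfd < 1e-3 (the bound weakened by `factor`)."""
    return claimed_bound * factor

# For subsystems whose failure regions may overlap arbitrarily, the
# union bound gives a conservative system-level pfd: sum the bounds.
system_bound = sum(weaken(bound) for _confidence, bound in claims.values())
print(system_bound)
```

The point of the paper is to put formal support under exactly this kind of step: when, and by how much, a high-confidence strong claim justifies a near-certain weak one.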