Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects
In this paper we extend an earlier worst-case bound reliability theory to derive a worst-case reliability function R(t), which gives the worst-case probability of surviving a further time t given an estimate N of the residual defects in the software and a prior test time T. The earlier theory and its extension are presented, and the paper also considers the case where there is a low probability of any defect existing in the program. In this "fractional defect" case, there can be a high probability of surviving any subsequent time t. The implications of the theory are discussed and compared with alternative reliability models.
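The earlier worst-case bound referenced here is often quoted as a limit on the residual failure rate, λ ≤ N/(e·T). The sketch below is a naive illustration of that rate bound only, combined with a constant-rate survival assumption; it is not the paper's derived R(t), which accounts for the rate decaying over time. All numeric values are invented for illustration.

```python
import math

def worst_case_rate(n_defects: float, test_time: float) -> float:
    """Worst-case residual failure rate bound lambda <= N / (e * T),
    as the earlier conservative bound theory is commonly stated."""
    return n_defects / (math.e * test_time)

def naive_survival(n_defects: float, test_time: float, t: float) -> float:
    """Naive probability of surviving a further time t, treating the
    worst-case rate as constant (a simplification; the paper's R(t)
    allows the rate to fall as operation continues)."""
    return math.exp(-worst_case_rate(n_defects, test_time) * t)

# Illustrative numbers: N = 1 residual defect after T = 1000 h of test.
lam = worst_case_rate(1, 1000)
r = naive_survival(1, 1000, 100)

# "Fractional defect" case: an expected defect count N < 1 scales the
# bound down, giving a high survival probability for the same t.
r_frac = naive_survival(0.1, 1000, 100)
```

The fractional-defect call shows the qualitative behaviour the abstract describes: with N well below one, even this crude bound yields a survival probability close to 1.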
Using a Log-normal Failure Rate Distribution for Worst Case Bound Reliability Prediction
Prior research has suggested that the failure rates of faults follow a log-normal distribution. We propose a specific model in which distributions close to a log-normal arise naturally from the program structure. The log-normal distribution presents a problem when used in reliability growth models because it is not mathematically tractable. However, we demonstrate that a worst-case bound can be estimated that is less pessimistic than our earlier worst-case bound theory.
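A hedged sketch of the setting this abstract describes: draw per-fault failure rates from a log-normal, assume each fault is found and fixed during a test period T with probability 1 − exp(−λᵢ·T), and look at the residual rate. The parameter values and the fault-removal model below are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(42)

def simulate_growth(n_faults: int, mu: float, sigma: float,
                    test_time: float) -> tuple:
    """Simulate reliability growth with log-normally distributed fault
    rates: fault i has rate lam_i ~ LogNormal(mu, sigma) and survives
    the test period with probability exp(-lam_i * T)."""
    rates = [random.lognormvariate(mu, sigma) for _ in range(n_faults)]
    initial = sum(rates)
    surviving = [lam for lam in rates
                 if random.random() < math.exp(-lam * test_time)]
    return initial, sum(surviving)

initial, residual = simulate_growth(100, mu=-8.0, sigma=2.0,
                                    test_time=10_000)
# High-rate faults are found quickly, so the residual rate is dominated
# by the long tail of small rates -- the feature that makes closed-form
# treatment of the log-normal awkward in growth models.
```

The simulation only illustrates the qualitative behaviour; the paper's contribution is an analytic worst-case bound that avoids simulating at all.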
A Methodology for Safety Case Development
This paper outlines a safety case methodology that seeks to minimise safety and commercial risks by constructing a demonstrable safety case. The safety case ideas presented here were initially developed in the EU-sponsored SHIP project [1] and were then further developed in the UK Nuclear Safety Research Programme (the QUARC project [2]). Some of these concepts have subsequently been incorporated into safety standards such as MOD Def Stan 00-55, and have also been used to establish specific safety cases for clients. A generalisation of the concepts also appears in Def Stan 00-42 Part 2, in the form of the software reliability case.
Security-Informed Safety: Supporting Stakeholders with Codes of Practice
Codes of practice provide principles and guidance on how organizations can incorporate security considerations into their safety engineering lifecycle and become more security-minded.
Disruptive Innovations and Disruptive Assurance: Assuring Machine Learning and Autonomy
Autonomous and machine learning-based systems are disruptive innovations and thus require a correspondingly disruptive assurance strategy. We offer an overview of a framework based on claims, arguments, and evidence aimed at addressing these systems, and use it to identify specific gaps, challenges, and potential solutions.
History and development of validation with the ESP-r simulation program
It is well recognised that validation of dynamic building simulation programs is a long-term, complex task. Many large national and international efforts have led to a well-established validation methodology comprising analytical, inter-program comparison and empirical validation elements, and a significant number of tests have been developed. As simulation usage increases, driven by such initiatives as the European Energy Performance of Buildings Directive, such tests are starting to be incorporated into national and international standards. Although many program developers have run many of the developed tests, there does not appear to have been a systematic attempt to incorporate such tests into routine operation of the simulation programs. This paper reports work undertaken to address this deficiency. The paper summarises the tests that have been applied to the simulation program ESP-r. These tests have been developed within International Energy Agency Annexes, within CEN standards, within various large-scale national projects, and by the UK's Chartered Institution of Building Services Engineers. The structure used to encapsulate the tests allows developers to ensure that recent code modifications have not resulted in unforeseen impacts on program predictions, and allows users to check for themselves against benchmarks.
Integrity static analysis of COTS/SOUP
This paper describes the integrity static analysis approach developed to support the justification of commercial off-the-shelf software (COTS) used in a safety-related system. The static analysis was part of an overall software qualification programme, which also included the work reported in our paper presented at Safecomp 2002. Integrity static analysis focuses on unsafe language constructs and "covert" flows, where one thread can affect the data or control flow of another thread. The analysis addressed two main aspects: the internal integrity of the code (especially for the more critical functions), and the intra-component integrity, checking for covert channels. The analysis process was supported by an aggregation of tools, combined and engineered to support the checks performed and to scale as necessary. Integrity static analysis proved feasible for industrial-scale software and did not require unreasonable resources; we provide data that illustrates its contribution to the software qualification programme.
Diverse protection systems for improving security: a study with AntiVirus engines
Diverse "barriers" or "protection systems" are very common in many industries, especially safety-critical ones, where designers must use "defence in depth" techniques to prevent safety failures. Similar techniques are also commonly prescribed for security systems: using multiple, diverse detection systems to prevent security breaches. However, empirical evidence of the effectiveness of diversity is rare. We present the results of an empirical study that uses a large-scale dataset to assess the benefits of diversity with an important category of security systems: AntiVirus products. The analysis was based on 1599 malware samples collected from a distributed honeypot deployment over a period of 178 days. The malware samples were sent to the signature engines of 32 different AntiVirus products hosted by the VirusTotal service. We also present an exploratory model which shows that the number of diverse protection layers needed to achieve "perfect" detection with our dataset follows an exponential power-law distribution. If this distribution is shown to be generic across other datasets, it would provide a cost-effective means of predicting the probability of perfect detection for systems that use a large number of barriers, based on measurements made with systems composed of fewer (say, 2 or 3) barriers.
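The exploratory model's central quantity is the number of diverse layers needed before every sample is detected. A minimal sketch on synthetic data (the per-engine detection probability, sample count, and greedy layer-selection rule below are invented for illustration, not drawn from the VirusTotal dataset or the paper's model) counts layers by repeatedly adding the engine that covers the most currently-missed samples:

```python
import random

random.seed(7)

# Synthetic detection data: the set of samples each "engine" detects,
# with an assumed 90% per-sample detection probability per engine.
N_SAMPLES, N_ENGINES = 200, 32
detects = {e: {s for s in range(N_SAMPLES) if random.random() < 0.9}
           for e in range(N_ENGINES)}

def layers_for_perfect_detection(detects, n_samples):
    """Greedily add the engine covering the most currently-missed
    samples; return how many diverse layers reach 100% detection,
    or None if some sample evades every engine."""
    missed = set(range(n_samples))
    layers = 0
    while missed:
        best = max(detects, key=lambda e: len(detects[e] & missed))
        gain = detects[best] & missed
        if not gain:
            return None
        missed -= gain
        layers += 1
    return layers

n_layers = layers_for_perfect_detection(detects, N_SAMPLES)
```

With independent synthetic engines, a handful of layers suffices; the study's point is that real engines are not independent, which is why measuring the distribution of layers-to-perfect-detection empirically matters.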