Modeling the effects of combining diverse software fault detection techniques
The software engineering literature contains many studies of the efficacy of fault-finding techniques. Few of these, however, consider what happens when several different techniques are used together. We show that the effectiveness of such multi-technique approaches depends upon quite subtle interplay between their individual efficacies and the dependence between them. The modelling tool we use to study this problem is closely related to earlier work on software design diversity. The earliest of these results showed that, under quite plausible assumptions, it would be unreasonable even to expect software versions that were developed ‘truly independently’ to fail independently of one another. The key idea here was a ‘difficulty function’ over the input space. Later work extended these ideas to introduce a notion of ‘forced’ diversity, in which it became possible to obtain system failure behaviour even better than could be expected if the versions failed independently. In this paper we show that many of these results for design diversity have counterparts in diverse fault detection in a single software version. We define measures of fault-finding effectiveness and of diversity, and show how these might be used to give guidance for the optimal application of different fault-finding procedures to a particular program. We show that the effects upon reliability of repeated applications of a particular fault-finding procedure are not statistically independent; in fact, such an incorrect assumption of independence will always give results that are too optimistic. For diverse fault-finding procedures, on the other hand, things are different: here it is possible for effectiveness to be even greater than it would be under an assumption of statistical independence. We show that diversity of fault-finding procedures is, in a precisely defined way, ‘a good thing’, and should be applied as widely as possible.
The new model and its results are illustrated using some data from an experimental investigation into diverse fault finding on a railway signalling application.
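As a minimal sketch of the independence pitfall this abstract describes, the following assumes each fault has its own per-application detection probability — a crude stand-in for the ‘difficulty function’, with all values illustrative. Averaging survival over the fault population shows, via Jensen's inequality, why assuming independence between repeated applications of the same procedure is always too optimistic:

```python
import random

random.seed(0)

# Hypothetical "difficulty function": each fault has its own per-application
# detection probability for one fault-finding procedure (values illustrative).
faults = [random.uniform(0.1, 0.9) for _ in range(10_000)]

# True probability a randomly chosen fault survives TWO applications of the
# same procedure: average of (1 - p)^2 over the fault population.
true_survival = sum((1 - p) ** 2 for p in faults) / len(faults)

# Naive "independence" estimate: square the single-application survival rate.
mean_survival = sum(1 - p for p in faults) / len(faults)
naive_survival = mean_survival ** 2

# Jensen's inequality gives E[(1-p)^2] >= (E[1-p])^2: the independence
# assumption under-estimates surviving faults, i.e. it is too optimistic.
print(f"true survival:  {true_survival:.4f}")
print(f"naive estimate: {naive_survival:.4f}")
assert true_survival >= naive_survival
```

The same averaging argument run with two procedures whose detection probabilities vary in opposite directions over the fault population would show the reverse effect the abstract claims for diverse procedures.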
Software safety: a definition and some preliminary thoughts
Software safety is the subject of a research project in its initial stages at the University of California, Irvine. This research deals with critical real-time software where the cost of an error is high, e.g. human life. In this paper, software techniques having a bearing on safety are described and evaluated. Initial definitions of software safety concepts are presented, along with some preliminary thoughts and research questions.
Towards design of prognostics and health management solutions for maritime assets
With increasing competition among OEMs of maritime assets and operators alike, the need to maximize equipment productivity and to increase operational efficiency and reliability is increasingly stringent and challenging. Moreover, with the adoption of availability contracts, maritime OEMs are becoming directly interested in understanding the health of their assets in order to maximize profits and to minimize the risk of system failure. The key to addressing these challenges and needs is performance optimization. System failure can induce downtime, which increases the total cost of ownership, so unscheduled maintenance must be minimized by all means. If the state of health or condition of a system, subsystem or component is known, condition-based maintenance can be carried out and system design optimization can be achieved, thereby reducing the total cost of ownership. It is therefore important that the state of health of a component, sub-system, system or asset is known before a vessel embarks on a mission. Any breakdown or malfunction in any system or subsystem on board a vessel during offshore operation will lead to large economic losses and can sometimes cause accidents. For example, damage to the fuel oil system of a vessel's main engine can result in substantial downtime while the vessel is out of operation. This paper presents a prognostics and health management (PHM) development process applied to a fuel oil system powering diesel engines typically used in various cruise and fishing vessels, dredgers, pipe-laying vessels and large oil tankers. This process should enable future PHM solutions for maritime assets to be designed in a more formal and systematic way.
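The condition-based maintenance idea in this abstract can be sketched as a simple health-indicator threshold check. Everything below — the sensor quantity, smoothing constant, threshold and readings — is a hypothetical illustration, not taken from the paper's fuel oil system case study:

```python
def smooth(readings, alpha=0.3):
    """Exponentially weighted moving average as a simple health indicator."""
    ewma, out = readings[0], []
    for r in readings:
        ewma = alpha * r + (1 - alpha) * ewma
        out.append(ewma)
    return out

def maintenance_alert(indicator, threshold):
    """Return the first index where the smoothed indicator crosses the
    threshold, triggering condition-based maintenance; None if healthy."""
    for i, value in enumerate(indicator):
        if value > threshold:
            return i
    return None

# Hypothetical fuel-pressure deviation readings drifting toward failure.
readings = [0.1, 0.12, 0.11, 0.2, 0.35, 0.5, 0.8, 1.1]
alert_at = maintenance_alert(smooth(readings), threshold=0.6)
print(alert_at)  # reading index at which maintenance would be scheduled
```

A real PHM solution would of course replace the threshold rule with a degradation model and a remaining-useful-life estimate; the point here is only the shape of the monitoring loop.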
Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis
Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
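The kind of analytical reliability model this abstract mentions is commonly built from k-out-of-n redundancy calculations for majority-voted channels. The sketch below uses that standard formulation; the failure rate, mission time and redundancy level are illustrative assumptions, not AFTA figures:

```python
from math import comb, exp

def channel_reliability(failure_rate, hours):
    """Exponential (constant failure rate) reliability of one channel."""
    return exp(-failure_rate * hours)

def k_out_of_n_reliability(n, k, r):
    """Reliability of an n-channel system that works while at least k
    fault-free channels remain (k-out-of-n majority voting)."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 1e-4 failures/hour per channel, 10-hour mission.
r = channel_reliability(1e-4, 10)
simplex = r
quadruplex = k_out_of_n_reliability(4, 3, r)  # tolerates one channel fault
print(f"simplex:    {simplex:.8f}")
print(f"quadruplex: {quadruplex:.8f}")
assert quadruplex > simplex
```

This simple combinatorial model ignores the common-mode faults the abstract singles out, which is precisely why they need the separate treatment described there.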
On a method for mending time to failure distributions
Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite failures and infinite failures NHPP models.
Keywords: software reliability growth model, non-homogeneous Poisson process, defective distribution, (mean) time to failure, model unification
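The defective distributions this abstract refers to can be illustrated with the Goel-Okumoto model, a standard finite-failures NHPP; the parameter values below are illustrative, not fitted to data:

```python
from math import exp

# Goel-Okumoto finite-failures NHPP with mean value function
# m(t) = a(1 - e^{-bt}); a is the expected total number of faults.
a, b = 5.0, 0.1

def m(t):
    """Expected number of failures experienced by time t."""
    return a * (1.0 - exp(-b * t))

def cdf_first_failure(t):
    """P(first failure by time t) = 1 - e^{-m(t)} for an NHPP."""
    return 1.0 - exp(-m(t))

# Since m(t) -> a < infinity, the distribution is defective: it tends to
# 1 - e^{-a} < 1, leaving probability e^{-a} that no failure ever occurs,
# which is why the mean time to (first) failure does not exist.
print(f"F(100) = {cdf_first_failure(100.0):.6f}")
print(f"limit  = {1.0 - exp(-a):.6f}  (mass {exp(-a):.4f} on 'never fails')")
assert cdf_first_failure(1e6) < 1.0
```

Any infinite-failures model, by contrast, has m(t) unbounded, so the same distribution function tends to 1; this is the contrast the paper's unification via the time-to-first-failure distribution exploits.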