Software Reliability Issues: An Experimental Approach
In this thesis, we present methodologies involving a data structure called the debugging graph whereby the predictive performance of software reliability models can be analyzed and improved under laboratory conditions. This procedure substitutes the averages of large sample sets for the single point samples normally used as inputs to these models and thus supports scrutiny of their performance with less random input data.
Initially, we describe the construction of an extensive database of empirical reliability data which we derived by testing each partially debugged version of subject software represented by complete or partial debugging graphs. We demonstrate how these data can be used to assign relative sizes to known bugs and to simulate multiple debugging sessions. We then present the results from a series of proof-of-concept experiments.
We show that controlling fault recovery order as represented by the data input to some well-known reliability models can enable them to produce more accurate predictions and can mitigate anomalous effects we attribute to manifestations of the fault interaction phenomenon. Since limited testing resources are common in the real world, we demonstrate the use of two approximation techniques, the surrogate oracle and path truncations, to render the application of our methodologies computationally feasible outside a laboratory setting. We report results supporting the assertion that reliability data collected from just a partial debugging graph and subject to these approximations agree qualitatively with those collected under ideal laboratory conditions, provided one accounts for the optimistic bias introduced by the surrogate in later prediction stages. We outline an algorithmic approach for using data derived from a partial debugging graph to improve software reliability predictions, and show its complexity to be no worse than O(n²). We summarize some outstanding questions as areas for future investigation of, and improvement to, the software reliability prediction process.
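As a minimal sketch of the debugging-graph idea, assuming (as the abstract suggests) that each node is a partially debugged version identified with the set of bugs already repaired, and each edge is a single-bug repair; the function name and bug labels below are illustrative, not taken from the thesis:

```python
from itertools import combinations

def debugging_graph(bugs):
    """Build a complete debugging graph: one node per subset of
    repaired bugs, one edge per single additional repair.
    A node is the frozenset of bugs fixed in that version."""
    nodes = [frozenset(c) for r in range(len(bugs) + 1)
             for c in combinations(sorted(bugs), r)]
    edges = [(a, b) for a in nodes for b in nodes
             if a < b and len(b - a) == 1]
    return nodes, edges

nodes, edges = debugging_graph({"b1", "b2", "b3"})
print(len(nodes), len(edges))  # 8 nodes, 12 edges for 3 bugs
```

Under this reading, a debugging session corresponds to a path from the empty set to the full set of repaired bugs, so a complete graph over n bugs supports n! distinct simulated sessions, which is why the thesis's partial-graph and path-truncation approximations matter in practice.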
The problems of assessing software reliability ...When you really need to depend on it
This paper looks at the ways in which the reliability of software can be assessed and predicted. It shows that the levels of reliability that can be claimed with scientific justification are relatively modest.
Design diversity: an update from research on reliability modelling
Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions in applying diversity and in assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can help conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.
A bayesian analysis of beta testing
In this article, we define a model for fault detection during the beta testing phase of a software design project. Given sampled data, we illustrate how to estimate the failure rate and the number of faults in the software using Bayesian statistical methods with several different prior distributions. Then, given a suitable cost function, we also show how to optimise the duration of a further test period for each of the prior distribution structures considered.
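The abstract does not state which priors the authors use; as one standard sketch, a conjugate Gamma prior on the failure rate with exponentially distributed inter-failure times gives a closed-form posterior update (all parameter values below are hypothetical):

```python
def posterior_failure_rate(a, b, n_failures, total_time):
    """Conjugate update: with a Gamma(a, b) prior (shape a, rate b)
    on the failure rate and exponential inter-failure times,
    observing n_failures in total_time hours of testing yields a
    Gamma(a + n_failures, b + total_time) posterior."""
    a_post = a + n_failures
    b_post = b + total_time
    return a_post, b_post, a_post / b_post  # shape, rate, posterior mean

# e.g. a vague prior and 4 failures seen in 200 hours of beta testing
shape, rate, mean = posterior_failure_rate(1.0, 10.0, 4, 200.0)
print(shape, rate, mean)  # posterior mean ≈ 0.024 failures/hour
```

Optimising the duration of a further test period, as in the second part of the paper, would then weigh a cost function against this posterior rather than against a point estimate.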
The effect of testing on reliability of fault-tolerant software
Previous models have investigated the impact upon diversity - and hence upon the reliability of fault-tolerant software built from 'diverse' versions - of the variation in 'difficulty' of demands over the demand space. These models are essentially static, taking a single snapshot view of the system. In this paper we consider a generalisation in which the individual versions are allowed to evolve - and their reliability to grow - through debugging. In particular, we examine the trade-off that occurs in testing between, on the one hand, the increasing reliability of individual versions, and on the other hand the possible diminution of diversity.
Cost-benefit modelling for reliability growth
Decisions during the reliability growth development process of engineering equipment involve trade-offs between cost and risk. However slight, there exists a chance an item of equipment will not function as planned during its specified life. Consequently the producer can incur a financial penalty. To date, reliability growth research has focussed on the development of models to estimate the rate of failure from test data. Such models are used to support decisions about the effectiveness of options to improve reliability. The extension of reliability growth models to incorporate financial costs associated with 'unreliability' is much neglected. In this paper, we extend a Bayesian reliability growth model to include cost analysis. The rationale of the stochastic process underpinning the growth model and the cost structures are described. The ways in which this model can be used to support cost-benefit analysis during product development are discussed and illustrated through a simple case.
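The abstract does not specify the cost structures; the following toy model (all parameters hypothetical, and not the paper's Bayesian formulation) illustrates the kind of cost-benefit trade-off described, where further testing reduces the expected cost of field failures at a growing testing cost:

```python
import math

def total_cost(t, c_test=50.0, c_failure=20_000.0,
               lam0=0.01, decay=0.002, warranty=10_000.0):
    """Toy trade-off: testing for t hours costs c_test * t and is
    assumed to shrink the failure rate to lam0 * exp(-decay * t);
    expected field failures over a warranty period of `warranty`
    hours then cost c_failure each."""
    lam = lam0 * math.exp(-decay * t)
    return c_test * t + c_failure * lam * warranty

# Scan a grid of candidate test durations for the cheapest one.
best_t = min(range(0, 3001, 10), key=total_cost)
print(best_t, round(total_cost(best_t)))
```

The interior minimum - stop testing when the marginal cost of another test hour exceeds the expected field-failure cost it removes - is exactly the kind of decision the paper's model is meant to support, with the failure-rate trajectory inferred from test data rather than assumed.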
Modeling software design diversity
Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. Whilst there is clear evidence that the approach can be expected to deliver some increase in reliability compared with a single version, there is no agreement about the extent of this. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown not to be tenable: assessment of the actual level of dependence present is therefore needed, and this is hard. In this tutorial we survey the modelling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is one of designers, assessors and project managers with only a basic knowledge of probabilities, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.
Modeling the effects of combining diverse software fault detection techniques
The software engineering literature contains many studies of the efficacy of fault finding techniques. Few of these, however, consider what happens when several different techniques are used together. We show that the effectiveness of such multitechnique approaches depends upon quite subtle interplay between their individual efficacies and dependence between them. The modelling tool we use to study this problem is closely related to earlier work on software design diversity. The earliest of these results showed that, under quite plausible assumptions, it would be unreasonable even to expect software versions that were developed "truly independently" to fail independently of one another. The key idea here was a "difficulty function" over the input space. Later work extended these ideas to introduce a notion of "forced" diversity, in which it became possible to obtain system failure behaviour better even than could be expected if the versions failed independently. In this paper we show that many of these results for design diversity have counterparts in diverse fault detection in a single software version. We define measures of fault finding effectiveness, and of diversity, and show how these might be used to give guidance for the optimal application of different fault finding procedures to a particular program. We show that the effects upon reliability of repeated applications of a particular fault finding procedure are not statistically independent - in fact such an incorrect assumption of independence will always give results that are too optimistic. For diverse fault finding procedures, on the other hand, things are different: here it is possible for effectiveness to be even greater than it would be under an assumption of statistical independence. We show that diversity of fault finding procedures is, in a precisely defined way, "a good thing", and should be applied as widely as possible.
The new model and its results are illustrated using some data from an experimental investigation into diverse fault finding on a railway signalling application
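The independence claims can be illustrated with a toy calculation in the spirit of the difficulty-function models the paper builds on; the fault classes and detection probabilities below are hypothetical, chosen only to make the effect visible:

```python
def joint_miss(theta_a, theta_b, weights):
    """Average over fault classes of P(both procedures miss a fault),
    where theta_x[i] = P(procedure x detects a class-i fault)."""
    return sum(w * (1 - ta) * (1 - tb)
               for w, ta, tb in zip(weights, theta_a, theta_b))

w = [0.5, 0.5]    # two equally likely fault classes
A = [0.9, 0.1]    # procedure A: good on class 0, poor on class 1
B = [0.1, 0.9]    # procedure B: the complementary profile

p_miss_A = sum(wi * (1 - t) for wi, t in zip(w, A))  # single-use miss ≈ 0.5

print(joint_miss(A, A, w))  # ≈ 0.41: same procedure twice, worse than independence
print(p_miss_A ** 2)        # ≈ 0.25: naive independence assumption
print(joint_miss(A, B, w))  # ≈ 0.09: diverse procedures, better than independence
```

Because the variation in difficulty is shared across repeated applications of the same procedure, the joint miss probability exceeds the independence product; complementary difficulty profiles reverse the sign of that correlation, which is the paper's argument for applying diverse fault finding procedures widely.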
Evaluation of software dependability
It has been said that the term software engineering is an aspiration not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay - i.e. dependability evaluation - a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components - the consequences of the "perversity of nature". Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.