Software reliability prediction using neural network
Software engineering is incomplete without software reliability prediction. Software reliability assessment is the most important factor in quantitatively characterising the quality of any software product during the testing phase. Over the years, many analytical models have been proposed for assessing the reliability of a software system and for modeling software reliability growth trends, with different prediction capabilities at different testing phases. However, a single model that yields relatively better predictions across all conditions and situations is still needed. To this end, the neural network (NN) approach is introduced. This thesis report describes the applicability of NN-based models for better reliability prediction in a real environment and presents a method for assessing software reliability growth using an NN model. Two types of NN are used: a feed-forward neural network and a recurrent neural network. Both networks are trained with the back-propagation learning algorithm, and the related network architecture issues, data representation methods, and some unrealistic assumptions associated with software reliability models are discussed. Datasets containing software failures, obtained from several software projects, are applied to the proposed models. The results obtained indicate a significant improvement in performance of the neural network models over conventional statistical models based on the non-homogeneous Poisson process.
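The thesis's exact network architecture and datasets are not given here; the following is a minimal sketch of the basic idea, assuming a one-hidden-layer feed-forward network trained by back-propagation to map normalized execution time to a cumulative failure count. The training data below are synthetic and purely illustrative.

```python
import numpy as np

# Minimal sketch (not the thesis's actual setup): a one-hidden-layer
# feed-forward network trained by back-propagation to learn a
# reliability-growth curve (normalized time -> cumulative failures).
rng = np.random.default_rng(0)
t = np.linspace(0.05, 1.0, 20).reshape(-1, 1)  # normalized test time
y = (1 - np.exp(-3 * t)).reshape(-1, 1)        # synthetic growth curve

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return h, h @ W2 + b2      # linear output

lr = 0.1
for _ in range(5000):          # plain full-batch gradient descent
    h, pred = forward(t)
    err = pred - y
    gW2 = h.T @ err / len(t); gb2 = err.mean(0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)            # back-propagate through tanh
    gW1 = t.T @ dh / len(t); gb1 = dh.mean(0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(t)[1] - y) ** 2).mean())
```

A recurrent variant, as studied in the thesis, would additionally feed the previous prediction back as an input; the training loop is otherwise analogous.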
Confidence intervals for reliability growth models with small sample sizes
Fully Bayesian approaches to analysis can be overly ambitious where there exist realistic limitations on the ability of experts to provide prior distributions for all relevant parameters. This research was motivated by situations where expert judgement exists to support the development of prior distributions describing the number of faults potentially inherent within a design but could not support useful descriptions of the rate at which they would be detected during a reliability-growth test. This paper develops inference properties for a reliability-growth model. The approach assumes a prior distribution for the ultimate number of faults that would be exposed if testing were to continue ad infinitum, but estimates the parameters of the intensity function empirically. A fixed-point iteration procedure to obtain the maximum likelihood estimate is investigated for bias and conditions of existence. The main purpose of this model is to support inference in situations where failure data are few. A procedure for providing statistical confidence intervals is investigated and shown to be suitable for small sample sizes. An application of these techniques is illustrated by an example.
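The paper's own model and fixed-point scheme are not reproduced here. As an illustration of the general technique, the sketch below shows the analogous fixed-point iteration for the maximum likelihood estimate of the Goel-Okumoto NHPP, whose mean value function is m(t) = a(1 - e^(-bt)); the failure times are synthetic.

```python
import math

# Illustrative only: fixed-point iteration for the MLE of b in the
# Goel-Okumoto model, given exact failure times on [0, T].
# The MLE equations are a = n / (1 - e^{-bT}) and
# b = n / (S + n*T*e^{-bT} / (1 - e^{-bT})), with S = sum of failure times.
times = [3.0, 7.5, 12.1, 18.4, 26.0, 35.2, 47.9, 63.0]  # synthetic data
T = 70.0                                                # end of observation
n = len(times)
S = sum(times)

b = 0.01                                                # initial guess
for _ in range(200):                                    # fixed-point iteration
    e = math.exp(-b * T)
    b_new = n / (S + n * T * e / (1 - e))
    if abs(b_new - b) < 1e-12:
        b = b_new
        break
    b = b_new

a = n / (1 - math.exp(-b * T))  # MLE of the ultimate number of faults
```

The iteration map is monotone increasing and bounded above by n/S, so the sequence converges; the paper investigates the analogous existence and bias questions for its own model.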
Reliability modeling of a 1-out-of-2 system: Research with diverse Off-the-shelf SQL database servers
Fault tolerance via design diversity is often the only viable way of achieving sufficient dependability levels when using off-the-shelf components. We have reported previously on studies with bug reports of four open-source and commercial off-the-shelf database servers, and later releases of two of them. The results were very promising for designers of fault-tolerant solutions that wish to employ diverse servers: very few bugs caused failures in more than one server, and none caused failures in more than two. In this paper we detail two approaches we have studied to construct reliability growth models for a 1-out-of-2 fault-tolerant server utilizing the bug reports. The models presented are of practical significance to system designers wishing to employ diversity with off-the-shelf components, since the bug reports are often the only direct dependability evidence available to them.
An Empirical analysis of Open Source Software Defects data through Software Reliability Growth Models
The purpose of this study is to analyze the reliability growth of Open Source Software (OSS) using Software Reliability Growth Models (SRGMs). The study uses defect data from twenty-five releases of five OSS projects. For each release of the selected projects, two types of datasets were created: datasets based on the defect creation date (created-date DS) and datasets based on the defect update date (updated-date DS). These defect datasets are modelled by eight SRGMs: Musa-Okumoto, Inflection S-Shaped, Goel-Okumoto, Delayed S-Shaped, Logistic, Gompertz, Yamada Exponential, and the Generalized Goel Model, chosen for their widespread use in the literature. The SRGMs are fitted to both types of defect datasets for each project, and their fitting and prediction capabilities are analysed in order to study OSS reliability growth with respect to defect creation and defect update times, since defect analysis can serve as a constructive reliability predictor. Results show that the fitting and prediction quality of the SRGMs increases directly when the defect creation date is used to build the OSS defect datasets. Hence, OSS reliability growth is better characterized by SRGMs if the defect creation date, rather than the defect update (fix) date, is used when developing OSS defect datasets for reliability modelling.
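The study's defect data are not available here; the sketch below illustrates the kind of comparison it performs, fitting two of the listed SRGM mean value functions to a cumulative defect dataset and comparing goodness of fit by sum of squared errors (SSE). The dataset is synthetic, generated from a Delayed S-Shaped curve (a = 40, b = 0.3) with rounding, and the grid-search fit is a deliberately simple stand-in for proper nonlinear least squares.

```python
import numpy as np

# Synthetic cumulative defect counts over 20 weeks (Delayed S-Shaped shape).
weeks = np.arange(1, 21)
defects = np.array([1, 5, 9, 13, 18, 21, 25, 28, 30, 32,
                    34, 35, 36, 37, 38, 38, 39, 39, 39, 39])

def go(t, a, b):          # Goel-Okumoto: m(t) = a(1 - e^{-bt})
    return a * (1 - np.exp(-b * t))

def delayed_s(t, a, b):   # Delayed S-Shaped: m(t) = a(1 - (1 + bt)e^{-bt})
    return a * (1 - (1 + b * t) * np.exp(-b * t))

def fit(model):           # coarse grid search over (a, b), minimizing SSE
    best_sse, best_params = np.inf, None
    for a in np.linspace(defects[-1], 3 * defects[-1], 60):
        for b in np.linspace(0.01, 1.0, 100):
            sse = float(((model(weeks, a, b) - defects) ** 2).sum())
            if sse < best_sse:
                best_sse, best_params = sse, (a, b)
    return best_sse, best_params

sse_go, params_go = fit(go)
sse_ds, params_ds = fit(delayed_s)
```

On S-shaped data like this, the Delayed S-Shaped model fits markedly better than Goel-Okumoto; the study runs this kind of comparison across eight models and both dataset types.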
Statistical modelling of software reliability
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
Do System Test Cases Grow Old?
Companies increasingly use either manual or automated system testing to ensure the quality of their software products. As a system evolves and is extended with new features, the test suite also typically grows as new test cases are added. To ensure software quality throughout this process, the test suite is continuously executed, often on a daily basis. It seems likely that newly added tests would be more likely to fail than older tests, but this has not been investigated in any detail on large-scale, industrial software systems. It is also not clear which methods should be used to conduct such an analysis. This paper proposes three main concepts that can be used to investigate aging effects in the use and failure behavior of system test cases: test case activation curves, test case hazard curves, and test case half-life. To evaluate these concepts and the type of analysis they enable, we apply them to an industrial software system containing more than one million lines of code. The data comes from a total of 1,620 system test cases executed more than half a million times over a period of two and a half years. For the investigated system we find that system test cases stay active as they age but really do grow old: they go through an infant mortality phase with higher failure rates, which then decline over time. The test case half-life is between 5 and 12 months for the two studied data sets.
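The paper defines activation curves, hazard curves, and half-life precisely; the sketch below illustrates only one plausible reading of "half-life", using synthetic data: given each test case's age at first failure, estimate an empirical survival curve over age and read off the age at which half the cohort has failed.

```python
import numpy as np

# Hedged illustration (not the paper's exact definition): empirical
# "half-life" of a cohort of test cases, from synthetic ages (in months)
# at which each test case first failed.
rng = np.random.default_rng(1)
ages_at_first_failure = rng.exponential(scale=9.0, size=200)  # synthetic

grid = np.arange(0, 25)  # monthly age grid
survival = [(ages_at_first_failure > m).mean() for m in grid]
half_life = next(m for m, s in zip(grid, survival) if s <= 0.5)
```

A hazard-curve analysis would instead estimate the failure rate within each age bucket, conditioned on the test cases still "surviving" at that age, which is what reveals the infant mortality phase described in the abstract.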
Design of an integrated airframe/propulsion control system architecture
The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that uses both reliability and performance. A detailed account is given of the testing associated with a subset of the architecture, and the paper concludes with general observations on applying the methodology to the architecture.
On a method for mending time to failure distributions
Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true, even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite-failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite-failures and infinite-failures NHPP models. Keywords: software reliability growth model, non-homogeneous Poisson process, defective distribution, (mean) time to failure, model unification.
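The defectiveness the abstract refers to can be made concrete numerically. For a finite-failures NHPP, the time T to first failure has survival function P(T > t) = exp(-m(t)); when the mean value function m(t) is bounded by a finite total fault content a, the survival probability tends to exp(-a) > 0, so the distribution is defective and E[T] does not exist. The parameters below are assumed for illustration, using the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)).

```python
import math

# Numeric illustration with assumed parameters: for a finite-failures NHPP,
# P(T > t) = exp(-m(t)) tends to exp(-a) > 0 as t -> infinity, i.e. a
# non-zero probability of never failing -- a "defective" distribution.
a, b = 2.0, 0.5

def survival(t):
    # P(first failure occurs after t), Goel-Okumoto mean value function
    return math.exp(-a * (1 - math.exp(-b * t)))

limit = math.exp(-a)  # probability that no failure ever occurs
```

An infinite-failures model (m(t) unbounded, e.g. Musa-Okumoto) drives the survival function to zero instead, which is the property the paper's mending transformation restores.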