
    The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

    On a method for mending time to failure distributions

    Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite-failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite-failures and infinite-failures NHPP models. Keywords: software reliability growth model; non-homogeneous Poisson process; defective distribution; (mean) time to failure; model unification.
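    The "finite failures" property the abstract refers to can be made concrete with the Goel-Okumoto model, whose mean value function is m(t) = a(1 - e^(-bt)). Because m(t) is bounded by a, the NHPP assigns probability e^(-a) > 0 to the event that no failure ever occurs, so the time-to-first-failure distribution is defective. A minimal sketch (the parameter values a = 3, b = 0.1 are purely illustrative, not from the paper):

    ```python
    import math

    def go_mean_value(t, a, b):
        """Goel-Okumoto mean value function: expected number of failures by time t."""
        return a * (1.0 - math.exp(-b * t))

    def prob_no_failure_by(t, a, b):
        """For an NHPP, P(first failure occurs after t) = exp(-m(t))."""
        return math.exp(-go_mean_value(t, a, b))

    a, b = 3.0, 0.1                # illustrative parameters
    p_never = math.exp(-a)         # limit of prob_no_failure_by(t) as t -> infinity
    # Since m(t) never exceeds a, p_never > 0: with positive probability
    # the first failure never arrives, so the mean time to failure is undefined.
    ```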

    Reliability growth of open source software using defect analysis

    We examine two active and popular open source products to observe whether or not open source software has a different defect arrival rate than software developed in-house. The evaluation used two common reliability growth models, concave and S-shaped, and the analysis shows that open source software has a different profile of defect arrival. Further investigation indicated that low-level design instability is a possible explanation of the different defect growth profile. © 2008 IEEE
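    As a sketch of the two model shapes the evaluation compares — a concave model (e.g. Goel-Okumoto) and an S-shaped model (e.g. Yamada's delayed S-shaped) — the code below contrasts their defect detection rates; the parameter values are illustrative, not taken from the study:

    ```python
    import math

    def concave_go(t, a, b):
        """Goel-Okumoto (concave): detection rate is highest at the start."""
        return a * (1.0 - math.exp(-b * t))

    def s_shaped_yamada(t, a, b):
        """Yamada delayed S-shaped: slow start, ramp-up, then saturation."""
        return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

    def detection_rate(m, t, a, b, h=1e-4):
        """Numerical derivative of a mean value function m at time t."""
        return (m(t + h, a, b) - m(t - h, a, b)) / (2.0 * h)

    a, b = 100.0, 0.5   # illustrative: 100 total expected defects
    # Concave: the rate only decreases from t = 0.
    # S-shaped: the rate first rises (learning phase), peaks at t = 1/b, then falls.
    ```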

    Validation of Ultrahigh Dependability for Software-Based Systems

    Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
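    One of the limits discussed — testing with stable reliability — can be quantified under a simple assumption: if failure times are exponential with rate λ, then demonstrating that rate at confidence c requires roughly -ln(1-c)/λ hours of failure-free operation. The sketch below (an illustration under that assumption, not an excerpt from the paper) shows why ultra-high targets such as 10⁻⁹ failures/hour are infeasible to validate by testing alone:

    ```python
    import math

    def required_failure_free_hours(target_rate, confidence):
        """Hours of failure-free operation needed so that, under an exponential
        failure-time model, P(observing no failure | true rate >= target_rate)
        drops below 1 - confidence."""
        return -math.log(1.0 - confidence) / target_rate

    # Demonstrating 1e-9 failures/hour at 99% confidence:
    hours = required_failure_free_hours(1e-9, 0.99)
    # hours is on the order of 4.6e9 -- roughly half a million years of testing.
    ```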

    Software Quality Assurance

    Telecom networks are composed of very complex software-controlled systems. In recent years, business and technology needs have been pushing vendors towards service agility, where they must continuously develop, deliver, and improve such software over very short cycles. Moreover, being critical infrastructure, telecom systems must meet important operational, legal, and regulatory requirements in terms of quality and performance to avoid outages. To ensure high-quality software, processes and models must be put in place to enable quick and easy decision making across the development cycle. In this chapter, we discuss the background and recent trends in software quality assurance. We then introduce BRACE: a cloud-based, fully automated tool for software defect prediction, reliability and availability modeling, and analytics. In particular, we discuss a novel Software Reliability Growth Modeling (SRGM) algorithm that is the core of BRACE. The algorithm provides defect prediction for both early and late stages of the software development cycle. To illustrate and validate the tool and algorithm, we also discuss key use cases, including actual defect and outage data from two large-scale software development projects for telecom products. BRACE is being successfully used by global teams on various large-scale software development projects.
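    The BRACE SRGM algorithm itself is not reproduced in this abstract; as a generic illustration of how any fitted SRGM yields a late-stage defect prediction, the sketch below uses hypothetical Goel-Okumoto parameters (a = expected total defects, b = detection rate) to estimate defects still latent at a given point in testing:

    ```python
    import math

    def expected_remaining_defects(a, b, t):
        """Under Goel-Okumoto, defects still latent at time t:
        a - m(t) = a * exp(-b * t)."""
        return a * math.exp(-b * t)

    # Hypothetical fitted values: 120 total expected defects, rate 0.08 per week.
    a, b = 120.0, 0.08
    remaining_at_release = expected_remaining_defects(a, b, 30.0)  # after 30 weeks
    # A release decision could compare this estimate against a quality threshold.
    ```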

    Quality Analysis of Software Applications using Software Reliability Growth Models and Deep Learning Models

    Finding faults in software is a very tedious task. Many software companies try to develop high-quality software that has no faults, so it is very important to analyze the errors, faults, and bugs that arise during software development. Software reliability growth models (SRGMs) help the software industry create quality software products. Quality is the software metric used to analyze the performance of a software product; a product with no errors or faults is considered the best. SRGMs are also utilized to analyze software quality based on the programming language. Deep Learning (DL) is a sub-domain of machine learning used to solve several complex issues in software development. Finding accurate patterns in software faults is difficult, and integrating SRGMs with DL approaches gives better results for software fault detection. Many real-world software fault datasets are available for evaluating DL approaches. The performance of the various integrated models is analyzed using quality metrics.

    Entropy based Software Reliability Growth Modelling for Open Source Software Evolution

    During Open Source Software (OSS) development, users submit "new features (NFs)", "feature improvements (IMPs)" and bugs to fix. A proportion of these issues get fixed before the next software release. During the introduction of NFs and IMPs, the source code files change, and a proportion of these source code changes may result in the generation of bugs. We have developed calendar-time and entropy-dependent mathematical models to represent the growth of OSS based on the rates at which NFs are added, IMPs are added, and bugs are introduced. The empirical validation has been conducted on five products, namely "Avro, Pig, Hive, jUDDI and Whirr" of the Apache open source project. We compared the proposed models with well-known reliability growth models, those of Goel and Okumoto (1979) and Yamada et al. (1983), and found that the proposed models exhibit better goodness of fit.
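    The entropy in such models is typically the Shannon entropy of how changes are distributed across source files: high entropy means changes are scattered over many files, low entropy means they are concentrated in a few. A hypothetical illustration (the paper's exact formulation may differ):

    ```python
    import math
    from collections import Counter

    def change_entropy(changed_files):
        """Shannon entropy (bits) of the distribution of changes across files."""
        counts = Counter(changed_files)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Hypothetical change logs for two releases:
    scattered = ["a.py", "b.py", "c.py", "d.py"]     # uniform over 4 files -> 2 bits
    concentrated = ["a.py", "a.py", "a.py", "a.py"]  # one file -> 0 bits
    ```

    A release with high change entropy touches many files at once, which such models associate with a higher bug-introduction rate.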