
    An Empirical analysis of Open Source Software Defects data through Software Reliability Growth Models

    The purpose of this study is to analyze the reliability growth of Open Source Software (OSS) using Software Reliability Growth Models (SRGM). The study uses defect data from twenty-five releases of five OSS projects. For each release of the selected projects, two types of datasets were created: datasets built from the defect creation date (created-date DS) and datasets built from the defect update date (updated-date DS). These defect datasets are modelled by eight SRGMs: Musa Okumoto, Inflection S-Shaped, Goel Okumoto, Delayed S-Shaped, Logistic, Gompertz, Yamada Exponential, and Generalized Goel. These models were chosen for their widespread use in the literature. The SRGMs are fitted to both types of defect datasets for each project, and their fitting and prediction capabilities are analysed to study OSS reliability growth with respect to defect creation and defect update time, since defect analysis can serve as a constructive reliability predictor. Results show that both the fitting capability and the prediction quality of the SRGMs improve when the defect creation date is used to build the OSS defect datasets. Hence, OSS reliability growth is better characterized by SRGMs if the defect creation date, rather than the defect update (fixing) date, is used when building OSS defect datasets for reliability modelling
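
    As an illustration of how such fitting is typically done, the sketch below fits the Goel Okumoto mean value function to a cumulative defect series with SciPy's curve_fit; the defect counts and starting values are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function: expected cumulative defects by time t,
# where a is the total expected number of defects and b the detection rate.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Illustrative weekly cumulative defect counts (not the study's data)
t = np.arange(1, 16)
defects = np.array([5, 12, 22, 30, 38, 44, 50, 54, 58, 61, 63, 65, 66, 67, 68])

params, _ = curve_fit(goel_okumoto, t, defects, p0=(80.0, 0.1))
a_hat, b_hat = params  # fitted total defects and detection rate
```

    Fitting quality can then be compared between created-date and updated-date datasets by, for example, the residual sum of squares of each fit.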

    Quality Analysis of Software Applications using Software Reliability Growth Models and Deep Learning Models

    Finding faults in software is a tedious task, and many software companies strive to develop high-quality, fault-free software. It is therefore important to analyze the errors, faults, and bugs that arise during software development. Software Reliability Growth Models (SRGMs) help the software industry create quality software products. Quality is the software metric used to analyze the performance of a software product; a product with no errors or faults is considered the best. SRGMs are also used to analyze software quality with respect to the programming language. Deep Learning (DL) is a sub-domain of machine learning used to solve several complex problems in software development. Finding accurate patterns in software faults is difficult, and integrating SRGMs with DL approaches yields better results for software fault detection. Many real-world software fault datasets are available for evaluating the DL approaches, and the performance of the various integrated models is analyzed using quality metrics
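
    As a sketch of the integration idea, the example below trains a small neural-network regressor on the early part of a cumulative fault series and predicts the remainder; the data are hypothetical, and scikit-learn's MLPRegressor stands in here for the deeper networks the paper discusses.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical cumulative fault counts per test week (illustrative only)
weeks = np.arange(1, 21, dtype=float).reshape(-1, 1)
faults = np.array([4, 9, 15, 22, 28, 33, 39, 43, 47, 50,
                   53, 55, 57, 59, 60, 61, 62, 63, 63, 64], dtype=float)

# Train on the first fifteen weeks, then predict the held-out tail,
# mirroring how SRGM prediction quality is usually assessed.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(weeks[:15], faults[:15])
predicted_tail = net.predict(weeks[15:])
```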

    Statistical modelling of software reliability

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety

    Software Reliability Issues: An Experimental Approach

    In this thesis, we present methodologies involving a data structure called the debugging graph whereby the predictive performance of software reliability models can be analyzed and improved under laboratory conditions. This procedure substitutes the averages of large sample sets for the single point samples normally used as inputs to these models and thus supports scrutiny of their performances with less random input data. Initially, we describe the construction of an extensive database of empirical reliability data which we derived by testing each partially debugged version of subject software represented by complete or partial debugging graphs. We demonstrate how these data can be used to assign relative sizes to known bugs and to simulate multiple debugging sessions. We then present the results from a series of proof-of-concept experiments. We show that controlling fault recovery order as represented by the data input to some well-known reliability models can enable them to produce more accurate predictions and can mitigate anomalous effects we attribute to manifestations of the fault interaction phenomenon. Since limited testing resources are common in the real world, we demonstrate the use of two approximation techniques, the surrogate oracle and path truncations, to render the application of our methodologies computationally feasible outside a laboratory setting. We report results which support the assertion that reliability data collected from just a partial debugging graph and subject to these approximations qualitatively agree with data collected under ideal laboratory conditions, provided one accounts for optimistic bias introduced by the surrogate in later prediction stages. We outline an algorithmic approach for using data derived from a partial debugging graph to improve software reliability predictions, and show its complexity to be no worse than O(n²). We summarize some outstanding questions as areas for future investigations of and improvements to the software reliability prediction process
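
    To make the debugging-graph structure concrete, here is a small sketch in the form described above: nodes are program versions identified by the subset of known faults already repaired, and an edge joins two versions that differ by the repair of exactly one fault. The representation is our own illustration, not the thesis's implementation.

```python
from itertools import combinations

def debugging_graph(faults):
    """Build a debugging graph over a set of known faults.

    Nodes are program versions, identified by the frozenset of faults
    already repaired; an edge joins two versions differing by exactly
    one repair (illustrative representation).
    """
    nodes = []
    for r in range(len(faults) + 1):
        nodes.extend(frozenset(c) for c in combinations(faults, r))
    edges = [(u, v) for u, v in combinations(nodes, 2) if len(u ^ v) == 1]
    return nodes, edges

nodes, edges = debugging_graph(["F1", "F2", "F3"])
# 2^3 = 8 versions (one per subset of repaired faults) and 12 edges
```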

    Open Source, Agile and Reliability Measures

    Since open source and agile development both work in some circumstances, particularly with regard to shorter and more frequent release policies, we ask whether the defect profile (reliability growth) found in open-source projects so far is typical of open-source software development or of software developed iteratively and incrementally. To investigate this, we examined an open-source web testing tool developed by a leading agile company. The analysis yields two findings. First, it supports the tentative finding that iteratively developed software does not exhibit standard reliability growth in defect modelling. Second, and somewhat surprisingly, the defect density is decreasing, a sign of improving quality, even though the usual measures of software reliability are not useful
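
    For reference, the defect-density measure mentioned above is conventionally normalized by code size per release; the paper does not spell out its exact normalization, so the standard form is shown.

```latex
\mathrm{defect\ density}_i = \frac{\text{defects reported against release } i}{\mathrm{KLOC}_i}
```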

    Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule
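
    As one concrete example of the kind of quantitative measure such a complexity analysis produces (the handbook's own tool set may differ), McCabe's cyclomatic number follows directly from a routine's control-flow graph:

```python
def cyclomatic_complexity(num_edges: int, num_nodes: int, components: int = 1) -> int:
    """McCabe's cyclomatic number V(G) = E - N + 2P for a control-flow
    graph with E edges, N nodes, and P connected components
    (P = 1 for a single routine)."""
    return num_edges - num_nodes + 2 * components

# A routine whose control-flow graph has 9 edges and 8 nodes:
assert cyclomatic_complexity(9, 8) == 3
```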

    Improving Software Performance in the Compute Unified Device Architecture

    This paper analyzes several aspects of improving software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management, through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix-transposing application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPU), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately there has been much interest in the literature in this type of optimization analysis, but none of the works so far (to the best of our knowledge) has tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales with these software performance improving techniques.
    Keywords: Compute Unified Device Architecture, Fermi Architecture, Naive Transpose, Coalesced Transpose, Shared Memory Copy, Loop in Kernel, Loop over Kernel
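
    The coalesced transpose the paper benchmarks follows a well-known tiling pattern; below is a sketch of that pattern in Numba's CUDA dialect (the paper itself works in native CUDA C, and the tile size and matrix shapes here are illustrative). Running it requires a CUDA-capable GPU.

```python
import numpy as np
from numba import cuda, float32

TILE = 32  # tile edge; one thread per element

@cuda.jit
def transpose_coalesced(a, out):
    # Stage a tile in shared memory so both the global read and the
    # global write are coalesced; the +1 column of padding avoids
    # shared-memory bank conflicts.
    tile = cuda.shared.array(shape=(TILE, TILE + 1), dtype=float32)

    x = cuda.blockIdx.x * TILE + cuda.threadIdx.x
    y = cuda.blockIdx.y * TILE + cuda.threadIdx.y
    if x < a.shape[1] and y < a.shape[0]:
        tile[cuda.threadIdx.y, cuda.threadIdx.x] = a[y, x]
    cuda.syncthreads()

    # Swap block indices for the write phase so output rows are
    # written contiguously.
    x = cuda.blockIdx.y * TILE + cuda.threadIdx.x
    y = cuda.blockIdx.x * TILE + cuda.threadIdx.y
    if x < out.shape[1] and y < out.shape[0]:
        out[y, x] = tile[cuda.threadIdx.x, cuda.threadIdx.y]

a = np.random.rand(1024, 2048).astype(np.float32)
out = np.zeros((2048, 1024), dtype=np.float32)
grid = ((a.shape[1] + TILE - 1) // TILE, (a.shape[0] + TILE - 1) // TILE)
transpose_coalesced[grid, (TILE, TILE)](a, out)
```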

    A Comparative Analysis of Software Reliability Growth Models using defects data of Closed and Open Source Software

    The purpose of this study is to compare the fitting (goodness of fit) and prediction capability of eight Software Reliability Growth Models (SRGM) using fifty different failure datasets. These datasets contain defect data collected from the system test phase, the operational phase (field defects), and Open Source Software (OSS) projects. The failure data are modelled by eight SRGMs (Musa Okumoto, Inflection S-Shaped, Goel Okumoto, Delayed S-Shaped, Logistic, Gompertz, Yamada Exponential, and Generalized Goel Model). These models were chosen due to their prevalence among software reliability models. The results can be summarized as follows:
    - Fitting capability: Musa Okumoto fits all datasets, while all models fit all the OSS datasets.
    - Prediction capability: Musa Okumoto, Inflection S-Shaped, and Goel Okumoto are the best predictors for industrial datasets; Gompertz and Yamada are the best predictors for OSS datasets.
    - Fitting and prediction capability: Musa Okumoto and Inflection are the best performers on industrial datasets, though only on slightly more than 50% of them; Gompertz and Inflection are the best performers on all OSS datasets
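
    For reference, the mean value functions m(t) (expected cumulative defects by time t) of the eight models are listed below in their standard textbook forms; the paper's exact parameterizations may differ.

```latex
\begin{align*}
\text{Goel Okumoto:}        \quad & m(t) = a\,(1 - e^{-bt}) \\
\text{Musa Okumoto:}        \quad & m(t) = \tfrac{1}{\theta}\ln(1 + \lambda_0 \theta t) \\
\text{Delayed S-Shaped:}    \quad & m(t) = a\,(1 - (1 + bt)\,e^{-bt}) \\
\text{Inflection S-Shaped:} \quad & m(t) = \frac{a\,(1 - e^{-bt})}{1 + \beta e^{-bt}} \\
\text{Logistic:}            \quad & m(t) = \frac{a}{1 + k e^{-bt}} \\
\text{Gompertz:}            \quad & m(t) = a\,e^{-b e^{-ct}} \\
\text{Yamada Exponential:}  \quad & m(t) = a\,(1 - e^{-r\alpha(1 - e^{-\beta t})}) \\
\text{Generalized Goel:}    \quad & m(t) = a\,(1 - e^{-b t^{c}})
\end{align*}
```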
