16 research outputs found

    Improving software reliability growth model selection ranking using particle swarm optimization

    Software reliability is closely tied to software failures, and a number of software reliability growth models (SRGMs) have been proposed over the past few decades to predict it. The differing characteristics of SRGMs have led to the study and practice of SRGM selection for different domains: an appropriate model must be chosen for a given domain so that the occurrence of software failures can be predicted accurately, which in turn helps estimate the overall project cost and delivery time. In this paper, particle swarm optimization (PSO) is used to optimize parameter estimation and a distance based approach (DBA) is used to produce an SRGM selection ranking. The study concluded that using PSO to optimize the SRGMs' parameters provided more accurate reliability predictions and improved model selection rankings. The model selection ranking methodology can help a software developer analyze and decide which SRGM to adopt during the testing phases. © 2005 - 2017 JATIT & LLS
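
    A minimal sketch of the PSO step, assuming a Goel-Okumoto SRGM, invented weekly failure counts, and standard PSO hyper-parameters (none of which are specified in the abstract): PSO searches the (a, b) parameter space to minimize the mean squared error between the model's mean value function and the observed cumulative failures.

```python
# Illustrative sketch only: the paper's exact PSO settings, SRGM choice and
# failure data are not given, so the model (Goel-Okumoto), the toy data and
# all hyper-parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy cumulative failure data: (week, cumulative failures observed)
t = np.arange(1, 11, dtype=float)
y = np.array([12, 21, 28, 34, 38, 41, 44, 46, 47, 48], dtype=float)

def goel_okumoto(params, t):
    """Mean value function m(t) = a * (1 - exp(-b * t))."""
    a, b = params
    return a * (1.0 - np.exp(-b * t))

def mse(params):
    """Fitness: mean squared error between model and observed data."""
    return np.mean((goel_okumoto(params, t) - y) ** 2)

def pso(n_particles=30, n_iter=200, bounds=((1.0, 200.0), (0.01, 2.0)),
        w=0.7, c1=1.5, c2=1.5):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([mse(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, mse(gbest)

params, err = pso()
print("estimated (a, b):", params, "MSE:", err)
```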

    SOFTWARE QUALITY: DUAL EXPERTS OPINION AND CONDITIONAL BASED AGGREGATION METHOD

    Software reliability is a significant factor in finding software failures during the software development life cycle; another is the quality of the software measurement process. These two factors largely determine whether the software can execute without failures across the development life cycle, yet software reliability and quality cannot be predicted accurately because failure detection is unsuccessful in certain scenarios. This paper focuses on improving software engineering metrics by using expert opinion in order to resolve software failures. Several problems arise when choosing software engineering measures; this paper addresses two of them. The first concerns the number of measures used to estimate software quality, which are chosen with the help of expert opinion. Because experts are human, they may have inadequate knowledge about some software evaluations; this is resolved by combining a first level and a second level of experts' opinion when selecting the best measures for software quality. The second concerns the data aggregation function, which is not suitable for large numbers of data aggregations; here a prioritized opinion is selected for data aggregation, with the prioritization based on the number of experts involved in each life-cycle phase of software development and the time taken to give the opinion. Finally, experimental results are shown for the software quality improvement achieved by the proposed framework.
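
    The aggregation function itself is not given in the abstract; the sketch below shows one plausible reading of a prioritized aggregation, assumed purely for illustration, in which each phase's opinion is weighted by the number of experts involved and penalized by the time they took to respond. The phase names, scores and counts are invented.

```python
# Hedged sketch: the paper's exact aggregation function is not specified in the
# abstract, so this illustrates one plausible prioritized weighting, with the
# priority derived from the number of experts per life-cycle phase and the time
# they needed to respond. All values below are invented for illustration.
def prioritized_aggregate(phase_scores):
    """phase_scores: list of (score_0_to_1, n_experts, hours_to_respond)."""
    # Priority rises with expert participation and falls with response time.
    weights = [n / max(hours, 1.0) for _, n, hours in phase_scores]
    total = sum(weights)
    return sum(w * s for w, (s, _, _) in zip(weights, phase_scores)) / total

opinions = [
    (0.80, 5, 24.0),   # requirements phase
    (0.65, 3, 48.0),   # design phase
    (0.90, 8, 12.0),   # testing phase
]
print("aggregated quality score:", round(prioritized_aggregate(opinions), 3))
```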

    A Comparative Analysis of Software Reliability Growth Models using defects data of Closed and Open Source Software

    The purpose of this study is to compare the fitting (goodness of fit) and prediction capability of eight Software Reliability Growth Models (SRGM) using fifty different failure data sets. These data sets contain defect data collected from the system test phase, the operational phase (field defects), and Open Source Software (OSS) projects. The failure data are modelled by eight SRGM (Musa Okumoto, Inflection S-Shaped, Goel Okumoto, Delayed S-Shaped, Logistic, Gompertz, Yamada Exponential, and Generalized Goel Model), chosen for their prevalence among software reliability models. The results can be summarized as follows. Fitting capability: Musa Okumoto fits all data sets, while all models fit all the OSS data sets. Prediction capability: Musa Okumoto, Inflection S-Shaped, and Goel Okumoto are the best predictors for industrial data sets; Gompertz and Yamada Exponential are the best predictors for OSS data sets. Combined fitting and prediction capability: Musa Okumoto and Inflection S-Shaped are the best performers on industrial data sets, although only on slightly more than 50% of them; Gompertz and Inflection S-Shaped are the best performers for all OSS data sets.
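
    As a hedged illustration of how fitting and prediction capability can be measured for one of these models, the sketch below fits the Musa Okumoto mean value function m(t) = a·ln(1 + b·t) to the first two thirds of an invented defect data set, reports R² as goodness of fit and the mean relative error on the held-out tail as prediction capability; the split and the metrics are assumptions, not necessarily those used in the study.

```python
# Sketch of a fit-vs-predict evaluation under stated assumptions: Musa-Okumoto
# mean value function, toy defect data, a 2/3 fit / 1/3 hold-out split,
# R^2 for fitting capability and relative error for prediction capability.
import numpy as np
from scipy.optimize import curve_fit

def musa_okumoto(t, a, b):
    return a * np.log(1.0 + b * t)

t = np.arange(1, 16, dtype=float)
y = np.array([10, 18, 25, 30, 34, 38, 41, 43, 45, 47, 48, 49, 50, 51, 52], float)

split = 10
p, _ = curve_fit(musa_okumoto, t[:split], y[:split], p0=(50.0, 0.5), maxfev=10000)

fitted = musa_okumoto(t[:split], *p)
ss_res = np.sum((y[:split] - fitted) ** 2)
ss_tot = np.sum((y[:split] - y[:split].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                                      # goodness of fit

predicted = musa_okumoto(t[split:], *p)
pred_err = np.mean(np.abs(predicted - y[split:]) / y[split:])   # prediction capability

print(f"a={p[0]:.2f} b={p[1]:.3f}  R^2={r2:.3f}  mean relative prediction error={pred_err:.3f}")
```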

    A Review of Software Reliability Testing Techniques

    In the era of intelligent systems, the safety and reliability of software have received increased attention. Software reliability testing is a significant method for ensuring the reliability, safety, and quality of software. Intelligent software technology has not only offered new opportunities but also posed challenges to software reliability technology. The focus of this paper is to explore software reliability testing technology under the impact of intelligent software technology. In this study, the basic theories of traditional and intelligent software reliability testing were investigated via related previous works, and a general software reliability testing framework was established. The technologies of software reliability testing were then analyzed, including reliability modeling, test case generation, reliability evaluation, testing criteria, and testing methods. Finally, the challenges and opportunities of software reliability testing technology are discussed.

    Reliability in open source software

    Open Source Software (OSS) is a component or an application whose source code is freely accessible and changeable by its users, subject to constraints expressed in a number of licensing modes. It implies a global alliance for developing quality software, with quick bug fixing and quick evolution of software features. In recent years the tendency toward adopting OSS in industrial projects has increased swiftly. Many commercial products use OSS in fields such as embedded systems, web management systems, and mobile software, and many OSS components are modified and adopted in software products. According to the Netcraft survey, more than 58% of web servers use an open source web server, Apache. The swift increase in the adoption of open source technology is due to its availability and affordability. Recent empirical research published by Forrester highlighted that although many European software companies have a clear OSS adoption strategy, there are fears and questions about the adoption, and these concerns can be traced back to the quality and reliability of OSS.

    Reliability is one of the more important characteristics of software quality when software is considered for commercial use. It is defined as the probability of failure-free operation of software for a specified period of time in a specified environment (IEEE Std. 1633-2008). While open source projects routinely provide information about community activity, the number of developers, and the number of users or downloads, this is not enough to convey information about reliability. Software reliability growth models (SRGM) are frequently used in the literature for the characterization of reliability in industrial software; these models assume that reliability grows after a defect has been detected and fixed. SRGM are a prominent class of software reliability models (SRM): an SRM is a mathematical expression that specifies the general form of the software failure process as a function of factors such as fault introduction, fault removal, and the operational environment. Due to defect identification and removal, the failure rate (failures per unit of time) of a software system generally decreases over time. Software reliability modeling estimates the shape of the failure rate curve by statistically estimating the parameters of the selected model. The purpose of this measure is twofold: 1) to estimate the extra test time required to meet a specified reliability objective and 2) to identify the expected reliability of the software after release (IEEE Std. 1633-2008). SRGM can be applied to guide the test board in its decision of whether to stop or continue testing.

    These models are grouped into concave and S-shaped models on the basis of their assumptions about the cumulative failure occurrence pattern. The S-shaped models assume that the cumulative number of failures follows an S-shaped curve: initially the testers are not familiar with the product, so faults are removed slowly; as the testers' skills improve, the rate of uncovering defects increases quickly, and it then levels off as the residual errors become more difficult to remove. In the concave models, the failure intensity reaches its peak early and is then expected to decrease approximately exponentially, so the cumulative failure curve bends over from the start.
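
    As a concrete illustration of the two families, the short sketch below evaluates two standard NHPP mean value functions, Goel-Okumoto (concave) and Delayed S-shaped; using these two as representatives is an assumption for illustration, since the text above does not single out specific equations.

```python
# Two standard NHPP mean value functions, contrasting the model families
# described above (representatives assumed here for illustration only).
import numpy as np

def goel_okumoto(t, a, b):
    """Concave: m(t) = a(1 - e^{-bt}); the failure intensity decays from t = 0."""
    return a * (1.0 - np.exp(-b * t))

def delayed_s_shaped(t, a, b):
    """S-shaped: m(t) = a(1 - (1 + bt)e^{-bt}); the intensity first rises
    (testers learning the product), peaks, then decays as residual faults
    become harder to find."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

t = np.linspace(0, 20, 5)
print("concave :", np.round(goel_okumoto(t, a=100, b=0.3), 1))
print("S-shaped:", np.round(delayed_s_shaped(t, a=100, b=0.3), 1))
```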
    From an exhaustive study of the literature, I identified three research gaps: SRGM have widely been used for reliability characterization of closed source software (CSS), but 1) there is no universally applicable model that can be applied in all cases, 2) the applicability of SRGM to OSS is unclear, and 3) there is no agreement on how to select the best model among several alternatives, and no specific empirical methodologies have been proposed, especially for OSS. My PhD work focuses on these three gaps. In the first step, addressing the first gap, I comparatively analyzed eight SRGM, including Musa Okumoto, Inflection S-Shaped, Goel Okumoto, Delayed S-Shaped, Logistic, Gompertz and Generalized Goel, in terms of their fitting and prediction capabilities. These models were selected because they are widely used and the most representative of their categories. For this study 38 failure data sets from 38 projects were used: 6 OSS and 32 CSS. Of the 32 CSS data sets, 22 were from the testing phase and the remaining 10 from the operational phase (i.e. the field). The outcomes show that Musa Okumoto remains the best for CSS projects while Inflection S-Shaped and Gompertz remain the best for OSS projects. Apart from that, we observe that concave models perform better on CSS and S-shaped models on OSS projects. In the second step, addressing the second gap, the reliability growth of OSS projects was compared with that of CSS projects. For this purpose 25 OSS and 22 CSS projects were selected along with their defect data. Eight SRGM were fitted to the defect data of the selected projects and the reliability growth was analyzed with respect to the fitted models. I found that all of the selected models fitted the OSS defect data in the same manner as the CSS data, which confirms that the reliability of OSS projects grows similarly to that of CSS projects; again, however, S-shaped models performed better for OSS and concave models for CSS. To address the third gap I proposed a method that selects the best SRGM among several alternative models for predicting the residual defects of an OSS. The method helps practitioners decide whether or not to adopt an OSS component in a project. We tested the method empirically by applying it to twenty-one different releases of seven OSS projects. The validation results show that the method selects the best model 17 times out of 21; in the remaining four cases it selects the second best model.
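
    The abstract does not reproduce the selection criterion itself, so the following sketch only illustrates the general idea of choosing among alternative SRGM by held-out prediction error: each candidate model is fitted to the early portion of a defect data set and the model with the lowest error on the remaining observations is selected. The candidate models, toy data, and train/test split are assumptions.

```python
# Illustrative sketch only: fit each candidate SRGM on the early portion of a
# defect data set and pick the model with the smallest prediction error on the
# held-out tail. Candidates, data and split fraction are assumptions.
import numpy as np
from scipy.optimize import curve_fit

CANDIDATES = {
    "Goel-Okumoto":     lambda t, a, b: a * (1 - np.exp(-b * t)),
    "Delayed S-shaped": lambda t, a, b: a * (1 - (1 + b * t) * np.exp(-b * t)),
    "Musa-Okumoto":     lambda t, a, b: a * np.log(1 + b * t),
}

def select_best(t, y, train_frac=0.66):
    split = int(len(t) * train_frac)
    scores = {}
    for name, model in CANDIDATES.items():
        try:
            p, _ = curve_fit(model, t[:split], y[:split],
                             p0=(max(y), 0.1), maxfev=10000)
            scores[name] = np.mean((model(t[split:], *p) - y[split:]) ** 2)
        except RuntimeError:      # model failed to converge on this data set
            scores[name] = np.inf
    return min(scores, key=scores.get), scores

t = np.arange(1, 16, dtype=float)
y = np.array([5, 12, 22, 34, 44, 52, 58, 62, 65, 67, 69, 70, 71, 72, 72], float)
best, scores = select_best(t, y)
print("selected model:", best)
```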

    Vulnerability discovery in multiple version software systems: open source and commercial software systems

    Department Head: L. Darrell Whitley. 2007 Summer. Includes bibliographical references (pages 80-83). The vulnerability discovery process for a program describes the rate at which vulnerabilities are discovered. A model of the discovery process can be used to estimate the number of vulnerabilities likely to be discovered in the near future. Past studies have considered vulnerability discovery only for individual software versions, without considering the impact of shared code among successive versions or the evolution of the source code. These factors need to be taken into account to estimate the future vulnerability discovery trend more accurately. This thesis examines possible approaches for taking these factors into account, building on previous work, and applies them to the vulnerability discovery process. We examine a new approach for quantitatively modeling the vulnerability discovery process, based on shared source code measurements among a multiple version software system. The applicability of the approach is examined using the Apache HTTP web server and the MySQL Database Management System (DBMS). The results of this approach show better goodness of fit than the fitting results of previous research. Using this revised vulnerability discovery process, the superposition effect, an unexpected pattern of vulnerability discovery in previous research, could be captured by the discovery model. The multiple version vulnerability discovery model (MVDM) shows that the vulnerability discovery rate differs from the single-version vulnerability discovery model's (SVDM) rate because of the newly considered factors. From these results, we created and applied a new SVDM for open source and commercial software. This single-version discovery process is examined, and the model testing results show that the SVDM can be an alternative modeling approach. The modified vulnerability discovery model is presented to address weaknesses of previous research, and the theoretical modeling is discussed for a more accurate explanation.
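
    The abstract gives no model equations, so the sketch below uses the Alhazmi-Malaiya Logistic (AML) model, a commonly cited single-version vulnerability discovery model, purely as an assumed illustration of fitting cumulative vulnerability counts and projecting the near-future total; the data and starting parameters are invented and this is not claimed to be the thesis' model.

```python
# Hedged sketch: fitting the Alhazmi-Malaiya Logistic (AML) vulnerability
# discovery model to toy data (assumed illustration, not the thesis' model).
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Cumulative vulnerabilities Omega(t) = B / (B*C*exp(-A*B*t) + 1)."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

# Toy data: months since release vs. cumulative vulnerabilities reported.
t = np.arange(1, 25, dtype=float)
y = np.array([1, 2, 4, 6, 9, 13, 18, 24, 30, 36, 41, 45,
              48, 51, 53, 55, 56, 57, 58, 58, 59, 59, 60, 60], float)

p, _ = curve_fit(aml, t, y, p0=(0.01, 70.0, 0.5), maxfev=20000)
print("A=%.4f  B=%.1f  C=%.3f" % tuple(p))
print("predicted total after 36 months:", round(aml(36.0, *p), 1))
```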

    Software Reliability Growth Models from the Perspective of Learning Effects and Change-Point

    Increased attention to the reliability of software systems has led to thorough analysis of the reliability growth process for predicting and assessing software reliability in the testing or debugging phase. With many frameworks available in terms of the underlying probability distributions, such as the Poisson process, the Non-Homogeneous Poisson Process (NHPP), Weibull, etc., many researchers have developed models using the NHPP analytical framework. The behavior of interest is usually S-shaped or exponential; S-shaped behavior relates more closely to human learning. The need to develop different models stems from the fact that the underlying environment, the learning effect acquired during testing, the resource allocation, the application, and the failure data itself all vary: there is no universal model that fits everywhere and could be called an oracle. Learning effects that stem from the experience of the testing or debugging staff have been considered in modeling the growth of reliability. Learning varies over time, which calls for more research on learning effects. Digital copy of thesis, University of Kashmir.
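
    Since the abstract names the NHPP framework and a change-point but gives no equations, the sketch below shows one common formulation assumed purely for illustration: a Goel-Okumoto style mean value function whose fault detection rate switches from b1 to b2 at a change-point tau, for example when the testing staff's learning improves.

```python
# Hedged illustration of an NHPP mean value function with a change-point
# (formulation assumed for illustration; not claimed to be the thesis' model).
import numpy as np

def m_change_point(t, a, b1, b2, tau):
    """Expected cumulative failures: detection rate b1 before tau, b2 after."""
    t = np.asarray(t, dtype=float)
    before = a * (1.0 - np.exp(-b1 * t))
    after = a * (1.0 - np.exp(-b1 * tau - b2 * (t - tau)))  # continuous at tau
    return np.where(t <= tau, before, after)

t = np.linspace(0, 30, 7)
print(np.round(m_change_point(t, a=100, b1=0.05, b2=0.20, tau=10), 1))
```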