
    Software reliability prediction using SPN

    Reliability is an important software quality parameter. In this research, a component reliability model based on SPN is proposed for the computation of software reliability. An isomorphic Markov chain is obtained from the component SPN model, and a quantitative reliability prediction method is proposed. The component reliability value is calculated from the transition cumulative probability distribution of the Markov chain obtained from the software SPN model. For reliability prediction of the whole software, we introduce CRMPN, in which the states are component reliability models and the transitions are marked with component reliabilities. With this approach, more complex software can be simplified and its reliability evaluated effectively. An example is provided to demonstrate the feasibility and applicability of our method. Keywords: Reliability, SPN, Markov Chain, Component-based software
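    As a rough illustration of the Markov-chain step (not the paper's exact CRMPN formulation), the following Python sketch computes a component reliability from a DTMC derived from an SPN model as the probability of absorption in a "correct completion" state; the state names and transition probabilities are hypothetical.

        import numpy as np

        # Hypothetical DTMC derived from a component's SPN model.
        # Transient states: S0 (start), S1 (processing).
        # Absorbing states: OK (correct completion), FAIL (failure).
        states = ["S0", "S1", "OK", "FAIL"]
        P = np.array([
            [0.0, 0.95, 0.00, 0.05],   # S0 -> S1 or FAIL
            [0.0, 0.00, 0.98, 0.02],   # S1 -> OK or FAIL
            [0.0, 0.00, 1.00, 0.00],   # OK is absorbing
            [0.0, 0.00, 0.00, 1.00],   # FAIL is absorbing
        ])

        Q = P[:2, :2]                        # transitions among transient states
        R = P[:2, 2:]                        # transitions into absorbing states
        N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
        B = N @ R                            # absorption probabilities

        # Component reliability: probability of ending in OK when execution starts in S0.
        reliability = B[0, states.index("OK") - 2]
        print(f"component reliability = {reliability:.4f}")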

    APPLICATION AND REFINEMENTS OF THE REPS THEORY FOR SAFETY CRITICAL SOFTWARE

    With the replacement of old analog control systems with software-based digital control systems, there is an urgent need for developing a method to quantitatively and accurately assess the reliability of safety critical software systems. This research focuses on proposing a systematic software metric-based reliability prediction method. The method starts with the measurement of a metric. Measurement results are then either directly linked to software defects through inspections and peer reviews or indirectly linked to software defects through empirical software engineering models. Three types of defect characteristics can be obtained, namely, 1) the number of defects remaining, 2) the number and the exact location of the defects found, and 3) the number and the exact location of defects found in an earlier version. Three models, namely Musa's exponential model, the PIE model, and a mixed Musa-PIE model, are then used to link each of the three categories of defect characteristics with reliability, respectively. In addition, the use of the PIE model requires mapping identified defects to an Extended Finite State Machine (EFSM) model. A procedure that can assist in the construction of the EFSM model and increase its repeatability is also provided. This metric-based software reliability prediction method is then applied to a safety-critical software system used in the nuclear industry using eleven software metrics. Reliability prediction results are compared with the real reliability assessed by using operational failure data. Experiences and lessons learned from the application are discussed. Based on the results and findings, four software metrics are recommended. This dissertation then focuses on one of the four recommended metrics, Test Coverage. A reliability prediction model based on Test Coverage is discussed in detail, and this model is further refined to take into consideration more realistic conditions, such as imperfect debugging and the use of multiple testing phases.
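    As a toy illustration of the exponential-model style link between a defect-count estimate and reliability (not the dissertation's calibrated REPS models), a short Python sketch follows; the remaining-defect count and the per-fault hazard rate are assumed placeholder values.

        import math

        # Hypothetical outputs of metric-based defect estimation.
        estimated_remaining_defects = 12     # e.g., from inspections or an empirical model
        per_fault_hazard_rate = 2.0e-6       # failures per defect per second (assumed)

        # Exponential-model style link: failure intensity proportional to remaining defects.
        failure_intensity = estimated_remaining_defects * per_fault_hazard_rate

        # Reliability over a mission time t, assuming a constant failure intensity.
        mission_time = 8 * 3600              # an 8-hour mission, in seconds
        reliability = math.exp(-failure_intensity * mission_time)
        print(f"lambda = {failure_intensity:.2e}/s, R({mission_time} s) = {reliability:.4f}")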

    An Analysis of Early Software Reliability Improvement Techniques

    This research explores early life cycle software reliability prediction models or techniques to predict the reliability of software prior to writing code, and a method for increasing or improving the reliability of software products early in the development life cycle. Five prediction models and two development techniques are examined. Each model is statically analyzed in terms of availability of data early in the life cycle, ease of data collection, and whether the data is currently collected. One model and the two techniques satisfied those requirements and are further analyzed for their ability to predict or improve software reliability. While the researchers offer no significant statistical results on the model's ability to predict software reliability, important conclusions are drawn about the cost and time savings of using inspections as a means of improving software reliability. The results indicate that the current software development paradigm needs to be changed to use the Cleanroom Software Development Process for future software development. This proactive approach to developing reliable software saves development and testing costs. One obvious benefit of this research is that cost savings realized earlier in the software development cycle have a dramatic effect on making software development practices better and more efficient.

    A Hierarchical Framework for Estimating Heterogeneous Architecture-based Software Reliability

    Problem. The composite model approach that follows a DTMC process with a constant failure rate is not analytically tractable for improving its method of solution for estimating software reliability. In this case, a hierarchical approach is preferred to improve the accuracy of the method of solution for estimating reliability. Very few studies have been conducted on heterogeneous architecture-based software reliability, and those that have been done use the composite model for reliability estimation. To my knowledge, no research has been done where a hierarchical approach is taken to estimate heterogeneous architecture-based software reliability. This paper explores the use and effectiveness of a hierarchical framework to estimate heterogeneous architecture-based software reliability. -- Method. Concepts of reliability and reliability prediction models for heterogeneous software architectures were surveyed. The different architectural styles were identified as batch-sequential, parallel filter, fault tolerance, and call and return. A method for evaluating these four styles solely on the basis of transition probability was proposed. Four case studies were selected from similar published research to test the effectiveness of the proposed hierarchical framework. The study assumes that the method of extracting the information about the software architecture was accurate and that the actual reliability values of the systems used were free of errors. -- Results. The percentage difference between the reliability estimated by the proposed hierarchical framework and the actual reliability was 5.12%, 11.09%, 0.82%, and 52.14% for Cases 1, 2, 3, and 4, respectively. The proposed hierarchical framework did not work for Case 4, which showed much higher component utilization and therefore higher interactions between components when compared with the other cases. -- Conclusions. The proposed hierarchical framework generally showed close agreement with the actual reliability of the software systems used in the case studies. However, the results obtained by the proposed hierarchical framework were in disagreement with the actual reliability for Case 4. This is due to the higher component interactions in Case 4 compared with the other cases, and it shows that there are limits to the extent to which the proposed hierarchical framework can be applied. The reasons for the limitations of the hierarchical approach have not been cited in any research on the subject matter. Even with these limitations, the hierarchical framework for estimating heterogeneous architecture-based software reliability can still be applied when high accuracy is not required and interactions among components in the software system are not too high. Thesis (M.S.) -- Andrews University, College of Arts and Sciences, 201
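    To make the hierarchical idea concrete (component reliabilities estimated first, then composed at the architecture level), here is a simplified Python sketch for two of the styles mentioned, batch-sequential and parallel filter; the component values and the composition rules are illustrative assumptions, not the thesis's exact framework.

        from math import prod

        # Hypothetical component reliabilities estimated at the lower level of the hierarchy.
        component_reliability = {"parser": 0.999, "analyzer": 0.995, "reporter": 0.998}

        def batch_sequential(reliabilities):
            # Batch-sequential: every component in the chain must succeed.
            return prod(reliabilities)

        def parallel_filter(reliabilities):
            # Treated here as redundant filters: the stage fails only if all instances fail.
            return 1.0 - prod(1.0 - r for r in reliabilities)

        # Architecture level: a sequential pipeline whose middle stage is a parallel filter
        # made of two redundant analyzer instances.
        analyzer_stage = parallel_filter([component_reliability["analyzer"]] * 2)
        system = batch_sequential([component_reliability["parser"],
                                   analyzer_stage,
                                   component_reliability["reporter"]])
        print(f"estimated system reliability = {system:.5f}")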

    A Software Reliability Model Combining Representative and Directed Testing

    Traditionally, software reliability models have required that failure data be gathered using only representative testing methods. Over time, however, representative testing becomes inherently less effective as a means of improving the actual quality of the software under test. Additionally, the use of failure data based on observations made during representative testing has been criticized because of the statistical noise inherent in this type of data. In this dissertation, a testing method is proposed to make reliability testing more efficient and accurate. Representative testing is used early, when the rate of fault revelation is high. Directed testing is used later in testing to take advantage of its faster rate of fault detection. To make use of the test data from this mixed-method approach to testing, a software reliability model is developed that permits reliability estimates to be made regardless of the testing method used to gather failure data. The key to being able to combine data from both representative testing and directed testing is shifting the random variable used by the model from observed interfailure times to a postmortem analysis of the debugged faults and using order statistics to combine the observed failure rates of faults no matter how those faults were detected. This shift from interfailure times removes the statistical noise associated with the use of this measure, which should allow models to provide more accurate estimates and predictions. Several experiments were conducted during the course of this research. The results from these experiments show that using the mixed-method approach to testing with the new model provides reliability estimates that are at least as good as estimates from existing models under representative testing, while requiring fewer test cases. The results of this work also show that the high level of noise present in failure data based on observed failure times makes it very difficult for models that use this type of data to make accurate reliability estimates. These findings support the suggested move to the use of more stable quantities for reliability estimation and prediction.
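    A small Python sketch of the underlying intuition, using per-fault failure rates estimated post-mortem so that faults found by representative and by directed testing can be pooled; the rates and the independent-exponential reliability calculation are illustrative assumptions, not the dissertation's model.

        import math

        # Hypothetical per-fault failure rates (failures per hour), estimated post-mortem
        # for each debugged fault, regardless of how the fault was detected.
        debugged_fault_rates = {
            "F1": 0.020,   # found by representative testing
            "F2": 0.008,   # found by directed testing
            "F3": 0.001,   # found by directed testing
        }

        # Order-statistics view: consider the rates sorted from largest to smallest.
        ordered = sorted(debugged_fault_rates.values(), reverse=True)

        # Reliability growth as faults are removed, assuming independent exponential faults.
        t = 10.0   # mission time in hours
        for k in range(len(ordered) + 1):
            remaining_rate = sum(ordered[k:])   # rates of faults not yet removed
            print(f"after removing {k} fault(s): R({t} h) = {math.exp(-remaining_rate * t):.3f}")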

    Risk Analysis and Reliability Improvement of Mechanistic-Empirical Pavement Design

    Reliability as used in the Mechanistic-Empirical Pavement Design Guide (MEPDG) is an aggregate indicator, defined as the probability that each of the key distress types and smoothness will be less than a selected critical level over the design period. For a system as complex as the MEPDG, which does not have closed-form design equations, classic reliability methods are not applicable. A robust reliability analysis can rely on Monte Carlo Simulation (MCS). The ultimate goal of this study was to improve the reliability model of the MEPDG using surrogate modeling techniques and Monte Carlo simulation. To achieve this goal, four tasks were accomplished in this research. First, local calibration using 38 pavement sections was completed to reduce the system bias and dispersion of the nationally calibrated MEPDG. Second, uncertainty and risk in the MEPDG were identified using Hierarchical Holographic Modeling (HHM). To determine the critical factors affecting pavement performance, this study applied not only the traditional sensitivity analysis method but also a risk assessment method using the Analytic Hierarchy Process (AHP). Third, response surface models were built to provide a rapid solution of distress prediction for alligator cracking, rutting, and smoothness. Fourth, a new reliability model based on Monte Carlo Simulation was proposed. Using the surrogate models, 10,000 Monte Carlo simulation runs were computed in minutes to develop the output ensemble, from which the predicted distresses at any reliability level were readily available. The method, including all data and algorithms, was packaged in a user-friendly software tool named ReliME. A comparison between the AASHTO 1993 Guide, the MEPDG, and ReliME is presented in three case studies. It was found that the smoothness model in the MEPDG had an extremely high level of variation. The product of this study is a consistent reliability model specific to local conditions, construction practices, and specifications. This framework also demonstrates the feasibility of adopting Monte Carlo Simulation for reliability analysis in future mechanistic-empirical pavement design software.
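    A compact Python sketch of the reliability mechanism described above: draw random design inputs, evaluate a fast surrogate (a made-up polynomial response surface standing in for the calibrated distress models), build the output ensemble, and read reliability as the probability that the distress stays below its critical level. The coefficients, input distributions, and threshold are illustrative assumptions, not values from ReliME.

        import numpy as np

        rng = np.random.default_rng(42)
        n_sim = 10_000

        # Hypothetical random design inputs.
        aadt = rng.normal(20_000, 3_000, n_sim)               # traffic volume
        hma_thickness = rng.normal(6.0, 0.4, n_sim)           # asphalt thickness, inches
        subgrade_modulus = rng.normal(12_000, 1_500, n_sim)   # resilient modulus, psi

        # Made-up response surface standing in for the calibrated rutting model (inches).
        rut = (0.15 + 1.2e-5 * aadt - 0.03 * hma_thickness - 4.0e-6 * subgrade_modulus
               + rng.normal(0.0, 0.02, n_sim))                # residual scatter

        critical_rut = 0.25   # inches, illustrative critical level
        reliability = np.mean(rut < critical_rut)
        print(f"P(rutting < {critical_rut} in) = {reliability:.3f}")

        # Predicted rutting at a 90% reliability level: the 90th percentile of the ensemble.
        print(f"rutting at 90% reliability = {np.percentile(rut, 90):.3f} in")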

    Demonstration of a Response Time Based Remaining Useful Life (RUL) Prediction for Software Systems

    Prognostics and Health Management (PHM) has been widely applied to hardware systems in the electronics and non-electronics domains but has not been explored for software. While software does not decay over time, it can degrade over release cycles. Software health management is confined to diagnostic assessments that identify problems, whereas prognostic assessment potentially indicates when in the future a problem will become detrimental. Relevant research areas such as software defect prediction, software reliability prediction, predictive maintenance of software, software degradation, and software performance prediction exist, but all of these represent diagnostic models built upon historical data, none of which can predict an RUL for software. This paper addresses the application of PHM concepts to software systems for fault prediction and RUL estimation. Specifically, this paper addresses how PHM can be used to make decisions for software systems such as version update and upgrade, module changes, system reengineering, rejuvenation, maintenance scheduling, budgeting, and total abandonment. This paper presents a method to prognostically and continuously predict the RUL of a software system based on usage parameters (e.g., the numbers and categories of releases) and performance parameters (e.g., response time). The model developed has been validated by comparing actual data with the results generated by the predictive models. Statistical validation (regression validation and k-fold cross-validation) has also been carried out. A case study, based on publicly available data for the Bugzilla application, is presented. This case study demonstrates that PHM concepts can be applied to software systems and that RUL can be calculated to make system management decisions. Comment: This research methodology has opened up new and practical applications in the software domain. In the coming decades, we can expect a significant amount of attention and practical implementation in this area worldwide.
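    An illustrative Python sketch of one way a response-time-based RUL could be computed (not the paper's validated model): fit a degradation trend to response time across releases and extrapolate to the release at which a performance threshold would be crossed. The data points and threshold are invented for the example.

        import numpy as np

        # Hypothetical mean response time (ms) observed over successive releases.
        release_index = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
        response_ms = np.array([120, 124, 131, 137, 145, 150, 158, 166], dtype=float)

        # Fit a linear degradation trend (a simple stand-in for the predictive model).
        slope, intercept = np.polyfit(release_index, response_ms, 1)

        threshold_ms = 220.0   # response time considered detrimental (assumed)
        crossing_release = (threshold_ms - intercept) / slope

        rul_releases = crossing_release - release_index[-1]
        print(f"degradation = {slope:.1f} ms/release; "
              f"RUL = {rul_releases:.1f} releases until {threshold_ms:.0f} ms is reached")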

    Reliability in open source software

    Open Source Software (OSS) is a component or an application whose source code is freely accessible and changeable by the users, subject to constraints expressed in a number of licensing modes. It implies a global alliance for developing quality software with quick bug fixing along with quick evolution of the software features. In recent years the tendency toward adoption of OSS in industrial projects has swiftly increased. Many commercial products use OSS in various fields such as embedded systems, web management systems, and mobile software. In addition, many OSS components are modified and adopted in software products. According to the Netcraft survey, more than 58% of web servers use an open source web server, Apache. The swift increase in the adoption of open source technology is due to its availability and affordability. Recent empirical research published by Forrester highlighted that although many European software companies have a clear OSS adoption strategy, there are fears and questions about the adoption. All these fears and concerns can be traced back to the quality and reliability of OSS. Reliability is one of the more important characteristics of software quality when considered for commercial use. It is defined as the probability of failure-free operation of software for a specified period of time in a specified environment (IEEE Std. 1633-2008). While open source projects routinely provide information about community activity, the number of developers, and the number of users or downloads, this is not enough to convey information about reliability. Software reliability growth models (SRGM) are frequently used in the literature for the characterization of reliability in industrial software. These models assume that reliability grows after a defect has been detected and fixed. SRGM are a prominent class of software reliability models (SRM). An SRM is a mathematical expression that specifies the general form of the software failure process as a function of factors such as fault introduction, fault removal, and the operational environment. Due to defect identification and removal, the failure rate (failures per unit of time) of a software system generally decreases over time. Software reliability modeling is done to estimate the form of the curve of the failure rate by statistically estimating the parameters associated with the selected model. The purpose of this measure is twofold: 1) to estimate the extra test time required to meet a specified reliability objective and 2) to identify the expected reliability of the software after release (IEEE Std. 1633-2008). SRGM can be applied to guide the test board in their decision of whether to stop or continue the testing. These models are grouped into concave and S-Shaped models on the basis of their assumptions about the cumulative failure occurrence pattern. The S-Shaped models assume that the occurrence pattern of the cumulative number of failures is S-Shaped: initially the testers are not familiar with the product, then they become more familiar and hence there is a slow increase in fault removal. As the testers' skills improve, the rate of uncovering defects increases quickly and then levels off as the residual errors become more difficult to remove. In the concave-shaped models the increase in failure intensity reaches a peak before a decrease in the failure pattern is observed. Therefore, the concave models indicate that the failure intensity is expected to decrease exponentially after a peak is reached.
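    To make the concave vs. S-Shaped distinction concrete, the Python sketch below fits one model of each family, the concave Goel-Okumoto mean value function mu(t) = a(1 - exp(-b t)) and the Delayed S-Shaped function mu(t) = a(1 - (1 + b t) exp(-b t)), to a cumulative failure count series; the failure data are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical cumulative failures observed at the end of each testing week.
        t = np.arange(1, 13, dtype=float)
        cum_failures = np.array([4, 9, 15, 22, 27, 32, 35, 38, 40, 41, 42, 43], dtype=float)

        def goel_okumoto(t, a, b):           # concave mean value function
            return a * (1.0 - np.exp(-b * t))

        def delayed_s_shaped(t, a, b):       # S-shaped mean value function
            return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

        for name, model in [("Goel-Okumoto", goel_okumoto),
                            ("Delayed S-Shaped", delayed_s_shaped)]:
            params, _ = curve_fit(model, t, cum_failures, p0=(50.0, 0.2), maxfev=10_000)
            sse = np.sum((cum_failures - model(t, *params)) ** 2)
            print(f"{name}: a = {params[0]:.1f} (expected total faults), "
                  f"b = {params[1]:.3f}, SSE = {sse:.1f}")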
From an exhaustive study of the literature, I identified three research gaps: SRGM have widely been used for reliability characterization of closed source software (CSS), but 1) there is no universally applicable model that can be applied in all cases, 2) the applicability of SRGM to OSS is unclear, and 3) there is no agreement on how to select the best model among several alternative models, and no specific empirical methodologies have been proposed, especially for OSS. My PhD work mainly focuses on these three research gaps. In the first step, focusing on the first research gap, I comparatively analyzed eight SRGM, including Musa Okumoto, Inflection S-Shaped, Goel Okumoto, Delayed S-Shaped, Logistic, Gompertz, and Generalized Goel, in terms of their fitting and prediction capabilities. These models were selected due to their widespread use and because they are the most representative in their categories. For this study, 38 failure datasets of 38 projects were used. Among the 38 projects, 6 were OSS and 32 were CSS. Of the 32 CSS datasets, 22 were from the testing phase and the remaining 10 were from the operational phase (i.e., the field). The outcomes show that Musa Okumoto remains the best for CSS projects while Inflection S-Shaped and Gompertz remain the best for OSS projects. Apart from that, we observe that concave models perform better for CSS projects and S-Shaped models perform better for OSS projects. In the second step, focusing on the second research gap, the reliability growth of OSS projects was compared with that of CSS projects. For this purpose, 25 OSS and 22 CSS projects were selected together with the related defect data. Eight SRGM were fitted to the defect data of the selected projects and the reliability growth was analyzed with respect to the fitted models. I found that all the selected models fitted the defect data of OSS projects in the same manner as that of CSS projects, which confirms that the reliability of OSS projects grows similarly to that of CSS projects. However, I observed that for OSS the S-Shaped models perform better and for CSS the concave-shaped models perform better. To address the third research gap, I proposed a method that selects the best SRGM among several alternative models for predicting the residual defects of an OSS. The method helps practitioners decide whether or not to adopt an OSS component in a project. We tested the method empirically by applying it to twenty-one different releases of seven OSS projects. From the validation results it is clear that the method selects the best model 17 times out of 21; in the remaining four cases it selects the second-best model.
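    The selection method itself is specific to this thesis, but one simple way to operationalize "pick the best SRGM for prediction" is sketched below in Python: fit each candidate model on the first part of the defect history and rank the candidates by their relative error on the held-out tail. The defect data and the two candidate models are placeholders, not the thesis's procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def goel_okumoto(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        def delayed_s_shaped(t, a, b):
            return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

        # Hypothetical cumulative defect counts for one OSS release, by week.
        t = np.arange(1, 21, dtype=float)
        y = np.array([2, 6, 12, 20, 29, 37, 44, 50, 55, 59,
                      62, 64, 66, 67, 68, 69, 70, 70, 71, 71], dtype=float)

        split = 14                    # fit on the first 14 weeks, predict the remaining 6
        candidates = {"Goel-Okumoto": goel_okumoto, "Delayed S-Shaped": delayed_s_shaped}
        scores = {}
        for name, model in candidates.items():
            params, _ = curve_fit(model, t[:split], y[:split], p0=(80.0, 0.2), maxfev=10_000)
            pred = model(t[split:], *params)
            scores[name] = np.mean(np.abs(pred - y[split:]) / y[split:])   # mean relative error

        best = min(scores, key=scores.get)
        print({k: round(v, 3) for k, v in scores.items()}, "-> selected:", best)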

    A Neuro Fuzzy Algorithm to Compute Software Effort Estimation

    Software effort estimation is highly important and considered to be a primary activity in software project management. Accurate estimates are produced during the development of the business case in the earlier stages of project management. Accurate prediction helps the investors and customers to identify the total investment and schedule of the project. Project developers define a process to estimate the effort more accurately with the available methodologies using the attributes of the project. Algorithmic estimation models are very simple and reliable but not very accurate. Categorical datasets cannot be estimated using the existing techniques. Also, the attributes of effort estimation are measured in linguistic values, which may lead to confusion. This paper looks into the accuracy and reliability of a non-algorithmic approach based on adaptive neuro-fuzzy logic for the problem of effort estimation. The performance of the proposed method demonstrates that there is an accurate substantiation of the outcomes with the dataset collected from various projects. The results were compared for accuracy using MRE and MMRE as the metrics. The research idea in the proposed model for effort estimation is based on project domain and attributes, which gives the model more competence in augmenting the core of the neural network to exhibit advances in software estimation.
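    For reference, the accuracy metrics named above are straightforward to compute; a minimal Python sketch with invented effort values (person-months) follows.

        # Magnitude of Relative Error (MRE) per project and its mean (MMRE).
        actual    = [120.0, 45.0, 300.0, 80.0]   # actual effort, person-months (invented)
        predicted = [132.0, 40.0, 270.0, 95.0]   # model estimates (invented)

        mre = [abs(a - p) / a for a, p in zip(actual, predicted)]
        mmre = sum(mre) / len(mre)

        for a, p, e in zip(actual, predicted, mre):
            print(f"actual={a:6.1f}  predicted={p:6.1f}  MRE={e:.3f}")
        print(f"MMRE = {mmre:.3f}")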

    Some Guidelines for Risk Assessment of Vulnerability Discovery Processes

    Software vulnerabilities can be defined as software faults which can be exploited as a result of security attacks. Security researchers have used data from vulnerability databases to study trends in the discovery of new vulnerabilities or to propose models for fitting the discovery times and for predicting when new vulnerabilities may be discovered. Estimating the discovery times for new vulnerabilities is useful both for vendors and for end-users, as it can help with resource allocation strategies over time. Among the research conducted on vulnerability modeling, only a few studies have tried to provide a guideline about which model should be used in a given situation. In other words, assuming the vulnerability data for a software product is given, the research questions are the following: Is there any feature in the vulnerability data that could be used for identifying the most appropriate models for that dataset? Which models are more accurate for modeling the vulnerability discovery process? Can the total number of publicly known exploited vulnerabilities be predicted using all vulnerabilities reported for a given software product? To answer these questions, we propose to characterize the vulnerability discovery process using several common software reliability/vulnerability discovery models, also known as Software Reliability Models (SRMs)/Vulnerability Discovery Models (VDMs). We plan to consider different aspects of vulnerability modeling, including curve fitting and prediction. Some existing SRMs/VDMs lack accuracy in the prediction phase. To remedy the situation, three strategies are considered: (1) finding a new approach for analyzing vulnerability data using common models, i.e., examining the effect of data manipulation techniques (clustering, grouping) on vulnerability data and investigating whether it leads to more accurate predictions; (2) developing a new model that has better curve-fitting and prediction capabilities than current models; (3) developing a new method to predict the total number of publicly known exploited vulnerabilities using all vulnerabilities reported for a given software product. The dissertation is intended to contribute to the science of software reliability analysis and presents some guidelines for vulnerability risk assessment that could be integrated as part of security tools, such as Security Information and Event Management (SIEM) systems.
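    As a small illustration of the curve-fitting aspect, the Python sketch below fits a logistic vulnerability discovery curve (the same general S-shape as the Alhazmi-Malaiya logistic VDM) to a cumulative count of reported vulnerabilities; the data are invented and this is only one of the SRM/VDM candidates such guidelines would compare.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical cumulative vulnerabilities reported per quarter for one product.
        quarter = np.arange(1, 17, dtype=float)
        cum_vulns = np.array([1, 2, 4, 7, 12, 19, 28, 37, 45, 52, 57, 60, 62, 63, 64, 64],
                             dtype=float)

        def logistic_vdm(t, total, rate, midpoint):
            # Saturating S-curve: 'total' is the eventual number of vulnerabilities.
            return total / (1.0 + np.exp(-rate * (t - midpoint)))

        params, _ = curve_fit(logistic_vdm, quarter, cum_vulns,
                              p0=(70.0, 0.5, 8.0), maxfev=10_000)
        total, rate, midpoint = params
        print(f"estimated eventual vulnerabilities = {total:.0f}, "
              f"rate = {rate:.2f}, midpoint = quarter {midpoint:.1f}")

        # Simple forward prediction: expected cumulative count two quarters ahead.
        print(f"predicted cumulative at quarter 18 = {logistic_vdm(18.0, *params):.1f}")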