
    Modelling Open-Source Software Reliability Incorporating Swarm Intelligence-Based Techniques

    In the software industry, two software engineering development practices coexist: open-source and closed-source software. The former has a shared codebase to which anyone can contribute, whereas the latter has proprietary code that only the owner can access. Software reliability is crucial in the industry when a new product or an update is released. Applying meta-heuristic optimization algorithms to closed-source software reliability prediction has produced significant and accurate results. Open-source software now dominates the landscape of cloud-based systems, so results on open-source software reliability as a quality indicator would greatly help solve the open-source software reliability growth-modelling problem. Reliability is predicted by estimating the parameters of the software reliability models. Because software reliability models are inherently nonlinear, traditional approaches make estimating the appropriate parameters difficult and ineffective; consequently, these models require a high-quality parameter estimation technique. These objectives motivate exploring meta-heuristic swarm intelligence optimization algorithms for optimizing the parameter estimation of nonhomogeneous-Poisson-process-based open-source software reliability models. The algorithms considered are firefly, social spider, artificial bee colony, grey wolf, particle swarm, moth flame, and whale optimization. The applicability and performance of the approach are demonstrated on two real open-source software reliability datasets, and the results are promising.
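
    The abstract does not fix a particular objective function; a minimal sketch of one of the listed algorithms (particle swarm) fitting the classic Goel-Okumoto NHPP mean value function by least squares, on made-up data, might look like this:

```python
import numpy as np

# Cumulative failure times (hours) and observed cumulative failure counts
# (hypothetical data; the paper uses two real open-source datasets).
t = np.array([10.0, 25.0, 47.0, 80.0, 120.0, 170.0, 240.0, 330.0])
y = np.arange(1, len(t) + 1)

def goel_okumoto(params, t):
    """Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-b t))."""
    a, b = params
    return a * (1.0 - np.exp(-b * t))

def sse(params):
    """Sum of squared errors between model and observed counts."""
    return np.sum((goel_okumoto(params, t) - y) ** 2)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, objective(gbest)

bounds = np.array([[1.0, 100.0],    # a: expected total faults
                   [1e-4, 0.1]])    # b: fault detection rate
params, fit = pso(sse, bounds)
print(f"a = {params[0]:.2f}, b = {params[1]:.5f}, SSE = {fit:.3f}")
```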

    Assessing Software Reliability Using Modified Genetic Algorithm: Inflection S-Shaped Model

    To assess software reliability, many software reliability growth models (SRGMs) have been proposed over the past four decades. The two most widely used methods for parameter estimation of SRGMs are maximum likelihood estimation (MLE) and least squares estimation (LSE). However, both estimation approaches may impose restrictions on SRGMs, such as requiring derivatives of the formulated models or complex calculations. In this paper, we propose a modified genetic algorithm (MGA) to assess software reliability from time-domain software failure data using the Inflection S-shaped model, which is based on a Non-Homogeneous Poisson Process (NHPP). Experiments on real software failure data show that the proposed genetic algorithm is more effective and faster than traditional algorithms.
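
    For illustration, a plain genetic algorithm (not the paper's specific MGA, whose modifications are not described in the abstract) fitting Ohba's Inflection S-shaped mean value function to hypothetical time-domain data could be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical time-domain failure data: cumulative times and counts.
t = np.array([8.0, 21.0, 40.0, 65.0, 100.0, 150.0, 215.0, 300.0])
y = np.arange(1, len(t) + 1)

def inflection_s(params, t):
    """Ohba's inflection S-shaped mean value function:
    m(t) = a(1 - exp(-b t)) / (1 + beta exp(-b t))."""
    a, b, beta = params
    e = np.exp(-b * t)
    return a * (1.0 - e) / (1.0 + beta * e)

def sse(params):
    return np.sum((inflection_s(params, t) - y) ** 2)

lo = np.array([1.0, 1e-4, 0.01])       # lower bounds on (a, b, beta)
hi = np.array([100.0, 0.1, 10.0])      # upper bounds

pop = rng.uniform(lo, hi, size=(50, 3))
best = pop[np.argmin([sse(p) for p in pop])].copy()
for _ in range(300):
    f = np.array([sse(p) for p in pop])
    # Tournament selection: each slot takes the better of two random picks.
    i, j = rng.integers(len(pop), size=(2, len(pop)))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
    # Arithmetic crossover followed by Gaussian mutation.
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0.0, 0.01, children.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)
    pop[0] = best                       # elitism: reinsert the best-so-far
    cand = pop[np.argmin([sse(p) for p in pop])]
    if sse(cand) < sse(best):
        best = cand.copy()

print("a, b, beta =", np.round(best, 4), " SSE =", round(sse(best), 3))
```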

    Improving Software Reliability Predictions Through Incorporating Learning Effects

    Software reliability is one of the major metrics for software quality evaluation. In reliability engineering, the testing phase is where software reliability is measured. In this paper, we examine the effect of incorporating an autonomous error-detection factor and a learning factor on prediction accuracy, with application to software failure data. For this purpose, a Non-Homogeneous Poisson Process (NHPP) model with learning effects based on the Log-Logistic (LL) distribution is proposed. Parameter estimation is conducted using the Non-Linear Least Squares Estimation (NLSE) method, and two goodness-of-fit tests are used to evaluate the proposed models. This paper encourages software developers to consider the learning-effects property in software reliability modeling.
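
    The paper's learning-effect formulation is not reproduced in the abstract; the sketch below fits a plain log-logistic NHPP mean value function (the Gokhale-Trivedi form) by non-linear least squares via SciPy's curve_fit on hypothetical grouped data, then reports two common goodness-of-fit criteria:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical grouped failure data: observation times, cumulative faults.
t = np.array([5.0, 12.0, 22.0, 35.0, 52.0, 75.0, 105.0, 145.0])
y = np.array([3, 7, 12, 16, 19, 22, 24, 25], dtype=float)

def log_logistic_mvf(t, a, lam, kappa):
    """Log-logistic NHPP mean value function:
    m(t) = a * (lam*t)^kappa / (1 + (lam*t)^kappa)."""
    u = (lam * t) ** kappa
    return a * u / (1.0 + u)

# Non-linear least squares estimation (NLSE) of (a, lam, kappa).
p0 = [30.0, 0.02, 2.0]                       # rough initial guess
popt, _ = curve_fit(log_logistic_mvf, t, y, p0=p0,
                    bounds=([1.0, 1e-5, 0.1], [1e3, 1.0, 10.0]))
a, lam, kappa = popt
print(f"a={a:.2f}, lambda={lam:.4f}, kappa={kappa:.3f}")

# Goodness of fit: MSE and R^2, two criteria commonly used for SRGMs.
resid = y - log_logistic_mvf(t, *popt)
mse = np.mean(resid ** 2)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
print(f"MSE={mse:.3f}, R^2={r2:.4f}")
```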

    Assessing Software Reliability Using Exponential Imperfect Debugging Model

    Software reliability is one of the most important characteristics of software quality. As software usage grows rapidly, assessing software reliability is a critical task in the development of a software system, and many Software Reliability Growth Models (SRGMs) are used to decide quickly whether developed software is reliable. The well-known Exponential Imperfect Debugging model is a two-parameter Non-Homogeneous Poisson Process model widely used in software reliability growth modeling. In this paper, we propose to apply Statistical Process Control (SPC) to monitor the software reliability process. A control mechanism is proposed based on cumulative observations of failures, which are grouped using the mean value function of the Exponential Imperfect Debugging model. The Maximum Likelihood Estimation (MLE) approach is used to estimate the unknown parameters of the model. The process is illustrated on real software failure data.
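
    The abstract does not give the model's mean value function, so the sketch below substitutes a Goel-Okumoto-type m(t) purely for illustration: it estimates the two parameters by maximum likelihood from hypothetical failure times and derives SPC control limits from the 3-sigma tail probabilities (0.00135 and 0.99865), a construction common in SRGM-based SPC work.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical ungrouped failure times (the paper uses real failure data).
times = np.array([12.0, 30.0, 55.0, 86.0, 124.0, 170.0, 226.0, 295.0])
T = times[-1]                                   # end of observation window

def neg_log_lik(theta):
    """NHPP log-likelihood with a G-O-type mean value function:
    lambda(t) = a*b*exp(-b t),  m(t) = a*(1 - exp(-b t)),
    ln L = sum_i ln lambda(t_i) - m(T)."""
    a, b = np.exp(theta)                        # log-parametrized: a, b > 0
    ll = np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T))
    return -ll

res = minimize(neg_log_lik, x0=np.log([10.0, 0.01]), method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"MLE: a={a:.2f}, b={b:.5f}")

# SPC limits: times at which m(t)/a reaches the 3-sigma tail probabilities.
# Solving a*(1 - exp(-b t)) = p*a gives t = -ln(1 - p)/b.
for name, p in [("LCL", 0.00135), ("CL", 0.5), ("UCL", 0.99865)]:
    print(f"{name}: t = {-np.log(1.0 - p) / b:.1f}")
```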

    Reliability demonstration of a multi-component Weibull system under zero-failure assumption.

    This dissertation is focused on finding lower confidence limits for the reliability of systems consisting of Weibull components when reliability demonstration testing (RDT) is conducted with zero failures. The usual methods for parameter estimation of the underlying reliability functions, such as the maximum likelihood estimator (MLE) or mean squares estimator (MSE), cannot be applied if the test data contain no failures. For single items there exists a methodology to calculate the lower confidence limit (LCL) of reliability for a certain confidence level, but there is no comparable method for systems. This dissertation provides a literature review on specific topics within the wide area of reliability engineering. Based on this and additional research work, a first theorem for the LCL of system reliability for systems with Weibull components is formulated. It can be applied if testing is conducted with zero observed failures. This theorem is unique in that it allows different Weibull shape parameters for the components in the system. The model can also be applied if each component has been exposed to a different test duration, which can result from accelerated life testing (ALT) with test procedures that have different acceleration factors for the various failure modes or components. A second theorem for Bx-lifetime, derived from the first, has been formulated as well. The first theorem on the LCL of system reliability is first proven for systems with two components; the proof is then extended to the general case of n components, with no limitation on n. The proof of the second theorem on Bx-lifetime is based on the first proof and utilizes the relation between Bx-lifetime and reliability. The proven theorem is integrated into a model to analyze the sensitivity of the estimation to the Weibull shape parameter β; this model is also applicable if the Weibull parameter is subject either to total uncertainty or to uncertainty within a defined range. The proven theorems can be utilized as the core of various models to optimize RDT plans so that the validation targets are achieved most efficiently. The optimization can be conducted with respect to reliability, Bx-lifetime, or validation cost. The respective optimization models are mixed-integer and highly non-linear and therefore very difficult to solve; within this research work the software package LINGO™ was used to solve them. A proposal is included for how to implement the optimization models for RDT testing in the reliability process, in order to iteratively optimize the RDT program based on failures that have occurred or on changing boundary conditions and premises. The dissertation closes with a methodology for considering information about customer usage for certain segments, such as market share, annual mileage, or component-specific stress level for each segment. This methodology can be combined with the optimization models for RDT plans.
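
    The dissertation's system-level theorems are not reproduced in the abstract; the single-item methodology it refers to is commonly the Weibayes zero-failure bound, sketched below with an assumed shape parameter:

```python
import numpy as np

def weibayes_lcl(test_durations, beta, t_mission, confidence=0.90):
    """Lower confidence limit on reliability after a zero-failure test.

    Standard single-item Weibayes bound: with zero failures, an assumed
    Weibull shape beta, and units tested for the given durations, the
    lower bound on the characteristic life at confidence C is
        eta_L = ( sum(tau_i^beta) / ln(1/(1-C)) )^(1/beta),
    and the reliability bound at the mission time follows as
        R_L(t) = exp(-(t/eta_L)^beta).
    """
    tau = np.asarray(test_durations, dtype=float)
    eta_l = (np.sum(tau ** beta)
             / np.log(1.0 / (1.0 - confidence))) ** (1.0 / beta)
    return np.exp(-((t_mission / eta_l) ** beta))

# Example: 12 units each tested failure-free for 1000 h, beta assumed 2.0.
print(weibayes_lcl([1000.0] * 12, beta=2.0, t_mission=500.0))
```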

    Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function

    Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditionally, methods have focused on minimizing coding errors to guarantee analytic tractability, which makes the resulting estimations overly optimistic. To obtain reliable estimates, it is important to take into account non-software factors, such as human error and hardware failure, in addition to software faults. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware led to the creation of a hybrid model. We used statistical and soft computing approaches to determine values for the model's parameters, and we explored five criteria values in an effort to identify the most useful method of parameter evaluation for big data systems. To this end, we conduct a case study of software failure data from four real projects and compare the results using an estimation accuracy function. Particle swarm optimization was shown to be the most effective optimization method for the hybrid model constructed from the large-scale fault data.
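
    The exact form of the paper's estimation accuracy function is not given in the abstract; as a stand-in, the sketch below computes a few common fit criteria and ranks hypothetical optimizer outputs by a normalized criterion sum (lower is better):

```python
import numpy as np

def criteria(y_obs, y_fit, n_params):
    """Common SRGM goodness-of-fit criteria (illustrative subset)."""
    resid = y_obs - y_fit
    n = len(y_obs)
    mse = np.mean(resid ** 2)
    mae = np.mean(np.abs(resid))
    r2 = 1 - np.sum(resid ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * n_params
    return {"MSE": mse, "MAE": mae, "1-R2": 1 - r2, "AIC": aic}

def rank_optimizers(y_obs, fits, n_params):
    """Rank optimizers by a min-max-normalized sum of criteria. This is a
    stand-in for the paper's estimation accuracy function, whose exact
    definition is not reproduced here."""
    table = {name: criteria(y_obs, f, n_params) for name, f in fits.items()}
    keys = next(iter(table.values())).keys()
    lo = {k: min(v[k] for v in table.values()) for k in keys}
    hi = {k: max(v[k] for v in table.values()) for k in keys}
    score = {name: sum((v[k] - lo[k]) / (hi[k] - lo[k] + 1e-12) for k in keys)
             for name, v in table.items()}
    return sorted(score.items(), key=lambda kv: kv[1])

# Hypothetical observed counts and fitted curves from three optimizers.
y = np.array([3, 7, 12, 16, 19, 22, 24, 25], dtype=float)
fits = {"PSO": y + np.array([0.2, -0.1, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2]),
        "GA":  y + np.array([0.5, -0.4, 0.8, -0.6, 0.4, 0.3, -0.7, 0.5]),
        "GWO": y + np.array([0.3, -0.2, 0.5, -0.3, 0.2, 0.1, -0.4, 0.3])}
print(rank_optimizers(y, fits, n_params=3))
```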

    Development of a Method for Incorporating Fault Codes in Prognostic Analysis

    Information from fault codes associated with a component may be used as an indicator of its health. A fault code is defined as a timestamp at which a component is not operating according to recommended guidelines. The fault codes relevant to this analysis represent mild or moderate deviations from normal behavior, rather than those requiring immediate repair. Potentially, fault codes may be used to determine the Remaining Useful Life (RUL) of a component by predicting its failure time, which would improve safety and reduce maintenance costs associated with the component. In this dissertation, methods have been developed to integrate the degradation information from fault codes into an existing prognostic parameter to improve the estimation of RUL. Optimization methods such as gradient descent were used to weight each fault code by its relevance to degradation. Furthermore, topic models, a document analysis and clustering technique, were used both as a dimension-reduction method and for fault-mode isolation. The methods developed for this dissertation were applied to two real-world data sets: an actuator system and monitored signals from a motor accelerated-degradation experiment. The best estimation of RUL for the actuator system came from a topic model, with a mean absolute error of 6.41% of the data received, and the best estimation of RUL for the motor experiment had an error of 5.7% of the average lifetime of the motors. The primary contributions of this research include a method to construct a prognostic parameter from fault codes alone, the integration of degradation information from fault codes into an existing prognostic parameter, the use of topic models in reliability analysis of fault codes, and a software suite that performs these functions on generic data sets.
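
    As one way to realize the topic-model step, the sketch below treats each operating window's bag of fault codes as a document and fits scikit-learn's LDA on synthetic counts; the per-window topic proportions then serve as a low-dimensional degradation feature. The dissertation's actual pipeline and data are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical fault-code logs: one "document" per operating window, where
# each column counts occurrences of one fault code in that window.
rng = np.random.default_rng(0)
n_codes = 12
# Early windows emit mostly benign codes; later windows shift toward a
# degradation-related subset -- a toy stand-in for real monitored data.
early = rng.poisson(lam=[3.0] * 6 + [0.2] * 6, size=(20, n_codes))
late = rng.poisson(lam=[0.5] * 6 + [3.0] * 6, size=(20, n_codes))
counts = np.vstack([early, late])

# Fit a topic model; each topic is a distribution over fault codes and can
# be read as a candidate fault mode.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)          # per-window topic proportions

# The proportion of the "degradation" topic over time can serve as a
# low-dimensional prognostic feature.
deg_topic = np.argmax(theta[-1])           # topic dominating late windows
print(np.round(theta[:, deg_topic], 2))
```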

    Software Reliability Models

    The problem considered here is the building of Non-Homogeneous Poisson Process (NHPP) models. Popular existing NHPP models such as the Goel-Okumoto (G-O) and Yamada et al. models suffer from the drawback that the probability density function of the inter-failure times is an improper density function, because the event of no failure in (0, ∞) is allowed in these models. In real-life situations we cannot draw samples from such a population, and none of the moments of the inter-failure times exist; these models are therefore unsuitable for modelling real software error data. On the other hand, if the density function of the inter-failure times is made proper by multiplying by a constant, then we cannot assume a finite expected number of faults in the system, which is the basic assumption in building software reliability models. Taking these factors into consideration, we introduce an extra parameter, say c, into both the G-O and Yamada et al. models to obtain a new model. We find that a specific value of this new parameter gives rise to a proper density for the inter-failure times, and the G-O and Yamada et al. models are the special cases corresponding to c = 0. This raises the question: can we do better than the existing G-O and Yamada et al. models when 0 < c < 1? The answer is yes. With this objective, the behavior of the software failure counting process {N(t), t > 0} has been studied. Several measures are proposed in this research, such as the number of failures by some prespecified time, the number of errors remaining in the system at a future time, the distribution of the remaining number of faults in the system, and the reliability during a mission. The maximum likelihood estimation method was used to estimate the parameters, and sufficient conditions for the existence of roots of the ML equations were derived. Some important statistical aspects of the G-O and Yamada et al. models, such as conditions for the existence and uniqueness of solutions of the ML equations, had not previously been worked out in the literature; we derive these conditions and prove uniqueness of the roots for these models. Finally, four different sets of actual failure time data were analyzed.
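
    The impropriety claim can be made concrete with a standard computation (not taken from the dissertation itself): under an NHPP the time to first failure, T_1, survives past t with probability e^{-m(t)}, which for the G-O model tends to e^{-a} > 0:

```latex
% Survival function of the time to first failure under an NHPP with the
% G-O mean value function:
\[
  P(T_1 > t) = e^{-m(t)}, \qquad m(t) = a\left(1 - e^{-bt}\right),
\]
% so the first-failure density integrates to less than one:
\[
  f(t) = \lambda(t)\, e^{-m(t)} = a b\, e^{-bt}\, e^{-a\left(1 - e^{-bt}\right)},
  \qquad
  \int_0^\infty f(t)\, dt = 1 - e^{-a} < 1.
\]
% A positive mass e^{-a} sits on the event "no failure ever occurs",
% which is what the extra parameter c is introduced to remove.
```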