    Software Reliability Growth Models from the Perspective of Learning Effects and Change-Point.

    Increased attention towards the reliability of software systems has led to thorough analysis of the process of reliability growth for the prediction and assessment of software reliability in the testing or debugging phase. With many frameworks available in terms of the underlying probability distributions, such as the Poisson process, the Non-Homogeneous Poisson Process (NHPP), and the Weibull distribution, many researchers have developed models using the NHPP analytical framework. The behavior of interest is usually S-shaped or exponential; S-shaped behavior relates more closely to human learning. The need to develop different models stems from the fact that the nature of the underlying environment, the learning effect acquired during testing, resource allocation, the application, and the failure data itself all vary; there is no universal model that fits every situation and could be called an oracle. Learning effects that stem from the experience of the testing or debugging staff have been considered in modeling reliability growth. Learning varies over time, which calls for further research into learning effects. Digital copy of thesis, University of Kashmir.
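
    As a rough illustration of the two shapes mentioned in this abstract (not code from the thesis itself), the sketch below compares a classic exponential mean value function, the Goel-Okumoto model m(t) = a(1 - e^(-bt)), with the delayed S-shaped model m(t) = a(1 - (1 + bt)e^(-bt)), whose slow initial ramp-up is often interpreted as a learning effect. The parameter values are purely illustrative.

```python
import numpy as np

def m_exponential(t, a, b):
    """Goel-Okumoto (exponential) mean value function: a * (1 - exp(-b*t))."""
    return a * (1.0 - np.exp(-b * t))

def m_delayed_s_shaped(t, a, b):
    """Delayed S-shaped mean value function: a * (1 - (1 + b*t) * exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# Illustrative (assumed) parameters: a = expected total faults, b = detection rate.
a, b = 100.0, 0.05
for t in np.linspace(0, 100, 11):
    print(f"t={t:5.1f}  exponential={m_exponential(t, a, b):6.2f}  "
          f"S-shaped={m_delayed_s_shaped(t, a, b):6.2f}")
```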

    Reliability Analysis Methods for an Embedded Open Source Software


    Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function

    Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditionally, methods have focused on minimizing coding errors to guarantee analytic tractability, which causes these models to produce overly optimistic estimates. However, it is important to account for non-software factors, such as human error and hardware failure, in addition to software faults in order to obtain reliable estimates. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware led to the creation of a hybrid model. We used statistical and soft computing approaches to determine values for the model's parameters, and we explored five criteria in an effort to identify the most useful method of parameter evaluation for big data systems. For this purpose, we conducted a case study analysis of software failure data from four actual projects and compared the results using an estimation accuracy function. Particle swarm optimization proved to be the most effective optimization method for the hybrid model constructed from large-scale fault data.
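
    The hybrid model and the five selection criteria from this paper are not reproduced here; as a hedged sketch of the general approach, the code below fits the parameters of a simple NHPP mean value function (Goel-Okumoto, used as a stand-in for the hybrid model) to hypothetical cumulative fault counts with a minimal particle swarm optimizer, minimizing a sum-of-squared-errors accuracy criterion. The data, bounds, and swarm settings are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cumulative fault counts observed at the end of each test week.
weeks = np.arange(1, 13)
observed = np.array([8, 15, 22, 27, 33, 37, 41, 44, 46, 48, 49, 50], dtype=float)

def mean_value(t, a, b):
    """Goel-Okumoto mean value function, standing in for the paper's hybrid model."""
    return a * (1.0 - np.exp(-b * t))

def sse(params):
    """Sum of squared errors: one common estimation-accuracy criterion."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return float(np.sum((observed - mean_value(weeks, a, b)) ** 2))

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, objective(gbest)

best, err = pso(sse, bounds=[(1.0, 500.0), (1e-4, 2.0)])
print(f"estimated a={best[0]:.1f}, b={best[1]:.3f}, SSE={err:.2f}")
```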

    Software Reliability Growth Model with Partial Differential Equation for Various Debugging Processes

    Most Software Reliability Growth Models (SRGMs) based on the Nonhomogeneous Poisson Process (NHPP) assume either perfect or imperfect debugging. However, environmental factors introduce great uncertainty into SRGMs during the development and testing phases. We propose a novel NHPP model based on a partial differential equation (PDE) to quantify the uncertainties associated with the perfect or imperfect debugging process. We represent the environmental uncertainties collectively as noise of arbitrary correlation. Under the new stochastic framework, one can compute the full statistical information of the debugging process, for example its probability density function (PDF). Through a number of comparisons with historical data and existing methods, such as the classic NHPP model, the proposed model exhibits a closer fit to the observations. Beyond the conventional focus on the mean value of fault detection, the newly derived full statistical information can further help software developers make decisions on system maintenance and risk assessment.
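
    The paper's PDE formulation is not given in this abstract, so the following is only a crude Monte Carlo stand-in: it perturbs the fault detection rate of a Goel-Okumoto-type NHPP with AR(1)-correlated noise and collects the resulting distribution of the cumulative fault count at a fixed time, illustrating how one can examine the full distribution rather than just the mean. The noise model and all parameter values are assumptions, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(1)

# Baseline parameters (assumed): total faults a, detection rate b, horizon T.
a, b = 100.0, 0.05
T, dt = 60.0, 0.5
steps = int(T / dt)

def simulate_once():
    """One fault-detection path with the detection rate perturbed by AR(1) noise."""
    detected = 0
    noise = 0.0
    rho, sigma = 0.9, 0.01                            # noise correlation and scale (assumed)
    for _ in range(steps):
        noise = rho * noise + sigma * rng.standard_normal()
        rate = max(b + noise, 0.0) * max(a - detected, 0.0)   # instantaneous intensity
        detected += rng.poisson(rate * dt)
    return detected

samples = np.array([simulate_once() for _ in range(5000)])
print(f"cumulative faults at T={T}: mean={samples.mean():.1f}, std={samples.std():.1f}")
# Empirical PDF of the cumulative fault count, i.e. "full statistical information"
# beyond the mean value function.
hist, edges = np.histogram(samples, bins=20, density=True)
```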

    Improvement of software reliability prediction


    Resource allocation and optimal release time in software systems

    Software quality is closely tied to the number of defects in software systems. As the complexity of software increases, manual inspection becomes prohibitively expensive. Defect prediction is therefore of paramount importance to project managers, both for allocating limited resources effectively and for accurately estimating project costs and schedules. This thesis addresses statistical fault prediction modeling, software resource allocation, and optimal software release and maintenance policy. A software defect prediction model using operating characteristic curves is presented; the main idea behind this predictor is to use geometric insight to construct an efficient method for reliably predicting the cumulative number of defects during the software development process. Motivated by the widely used queueing models of communication and information processing systems, a resource allocation model is then introduced that answers managerial questions related to project status and scheduling. Using the proposed allocation model, managers can make resource allocation decisions with greater certainty and measure system reliability and the quality of service provided to customers in terms of the expected response time. Finally, a novel stochastic model is proposed to describe the cost behavior of the operational phase and to estimate the optimal release time by minimizing a cost function via artificial neural networks, together with a detailed analysis of software release time and maintenance decisions. The performance of the proposed approaches is validated on real data from actual SAP projects, and the experimental results demonstrate a compelling case for improved software quality.
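
    The thesis's neural-network cost model and the SAP data are not available here; as a minimal sketch of the optimal-release-time idea, the code below minimizes a classic textbook release cost of the form C(T) = c1*m(T) + c2*(m(T_lc) - m(T)) + c3*T, where c1 and c2 are per-fault fixing costs during testing and operation, c3 is the testing cost per unit time, and m(t) is a Goel-Okumoto mean value function. A coarse grid search stands in for the neural-network-based minimization, and all coefficients are assumed values.

```python
import numpy as np

def m(t, a=100.0, b=0.05):
    """Expected cumulative faults detected by time t (Goel-Okumoto), used as a stand-in."""
    return a * (1.0 - np.exp(-b * t))

def release_cost(T, t_lc=500.0, c1=1.0, c2=5.0, c3=0.2):
    """Release-time cost: fixing faults in test + fixing faults in operation + testing effort.
    All cost coefficients are assumed, not taken from the thesis."""
    return c1 * m(T) + c2 * (m(t_lc) - m(T)) + c3 * T

# Grid search over candidate release times; the cost is convex enough here
# that a coarse scan suffices for illustration.
candidates = np.linspace(1.0, 300.0, 600)
costs = np.array([release_cost(T) for T in candidates])
T_star = candidates[np.argmin(costs)]
print(f"optimal release time ~ {T_star:.1f}, cost ~ {costs.min():.1f}")
```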