
    On a method for mending time to failure distributions

    Many software reliability growth models assume that the time to the next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true, even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite-failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite-failures and infinite-failures NHPP models. Keywords: software reliability growth model, non-homogeneous Poisson process, defective distribution, (mean) time to failure, model unification
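    As a brief illustration of the property described above, take the Goel-Okumoto mean value function m(t) = a(1 - e^{-bt}) as an example of a finite-failures NHPP model (the paper's mending transformation itself is not reproduced here). Given the last failure at time s, the time T to the next failure satisfies

        \[
          \Pr(T > t) \;=\; e^{-[m(s+t) - m(s)]} \;\xrightarrow[t \to \infty]{}\; e^{-[a - m(s)]} \;>\; 0,
        \]

    so the distribution function of T tends to 1 - e^{-[a - m(s)]} < 1: it is defective, and E[T] is infinite, i.e. the mean time to failure does not exist.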

    Confidence intervals for reliability growth models with small sample sizes

    Fully Bayesian approaches to analysis can be overly ambitious when there are realistic limits on the ability of experts to provide prior distributions for all relevant parameters. This research was motivated by situations where expert judgement can support the development of prior distributions describing the number of faults potentially inherent within a design, but cannot support useful descriptions of the rate at which they would be detected during a reliability growth test. This paper develops inference properties for a reliability growth model. The approach assumes a prior distribution for the ultimate number of faults that would be exposed if testing were to continue ad infinitum, but estimates the parameters of the intensity function empirically. A fixed-point iteration procedure for obtaining the maximum likelihood estimate is investigated for bias and conditions of existence. The main purpose of this model is to support inference in situations where failure data are few. A procedure for providing statistical confidence intervals is investigated and shown to be suitable for small sample sizes. An application of these techniques is illustrated by an example.
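    As a rough sketch of what a fixed-point iteration for a maximum likelihood estimate can look like, the following Python fragment fits the two parameters of a Goel-Okumoto NHPP to raw failure times. This is an illustrative stand-in only; the paper's approach additionally places a prior distribution on the ultimate number of faults, which is not reproduced here.

        import numpy as np

        def go_mle_fixed_point(failure_times, T, b0=1e-3, tol=1e-10, max_iter=10_000):
            """Fixed-point iteration for the MLE of (a, b) in a Goel-Okumoto NHPP with
            intensity a*b*exp(-b*t), given failure times observed in (0, T].
            Illustrative sketch only; not the estimator developed in the paper."""
            t = np.asarray(failure_times, dtype=float)
            n, S = len(t), t.sum()
            b = b0
            for _ in range(max_iter):
                e = np.exp(-b * T)
                b_new = n / (S + n * T * e / (1.0 - e))   # stationarity condition solved for b
                if abs(b_new - b) < tol:
                    b = b_new
                    break
                b = b_new
            a = n / (1.0 - np.exp(-b * T))                # MLE of the expected total fault count
            return a, b

        # Usage with made-up failure times (hours) and a test horizon of 120 hours:
        a_hat, b_hat = go_mle_fixed_point([5.2, 11.8, 19.5, 30.1, 47.3, 70.9, 101.4], T=120.0)
        print(f"a_hat={a_hat:.2f}, b_hat={b_hat:.5f}")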

    Warranty Data Analysis: A Review

    Warranty claims and supplementary data contain useful information about product quality and reliability. Analysing such data can therefore benefit manufacturers by identifying early warnings of abnormalities in their products, providing useful information about failure modes to aid design modification, estimating product reliability to support decisions on warranty policy, and forecasting the future warranty claims needed for preparing fiscal plans. In the last two decades, considerable research has been conducted in warranty data analysis (WDA) from several different perspectives. This article attempts to summarise and review the research and developments in WDA, with emphasis on models, methods and applications. It concludes with a brief discussion of current practices and possible future trends in WDA.

    Statistical modelling of software reliability

    During the six-month period from 1 April 1991 to 30 September 1991, the following research papers in statistical modelling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

    Non-Stationary Random Process for Large-Scale Failure and Recovery of Power Distributions

    A key objective of the smart grid is to improve the reliability of utility services to end users. This requires strengthening the resilience of distribution networks that lie at the edge of the grid. However, distribution networks are exposed to external disturbances such as hurricanes and snow storms, during which electricity service to customers is disrupted repeatedly. External disturbances cause large-scale power failures that are neither well understood, nor formulated rigorously, nor studied systematically. This work studies the resilience of power distribution networks to large-scale disturbances in three aspects. First, a non-stationary random process is derived to characterize an entire life cycle of large-scale failure and recovery. Second, resilience is defined based on the non-stationary random process. Closed-form analytical expressions are derived under specific large-scale failure scenarios. Third, the non-stationary model and the resilience metric are applied to a real-life example of large-scale disruptions due to Hurricane Ike. Real data on large-scale failures from an operational network are used to learn the time-varying model parameters and resilience metrics.
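    A minimal simulation sketch of a failure-and-recovery life cycle with a time-varying failure rate is given below; the storm-shaped rate function and the exponential repair times are assumptions made purely for illustration, not the model or the Hurricane Ike parameters learned in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_failure_recovery(rate_fn, rate_max, horizon, mean_repair):
            """Failures arrive as a non-homogeneous Poisson process with rate rate_fn(t),
            simulated by thinning (rate_fn must stay below rate_max); each failure is
            repaired after an independent exponential delay. Illustrative sketch only."""
            t, failures = 0.0, []
            while True:
                t += rng.exponential(1.0 / rate_max)         # candidate event from the bounding process
                if t > horizon:
                    break
                if rng.random() < rate_fn(t) / rate_max:     # accept with probability rate(t)/rate_max
                    failures.append(t)
            failures = np.array(failures)
            recoveries = failures + rng.exponential(mean_repair, size=failures.size)
            return failures, recoveries

        # Assumed storm-shaped failure rate peaking around hour 12, 72-hour horizon, 10-hour mean repair.
        rate = lambda t: 50.0 * np.exp(-((t - 12.0) / 4.0) ** 2)
        fail_t, rec_t = simulate_failure_recovery(rate, rate_max=50.0, horizon=72.0, mean_repair=10.0)

        # Number of components failed but not yet recovered at each hour.
        outstanding = [int(np.sum(fail_t <= h) - np.sum(rec_t <= h)) for h in range(73)]
        print("peak simultaneous failures:", max(outstanding))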

    Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function

    Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditionally, methods have focused on minimizing coding errors in order to guarantee analytic tractability, which makes the resulting estimates overly optimistic. However, it is important to take into account non-software factors, such as human error and hardware failure, in addition to software faults to obtain reliable estimations. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware led to the creation of a hybrid model. We used statistical and soft computing approaches to determine values for the model's parameters, and we explored five criteria values in an effort to identify the most useful method of parameter evaluation for big data systems. For this purpose, we conduct a case study analysis of software failure data from four actual projects and compare the results using an estimation accuracy function. Particle swarm optimization was shown to be the most effective optimization method for the hybrid model constructed with the use of large-scale fault data.
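    The sketch below shows the general shape of such a parameter-estimation setup: a minimal particle swarm optimizer fits the two parameters of a Goel-Okumoto mean value function to cumulative fault counts by minimizing mean squared error. The model, the data and the accuracy criterion are placeholders; the paper's hybrid model and its estimation accuracy function are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def go_mean(t, a, b):
            # Goel-Okumoto mean value function, used here as a stand-in for the hybrid model.
            return a * (1.0 - np.exp(-b * t))

        def mse(params, t, y):
            a, b = params
            return np.mean((go_mean(t, a, b) - y) ** 2)

        def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimizer (illustrative, not the paper's implementation)."""
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[pbest_val.argmin()]
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([objective(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[pbest_val.argmin()]
            return gbest, pbest_val.min()

        # Made-up cumulative fault counts over 12 weeks (illustrative data only).
        t = np.arange(1, 13, dtype=float)
        y = np.array([12, 25, 35, 44, 50, 56, 60, 63, 66, 68, 69, 70], dtype=float)
        bounds = np.array([[1.0, 500.0], [1e-3, 2.0]])    # search ranges for (a, b)
        (a_hat, b_hat), err = pso(lambda p: mse(p, t, y), bounds)
        print(f"a={a_hat:.1f}, b={b_hat:.3f}, MSE={err:.2f}")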

    A Comprehensive Analysis of Proportional Intensity-based Software Reliability Models with Covariates (New Developments on Mathematical Decision Making Under Uncertainty)

    The black-box approach based on stochastic software reliability models is a simple methodology that uses only software fault data to describe the temporal behavior of fault-detection processes, but it fails to incorporate significant development-metrics data observed in the development process. In this paper we develop proportional intensity-based software reliability models with time-dependent metrics, and propose a statistical framework to assess software reliability with the time-dependent covariates as well as the software fault data. The resulting models are similar to the usual proportional hazards model, but possess a somewhat different covariate structure from the existing one. We compare these metrics-based software reliability models with eleven well-known non-homogeneous Poisson process models, which are special cases of our models, and quantitatively evaluate their goodness-of-fit and prediction. As an important result, the accuracy of reliability assessment strongly depends on the kind of software metrics used for analysis and can be improved by incorporating the time-dependent metrics data in the modeling.
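    For reference, a generic proportional intensity form for an NHPP with time-dependent covariates, together with its log-likelihood over an observation window (0, T], is shown below; the paper's covariate structure differs somewhat from this standard form.

        \[
          \lambda(t \mid \mathbf{x}) = \lambda_0(t)\,\exp\!\bigl\{\boldsymbol{\beta}^{\top}\mathbf{x}(t)\bigr\},
          \qquad
          \ell(\boldsymbol{\theta}) = \sum_{i=1}^{n} \log \lambda(t_i \mid \mathbf{x}) - \int_0^{T} \lambda(u \mid \mathbf{x})\, du,
        \]

    where \(\lambda_0(t)\) is the baseline intensity of one of the eleven NHPP models (recovered when \(\boldsymbol{\beta} = \mathbf{0}\)) and \(\mathbf{x}(t)\) collects the time-dependent software metrics.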