
    On a method for mending time to failure distributions

    Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite failures and infinite failures NHPP models.
    Keywords: software reliability growth model, non-homogeneous Poisson process, defective distribution, (mean) time to failure, model unification
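
    To make the defectiveness property concrete, the standard argument can be sketched as follows (the notation is the usual NHPP mean value function, assumed here rather than quoted from the paper). For an NHPP with mean value function m(t), the time T_t to the next failure after time t satisfies

        \Pr(T_t > s) = \exp\{-[m(t+s) - m(t)]\},

    so for a finite-failures model with m(\infty) = a < \infty,

        \lim_{s \to \infty} \Pr(T_t > s) = e^{-[a - m(t)]} > 0,

    i.e. the distribution of T_t is defective: it assigns positive probability to an infinite time to next failure, and consequently the mean time to failure does not exist as a finite number.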

    Statistical modelling of software reliability

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

    Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function

    Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditionally, methods have focused on minimizing coding errors to guarantee analytic tractability, which causes the resulting estimates to be overly optimistic. However, it is important to take into account non-software factors, such as human error and hardware failure, in addition to software faults in order to obtain reliable estimates. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware led to the creation of a hybrid model. We used statistical and soft computing approaches to determine values for the model's parameters, and we explored five criterion values in an effort to identify the most useful method of parameter evaluation for big data systems. For this purpose, we conducted a case study analysis of software failure data from four actual projects and compared the results using the estimation accuracy function. Particle swarm optimization was shown to be the most effective optimization method for the hybrid model constructed from the large-scale fault data.
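
    As a rough illustration of the kind of parameter estimation discussed above (not the paper's hybrid model), the sketch below fits the two parameters of a Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to synthetic cumulative fault counts with a minimal particle swarm optimizer; the objective function, the data, and all names and values are assumptions made for the example.

        import numpy as np

        # Synthetic cumulative fault counts (hypothetical data for illustration only).
        t = np.arange(1, 21, dtype=float)                          # testing weeks
        observed = 120 * (1 - np.exp(-0.15 * t)) + np.random.normal(0, 2, t.size)

        def mean_value(params, t):
            """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
            a, b = params
            return a * (1.0 - np.exp(-b * t))

        def sse(params):
            """Sum of squared errors between the model and the observed fault counts."""
            return np.sum((mean_value(params, t) - observed) ** 2)

        def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimizer over box-constrained parameters."""
            lo, hi = bounds[:, 0], bounds[:, 1]
            rng = np.random.default_rng(0)
            pos = rng.uniform(lo, hi, (n_particles, len(lo)))
            vel = np.zeros_like(pos)
            pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, *pos.shape))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([objective(p) for p in pos])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = pos[better], vals[better]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, objective(gbest)

        bounds = np.array([[1.0, 500.0],    # a: eventual number of faults
                           [0.001, 1.0]])   # b: fault detection rate
        params, err = pso(sse, bounds)
        print("estimated (a, b):", params, "SSE:", err)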

    Optimal test case selection for multi-component software system

    The omnipresence of software has forced the industry to produce efficient software in a short time. These requirements can be met through code reusability and software testing. Code reusability is achieved by developing software as components/modules rather than a single block. Software development teams are growing large to meet massive requirements, and large teams can work more easily if software is developed in a modular fashion. Software that crashes often is of little use, and testing makes it more reliable; modularity and reliability are therefore both needed. Testing is usually carried out using test cases that target a class of software faults or a specific module, and different test cases have distinct effects on the reliability of the software system. The proposed research develops a model for selecting the optimal test case policy for a modular software system with specific test cases and a stipulated testing time. The model describes the failure behavior of each component by a conditional non-homogeneous Poisson process (NHPP) and the interactions between components by a continuous-time Markov chain (CTMC). The initial number of bugs and the bug detection rate follow known distributions. Dynamic programming is used to determine the optimal test case policy. The complete model is simulated in Matlab. Solving the Markov decision process is computationally intensive, but the implementation is carefully optimized to eliminate repeated calculations, saving roughly 25-40% of processing time across different variations of the problem.
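
    The full CTMC/NHPP formulation is not reproduced here, but the following sketch illustrates the dynamic-programming idea on a stripped-down version of the problem: allocating a fixed number of testing periods across components when each component's expected fault detection follows an exponential NHPP mean value curve. All parameter values are assumptions for illustration.

        import math

        # Hypothetical component parameters: (expected initial faults a_k, detection rate b_k).
        components = [(50.0, 0.30), (80.0, 0.12), (30.0, 0.45)]
        budget = 20  # total discrete testing periods available

        def gain(a, b, x):
            """Expected faults detected in a component after x testing periods
            (exponential NHPP mean value curve)."""
            return a * (1.0 - math.exp(-b * x))

        # best[t]: maximum expected faults detectable with t periods spread over the
        # components processed so far; choice[k][t]: periods given to component k.
        best = [0.0] * (budget + 1)
        choice = []
        for a, b in components:
            new_best = [0.0] * (budget + 1)
            pick = [0] * (budget + 1)
            for t in range(budget + 1):
                for x in range(t + 1):
                    value = gain(a, b, x) + best[t - x]
                    if value > new_best[t]:
                        new_best[t], pick[t] = value, x
            best = new_best
            choice.append(pick)

        # Walk back through the stored choices to recover the optimal allocation.
        remaining, allocation = budget, []
        for pick in reversed(choice):
            allocation.append(pick[remaining])
            remaining -= allocation[-1]
        allocation.reverse()
        print("periods per component:", allocation,
              "expected faults found:", round(best[budget], 1))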

    Statistical procedures for certification of software systems


    A New Stochastic Model for Systems Under General Repairs

    Numerous stochastic models for repairable systems have been developed by assuming different time trends and repair effects. In this paper, a new general repair model based on the repair history is presented. Unlike the existing models, the closed-form solutions of the reliability metrics can be derived analytically by solving a set of differential equations. Consequently, the confidence bounds of these metrics can be easily estimated. The proposed model, as well as the estimation approach, overcomes the drawbacks of the existing models. The practical use of the proposed model is demonstrated on a much-discussed set of data. Compared to the existing models, the new model is convenient and provides accurate estimation results.
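
    The paper's specific model is not described in enough detail here to reproduce, so the sketch below instead illustrates the general-repair idea it builds on, using a standard Kijima Type-I virtual-age model with a Weibull baseline; the parameters and the repair-effectiveness value are assumptions for illustration.

        import numpy as np

        def simulate_virtual_age(beta, eta, q, horizon, rng):
            """Simulate one failure history under a Kijima Type-I general repair model:
            Weibull(beta, eta) baseline, each repair removing a fraction (1 - q) of the
            age accumulated since the previous failure."""
            t, v, failures = 0.0, 0.0, []
            while True:
                u = rng.random()
                # Inverse-transform sample of the next inter-failure time,
                # conditional on the current virtual age v.
                x = (v**beta - eta**beta * np.log(u)) ** (1.0 / beta) - v
                t += x
                if t > horizon:
                    return failures
                failures.append(t)
                v += q * x   # Kijima Type-I virtual-age update

        rng = np.random.default_rng(1)
        # Hypothetical parameters: ageing Weibull baseline, imperfect repair (q = 0.5).
        histories = [simulate_virtual_age(beta=2.0, eta=100.0, q=0.5, horizon=500.0, rng=rng)
                     for _ in range(2000)]
        mean_failures = np.mean([len(h) for h in histories])
        print("mean number of failures over the horizon:", round(mean_failures, 2))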

    Techniques for the Fast Simulation of Models of Highly dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
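
    A toy example of the failure-biasing flavour of importance sampling described above (not taken from the paper): a small machine-repair model in which total failure is a rare event, estimated first by ordinary simulation of the embedded chain and then with biased failure probabilities and likelihood-ratio weights. All rates and sizes are assumptions for illustration.

        import random

        # Hypothetical parameters: 3 identical components, rare failures, fast repair.
        N_COMP, LAM, MU = 3, 1e-4, 1.0
        N_RUNS = 100_000

        def p_fail(i):
            """Embedded-chain probability that the next event in state i (i failed
            components) is another failure rather than a repair completion."""
            rate_fail = (N_COMP - i) * LAM
            return rate_fail / (rate_fail + MU)

        def one_run(biased):
            """Return the (weighted) indicator of reaching total failure before
            returning to the all-up state, starting just after the first failure."""
            i, weight = 1, 1.0
            while 0 < i < N_COMP:
                p = p_fail(i)
                q = 0.5 if biased else p          # failure biasing: push failures to 1/2
                if random.random() < q:
                    weight *= p / q               # likelihood ratio for a failure step
                    i += 1
                else:
                    weight *= (1 - p) / (1 - q)   # likelihood ratio for a repair step
                    i -= 1
            return weight if i == N_COMP else 0.0

        random.seed(0)
        for biased in (False, True):
            samples = [one_run(biased) for _ in range(N_RUNS)]
            mean = sum(samples) / N_RUNS
            var = sum((s - mean) ** 2 for s in samples) / (N_RUNS - 1)
            print("biased" if biased else "naive ", "estimate:", mean,
                  "rel. error:", (var / N_RUNS) ** 0.5 / mean if mean else float("inf"))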

    Non-Stationary Random Process for Large-Scale Failure and Recovery of Power Distributions

    A key objective of the smart grid is to improve the reliability of utility services to end users. This requires strengthening the resilience of distribution networks that lie at the edge of the grid. However, distribution networks are exposed to external disturbances such as hurricanes and snow storms, where electricity service to customers is disrupted repeatedly. External disturbances cause large-scale power failures that are neither well understood, nor formulated rigorously, nor studied systematically. This work studies the resilience of power distribution networks to large-scale disturbances in three aspects. First, a non-stationary random process is derived to characterize an entire life cycle of large-scale failure and recovery. Second, resilience is defined based on the non-stationary random process. Closed-form analytical expressions are derived under specific large-scale failure scenarios. Third, the non-stationary model and the resilience metric are applied to a real-life example of large-scale disruptions due to Hurricane Ike. Real data on large-scale failures from an operational network is used to learn time-varying model parameters and resilience metrics.
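
    As a rough illustration of a non-stationary failure process of this kind (not the paper's fitted model), the sketch below simulates outage arrival times from a non-homogeneous Poisson process whose intensity rises and falls as a storm passes, using standard Lewis-Shedler thinning; the bell-shaped intensity and all numbers are assumptions for illustration.

        import math
        import random

        def intensity(t):
            """Hypothetical time-varying failure rate (failures/hour): ramps up as the
            storm arrives around t = 24 h and subsides afterwards."""
            return 50.0 * math.exp(-((t - 24.0) / 8.0) ** 2)

        def thinning(lambda_fn, lambda_max, horizon, rng):
            """Lewis-Shedler thinning: sample event times of a non-homogeneous Poisson
            process with intensity lambda_fn, bounded above by lambda_max."""
            t, events = 0.0, []
            while True:
                t += rng.expovariate(lambda_max)   # candidate arrival from the bounding HPP
                if t > horizon:
                    return events
                if rng.random() < lambda_fn(t) / lambda_max:
                    events.append(t)               # accept with probability lambda(t)/lambda_max

        rng = random.Random(42)
        failures = thinning(intensity, lambda_max=50.0, horizon=72.0, rng=rng)
        print(len(failures), "simulated outages over 72 h; first few times:",
              [round(t, 1) for t in failures[:5]])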

    Comparison Of Statistical Failure Models To Support Sewer System Operation

    Currently, achieving appropriate operational performance of water infrastructure has become a high priority in urbanized areas. In particular, providing reliable sewerage service is central to human well-being and development (Kleidorfer et al., 2013). Given that wastewater system management is an increasingly complex task due to a number of hardly predictable factors (e.g., deterioration of system components and climate variability), recent research efforts have focused on developing methods to identify optimal proactive rehabilitation and maintenance strategies, some of which are based on identifying the sewerage structures most in need of attention. To meet this goal, different failure forecasting models for urban water infrastructure have recently been developed; these models assess the future behavior of water supply and sewer system structures. This study presents a comparison of two statistical failure-modelling packages for urban water systems: (a) the FAIL software, which calculates failure predictions based on two alternative stochastic processes, the single-variate Poisson process and the Linear Extended Yule Process (LEYP) (see Martins et al., 2013), and (b) the SIMA software, which, through a series of statistical tests, selects a failure model based either on a homogeneous Poisson process (HPP), a renewal process, or a non-homogeneous Poisson process (NHPP) that allows changes of trend in the failure intensity (see Rodríguez et al., 2012). These models are applied to two contrasting urban wastewater systems: Bogotá (Colombia, 7.5 million inhabitants) and Oeiras e Amadora (Portugal, 10,000 inhabitants). Customer complaints and failure databases were gathered in order to analyze two different types of sewer failures, namely sediment-related blockages and structural failures. Multiple analyses are carried out to assess the impact of sewer system characteristics, system complexity, spatial resolution, and data availability on the models' forecasting efficiency.
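
    To give a feel for how such packages separate trended from trend-free failure behaviour (the LEYP and renewal options are not covered here), the sketch below fits a homogeneous Poisson process and a power-law NHPP to a small set of failure times by maximum likelihood; the data and parameter values are assumptions for illustration, not output of FAIL or SIMA.

        import math

        # Hypothetical sewer-blockage times (days since the start of observation).
        times = [30, 95, 160, 210, 255, 290, 320, 345, 365, 380]
        T = 400.0  # end of the observation window (time-truncated data)
        n = len(times)

        # Homogeneous Poisson process: constant failure intensity.
        rate_hpp = n / T
        loglik_hpp = n * math.log(rate_hpp) - rate_hpp * T

        # Power-law NHPP, m(t) = lam * t**beta, standard time-truncated MLEs.
        beta = n / sum(math.log(T / t) for t in times)
        lam = n / T ** beta
        loglik_nhpp = (n * math.log(lam * beta)
                       + (beta - 1) * sum(math.log(t) for t in times)
                       - lam * T ** beta)

        print(f"HPP  rate = {rate_hpp:.4f}/day, log-likelihood = {loglik_hpp:.2f}")
        print(f"NHPP beta = {beta:.2f} (beta > 1 suggests increasing intensity), "
              f"log-likelihood = {loglik_nhpp:.2f}")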