
    On a method for mending time to failure distributions

    Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite-failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite-failures and infinite-failures NHPP models.
    Keywords: software reliability growth model, non-homogeneous Poisson process, defective distribution, (mean) time to failure, model unification
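
    As a concrete illustration (not taken from the paper), consider the Goel-Okumoto finite-failures NHPP with mean value function m(t) = a(1 - exp(-b t)). The time-to-first-failure distribution F(t) = 1 - exp(-m(t)) is defective: it tends to 1 - exp(-a) < 1, leaving probability mass exp(-a) on "no failure ever occurs", so the mean time to failure does not exist. A minimal Python sketch with illustrative parameters:

        import numpy as np

        # Goel-Okumoto finite-failures NHPP; a and b are illustrative values.
        a, b = 3.0, 0.05

        def m(t):
            """Expected cumulative number of failures by time t."""
            return a * (1.0 - np.exp(-b * t))

        def F_first_failure(t):
            """CDF of the time to first failure: P(T <= t) = 1 - exp(-m(t))."""
            return 1.0 - np.exp(-m(t))

        # The distribution is defective: F(t) -> 1 - exp(-a) < 1 as t -> infinity,
        # so mass exp(-a) is assigned to "no failure ever occurs" and the
        # mean time to failure is undefined.
        print(F_first_failure(np.inf))  # ~0.9502 = 1 - exp(-3)
        print(np.exp(-a))               # ~0.0498, the defect at t = infinity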

    Optimal test case selection for multi-component software system

    The omnipresence of software has forced the industry to produce efficient software in a short time. These requirements can be met by code reusability and software testing. Code reusability is achieved by developing software as components/modules rather than as a single block. Software coding teams are growing large to satisfy massive requirements, and large teams can work more easily if software is developed in a modular fashion. It would be pointless to have software that crashes often; testing makes the software more reliable. Modularity and reliability are the need of the day. Testing is usually carried out using test cases that target a class of software faults or a specific module, and different test cases have distinct effects on the reliability of the software system. The proposed research develops a model to determine the optimal test case selection policy for a modular software system with specific test cases in a stipulated testing time. The model captures the failure behavior of each component using a conditional NHPP (non-homogeneous Poisson process) and the interactions of the components by a CTMC (continuous-time Markov chain). The initial number of bugs and the bug detection rate follow known distributions. Dynamic programming is used as a tool in determining the optimal test case policy. The complete model is simulated using Matlab. The Markov decision process is computationally intensive, but the implementation of the algorithm is meticulously optimized to eliminate repeat calculations, saving roughly 25-40% in processing time for different variations of the problem.
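
    The memoization idea generalizes well. Below is a toy finite-horizon dynamic program for choosing which component-targeted test case to run each period; the detection rates, initial expected bug counts, and horizon are invented stand-ins rather than the paper's model, and caching subproblem results is what eliminates the repeat calculations:

        from functools import lru_cache

        # Toy finite-horizon dynamic program for test-case policy selection.
        # Detection rates, initial expected bug counts, and the horizon are
        # illustrative stand-ins, not the paper's actual model.
        DETECTION_RATE = (0.30, 0.15, 0.20)  # per-period detection rate per test case
        INITIAL_BUGS = (8.0, 12.0, 5.0)      # expected initial bugs per component
        HORIZON = 10                         # testing periods available

        @lru_cache(maxsize=None)             # memoization eliminates repeat calculations
        def best_value(period, bugs):
            """Max expected bugs detected from `period` onward, and the best test case."""
            if period == HORIZON:
                return 0.0, None
            best_val, best_case = -1.0, None
            for c, rate in enumerate(DETECTION_RATE):
                detected = rate * bugs[c]    # expected detections this period
                remaining = bugs[:c] + (bugs[c] - detected,) + bugs[c + 1:]
                future, _ = best_value(period + 1, remaining)
                if detected + future > best_val:
                    best_val, best_case = detected + future, c
            return best_val, best_case

        value, first_case = best_value(0, INITIAL_BUGS)
        print(f"expected detections: {value:.2f}; first period runs test case {first_case}")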

    Analysis of costs and delivery intervals for multiple-release software

    Project managers of large software projects, particularly those associated with Internet Business-to-Business (B2B) or Business-to-Customer (B2C) applications, are under pressure to capture market share by delivering reliable software under cost and timing constraints. An earlier delivery time may help E-commerce software capture a larger market share, but early delivery sometimes means lower quality. In addition, the scale of the software is often so large that incremental multiple releases are warranted. A Multiple-Release methodology has been developed to determine the most efficient and effective delivery intervals of the various releases of software products, taking into consideration software costs and reliability. The Multiple-Release methodology extends existing software cost and reliability models, meets the needs of large software development firms, and gives software industry managers a navigation guide. The main decision factors for the multiple releases include the delivery interval of each release, the market value of the features in the release, and the software costs associated with testing and error penalties. The input of these factors was assessed using Design of Experiments (DOE). The costs included in the research are based on a salary survey of software staff at companies in the New Jersey area and on budgets of software development teams. The Activity Based Cost (ABC) method was used to determine costs on the basis of job functions associated with the development of the software. It is assumed that the error data behavior follows a non-homogeneous Poisson process (NHPP).
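
    To illustrate the kind of trade-off involved (for a single release, with made-up coefficients rather than the paper's surveyed costs), a delivery time T balances testing cost, penalties for errors still remaining under a Goel-Okumoto NHPP, and a market value that decays with delay:

        import numpy as np

        # Illustrative single-release trade-off under a Goel-Okumoto NHPP.
        # All coefficients are assumptions for this sketch, not figures from
        # the paper's salary survey or Activity Based Cost analysis.
        a, b = 150.0, 0.08       # expected total errors, error detection rate
        c_test = 40.0            # testing cost per unit time
        c_penalty = 300.0        # penalty per error remaining at delivery
        v0, d = 25_000.0, 120.0  # market value of the release, decay per unit delay

        def expected_remaining(T):
            """Errors expected to remain if the release ships at time T."""
            return a * np.exp(-b * T)

        def net_cost(T):
            """Testing cost plus error penalties, minus the decaying market value."""
            return c_test * T + c_penalty * expected_remaining(T) - (v0 - d * T)

        T_grid = np.linspace(0.0, 100.0, 1001)
        T_best = T_grid[np.argmin(net_cost(T_grid))]
        print(f"delivery time minimizing net cost: {T_best:.1f}")  # ~38.9 here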

    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in computer network performability analysis, power grids, computer virus vulnerability, and the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies. (Comment: accepted for publication in IEEE Transactions on Reliability.)
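
    For context, the quantities the paper bounds by abstraction can be computed exactly on a small explicit CTMC via uniformization; the 3-state generator, reward rates, and horizon below are made-up examples, and the symblicit machinery itself is not reproduced here:

        import numpy as np
        from scipy.stats import poisson

        # Uniformization on a small explicit CTMC: computes the expected final
        # and accumulated rewards exactly. Generator, rewards, and horizon are
        # made-up examples.
        Q = np.array([[-2.0,  1.5,  0.5],
                      [ 1.0, -3.0,  2.0],
                      [ 0.0,  4.0, -4.0]])  # CTMC generator (rows sum to 0)
        r = np.array([0.0, 1.0, 5.0])       # reward rate of each state
        pi0 = np.array([1.0, 0.0, 0.0])     # initial distribution
        t = 2.0                             # time horizon

        lam = max(-Q.diagonal())            # uniformization rate
        P = np.eye(len(r)) + Q / lam        # uniformized DTMC

        K = int(poisson.ppf(1.0 - 1e-9, lam * t)) + 10  # series truncation depth
        final = accumulated = 0.0
        dist = pi0.copy()
        for k in range(K + 1):
            w_final = poisson.pmf(k, lam * t)                # weight for E[r(X_t)]
            w_accum = (1.0 - poisson.cdf(k, lam * t)) / lam  # weight for the integral
            final += w_final * (dist @ r)
            accumulated += w_accum * (dist @ r)
            dist = dist @ P                                  # advance one jump

        print(f"E[r(X_t)] = {final:.4f}")
        print(f"E[integral of r over [0,t]] = {accumulated:.4f}")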

    Software Reliability Modeling


    Methodologies synthesis

    This deliverable deals with the modelling and analysis of interdependencies between critical infrastructures, focussing attention on two interdependent infrastructures studied in the context of CRUTIAL: the electric power infrastructure and the information infrastructures supporting management, control, and maintenance functionality. The main objectives are: 1) to investigate the main challenges to be addressed for the analysis and modelling of interdependencies, 2) to review the modelling methodologies and tools that can be used to address these challenges and support the evaluation of the impact of interdependencies on the dependability and resilience of the service delivered to the users, and 3) to present the preliminary directions investigated so far by the CRUTIAL consortium for describing and modelling interdependencies.

    Reliability models and analyses of the computing systems

    Ph.D. thesis (Doctor of Philosophy).

    Online failure prediction in air traffic control systems

    This thesis introduces a novel approach to online failure prediction for mission-critical distributed systems whose distinctive features are that it is black-box, non-intrusive, and online. The approach combines Complex Event Processing (CEP) and Hidden Markov Models (HMM) to analyze symptoms of failures that manifest as anomalous conditions of performance metrics identified for this purpose. The thesis presents an architecture named CASPER, based on CEP and HMM, that relies only on information sniffed from the communication network of a mission-critical system to predict anomalies that can lead to software failures. An instance of CASPER has been implemented, trained, and tuned to monitor a real Air Traffic Control (ATC) system developed by Selex ES, a Finmeccanica company. An extensive experimental evaluation of CASPER is presented. The obtained results show (i) a very low percentage of false positives under both normal and stress conditions, and (ii) a failure prediction lead time long enough for the system to apply appropriate recovery procedures.
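
    To sketch the HMM side of such a pipeline (the hidden states, matrices, and symptom alphabet below are invented for illustration; CASPER's actual models are trained on sniffed network metrics of the ATC system), a forward filter can track the posterior probability that the system is in a pre-failure state as discretized symptoms arrive:

        import numpy as np

        # Minimal HMM forward filter in the spirit of CASPER's symptom analysis.
        # Hidden states, matrices, and the symptom alphabet are invented here.
        SYMPTOM = {"normal": 0, "slow": 1, "loss": 2}  # discretized observations

        A = np.array([[0.95, 0.05],   # transitions: safe -> {safe, pre-failure}
                      [0.10, 0.90]])  #              pre-failure -> {safe, pre-failure}
        B = np.array([[0.80, 0.15, 0.05],   # P(symptom | safe)
                      [0.20, 0.45, 0.35]])  # P(symptom | pre-failure)
        pi = np.array([0.99, 0.01])   # initial belief: almost surely safe

        def filter_belief(symptoms):
            """Forward algorithm: posterior over hidden states after each symptom."""
            belief = pi.copy()
            for s in symptoms:
                belief = (belief @ A) * B[:, SYMPTOM[s]]
                belief /= belief.sum()  # renormalize to a probability vector
            return belief

        window = ["normal", "slow", "slow", "loss", "loss"]
        belief = filter_belief(window)
        if belief[1] > 0.5:  # illustrative alarm threshold
            print(f"failure predicted (P = {belief[1]:.2f}): trigger recovery")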