
    On the Effect of Random Alternating Perturbations on Hazard Rates

    We consider a model for systems perturbed by dichotomous noise, in which the hazard rate function of a random lifetime is subject to additive time-alternating perturbations described by the telegraph process. This leads us to define a real-valued continuous-time stochastic process of alternating type, expressed in terms of the integrated telegraph process, for which we obtain the probability distribution, mean, and variance. An application to survival analysis and reliability data sets, based on confidence bands for estimated hazard rate functions, is also provided. Comment: 14 pages, 6 figures
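The telegraph process underlying the perturbation model is easy to simulate. The sketch below is a minimal illustration, not the paper's model: it approximates the integrated telegraph process on a fixed time grid, with illustrative amplitude and switching-rate values, and only checks that the symmetric two-state structure yields a near-zero mean.

```python
import random

def integrated_telegraph(a=1.0, rate=2.0, t_max=5.0, dt=0.01, rng=random):
    """One sample of the integrated telegraph process: the signal
    alternates between +a and -a at Exp(rate) switching times, and
    its time integral is accumulated on a grid of step dt."""
    sign = rng.choice([1.0, -1.0])        # symmetric random initial state
    next_switch = rng.expovariate(rate)   # time of the first switch
    t, integral = 0.0, 0.0
    while t < t_max:
        while t >= next_switch:           # process any switches due by time t
            sign = -sign
            next_switch += rng.expovariate(rate)
        integral += sign * a * dt
        t += dt
    return integral

random.seed(0)
samples = [integrated_telegraph() for _ in range(2000)]
mean = sum(samples) / len(samples)        # close to 0 by symmetry
```

Because the initial state is chosen symmetrically, the process is centered at every time, so the Monte Carlo mean of the integral should be near zero.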

    Estimating the Probability of a Rare Event Over a Finite Time Horizon

    We study an approximation for the zero-variance change of measure to estimate the probability of a rare event in a continuous-time Markov chain. The rare event occurs when the chain reaches a given set of states before some fixed time limit. The jump rates of the chain are expressed as functions of a rarity parameter in such a way that the probability of the rare event goes to zero as the rarity parameter goes to zero, and the behavior of our estimators is studied in this asymptotic regime. After giving a general expression for the zero-variance change of measure in this situation, we develop an approximation of it via a power series and show that this approximation provides a bounded relative error as the rarity parameter goes to zero. We illustrate the performance of our approximation on small numerical examples of highly reliable Markovian systems. We compare it to a previously proposed heuristic that combines forcing with balanced failure biasing. We also exhibit the exact zero-variance change of measure for these examples and compare it with these two approximations.
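The change-of-measure idea can be conveyed in the simplest possible setting, a single exponential failure time rather than the multi-state chains or the power-series approximation of the paper: sample the failure time under an inflated rate and reweight each hit by the likelihood ratio. All parameter values below are illustrative.

```python
import math
import random

def is_estimate(eps=1e-3, T=1.0, biased_rate=1.0, n=20000, rng=random):
    """Importance-sampling estimate of P(Exp(eps) < T): draw the
    failure time from the inflated rate biased_rate and reweight
    each sample that hits the event by the likelihood ratio of the
    true Exp(eps) density to the biased Exp(biased_rate) density."""
    total = 0.0
    for _ in range(n):
        t = rng.expovariate(biased_rate)
        if t < T:
            lr = (eps * math.exp(-eps * t)) / (biased_rate * math.exp(-biased_rate * t))
            total += lr
    return total / n

random.seed(1)
est = is_estimate()
exact = 1.0 - math.exp(-1e-3 * 1.0)   # analytic value for comparison
```

With the true rate eps = 0.001, crude Monte Carlo would see the event only about once per thousand runs; under the biased rate the event is common, and the likelihood-ratio weights keep the estimator unbiased.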

    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in reliability analysis, e.g., of computer network performability, power grids, and computer virus vulnerability, and in the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies. Comment: Accepted for publication in IEEE Transactions on Reliability
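The basic building block behind transient-reward quantities is the transient distribution of a CTMC. A standard way to compute it (not the paper's symblicit algorithm) is uniformization; the sketch below applies it to a hypothetical two-state repairable component with illustrative rates.

```python
import math

def transient_probs(Q, p0, t, tol=1e-10):
    """Transient distribution of a CTMC by uniformization:
    p(t) = sum_k Poisson(k; L*t) * (p0 @ P^k) with P = I + Q/L,
    where L is the largest exit rate. Q is a small dense generator
    matrix (rows sum to zero) and p0 is the initial distribution."""
    n = len(Q)
    L = max(-Q[i][i] for i in range(n))           # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / L for j in range(n)]
         for i in range(n)]
    v = list(p0)                                  # p0 @ P^k, updated in place
    weight = math.exp(-L * t)                     # Poisson(0; L*t)
    result = [weight * x for x in v]
    k, acc = 0, weight
    while acc < 1.0 - tol and k < 100000:         # stop once weights are spent
        k += 1
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        weight *= L * t / k
        acc += weight
        result = [r + weight * x for r, x in zip(result, v)]
    return result

# Hypothetical two-state repairable component: failure rate 0.5,
# repair rate 2.0; state 0 = up, state 1 = down.
Q = [[-0.5, 0.5],
     [2.0, -2.0]]
p_up = transient_probs(Q, [1.0, 0.0], t=1.0)[0]
```

For this two-state chain the result can be checked against the closed form p_up(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t); the dense-matrix approach here is exactly what large-scale methods such as the paper's must avoid.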

    A functional central limit theorem for interacting particle systems on transitive graphs

    A finite-range interacting particle system on a transitive graph is considered. Assuming that the dynamics and the initial measure are invariant, the normalized empirical distribution process converges in distribution to a centered diffusion process. As an application, a central limit theorem for certain hitting times, interpreted as failure times of a coherent system in reliability, is derived. Comment: 35 pages
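The hitting-time application can be illustrated with a toy coherent system, assumed here for illustration and not taken from the paper: n components with i.i.d. Exp(1) lifetimes, and a system that fails at the k-th component failure. The failure time is then the k-th order statistic, whose mean has a simple closed form against which a simulation can be checked.

```python
import random

def system_failure_time(n=100, k=50, rng=random):
    """Failure time of a toy coherent system: n components with
    independent Exp(1) lifetimes; the system fails when the k-th
    component fails, i.e., at the k-th order statistic."""
    lifetimes = sorted(rng.expovariate(1.0) for _ in range(n))
    return lifetimes[k - 1]

def exact_mean(n=100, k=50):
    """Exact mean of the k-th order statistic of n Exp(1) lifetimes:
    E[T] = sum_{i=0}^{k-1} 1 / (n - i) (memorylessness: the i-th
    spacing is Exp(n - i))."""
    return sum(1.0 / (n - i) for i in range(k))

random.seed(2)
times = [system_failure_time() for _ in range(3000)]
avg = sum(times) / len(times)
```

The sample mean of the simulated failure times should agree with the closed form; the paper's CLT describes the Gaussian fluctuations of such hitting times in the genuinely interacting case.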

    Better than their reputation - A case for mail surveys in contingent valuation

    Though contingent valuation is the dominant technique for the valuation of public projects, especially in the environmental sector, the high costs of contingent valuation surveys prevent the use of this method for the assessment of relatively small projects. The reason for this cost problem is that typically only contingent valuation studies based on face-to-face interviews are accepted as leading to valid results. Especially in countries with high wages, face-to-face surveys are extremely costly, considering that a valid contingent valuation study requires a minimum of 1,000 completed face-to-face interviews. In this paper we attempt to rehabilitate mail surveys as low-budget substitutes for costly face-to-face surveys. Based on an empirical contingent valuation study in Northern Thailand, we show that the validity of mail surveys can be improved significantly if so-called citizen expert groups are employed for a thorough survey design. Keywords: contingent valuation; environmental valuation; equity

    Closed-form solutions of performability

    Methods that yield closed-form performability solutions for continuous-valued variables are developed. The models are similar to those employed in performance modeling (i.e., Markovian queueing models) but are extended so as to account for variations in structure due to faults. In particular, we consider the modeling of a degradable buffer/multiprocessor system whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. To avoid known difficulties associated with exact transient solutions, an approximate decomposition of the model is employed, permitting certain submodels to be solved in equilibrium. These solutions are then incorporated in a model with fewer transient states, and by solving the latter, a closed-form solution of the system's performability is obtained. In conclusion, some applications of this solution are discussed and illustrated, including an example of design optimization.
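A hypothetical instance of a submodel solved in equilibrium is a finite birth-death queue. The sketch below computes the steady-state distribution of an M/M/1/K queue and a normalized throughput from it; the model and parameters are illustrative, not the paper's buffer/multiprocessor system.

```python
def mm1k_steady_state(lam, mu, K):
    """Equilibrium distribution of an M/M/1/K queue: the number in
    system is a birth-death chain on {0, ..., K} with arrival rate
    lam and service rate mu, so detailed balance gives
    pi_i proportional to (lam/mu)**i."""
    rho = lam / mu
    weights = [rho ** i for i in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def normalized_throughput(lam, mu, K):
    """Accepted-arrival rate lam * (1 - P(queue full)), normalized
    by the offered rate lam, so the result lies in (0, 1]."""
    pi = mm1k_steady_state(lam, mu, K)
    return 1.0 - pi[K]
```

For lam = 1, mu = 2, K = 2 the weights are [1, 1/2, 1/4], so the blocking probability is 1/7 and the normalized throughput is 6/7. In the decomposition described above, such equilibrium quantities would feed into the smaller transient model.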

    Exploiting Data Representation for Fault Tolerance

    We explore the link between data representation and soft errors in dot products. We present an analytic model for the absolute error introduced should a soft error corrupt a bit in an IEEE-754 floating-point number. We show how this finding relates to the fundamental linear algebra concepts of normalization and matrix equilibration. We present a case study illustrating that the probability of experiencing a large error in a dot product is minimized when both vectors are normalized. Furthermore, when data are normalized, we show that the absolute error is either less than one or very large, which allows us to detect large errors. We demonstrate how this finding can be used by instrumenting the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase and show that, when scaling is used, the absolute error can be bounded above by one.
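The effect of a single bit flip on an IEEE-754 double can be reproduced directly; this is a generic illustration of the phenomenon, not the paper's analytic model. Flipping a fraction bit of a value with magnitude at most one perturbs it only slightly, while flipping a high exponent bit produces an enormous, easily detected error.

```python
import struct

def flip_bit(x, bit):
    """Flip one bit (0 = least significant fraction bit, 63 = sign)
    in the IEEE-754 binary64 representation of x and return the
    resulting float."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << bit)))
    return y

# For x = 0.75 (a normalized value with |x| <= 1), a flip in a low
# fraction bit changes the value by roughly 2**-33, while a flip in
# the top exponent bit (bit 62) scales it by about 2**1024.
x = 0.75
small = abs(flip_bit(x, 20) - x)   # fraction bit: tiny perturbation
huge = abs(flip_bit(x, 62) - x)    # high exponent bit: enormous error
```

This is the asymmetry the abstract exploits: with normalized data, any corruption is either negligible or so large that a cheap magnitude check flags it.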