
    The survival signature for quantifying system reliability: an introductory overview from practical perspective

    The structure function describes the functioning of a system in terms of the states of its components, and is central to the theory of system reliability. The survival signature is a summary of the structure function which is sufficient to derive the system's reliability function. Since its introduction in 2012, the survival signature has received much attention in the literature, with developments on theory, computation and generalizations. This paper presents an introductory overview of the survival signature, including some recent developments. We discuss challenges for the practical use of survival signatures for large systems.
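    To make the definitions concrete, here is a minimal sketch (my own illustration, not from the paper) for a hypothetical four-component system of a single exchangeable type: the survival signature is obtained by averaging the structure function over all state vectors with a given number of working components, and it then yields the system reliability for any common component reliability.

```python
# Minimal sketch: survival signature of an assumed 4-component system
# with structure (1 AND 2) OR (3 AND 4), all components of one type.
from itertools import combinations
from math import comb

M = 4  # number of exchangeable components

def structure_function(working):
    """Returns True if the system works given the set of working components."""
    return ({1, 2} <= working) or ({3, 4} <= working)

def survival_signature(l):
    """Phi(l): probability the system works given exactly l components work,
    i.e. the structure function averaged over all size-l subsets."""
    subsets = list(combinations(range(1, M + 1), l))
    return sum(structure_function(set(s)) for s in subsets) / len(subsets)

def system_reliability(p):
    """P(system works) when each component works independently with probability p."""
    return sum(survival_signature(l) * comb(M, l) * p**l * (1 - p)**(M - l)
               for l in range(M + 1))

for l in range(M + 1):
    print(f"Phi({l}) = {survival_signature(l):.3f}")
print("R(p=0.9) =", round(system_reliability(0.9), 4))
```

    For this structure the signature is (0, 0, 1/3, 1, 1), and the reliability at p = 0.9 agrees with the direct calculation 2(0.81) - 0.81^2 = 0.9639.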

    Minimal repair of failed components in coherent systems

    Minimal repair is a reasonable replacement assumption in many practical systems. Under this assumption a failed component is replaced by another one whose reliability is the same as that of the component just before the failure, i.e., a used component of the same age. In this paper we study minimal repair in coherent systems. We consider both the cases of independent and dependent components. Three replacement policies are studied. In the first one, the first failed component in the system is minimally repaired, while in the second one we repair the component which causes the system failure. A new technique based on the relevation transform is used to compute the reliability of the systems obtained under these replacement policies. In the third case, we consider the replacement policy which assigns the minimal repair to a fixed component in the system. We compare these three options under different stochastic criteria and for different system structures. In particular, we provide the optimal strategy for all coherent systems with 1-4 independent and identically distributed components.
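    As a rough illustration of what a minimal repair does, the following Monte Carlo sketch (my own example with assumed Weibull lifetimes and a 2-out-of-3 structure, not the relevation-transform calculations of the paper) compares the mean system lifetime without repair against the first policy above, a single minimal repair of the first failed component.

```python
# Monte Carlo sketch: effect of minimally repairing the first failed component
# in an assumed 2-out-of-3 system with Weibull(shape=2, scale=1) lifetimes.
import math
import random

SHAPE, SCALE = 2.0, 1.0  # assumed Weibull parameters for illustration

def weibull():
    """Draw a Weibull lifetime via (T/scale)^shape ~ Exp(1)."""
    return SCALE * random.expovariate(1.0) ** (1.0 / SHAPE)

def minimal_repair(x):
    """Lifetime after a minimal repair at age x: the repaired component
    behaves like a used component of the same age (hazard continues)."""
    return SCALE * ((x / SCALE) ** SHAPE + random.expovariate(1.0)) ** (1.0 / SHAPE)

def system_lifetime(times):
    """2-out-of-3 system: fails at the second component failure."""
    return sorted(times)[1]

def mean_lifetime(repair_first, n=100_000):
    total = 0.0
    for _ in range(n):
        t = [weibull() for _ in range(3)]
        if repair_first:
            i = min(range(3), key=lambda j: t[j])  # first failure
            t[i] = minimal_repair(t[i])            # minimally repair it
        total += system_lifetime(t)
    return total / n

print("mean lifetime, no repair     :", round(mean_lifetime(False), 4))
print("mean lifetime, minimal repair:", round(mean_lifetime(True), 4))
```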

    System reliability when components can be swapped upon failure

    Resilience of systems to failures during functioning is of great practical importance. One strategy that might be considered to enhance the reliability and resilience of a system is swapping components when a component fails, thus replacing it by another component from the system that is still functioning. This thesis studies this scenario, particularly with the use of the survival signature concept to quantify system reliability, where it is assumed that such a swap requires the components involved to be of the same type. We examine the effect of swapping components on a reliability importance measure for specific components, and we also consider the joint reliability importance of two components. Such swapping of components may be an attractive means toward more resilient systems and could be an alternative to adding more components to achieve redundancy, or to repair and replacement activities. Swapping components, if possible, is likely to incur some costs, for example for the actual swap or to prepare components to be able to take over the functionality of another component. In this thesis we also consider the cost effectiveness of component swapping over a fixed period of time. It is assumed that a system needs to function for a given period of time, where failure to achieve this incurs a penalty cost. The expected costs when the different swap scenarios are applicable are compared with the option not to enable swaps. We also study the cost effectiveness of component swapping over an unlimited time horizon from the perspective of renewal theory. We assume that the system is entirely renewed upon failure, at a known cost, and we compare different swapping scenarios. The effect of component swapping on preventive replacement actions is also considered. In addition, we extend the approach of component swapping and its cost effectiveness analysis to phased mission systems. We consider two scenarios of swapping possibilities, namely that components can be swapped at any time during the mission or only at transitions between phases.
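    A toy Monte Carlo sketch of the basic swapping idea (my own example, not the thesis code): in an assumed series-parallel system A AND (B1 OR B2), with all three components of one type, a still-working B component can be swapped into position A when the component in A fails, and this visibly raises the mean system lifetime.

```python
# Monte Carlo sketch: mean lifetime of A AND (B1 OR B2) with and without
# swapping a working B component into position A upon A's failure.
# Assumed iid exponential(1) component lifetimes, instantaneous swaps.
import random

def lifetime_no_swap(ta, tb1, tb2):
    return min(ta, max(tb1, tb2))

def lifetime_with_swap(ta, tb1, tb2):
    if ta < min(tb1, tb2):
        # A fails first while both B components work: move one B into
        # position A; the system then needs both remaining components.
        return min(tb1, tb2)
    return min(ta, max(tb1, tb2))

def mean_lifetime(fn, n=100_000):
    total = 0.0
    for _ in range(n):
        t = [random.expovariate(1.0) for _ in range(3)]
        total += fn(*t)
    return total / n

print("no swap  :", round(mean_lifetime(lifetime_no_swap), 4))   # approx 0.667
print("with swap:", round(mean_lifetime(lifetime_with_swap), 4))  # approx 0.833
```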

    An efficient algorithm for computing exact system and survival signatures of K-terminal network reliability

    An efficient algorithm is presented for computing exact system and survival signatures of K-terminal reliability in undirected networks with unreliable edges. K-terminal reliability is defined as the probability that a subset K of the network nodes can communicate with each other. Signatures have several advantages over direct reliability calculation, such as enabling certain stochastic comparisons of reliability between competing network topology designs, extremely fast repeat computation of network reliability for different edge reliabilities, and computation of network reliability when failures of edges are exchangeable but not independent. Existing methods for computing signatures of K-terminal network reliability require the derivation of cut-sets or path-sets, which is only feasible for small networks due to the computational expense. The new algorithm utilises binary decision diagrams, boundary set partition sets and simple array operations to efficiently compute signatures through a factorisation of the network edges. The performance and advantages of the algorithm are demonstrated through application to a set of benchmark networks and a sensor network from an underground mine.
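    To show what is being computed (not how the paper's BDD-based algorithm computes it), here is a brute-force sketch for an assumed five-edge bridge network: the signature entry Phi(l) is the fraction of size-l edge subsets under which the terminals can still communicate, and the signature then gives the K-terminal reliability for any common edge reliability.

```python
# Brute-force K-terminal survival signature for a tiny assumed network
# (feasible only for very small networks; the paper's algorithm scales far better).
from itertools import combinations
from math import comb

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # assumed bridge topology
K = {0, 3}                                        # terminal nodes

def k_connected(working_edges):
    """True if all terminals in K lie in one connected component (union-find)."""
    parent = list(range(4))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in working_edges:
        parent[find(u)] = find(v)
    return len({find(t) for t in K}) == 1

m = len(edges)
signature = []
for l in range(m + 1):
    subsets = list(combinations(edges, l))
    signature.append(sum(k_connected(s) for s in subsets) / len(subsets))
    print(f"Phi({l}) = {signature[l]:.3f}")

p = 0.9  # common edge reliability
rel = sum(signature[l] * comb(m, l) * p**l * (1 - p)**(m - l) for l in range(m + 1))
print("K-terminal reliability at p=0.9:", round(rel, 4))
```

    Once the signature is stored, the final line can be re-evaluated instantly for any other edge reliability, which is one of the advantages mentioned above.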

    A Bayesian framework for verification and recalibration of ensemble forecasts: How uncertain is NAO predictability?

    Predictability estimates of ensemble prediction systems are uncertain due to limited numbers of past forecasts and observations. To account for such uncertainty, this paper proposes a Bayesian inferential framework that provides a simple 6-parameter representation of ensemble forecasting systems and the corresponding observations. The framework is probabilistic, and thus allows for quantifying uncertainty in predictability measures such as correlation skill and signal-to-noise ratios. It also provides a natural way to produce recalibrated probabilistic predictions from uncalibrated ensemble forecasts. The framework is used to address important questions concerning the skill of winter hindcasts of the North Atlantic Oscillation for 1992-2011 issued by the Met Office GloSea5 climate prediction system. Although there is much uncertainty in the correlation between ensemble mean and observations, there is strong evidence of skill: the 95% credible interval of the correlation coefficient, [0.19, 0.68], does not overlap zero. There is also strong evidence that the forecasts are not exchangeable with the observations: with over 99% certainty, the signal-to-noise ratio of the forecasts is smaller than the signal-to-noise ratio of the observations, which suggests that raw forecasts should not be taken as representative scenarios of the observations. Forecast recalibration is thus required, which can be coherently addressed within the proposed framework.
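    A much-simplified sketch of the kind of inference involved (my own illustration, not the paper's 6-parameter model): with roughly 20 hindcast years, a grid-based Bayesian posterior for the correlation between ensemble mean and observations alone, under an assumed standardized bivariate-normal model and a flat prior, already shows how wide a credible interval such a short record implies.

```python
# Grid-based posterior for the correlation between ensemble-mean hindcasts and
# observations, assuming standardized bivariate-normal pairs and a flat prior.
# Synthetic data stand in for a ~20-year hindcast record.
import numpy as np

rng = np.random.default_rng(1)

true_rho, n_years = 0.5, 20
cov = [[1.0, true_rho], [true_rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_years).T

rho = np.linspace(-0.99, 0.99, 991)
loglik = np.array([
    np.sum(-0.5 * np.log(1 - r**2)
           - (x**2 - 2 * r * x * y + y**2) / (2 * (1 - r**2)))
    for r in rho
])
post = np.exp(loglik - loglik.max())
post /= post.sum()

cdf = np.cumsum(post)
lo, hi = rho[np.searchsorted(cdf, 0.025)], rho[np.searchsorted(cdf, 0.975)]
print(f"posterior mean correlation : {np.sum(rho * post):.2f}")
print(f"95% credible interval      : [{lo:.2f}, {hi:.2f}]")
```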

    Variance Allocation and Shapley Value

    Motivated by the problem of utility allocation in a portfolio under a Markowitz mean-variance choice paradigm, we propose an allocation criterion for the variance of the sum of n possibly dependent random variables. This criterion, the Shapley value, requires translating the problem into a cooperative game. The Shapley value has nice properties, but in general it is computationally demanding. The main result of this paper shows that in our particular case the Shapley value has a very simple form that can be easily computed. The same criterion is also used to allocate the standard deviation of the sum of n random variables, and a conjecture about the relation between the values in the two games is formulated.
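    A quick numerical sketch of the variance game described above (my own example; consult the paper for its precise statement of the closed form): the brute-force Shapley value, averaged over all player orderings, is compared with the simple allocation phi_i = Cov(X_i, sum_j X_j), which is straightforward to verify for this game.

```python
# Shapley allocation of the variance of a sum: brute force vs. row sums
# of the covariance matrix (an arbitrary example covariance matrix).
from itertools import permutations
import numpy as np

cov = np.array([[4.0, 1.0, -0.5],
                [1.0, 2.0,  0.3],
                [-0.5, 0.3, 1.5]])
n = cov.shape[0]

def v(coalition):
    """Worth of a coalition: variance of the sum of its variables."""
    idx = list(coalition)
    return cov[np.ix_(idx, idx)].sum() if idx else 0.0

# Brute-force Shapley value: average marginal contribution over all orderings.
shapley = np.zeros(n)
orderings = list(permutations(range(n)))
for order in orderings:
    seen = []
    for i in order:
        shapley[i] += v(seen + [i]) - v(seen)
        seen.append(i)
shapley /= len(orderings)

closed_form = cov.sum(axis=1)  # Cov(X_i, sum_j X_j)
print("brute force :", shapley)
print("row sums    :", closed_form)
print("allocations sum to total variance:", np.isclose(shapley.sum(), cov.sum()))
```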

    Stochastic Precedence and Minima Among Dependent Variables

    The notion of stochastic precedence between two random variables emerges as a relevant concept in several fields of applied probability. When one considers a vector of random variables X_1, ..., X_n, this notion has a preeminent role in the analysis of minima of the type min_{j∈A} X_j for A ⊂ {1, ..., n}. In such an analysis, however, several apparently controversial aspects can arise (among which phenomena of "non-transitivity"). Here we concentrate attention on vectors of non-negative random variables with absolutely continuous joint distributions, in which case the set of multivariate conditional hazard rate (m.c.h.r.) functions can be employed as a convenient tool to describe different aspects of stochastic dependence. In terms of the m.c.h.r. functions, we first obtain convenient formulas for the probability distributions of the variables min_{j∈A} X_j and for the probabilities of the events {X_i = min_{j∈A} X_j}. Then we detail several aspects of the notion of stochastic precedence. On these bases, we explain some controversial behaviour of such variables and give sufficient conditions under which paradoxical aspects can be excluded. With the purpose of stimulating the active interest of readers, we present several comments and pertinent examples.
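    A minimal illustration of the non-transitivity mentioned above (my own example with independent variables, whereas the paper's focus is on dependent ones): taking X to stochastically precede Y when P(X <= Y) >= 1/2, the four classical Efron dice form a cycle of precedences, each with probability 2/3.

```python
# Exact check of a non-transitive cycle of stochastic precedence
# using independent "Efron dice" as the random variables.
from fractions import Fraction
from itertools import product

dice = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [3, 3, 3, 3, 3, 3],
    "C": [6, 6, 2, 2, 2, 2],
    "D": [5, 5, 5, 1, 1, 1],
}

def p_leq(x_faces, y_faces):
    """Exact P(X <= Y) for independent uniform rolls of the two dice."""
    hits = sum(x <= y for x, y in product(x_faces, y_faces))
    return Fraction(hits, len(x_faces) * len(y_faces))

# Each probability below is 2/3, so B precedes A, C precedes B,
# D precedes C and A precedes D: a cycle, hence no transitivity.
for x, y in [("B", "A"), ("C", "B"), ("D", "C"), ("A", "D")]:
    print(f"P({x} <= {y}) = {p_leq(dice[x], dice[y])}")
```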

    The Target-Based Utility Model. The role of Copulas and of Non-Additive Measures

    My studies and my Ph.D. thesis deal with topics that have recently emerged in the field of decisions under risk and uncertainty. In particular, I deal with the "target-based approach" to utility theory. A rich literature has been devoted in the last decade to this approach to economic decisions: originally, interest was focused on the "single-attribute" case and, more recently, extensions to the "multi-attribute" case have been studied. This literature is still growing, with a main focus on applied aspects. I focus attention, on the contrary, on some aspects of a theoretical type related to the multi-attribute case. Various mathematical concepts, such as non-additive measures, aggregation functions, multivariate probability distributions, and notions of stochastic dependence, emerge in the formulation and the analysis of target-based models. Notions in the field of non-additive measures and aggregation functions are quite common in the modern economic literature. They have been used to go beyond the classical principle of maximization of expected utility in decision theory. These notions are furthermore used in game theory and multi-criteria decision aid. In my work, on the contrary, I show how non-additive measures and aggregation functions emerge in a natural way in the frame of the target-based approach to classical utility theory when considering the multi-attribute case. Furthermore, they combine with the analysis of multivariate probability distributions and with concepts of stochastic dependence. The concept of copula also constitutes a very important tool for this work, mainly for two purposes. The first is linked to the analysis of target-based utilities; the other is the comparison between the classical stochastic order and the concept of "stochastic precedence". This topic finds applications in statistics as well as in the study of Markov models linked to waiting times for occurrences of words in random sampling of letters from an alphabet. In this work I give a generalization of the concept of stochastic precedence and discuss its properties on the basis of properties of the connecting copulas of the variables. Throughout this work I also trace connections to reliability theory, whose aim is to study the lifetime of a system through the analysis of the lifetimes of its components. The target-based model finds an application in representing the behavior of the whole system by means of the interaction of its components.
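    A small single-attribute sketch of the target-based view (my own illustration with assumed normal target and prospects, not the thesis code): with utility U(x) = P(T <= x) for a random target T, the expected utility of a prospect X independent of T equals the probability of meeting the target, and the cdf-shaped utility already induces a preference for a modest but reliable prospect over a riskier one with higher mean.

```python
# Target-based utility: E[U(X)] with U the cdf of the target T equals P(X >= T)
# when X and T are independent. Illustrated by Monte Carlo with normal variables.
import random
import statistics
from math import erf, sqrt

MU_T, SIGMA_T = 0.0, 1.0  # assumed target T ~ N(0, 1)

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_utility(sample_prospect, n=100_000):
    """Monte Carlo estimate of E[U(X)] with U(x) = P(T <= x)."""
    return statistics.fmean(normal_cdf(sample_prospect(), MU_T, SIGMA_T)
                            for _ in range(n))

def p_meet_target(sample_prospect, n=100_000):
    """Monte Carlo estimate of P(X >= T) with T independent of X."""
    return statistics.fmean(sample_prospect() >= random.gauss(MU_T, SIGMA_T)
                            for _ in range(n))

safe = lambda: random.gauss(0.3, 0.1)   # modest but reliable prospect
risky = lambda: random.gauss(0.5, 2.0)  # higher mean, much riskier

for name, prospect in [("safe", safe), ("risky", risky)]:
    print(name, round(expected_utility(prospect), 3), round(p_meet_target(prospect), 3))
```

    The two columns agree (up to Monte Carlo error), and the safe prospect scores higher despite its lower mean, which is the kind of risk attitude the target-based model encodes through the distribution of the target.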