
    Estimating Discrete Markov Models From Various Incomplete Data Schemes

    The parameters of a discrete stationary Markov model are the transition probabilities between states. Traditionally, the data consist of sequences of observed states for a given number of individuals over the whole observation period. In that case, the transition probabilities are estimated straightforwardly by counting one-step moves from one state to another. In many real-life problems, however, inference is much more difficult because the state sequences are not fully observed: the state of each individual is known only at certain values of the time variable. A review of the problem is given, focusing on Markov chain Monte Carlo (MCMC) algorithms to perform Bayesian inference and evaluate posterior distributions of the transition probabilities in this missing-data framework. Exploiting the dependence between the rows of the transition matrix, an adaptive MCMC mechanism that accelerates the classical Metropolis-Hastings algorithm is then proposed and studied empirically.

    Comment: 26 pages. Preprint accepted on 20 February 2012 for publication in Computational Statistics and Data Analysis (please cite the journal's paper).
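    To make the contrast concrete, the following minimal Python sketch handles the fully observed case the abstract starts from: transition counts combine with a Dirichlet prior on each row of the transition matrix to give a closed-form posterior, so MCMC only becomes necessary once states are partially observed. The toy sequences and prior value are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Bayesian inference for a discrete Markov chain, fully observed case.
# With a Dirichlet prior on each row, the posterior is conjugate, so it
# can be sampled directly; the paper's missing-data setting is what makes
# MCMC necessary. Toy data and hyperparameters are illustrative.
rng = np.random.default_rng(0)
n_states = 3
sequences = [[0, 1, 1, 2, 0, 0, 1], [2, 2, 1, 0, 1, 2]]  # observed state paths

# Count one-step moves from each state to each other state.
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1

alpha = 1.0  # symmetric Dirichlet prior hyperparameter (assumed)
# Posterior for row i is Dirichlet(alpha + counts[i]); draw 1000 samples per row.
posterior_draws = np.array(
    [rng.dirichlet(alpha + counts[i], size=1000) for i in range(n_states)]
)
print("posterior mean transition matrix:\n", posterior_draws.mean(axis=1))
```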

    Generalized Marshall-Olkin Distributions, and Related Bivariate Aging Properties

    A class of generalized bivariate Marshall-Olkin distributions, which includes as special cases the Marshall-Olkin bivariate exponential distribution and the Marshall-Olkin type distribution due to Muliere and Scarsini (1987), is examined in this paper. Stochastic comparison results are derived, and bivariate aging properties, together with properties related to the evolution of dependence over time, are investigated for this class of distributions. Extensions of results previously presented in the literature are provided as well.

    Funding: National Natural Science Foundation of China [10771090].
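    For readers unfamiliar with the special case mentioned above, the classical Marshall-Olkin bivariate exponential can be simulated from its common-shock construction. A minimal sketch with illustrative rate parameters:

```python
import numpy as np

# Marshall-Olkin bivariate exponential via the common-shock construction:
# X = min(E1, E3), Y = min(E2, E3), where E1, E2 are component-specific
# exponential shocks and E3 is a common shock hitting both components.
# The rates below are illustrative assumptions.
rng = np.random.default_rng(0)
lam1, lam2, lam12 = 1.0, 1.5, 0.5  # individual and common shock rates
n = 100_000

e1 = rng.exponential(1 / lam1, n)
e2 = rng.exponential(1 / lam2, n)
e12 = rng.exponential(1 / lam12, n)
x, y = np.minimum(e1, e12), np.minimum(e2, e12)

# The common shock induces positive dependence and a singular component:
# a positive probability of simultaneous failure, P(X = Y) > 0.
print("P(X = Y) ~", np.mean(x == y))            # theory: lam12/(lam1+lam2+lam12)
print("corr(X, Y) ~", np.corrcoef(x, y)[0, 1])
```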

    OpenTURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance of complex systems and to answer tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an open-source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties: transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is an open-source software under the LGPL license that presents itself as a C++ library and a Python TUI, and runs under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke protecting industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
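    As a rough illustration of the workflow the abstract describes, the sketch below uses the OpenTURNS Python TUI on a simplified river-versus-dyke example. The hydraulic formula, the distributions, and all parameter values are assumptions made for illustration, not the exact benchmark from the paper.

```python
import math
import openturns as ot

# Propagate input uncertainty through a simplified river-height model and
# estimate the probability that the water level exceeds an assumed dyke.
def height(x):
    q, ks = x  # river flow [m3/s], Strickler friction coefficient
    b, slope = 300.0, 5e-4  # assumed river width and slope
    return [(q / (ks * b * math.sqrt(slope))) ** 0.6]

model = ot.PythonFunction(2, 1, height)
dist = ot.ComposedDistribution([ot.Normal(1013.0, 200.0),  # flow Q (assumed)
                                ot.Normal(30.0, 7.5)])     # friction Ks (assumed)

sample = dist.getSample(10_000)        # Monte Carlo input sample
heights = model(sample)                # propagate through the model
dyke = 3.5                             # assumed dyke height [m]
p_overflow = sum(1 for h in heights if h[0] > dyke) / heights.getSize()
print("estimated overflow probability:", p_overflow)
```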

    A Computational Framework for Efficient Reliability Analysis of Complex Networks

    With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies, as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to treat all uncertainties properly. The survival signature is a recent development built to effectively analyse complex networks; it far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena, such as dependencies, common causes of failure, and imprecise probabilities, without reevaluating the network structure. This cumulative dissertation presents several key improvements to the survival signature ecosystem, focused on the structural evaluation of the system as well as the modelling of component failures.

    A new method is presented in which (inter-)dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families it is possible to account for varying dependence effects. The graph-based design of vine copulas synergizes well with typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasizing its ability to represent complicated scenarios with a range of dependent failure modes.

    The numerical effort required to compute the survival signature analytically is prohibitive for large complex systems. This work presents two methods for approximating the survival signature. In the first approach, system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method accurately approximates the survival signature with very small errors while drastically reducing the computational demand. Several simple test systems, as well as two real-world situations, are used to show its accuracy and performance. However, with increasing network size and complexity this technique also reaches its limits. A second method is therefore presented in which the numerical demand is reduced further: instead of approximating the whole survival signature, only a few strategically selected values are computed by Monte Carlo simulation and used to build a surrogate model based on normalized radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model, which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore on the network reliability. Because a few data points suffice to build the interval predictor model, even larger systems can be analysed.

    With the rising complexity of not just the system but also the individual components themselves comes the need to model the components as subsystems in a system-of-systems approach. A study is presented in which a previously developed framework for resilience decision-making is adapted to multidimensional scenarios where the subsystems are represented as survival signatures. The survival signatures of the subsystems can be computed ahead of the resilience analysis thanks to the inherent separation of structural information. This enables an efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function without reevaluating the system structure.

    In addition to the advancements in the field of the survival signature, this work also presents a new framework for uncertainty quantification developed as a package in the Julia programming language, called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is in constant development; its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
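    The separation of structure and probability that the dissertation builds on can be made concrete with a small example. The Python sketch below enumerates the survival signature Phi(l) for a single component type and then evaluates the system survival function without re-evaluating the structure; the four-component series-parallel topology, the i.i.d. assumption, and the numbers are all illustrative.

```python
from itertools import combinations
from math import comb

# Survival signature Phi(l): probability that the system works given that
# exactly l of its n exchangeable components work. Toy structure: components
# 1 and 2 in parallel, in series with components 3 and 4 in parallel.
def structure(working):  # working: set of component indices that function
    return (1 in working or 2 in working) and (3 in working or 4 in working)

n = 4
components = [1, 2, 3, 4]
phi = {}
for l in range(n + 1):
    subsets = combinations(components, l)
    phi[l] = sum(structure(set(s)) for s in subsets) / comb(n, l)

# Given i.i.d. component survival probability r(t), the system survival
# function follows directly, with no further structural evaluation:
#   P(T > t) = sum_l C(n, l) * r^l * (1 - r)^(n - l) * Phi(l)
r = 0.9  # assumed component reliability at some time t
p_sys = sum(comb(n, l) * r**l * (1 - r) ** (n - l) * phi[l] for l in range(n + 1))
print("survival signature:", phi)
print("system reliability at r = 0.9:", p_sys)  # 0.9801 for this topology
```

    Changing the component failure model (dependence, common causes, imprecision) only changes the probabilistic term in the sum; the table Phi(l) is computed once, which is the property the approximation schemes above exploit.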

    Systemic Weather Risk and Crop Insurance: The Case of China

    The supply of affordable crop insurance is hampered by the existence of systemic weather risk, which results in large risk premiums. In this article, we assess the systemic nature of weather risk for 17 agricultural production regions in China and explore the possibility of spatial diversification of this risk. We simulate the buffer load of hypothetical temperature-based insurance and investigate the relation between the size of the buffer load and the size of the trading area of the insurance. The analysis makes use of a hierarchical Archimedean copula (HAC) approach, which allows flexible modeling of the joint loss distribution and reveals the dependence structure of losses in different insured regions. Our results show a significant decrease of the required risk loading when the insured area expands. Nevertheless, a considerable part of undiversifiable risk remains with the insurer. We find that the spatial diversification effect depends on the type of the weather index and the strike level of the insurance. Our findings are relevant for insurers and insurance regulators as they shed light on the viability of private crop insurance in China.

    Keywords: crop insurance, systemic weather risk, hierarchical Archimedean copulas
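    The buffer-load comparison can be illustrated with a small simulation. The sketch below substitutes a plain Gaussian copula for the paper's hierarchical Archimedean copula and uses assumed gamma loss marginals and an assumed correlation, so it only mirrors the qualitative effect of pooling regions.

```python
import numpy as np
from scipy import stats

# Simulate dependent regional losses through a copula, then compare the
# risk loading (99% quantile of aggregate loss minus its mean, per unit of
# expected loss) for one region versus a pooled trading area.
rng = np.random.default_rng(0)
n_regions, n_sims, rho = 5, 100_000, 0.6  # illustrative assumptions

# Dependent uniforms via a Gaussian copula with exchangeable correlation rho.
cov = np.full((n_regions, n_regions), rho) + (1 - rho) * np.eye(n_regions)
z = rng.multivariate_normal(np.zeros(n_regions), cov, size=n_sims)
u = stats.norm.cdf(z)

# Transform to per-region losses with an assumed gamma marginal.
losses = stats.gamma.ppf(u, a=2.0, scale=50.0)

def buffer_load(total):  # risk loading per unit of expected loss
    return (np.quantile(total, 0.99) - total.mean()) / total.mean()

print("single region buffer load:", buffer_load(losses[:, 0]))
print("pooled (5 regions) buffer load:", buffer_load(losses.sum(axis=1)))
```

    Pooling lowers the buffer load, but the systemic correlation keeps it well above zero, which is the undiversifiable remainder the abstract refers to.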

    Comparison results for inactivity times of k-out-of-n and general coherent systems with dependent components

    Coherent systems, i.e., multicomponent systems in which every component monotonically affects the working state or failure of the whole system, are among the main objects of study in reliability analysis. Consider a coherent system with possibly dependent components having lifetime T, and assume we know that it failed before a given time t > 0. Its inactivity time t − T can be evaluated under different conditional events: one might know only that the system has failed, and consider the inactivity time (t − T | T ≤ t), or one may also know which of the components have failed before time t, and consider the corresponding system inactivity time under this condition. For all these cases, we obtain a representation of the reliability function of the system inactivity time based on the recently defined notion of distortion functions. Making use of these representations, new stochastic comparison results for inactivity times of systems under the different conditional events are provided. These results can also be applied to order statistics, which can be seen as particular cases of coherent systems (k-out-of-n systems, i.e., systems which work when at least k of their n components work).
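    As a concrete instance of the conditional event (t − T | T ≤ t), the following Monte Carlo sketch estimates the mean inactivity time of a 2-out-of-3 system with i.i.d. exponential components. The lifetimes and the inspection time are illustrative assumptions; the paper treats the general dependent case analytically via distortion functions.

```python
import numpy as np

# Inactivity time (t - T | T <= t) of a 2-out-of-3 system: given that the
# system has failed by inspection time t, how long ago did it fail?
rng = np.random.default_rng(0)
n, t = 1_000_000, 2.0  # sample size and assumed inspection time

lifetimes = rng.exponential(1.0, size=(n, 3))   # i.i.d. unit-exponential components
T = np.sort(lifetimes, axis=1)[:, 1]            # 2-out-of-3 fails at the 2nd failure

failed = T <= t                                 # condition on failure before t
inactivity = t - T[failed]
print("P(T <= t):", failed.mean())
print("mean inactivity time E[t - T | T <= t]:", inactivity.mean())
```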

    Bayesian Network Approach to Assessing System Reliability for Improving System Design and Optimizing System Maintenance

    A quantitative analysis of a system that has a complex reliability structure always involves considerable challenges. This dissertation mainly addresses uncertainty inherent in complicated reliability structures that may cause unexpected and undesired results. Reliability structure uncertainty cannot be handled by traditional reliability analysis tools such as the Fault Tree and the Reliability Block Diagram, owing to their deterministic Boolean logic. I therefore employ Bayesian networks, which provide a flexible modeling method for building a multivariate distribution. By representing a system reliability structure as a joint distribution, the uncertainty and correlations existing between a system's elements can be modeled effectively in a probabilistic manner. This dissertation focuses on analyzing system reliability over the entire system life cycle, particularly the production stage and early design stages.

    In the production stage, the research investigates a system that is continuously monitored by on-board sensors. Modeling the complex reliability structure by a Bayesian network integrated with various stochastic processes, I propose several methodologies that evaluate system reliability in real time and optimize maintenance schedules.

    In the early design stages, the research aims to predict system reliability based on the current system design and to improve the design if necessary. The three main challenges in this research are: 1) the lack of field failure data, 2) the complex reliability structure, and 3) how to improve the design effectively. To tackle these difficulties, I present several modeling approaches using Bayesian inference and nonparametric Bayesian networks, in which the system is analyzed explicitly through sensitivity analysis. This modeling approach is further enhanced by incorporating a temporal dimension. However, the nonparametric Bayesian network approach generally entails high computational effort, especially when a complex and large system is modeled. To alleviate this computational burden, I also suggest building a surrogate model with quantile regression.

    In summary, this dissertation studies and explores the use of Bayesian networks in analyzing complex systems. All proposed methodologies are demonstrated by case studies.
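    The advantage of a probabilistic structure over deterministic Boolean logic can be illustrated with a toy discrete Bayesian network, evaluated here by brute-force enumeration. The structure and all probabilities below are assumptions for illustration, not the dissertation's models.

```python
from itertools import product

# A common cause influences two redundant (parallel) components, so their
# failures are correlated; the system fails only if both components fail.
p_cause = {True: 0.05, False: 0.95}              # P(common cause active)
p_fail_a = {True: 0.30, False: 0.02}             # P(A fails | cause)
p_fail_b = {True: 0.25, False: 0.03}             # P(B fails | cause)

# Enumerate the joint distribution over (cause, A fails, B fails).
p_system_fail = 0.0
for cause, a_fails, b_fails in product([True, False], repeat=3):
    joint = (p_cause[cause]
             * (p_fail_a[cause] if a_fails else 1 - p_fail_a[cause])
             * (p_fail_b[cause] if b_fails else 1 - p_fail_b[cause]))
    if a_fails and b_fails:                      # parallel pair: both must fail
        p_system_fail += joint

# Compare with the naive independence assumption, which understates the risk
# because the common cause makes joint failure much likelier.
pa = sum(p_cause[c] * p_fail_a[c] for c in (True, False))
pb = sum(p_cause[c] * p_fail_b[c] for c in (True, False))
print("BN system failure prob:  ", round(p_system_fail, 5))
print("independence assumption: ", round(pa * pb, 5))
```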