62 research outputs found

    Statistical modeling and reliability analysis for multi-component systems with dependent failures

    Reliability analysis of systems based on component reliability models is one of the fundamental aspects of reliability assessment and has attracted great interest from many researchers. In particular, reliability analysis that accounts for dependent failures of system components is important, because components may fail together when they share workloads such as heat or tasks. In such situations, the reliability of the system is liable to be estimated incorrectly unless the possibility of dependent failures is taken into account, and there are accordingly many publications on this topic. Most existing studies treat dependent failure between only two components of a multi-component system, since that case is comparatively easy to formulate mathematically; in practice, however, dependent failures may occur among more than two components. This thesis develops reliability analysis techniques for systems in which several components fail dependently. First, we formulate a new reliability model for systems with dependent failures using a multivariate Farlie-Gumbel-Morgenstern (FGM) copula, and use it to investigate the effect of dependent failures on the system's reliability. Secondly, we address parameter estimation for the model so that the dependence among components can be evaluated from their failure times; to this end, we propose a practical estimation algorithm for the multivariate FGM copula, theoretically establish the asymptotic normality of the proposed estimators, and numerically investigate their accuracy. Finally, we present a new method for detecting dependent failures in an n-component parallel system. These results support both quantitative and qualitative reliability assessment of systems subject to dependent failures, and the proposed estimation method is applicable not only to reliability analysis but also to other research fields. Doctor of Engineering, Hosei University.
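The effect of ignoring dependence can be illustrated with the bivariate special case of the FGM copula (the thesis treats the general multivariate model; the exponential marginals and parameter values below are illustrative assumptions, not taken from the thesis):

```python
import math

def fgm_copula(u, v, theta):
    """Bivariate Farlie-Gumbel-Morgenstern (FGM) copula, valid for |theta| <= 1."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def parallel_reliability(t, lam1, lam2, theta):
    """Reliability at time t of a two-component parallel system whose
    exponentially distributed failure times are coupled by an FGM copula.
    The system fails only when both components have failed by t, so the
    unreliability is the copula evaluated at the marginal failure probabilities."""
    f1 = 1.0 - math.exp(-lam1 * t)  # marginal probability component 1 has failed
    f2 = 1.0 - math.exp(-lam2 * t)
    return 1.0 - fgm_copula(f1, f2, theta)
```

With positive dependence (theta > 0) joint failure becomes more likely, so `parallel_reliability(1.0, 0.5, 0.5, 0.8)` is smaller than the independent case `parallel_reliability(1.0, 0.5, 0.5, 0.0)` — exactly the overestimation that occurs when dependence is ignored.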

    Achieving Efficiency in Black Box Simulation of Distribution Tails with Self-structuring Importance Samplers

    Motivated by the increasing adoption of models which facilitate greater automation in risk management and decision-making, this paper presents a novel Importance Sampling (IS) scheme for measuring distribution tails of objectives modelled with enabling tools such as feature-based decision rules, mixed integer linear programs, deep neural networks, etc. Conventional efficient IS approaches suffer from feasibility and scalability concerns due to the need to intricately tailor the sampler to the underlying probability distribution and the objective. The proposed black-box scheme overcomes this challenge by automating the selection of an effective IS distribution with a transformation that implicitly learns and replicates the concentration properties observed in less rare samples. This approach is guided by a large deviations principle that brings out the phenomenon of self-similarity of optimal IS distributions. The proposed sampler is the first to attain asymptotically optimal variance reduction across a spectrum of multivariate distributions despite being oblivious to the underlying structure. The large deviations principle additionally yields new distribution tail asymptotics capable of giving operational insights. The applicability is illustrated with product distribution networks and portfolio credit risk models informed by neural networks as examples. (51 pages.)
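The paper's self-structuring sampler is not reproduced here, but the basic mechanism it automates — concentrating samples in the rare region with a tilted proposal and reweighting by a likelihood ratio — can be sketched for a one-dimensional Gaussian tail (the proposal N(a, 1) is a textbook exponential-tilting choice, not the paper's construction):

```python
import math
import random

def tail_prob_is(a, n=100_000, seed=1):
    """Estimate P(X > a) for standard normal X by importance sampling:
    sample from the exponentially tilted proposal N(a, 1) and reweight
    each sample above the threshold by the likelihood ratio
    phi(x)/phi(x - a) = exp(-a*x + a*a/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)  # proposal centred on the rare region
        if x > a:
            total += math.exp(-a * x + 0.5 * a * a)
    return total / n
```

`tail_prob_is(3.0)` comes out close to P(Z > 3) ≈ 1.35e-3 with far fewer samples than naive Monte Carlo would need for comparable relative error.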

    ISBIS 2016: Meeting on Statistics in Business and Industry

    This book includes the abstracts of the talks presented at the 2016 International Symposium on Business and Industrial Statistics (ISBIS 2016), held in Barcelona, June 8-10, 2016, and hosted by the Department of Statistics and Operations Research at the Universitat Politècnica de Catalunya - Barcelona TECH. The meeting took place in the ETSEIB building (Escola Tècnica Superior d'Enginyeria Industrial), Avda. Diagonal 647. The organizers celebrated the continued success of the ISBIS and ENBIS societies, and the meeting drew together the international community of statisticians, both academics and industry professionals, who share the goal of making statistics the foundation for decision making in business and related applications. The Scientific Program Committee comprised: David Banks (Duke University); Amílcar Oliveira (DCeT, Universidade Aberta, and CEAUL); Teresa A. Oliveira (DCeT, Universidade Aberta, and CEAUL); Nalini Ravishankar (University of Connecticut); Xavier Tort Martorell (Universitat Politècnica de Catalunya, Barcelona TECH); Martina Vandebroek (KU Leuven); and Vincenzo Esposito Vinzi (ESSEC Business School).

    Efficient Estimation of Stochastic Flow Network Reliability

    International audience.

    Maintenance Strategy Choice Supported by the Failure Rate Function: Application in a Serial Manufacturing Line

    The purpose of this article is to choose a maintenance procedure for the critical equipment of a forging production line with five machines. The research method is quantitative modelling and simulation. The main research technique consists of retrieving time-between-failure and time-to-repair data and finding the distribution most likely to have produced them. The most likely failure rate function then helps to define the maintenance strategy. The study considers two kinds of maintenance policy, reactive and anticipatory: reactive policies include emergency and corrective procedures, while anticipatory policies include predictive and preventive ones combined with a total productive maintenance management approach. For the first three machines the most suitable choice is a combination of emergency and corrective procedures; for the other machines, a combination of total productive maintenance and a predictive approach is optimal. The study covers the case of a serial manufacturing line and uses maximum likelihood estimation; the failure rate function defines a combination of strategies for each machine. In addition, the study calculates the individual and systemic mean time to failure, mean time to repair, availability, and the most likely number of failures per production order, which follows a Poisson process. The main contribution of the article is a structured method, based on empirical data, to help define maintenance choices for critical equipment.
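The quantities computed in the article can be sketched for the simplest, constant-failure-rate case (the article fits the most likely distribution to the data, whereas this sketch assumes an exponential model; the numbers are illustrative):

```python
def mle_exponential_rate(tbf):
    """Maximum-likelihood failure rate under a constant-hazard (exponential)
    model: lambda_hat = n / (sum of times between failures)."""
    return len(tbf) / sum(tbf)

def availability(mttf, mttr):
    """Steady-state availability of one machine."""
    return mttf / (mttf + mttr)

def serial_line_availability(machine_availabilities):
    """A serial line produces only when every machine is up,
    so its availability is the product of the machine availabilities."""
    a = 1.0
    for ai in machine_availabilities:
        a *= ai
    return a

def expected_failures(rate, horizon):
    """Mean of the Poisson-distributed number of failures over a production order."""
    return rate * horizon
```

For example, TBF data of [10, 20, 30] hours gives an MLE rate of 0.05 per hour (MTTF 20 h); with an MTTR of 5 h that machine's availability is 0.8, and a production order of 40 h sees on average 2 failures.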

    Network reliability, performability metrics, rare events and standard Monte Carlo

    In this paper we consider static models in network reliability, which cover a huge family of applications going well beyond networks of any kind. The analysis of these models is in general #P-complete, and Monte Carlo remains the only effective approach. We underline the interest in moving from the typical binary world, where components and systems are either up or down, to a multivariate one, where the up state is decomposed into several performance levels; this is also called a performability view of the system. The chapter then proposes a different view of Monte Carlo procedures: instead of trying to reduce the variance of the estimators, we focus on their time complexity. This view allows a first straightforward way of exploring these metrics. The chapter focuses on resilience, the expected number of pairs of nodes that are connected by at least one path in the model. We discuss the ability of this approach to quickly estimate the metric, together with variations of it. We also discuss a side effect of the proposed sampling technique: the possibility of easily computing the sensitivities of these metrics with respect to the individual reliabilities of the components. We show that this can be done without significant overhead relative to the procedure that estimates the resilience metric alone.
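A standard Monte Carlo estimator for the resilience metric — the expected number of node pairs connected by at least one operational path — can be sketched as follows (the graph and edge reliability are illustrative; the chapter's complexity analysis and sensitivity computation are not reproduced):

```python
import random
from collections import Counter

def connected_pairs(n, edges, up):
    """Number of node pairs joined by a path using only the edges marked up,
    computed with a union-find over the surviving edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (u, v), ok in zip(edges, up):
        if ok:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = Counter(find(i) for i in range(n)).values()
    return sum(s * (s - 1) // 2 for s in sizes)

def resilience_mc(n, edges, p, samples=20_000, seed=7):
    """Standard Monte Carlo estimate of resilience when every edge is
    independently operational with probability p."""
    rng = random.Random(seed)
    acc = 0
    for _ in range(samples):
        up = [rng.random() < p for _ in edges]
        acc += connected_pairs(n, edges, up)
    return acc / samples
```

On a triangle with edge reliability 0.5 the exact resilience is 3 × (p + (1 − p)p²) = 1.875, which the estimator recovers to within sampling error.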

    On the Reliability Estimation of Stochastic Binary System

    A stochastic binary system is a multi-component on-off system subject to random independent failures of its components. After potential failures, the state of the system is determined by a logical function (called the structure function) that decides whether the system is operational or not. Stochastic binary systems (SBS) are a natural generalization of network reliability analysis, where the goal is to find the probability of correct operation of the system (in terms of connectivity, network diameter, or other measures of success). A particular subclass of interest is stochastic monotone binary systems (SMBS), characterized by non-decreasing structure functions. We explore the combinatorics of SBS, which provide building blocks for system reliability estimation, looking at minimal non-operational subsystems, called mincuts; one key concept for understanding the underlying combinatorics of SBS is duality. As exact evaluation takes exponential time, we discuss the use of Monte Carlo algorithms. In particular, we discuss the F-Monte Carlo method for estimating the reliability polynomial of homogeneous SBS; the Recursive Variance Reduction (RVR) method for SMBS, which builds upon the efficient determination of mincuts; and three additional methods that combine in different ways the well-known techniques of Permutation Monte Carlo and Splitting. These last three methods are based on a stochastic process called the Creation Process, a temporal evolution of the SBS, which is static by definition. All the methods are compared on different topologies, showing large efficiency gains over the basic Monte Carlo scheme. Funding: Agencia Nacional de Investigación e Innovación; Math-AMSU.
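The basic Monte Carlo scheme that the listed methods improve upon can be sketched for a small monotone binary system (the structure function and component reliabilities are illustrative assumptions; the F-Monte Carlo, RVR, and Creation-Process methods are not reproduced):

```python
import random

def series_parallel(state):
    """Example monotone structure function: the system operates if the
    first branch (components 0 and 1) or the second branch (2 and 3) works."""
    return (state[0] and state[1]) or (state[2] and state[3])

def reliability_mc(structure, probs, samples=50_000, seed=3):
    """Basic Monte Carlo reliability estimate for a stochastic binary system:
    draw independent component states and apply the structure function."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        state = [rng.random() < p for p in probs]
        if structure(state):
            hits += 1
    return hits / samples
```

For component reliability 0.9 the exact value is 1 − (1 − 0.9²)² ≈ 0.9639; rare-event settings (highly reliable components) are precisely where this crude estimator degrades and the variance-reduction methods above pay off.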

    Probabilistic Modeling of Process Systems with Application to Risk Assessment and Fault Detection

    Three new methods of joint probability estimation (modeling) were developed: a maximum-likelihood maximum-entropy method, a constrained maximum-entropy method, and a copula-based method called the rolling pin (RP) method. Compared with many existing probabilistic modeling methods, such as Bayesian networks and copulas, the developed methods yield models with better flexibility, interpretability, and computational tractability. These methods can readily be used to model process systems and to perform risk analysis and fault detection at steady-state conditions, and they can be coupled with appropriate mathematical tools to develop dynamic probabilistic models. A method of performing probabilistic inference using RP-estimated joint probability distributions was also introduced; this method is superior to Bayesian networks in several respects. The RP method was further applied successfully to identify regression models that are highly flexible and appealing in terms of computational cost. Ph.D., Chemical Engineering, Drexel University.
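The rolling-pin method itself is not reproduced here, but the copula idea it builds on — separating the dependence structure from the marginals — can be sketched with a plain Gaussian copula (the copula family and parameters are illustrative assumptions):

```python
import math
import random
from statistics import NormalDist

def sample_gaussian_copula(rho, n, seed=11):
    """Draw n dependent (u, v) pairs on the unit square from a bivariate
    Gaussian copula with correlation rho, by pushing correlated standard
    normals through the standard normal CDF."""
    rng = random.Random(seed)
    nd = NormalDist()
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((nd.cdf(z1), nd.cdf(z2)))
    return pairs
```

Feeding the uniforms through any inverse marginal CDF (e.g. x = −ln(1 − u)/λ for an exponential margin) then yields a joint sample with the chosen marginals and the copula's dependence structure.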