
    Chance-Constrained Outage Scheduling using a Machine Learning Proxy

    Outage scheduling aims to define, over a horizon of several months to years, when different components needing maintenance should be taken out of operation. Its objective is to minimize the expected operating cost while satisfying reliability-related constraints. We propose a distributed scenario-based chance-constrained optimization formulation for this problem. To tackle tractability issues arising in large networks, we use machine learning to build a proxy for predicting outcomes of power system operation processes in this context. On the IEEE-RTS79 and IEEE-RTS96 networks, our solution obtains cheaper and more reliable plans than other candidates.
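    A rough sketch of the scenario-based chance-constraint idea mentioned in the abstract is given below; the toy plan, the scenario generator and the reliability proxy are hypothetical stand-ins for illustration, not the paper's operation-process model or its machine-learning proxy.

```python
# Minimal sketch, assuming a toy plan/scenario interaction: a plan is accepted
# if the empirical probability of violating a reliability limit, estimated over
# sampled scenarios, stays below a prescribed level epsilon.
import numpy as np

rng = np.random.default_rng(0)

def reliability_violation(plan, scenario):
    """Hypothetical proxy: True if the outage plan violates a reliability
    limit in this operating scenario (toy linear interaction)."""
    return scenario @ plan > 1.0          # assumed reliability threshold

def satisfies_chance_constraint(plan, scenarios, epsilon=0.05):
    """Feasible if the empirical violation frequency is at most epsilon."""
    violations = np.array([reliability_violation(plan, s) for s in scenarios])
    return violations.mean() <= epsilon

plan = np.array([0.2, 0.3, 0.1])                    # toy maintenance plan
scenarios = rng.normal(0.5, 0.3, size=(1000, 3))    # sampled operating scenarios
print(satisfies_chance_constraint(plan, scenarios))
```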

    Robust and Efficient Uncertainty Quantification and Validation of RFIC Isolation

    Modern communication and identification products impose demanding constraints on the reliability of their components. As a result, statistical constraints increasingly enter the optimization formulations of electronic products. Yield constraints often require efficient sampling techniques that quantify uncertainty even in the tails of the distributions. These sampling techniques should outperform standard Monte Carlo techniques, since the latter are normally not efficient enough to deal with tail probabilities. One such technique, Importance Sampling, has been applied successfully to optimize Static Random Access Memories (SRAMs) while guaranteeing very small failure probabilities, even beyond 6-sigma variations of the parameters involved. In addition, emerging uncertainty quantification techniques offer expansions of the solution that serve as a response surface for statistics and optimization. To derive the coefficients of these expansions efficiently, one has to solve either a large number of problems or one huge combined problem. Here, parameterized Model Order Reduction (MOR) techniques can be used to reduce the workload. To also reduce the number of parameters, we identify those that affect the variance only marginally; these can simply be set to a fixed value, and the remaining parameters can be viewed as dominant. Preserving the variation also allows us to make statements about the approximation accuracy obtained by the parameter-reduced problem. This is illustrated on an RLC circuit. Additionally, the MOR technique used should not affect the variance significantly. Finally, we consider a methodology for reliable RFIC isolation using floor-plan modeling and isolation grounding. Simulations show good agreement with measurements.
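    The importance-sampling idea referred to above can be sketched as follows; the standard-normal model, the mean shift of the proposal and the 6-sigma threshold are illustrative assumptions, not the paper's SRAM setup.

```python
# Minimal importance-sampling sketch for a small tail probability:
# estimate P(X > 6) for X ~ N(0, 1) by sampling from a proposal shifted
# into the failure region and reweighting with likelihood ratios.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
threshold = 6.0
n = 100_000

shift = threshold                                   # centre proposal on the rare region
x = rng.normal(loc=shift, scale=1.0, size=n)
weights = stats.norm.pdf(x, 0.0, 1.0) / stats.norm.pdf(x, shift, 1.0)
p_is = np.mean(weights * (x > threshold))

print(f"importance sampling: {p_is:.3e}")
print(f"exact value        : {stats.norm.sf(threshold):.3e}")
```

    Plain Monte Carlo with the same budget would almost never see a 6-sigma event, which is why the proposal shift matters.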

    Nonparametric bootstrapping of the reliability function for multiple copies of a repairable item modeled by a birth process

    Nonparametric bootstrap inference is developed for the reliability function estimated from censored, nonstationary failure time data for multiple copies of repairable items. We assume that each copy has a known, but not necessarily the same, observation period; and upon failure of one copy, design modifications are implemented for all copies operating at that time to prevent further failures arising from the same fault. This implies that, at any point in time, all operating copies contain the same set of faults. Failures are modeled as a birth process because there is a reduction in the rate of occurrence at each failure. The data structure comprises a mix of deterministic and random censoring mechanisms, corresponding to the known observation period of each copy and the random censoring time of each fault. Hence, bootstrap confidence intervals and regions for the reliability function measure the length of time a fault can remain within the item before being realized as a failure in one of the copies. Explicit formulae derived for the re-sampling probabilities greatly reduce the dependency on Monte Carlo simulation. Investigations show a small bias arising in re-sampling that can be quantified and corrected. The variability generated by the re-sampling approach approximates the variability in the underlying birth process, and so supports appropriate inference. An illustrative example describes the application to a problem and discusses the validity of the modeling assumptions within industrial practice.
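    The paper derives explicit re-sampling probabilities for censored birth-process data; the sketch below only shows the generic nonparametric-bootstrap idea of resampling failure times and forming percentile confidence bands for the reliability function, using toy, uncensored data.

```python
# Generic nonparametric bootstrap of a reliability (survival) function:
# resample the observed failure times with replacement, recompute the
# empirical reliability curve, and take percentile confidence bands.
import numpy as np

rng = np.random.default_rng(2)
failure_times = np.array([12., 35., 47., 61., 78., 90., 120., 150.])  # toy data
t_grid = np.linspace(0, 160, 33)

def empirical_reliability(times, t):
    """Fraction of items still surviving at each time in t."""
    return (times[None, :] > t[:, None]).mean(axis=1)

boot = np.array([
    empirical_reliability(
        rng.choice(failure_times, size=failure_times.size, replace=True), t_grid)
    for _ in range(2000)
])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
print(np.column_stack([t_grid, lower, upper])[:5])   # first few rows of the band
```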

    Cross-entropy optimisation of importance sampling parameters for statistical model checking

    Statistical model checking avoids the exponential growth of states associated with probabilistic model checking by estimating properties from multiple executions of a system and giving results within confidence bounds. Rare properties are often very important but pose a particular challenge for simulation-based approaches; a key objective under these circumstances is therefore to reduce the number and length of simulations necessary to produce a given level of confidence. Importance sampling is a well-established technique that achieves this; however, to maintain the advantages of statistical model checking it is necessary to find good importance sampling distributions without considering the entire state space. Motivated by the above, we present a simple algorithm that uses the notion of cross-entropy to find the optimal parameters for an importance sampling distribution. In contrast to previous work, our algorithm uses a low-dimensional vector of parameters to define this distribution and thus avoids the often intractable explicit representation of a transition matrix. We show that our parametrisation leads to a unique optimum and can produce many orders of magnitude improvement in simulation efficiency. We demonstrate the efficacy of our methodology by applying it to models from reliability engineering and biochemistry. Comment: 16 pages, 8 figures, LNCS style.
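    A minimal version of the cross-entropy recipe for tuning a low-dimensional importance-sampling parameter might look as follows; the one-parameter Gaussian family and the adaptive-level scheme are the textbook formulation of the method, not the paper's parametrisation of statistical model checking.

```python
# Cross-entropy sketch: iteratively tilt a one-parameter Gaussian proposal
# toward the rare event {X > gamma}, then estimate its probability by
# importance sampling with the optimised proposal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
gamma, n, rho = 5.0, 10_000, 0.1      # rare level, samples per round, elite fraction
mu = 0.0                              # proposal mean, to be optimised

for _ in range(20):
    x = rng.normal(mu, 1.0, size=n)
    level = min(gamma, np.quantile(x, 1 - rho))        # adaptive intermediate level
    elite = x[x >= level]
    w = stats.norm.pdf(elite, 0, 1) / stats.norm.pdf(elite, mu, 1)  # likelihood ratios
    mu = np.sum(w * elite) / np.sum(w)                  # CE update of the mean
    if level >= gamma:
        break

x = rng.normal(mu, 1.0, size=n)
w = stats.norm.pdf(x, 0, 1) / stats.norm.pdf(x, mu, 1)
print(f"tilted mean {mu:.2f}, P(X > gamma) ~ {np.mean(w * (x > gamma)):.2e} "
      f"(exact {stats.norm.sf(gamma):.2e})")
```

    The key point reflected here is that only a small parameter vector (a single mean in this toy case) is optimised, never an explicit transition matrix.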

    Bayesian Subset Simulation: a kriging-based subset simulation algorithm for the estimation of small probabilities of failure

    The estimation of small probabilities of failure from computer simulations is a classical problem in engineering, and the Subset Simulation algorithm proposed by Au & Beck (Prob. Eng. Mech., 2001) has become one of the most popular methods for solving it. Subset Simulation has been shown to provide significant savings in the number of simulations needed to achieve a given estimation accuracy, with respect to many other Monte Carlo approaches. The number of simulations nevertheless remains quite high, and the method can be impractical for applications involving an expensive-to-evaluate computer model. We propose a new algorithm, called Bayesian Subset Simulation, that combines the strengths of the Subset Simulation algorithm and of sequential Bayesian methods based on kriging (also known as Gaussian process modeling). The performance of this new algorithm is illustrated on a test case from the literature, with promising results. In addition, we provide a numerical study of the statistical properties of the estimator. Comment: 11th International Probabilistic Safety Assessment and Management Conference (PSAM11) and the Annual European Safety and Reliability Conference (ESREL 2012), Helsinki, Finland (2012).
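    For orientation, here is a sketch of plain Subset Simulation in the spirit of Au & Beck, without the kriging surrogate that the Bayesian variant adds; the limit-state function, thresholds and chain settings are illustrative assumptions.

```python
# Subset Simulation sketch: express a rare failure probability as a product of
# larger conditional probabilities, sampling each conditional level with a
# crude component-wise Metropolis chain started from the previous level's seeds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def g(x):
    """Toy limit-state function: failure when the coordinate sum is large."""
    return x.sum(axis=-1)

def subset_simulation(dim=2, n=2000, p0=0.1, threshold=6.0, max_levels=20):
    x = rng.normal(size=(n, dim))           # level 0: direct Monte Carlo
    y = g(x)
    prob = 1.0
    for _ in range(max_levels):
        level = np.quantile(y, 1 - p0)      # intermediate level
        if level >= threshold:
            return prob * np.mean(y >= threshold)
        prob *= p0
        seeds = x[y >= level]
        steps_per_seed = max(1, n // len(seeds))
        samples = []
        for s in seeds:                      # chains conditional on g >= level
            cur = s.copy()
            for _ in range(steps_per_seed):
                prop = cur + rng.uniform(-1, 1, size=dim)
                accept = rng.random(dim) < (stats.norm.pdf(prop) / stats.norm.pdf(cur))
                cand = np.where(accept, prop, cur)
                if g(cand) >= level:         # keep the move only if still in the subset
                    cur = cand
                samples.append(cur.copy())
        x = np.array(samples)[:n]
        y = g(x)
    return prob * np.mean(y >= threshold)

print(f"P_f ~ {subset_simulation():.2e} (exact {stats.norm.sf(6 / np.sqrt(2)):.2e})")
```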

    OpenTURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance of complex systems and to meet tighter regulatory requirements (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an open source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties: transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is open source software under the LGPL license, provided as a C++ library with a Python TUI, and runs under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools, as far as possible, on an educational example that simulates the height of a river and compares it with the height of a dyke that protects industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
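    The river/dyke example can be imitated with a plain Monte Carlo sketch; the simplified Manning-Strickler height model, the input distributions and the dyke height below are assumptions for illustration, written with NumPy rather than with the OpenTURNS API itself.

```python
# Simplified uncertainty-propagation sketch for a river/dyke overflow study:
# propagate uncertain flowrate and friction through a water-height model and
# estimate the probability that the water exceeds the dyke.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Uncertain inputs (assumed distributions): flowrate Q and Strickler friction Ks.
q = rng.gumbel(loc=1013.0, scale=558.0, size=n)   # flowrate [m^3/s]
ks = rng.normal(loc=30.0, scale=7.5, size=n)      # friction coefficient
q, ks = np.clip(q, 1.0, None), np.clip(ks, 1.0, None)

# Water height for a rectangular channel of width 300 m and slope 5e-4
# (Manning-Strickler relation), a common simplification of such flood models.
height = (q / (ks * 300.0 * np.sqrt(5e-4))) ** 0.6

dyke_height = 8.0                                  # assumed dyke height [m]
print(f"P(overflow) ~ {np.mean(height > dyke_height):.3e}")
```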