
    Variance Reduction Techniques in Monte Carlo Methods

    Monte Carlo methods are simulation algorithms for estimating a numerical quantity in a statistical model of a real system; these algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed even though computer speed has been increasing dramatically ever since the introduction of computers. This increased computer power has stimulated simulation analysts to develop ever more realistic models, so the net result has not been faster execution of simulation experiments; e.g., some modern simulation models need hours or days for a single 'run' (one replication of one scenario, i.e., one combination of simulation input values). Moreover, some simulation models represent rare events (which have extremely small probabilities of occurrence), so even modern computers would take 'forever' (centuries) to execute a single run, were it not that special VRT can reduce these excessively long runtimes to practical magnitudes. Keywords: common random numbers; antithetic random numbers; importance sampling; control variates; conditioning; stratified sampling; splitting; quasi Monte Carlo.
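    To make one of the listed techniques concrete, here is a minimal Python sketch of antithetic random numbers applied to the toy problem of estimating E[exp(U)] with U uniform on (0, 1). The test function, sample size, and seed are illustrative choices, not taken from the paper.

    ```python
    # Antithetic random numbers vs. crude Monte Carlo on a toy integrand.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Crude Monte Carlo: average exp(U) over independent draws.
    u = rng.random(n)
    crude = np.exp(u)

    # Antithetic random numbers: pair each draw U with 1 - U and average.
    # The negative correlation between exp(U) and exp(1 - U) lowers the variance,
    # at the same budget of n function evaluations.
    u_half = rng.random(n // 2)
    anti = 0.5 * (np.exp(u_half) + np.exp(1.0 - u_half))

    print("true value          :", np.e - 1.0)
    print("crude MC estimate   :", crude.mean(), " estimator variance:", crude.var() / n)
    print("antithetic estimate :", anti.mean(), " estimator variance:", anti.var() / (n // 2))
    ```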

    Bounding rare event probabilities in computer experiments

    We are interested in bounding probabilities of rare events in the context of computer experiments. These rare events depend on the output of a physical model with random input variables. Since the model is only known through an expensive black box function, standard efficient Monte Carlo methods designed for rare events cannot be used. We then propose a strategy to deal with this difficulty based on importance sampling methods. This proposal relies on Kriging metamodeling and is able to achieve sharp upper confidence bounds on the rare event probabilities. The variability due to the Kriging metamodeling step is properly taken into account. The proposed methodology is applied to a toy example and compared to more standard Bayesian bounds. Finally, a challenging real case study is analyzed. It consists of finding an upper bound of the probability that the trajectory of an airborne load will collide with the aircraft that has released it. Comment: 21 pages, 6 figures.
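    The strategy above builds on importance sampling; the sketch below shows only that basic ingredient for a one-dimensional Gaussian rare event, not the paper's Kriging metamodeling or confidence bounds. The standard normal model, the threshold t = 4, and the mean-shifted proposal are assumptions made for illustration.

    ```python
    # Importance sampling for a rare event P(X > t) with X ~ N(0, 1).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, t = 100_000, 4.0

    # Crude Monte Carlo: almost never observes the event {X > t}.
    x = rng.standard_normal(n)
    p_crude = np.mean(x > t)

    # Importance sampling: draw from a proposal centered at the threshold
    # and reweight by the likelihood ratio target density / proposal density.
    y = rng.normal(loc=t, size=n)
    weights = norm.pdf(y) / norm.pdf(y, loc=t)
    p_is = np.mean((y > t) * weights)

    print("exact probability   :", norm.sf(t))
    print("crude MC estimate   :", p_crude)
    print("importance sampling :", p_is)
    ```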

    A survey of rare event simulation methods for static input–output models

    Crude Monte Carlo or quasi-Monte Carlo methods are well suited to characterize events whose associated probabilities are not too low with respect to the simulation budget. For very rarely observed events, such as a collision between two aircraft in airspace, these approaches do not lead to accurate results. Indeed, the number of available samples is often insufficient to estimate such low probabilities (at least 10^6 samples are needed to estimate a probability of order 10^-4 with 10% relative error with Monte Carlo simulations). In this article, we review different techniques suited to estimating rare event probabilities with fewer samples. These methods can be divided into four main categories: parameterization techniques for probability density function tails, simulation techniques such as importance sampling or importance splitting, geometric methods to approximate the input failure space and, finally, surrogate modeling. Each technique is detailed, its advantages and drawbacks are described, and a synthesis is given that aims at answering the question: "which technique to use for which problem?"
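    The sample-size figure quoted in parentheses can be checked directly: for a crude Monte Carlo estimate of a probability p from n samples, the relative standard error is sqrt((1 - p) / (n p)). A short Python check under that standard formula:

    ```python
    # Samples needed by crude Monte Carlo for a given relative error on p.
    p, rel_err = 1e-4, 0.10

    # Relative standard error sqrt((1 - p) / (n * p)) = rel_err  =>  solve for n.
    n_required = (1.0 - p) / (p * rel_err**2)
    print(f"samples needed for {rel_err:.0%} relative error at p = {p:g}: {n_required:.2e}")
    # -> about 1e6, matching the figure quoted in the abstract.
    ```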

    Bayesian subset simulation

    We consider the problem of estimating a probability of failure $\alpha$, defined as the volume of the excursion set of a function $f: \mathbb{X} \subseteq \mathbb{R}^{d} \to \mathbb{R}$ above a given threshold, under a given probability measure on $\mathbb{X}$. In this article, we combine the popular subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our sequential Bayesian approach for the estimation of a probability of failure (Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it possible to estimate $\alpha$ when the number of evaluations of $f$ is very limited and $\alpha$ is very small. The resulting algorithm is called Bayesian subset simulation (BSS). A key idea, as in the subset simulation algorithm, is to estimate the probabilities of a sequence of excursion sets of $f$ above intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A Gaussian process prior on $f$ is used to define the sequence of densities targeted by the SMC algorithm, and to drive the selection of evaluation points of $f$ to estimate the intermediate probabilities. Adaptive procedures are proposed to determine the intermediate thresholds and the number of evaluations to be carried out at each stage of the algorithm. Numerical experiments illustrate that BSS achieves significant savings in the number of function evaluations with respect to other Monte Carlo approaches.
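    For reference, here is a minimal Python sketch of plain subset simulation in the spirit of Au and Beck (2001), the algorithm that BSS augments with a Gaussian process model; no Gaussian process or adaptive evaluation selection appears here. The toy limit-state function f(x) = x1 + x2 on standard normal inputs, the level probability p0 = 0.1, and the Metropolis tuning are illustrative assumptions.

    ```python
    # Plain subset simulation with intermediate thresholds and MCMC repopulation.
    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):
        # Toy limit-state function on standard normal inputs (illustrative choice).
        return x.sum(axis=-1)

    d, n, p0, threshold = 2, 2000, 0.1, 5.0   # dimension, samples per level, level prob, target

    # Level 0: plain Monte Carlo from the input distribution.
    x = rng.standard_normal((n, d))
    y = f(x)
    prob = 1.0

    for _ in range(50):                        # cap on the number of levels (safety for the sketch)
        level = np.quantile(y, 1.0 - p0)       # intermediate threshold: (1 - p0) quantile of outputs
        if level >= threshold:
            # Last level: fraction of current samples reaching the true threshold.
            prob *= np.mean(y >= threshold)
            break
        prob *= p0

        # Seeds for the next level: samples already above the intermediate threshold.
        seeds = x[y >= level]
        # Repopulate the level set with random-walk Metropolis chains started at the seeds,
        # targeting the standard normal input density restricted to {f >= level}.
        chains = seeds[rng.integers(len(seeds), size=n)]
        for _ in range(10):
            prop = chains + 0.8 * rng.standard_normal((n, d))
            log_accept = 0.5 * (np.sum(chains**2, axis=1) - np.sum(prop**2, axis=1))
            accept = (np.log(rng.random(n)) < log_accept) & (f(prop) >= level)
            chains[accept] = prop[accept]
        x, y = chains, f(chains)

    print("subset simulation estimate of P(f(X) >= {:.1f}): {:.2e}".format(threshold, prob))
    # Exact value for this toy f: P(N(0, 2) >= 5) is roughly 2e-4.
    ```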