
    Variance Reduction Techniques in Monte Carlo Methods

    Monte Carlo methods are simulation algorithms that estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) have been needed ever since the introduction of computers, even though computer speed has been increasing dramatically. This increased computing power has stimulated simulation analysts to develop ever more realistic models, so the net result has not been faster execution of simulation experiments; e.g., some modern simulation models need hours or days for a single 'run' (one replication of one scenario, i.e., one combination of simulation input values). Moreover, some simulation models represent rare events with extremely small probabilities of occurrence, so even a modern computer would take 'forever' (centuries) to execute a single run, were it not that special VRT can reduce these excessively long runtimes to practical magnitudes.
    Keywords: common random numbers; antithetic random numbers; importance sampling; control variates; conditioning; stratified sampling; splitting; quasi Monte Carlo
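One of the techniques listed in the keywords, antithetic random numbers, can be illustrated with a minimal sketch (not taken from the paper): each uniform draw u is paired with 1 - u, and the negative correlation between the two function values lowers the variance of the sample mean. The toy integrand exp(u) is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Plain Monte Carlo estimate of E[exp(U)] = e - 1 for U ~ Uniform(0, 1).
u = rng.random(n)
plain = np.exp(u)

# Antithetic random numbers: pair each draw u with 1 - u and average.
# exp(u) and exp(1 - u) are negatively correlated, so the paired mean
# has lower variance for the same number of function evaluations.
u_half = rng.random(n // 2)
anti = 0.5 * (np.exp(u_half) + np.exp(1.0 - u_half))

print(plain.mean(), plain.var(ddof=1) / n)       # estimate and variance of the mean
print(anti.mean(), anti.var(ddof=1) / (n // 2))  # should show a smaller variance
```

Both estimators use n evaluations of exp in total, so the variance comparison is at equal cost.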

    Stochastic Variance Reduction Methods for Saddle-Point Problems

    We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply, and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities; (b) there are two notions of splits, in terms of functions or in terms of partial derivatives; (c) the split does need to be done with convex-concave terms; (d) non-uniform sampling is key to an efficient algorithm, both in theory and in practice; and (e) these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework, leading to an algorithm that is always superior to accelerated batch algorithms.
    Comment: Neural Information Processing Systems (NIPS), 2016, Barcelona, Spain
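The variance-reduced gradient estimator that SVRG-style methods share can be sketched on a plain convex problem; this is only the baseline estimator the paper extends, not its saddle-point algorithm, and the toy least-squares problem and step size are assumptions. Each inner step uses g_i(x) - g_i(x_tilde) + full_grad(x_tilde), an unbiased gradient whose variance vanishes as x and the snapshot x_tilde approach the optimum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares problem: f(x) = (1/2n) * sum_i (a_i . x - b_i)^2,
# with b generated noise-free so the minimizer is exactly x_true.
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def grad_i(x, i):
    # Gradient of the i-th component (1/2) * (a_i . x - b_i)^2.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

# SVRG: at each epoch store a snapshot and its full gradient, then run
# stochastic inner steps with the variance-reduced estimator.
x = np.zeros(d)
step = 0.01
for epoch in range(50):
    x_tilde = x.copy()
    mu = full_grad(x_tilde)
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(x_tilde, i) + mu
        x -= step * g

print(np.linalg.norm(x - x_true))  # small: linear convergence to the optimum
```

Unlike plain SGD with a constant step, this iteration converges linearly to the exact minimizer because the estimator's variance shrinks along with the optimality gap.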

    Some variance reduction methods for numerical stochastic homogenization

    We overview a series of recent works devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires solving a set of problems at the micro scale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be solved repeatedly, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behavior. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts of the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
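One classical approach borrowed from other contexts, the control variate, can be sketched generically (this is not the paper's homogenization-specific construction; the integrand and the choice of U as control are assumptions for illustration). A correlated auxiliary quantity with known mean is subtracted from the output, with a coefficient estimated from the samples.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Quantity of interest: E[exp(U)], U ~ Uniform(0, 1), standing in for a
# per-configuration output whose average must be approximated.
u = rng.random(n)
y = np.exp(u)

# Control variate: U itself, whose mean 1/2 is known exactly.
c = u - 0.5
# Estimated optimal coefficient beta = Cov(Y, C) / Var(C).
beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)
y_cv = y - beta * c

print(y.var(ddof=1), y_cv.var(ddof=1))  # control variate: much smaller variance
```

Because exp(U) and U are highly correlated on (0, 1), the corrected samples y_cv keep the same mean but have a far smaller empirical variance.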

    American Options Based on Malliavin Calculus and Nonparametric Variance Reduction Methods

    This paper is devoted to pricing American options using Monte Carlo and the Malliavin calculus. Unlike the majority of articles related to this topic, in this work we will not use localization functions to reduce the variance. Our method is based on expressing the conditional expectation E[f(S_t) | S_s] using the Malliavin calculus without localization. Then the variance of the estimator of E[f(S_t) | S_s] is reduced using closed formulas, techniques based on conditioning, and a judicious choice of the number of simulated paths. Finally, we perform the stopping-times version of the dynamic programming algorithm to decrease the bias. On the one hand, we will develop the Malliavin calculus tools for exponential multi-dimensional diffusions that have deterministic and non-constant coefficients. On the other hand, we will detail various nonparametric techniques to reduce the variance. Moreover, we will test the numerical efficiency of our method on a heterogeneous CPU/GPU multi-core machine.
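The general idea of reducing variance by conditioning, one ingredient of the approach above, can be shown on a simple example that is not the paper's Malliavin-based estimator: replacing an indicator by its conditional expectation (Rao-Blackwellization) keeps the mean but can only lower the variance. The target probability and the choice of conditioning variable are assumptions for illustration.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
n = 50_000

# Estimate p = P(Z1 + Z2 > 2) with Z1, Z2 i.i.d. standard normal.
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
plain = (z1 + z2 > 2.0).astype(float)  # crude indicator estimator

def phi_bar(x):
    # Standard normal survival function P(Z > x).
    return 0.5 * erfc(x / sqrt(2.0))

# Conditioning: integrate Z2 out analytically,
# E[1{Z1 + Z2 > 2} | Z1] = P(Z2 > 2 - Z1) = phi_bar(2 - Z1).
# Same mean as the indicator, but never a larger variance.
cond = np.vectorize(phi_bar)(2.0 - z1)

print(plain.mean(), cond.mean())              # both estimate p ~ 0.0786
print(plain.var(ddof=1), cond.var(ddof=1))    # conditioning: smaller variance
```

The true value is p = phi_bar(sqrt(2)) = 0.5 * erfc(1), so the accuracy of both estimators can be checked directly.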
