
    Optimization of mesh hierarchies in Multilevel Monte Carlo samplers

    We perform a general optimization of the parameters in the Multilevel Monte Carlo (MLMC) discretization hierarchy based on uniform discretization methods with general approximation orders and computational costs. We optimize hierarchies with geometric and non-geometric sequences of mesh sizes and show that geometric hierarchies, when optimized, are nearly optimal and have the same asymptotic computational complexity as non-geometric optimal hierarchies. We discuss how enforcing constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. These constraints include an upper and a lower bound on the mesh size or enforcing that the number of samples and the number of discretization elements are integers. We also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. To provide numerical grounds for our theoretical results, we apply these optimized hierarchies together with the Continuation MLMC Algorithm. The first example considers a three-dimensional elliptic partial differential equation with random inputs. Its space discretization is based on continuous piecewise trilinear finite elements and the corresponding linear system is solved by either a direct or an iterative solver. The second example considers a one-dimensional Itô stochastic differential equation discretized by a Milstein scheme.
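The baseline that such hierarchy optimizations refine is the textbook MLMC sample allocation: given per-level variances V_l and per-sample costs C_l, the cost-minimizing number of samples per level subject to a statistical-error budget. A minimal sketch (the standard formula, not the paper's generalized optimization; the inputs are illustrative):

```python
import math

def mlmc_sample_sizes(variances, costs, eps):
    """Standard MLMC allocation minimizing total cost subject to a
    statistical-error budget of eps**2 / 2 (the bias gets the other half):
        N_l = ceil( (2 / eps^2) * sqrt(V_l / C_l) * sum_k sqrt(V_k * C_k) )
    """
    total = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [math.ceil((2.0 / eps**2) * math.sqrt(v / c) * total)
            for v, c in zip(variances, costs)]
```

For a geometric hierarchy where variances shrink and costs grow by a constant factor per level, the resulting N_l decrease geometrically, which is why optimized geometric hierarchies come close to the non-geometric optimum.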

    Discrepancy Bounds for Mixed Sequences

    A mixed sequence is a sequence in the s-dimensional unit cube obtained by concatenating a d-dimensional low-discrepancy sequence with an (s-d)-dimensional random sequence. We discuss some probabilistic bounds on the star discrepancy of mixed sequences.
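Such a mixed sequence is straightforward to construct. A sketch using a Halton sequence for the deterministic block (the abstract does not fix a particular low-discrepancy sequence; the choice of Halton and the prime bases here are illustrative):

```python
import random

def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def mixed_sequence(num_points, d, s, seed=0):
    """First num_points points of a mixed sequence in [0,1)^s:
    coordinates 0..d-1 from a d-dimensional Halton sequence (low discrepancy),
    coordinates d..s-1 i.i.d. uniform random."""
    bases = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]  # one prime base per Halton coordinate
    rng = random.Random(seed)
    points = []
    for n in range(1, num_points + 1):
        deterministic = [van_der_corput(n, b) for b in bases]
        stochastic = [rng.random() for _ in range(s - d)]
        points.append(deterministic + stochastic)
    return points
```

The probabilistic discrepancy bounds in the paper then apply to the random coordinates, conditional on the deterministic block.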

    Calculation of aggregate loss distributions

    Estimation of the operational risk capital under the Loss Distribution Approach requires evaluation of aggregate (compound) loss distributions, which is one of the classic problems in risk theory. Closed-form solutions are not available for the distributions typically used in operational risk. However, with modern computer processing power, these distributions can be calculated virtually exactly using numerical methods. This paper reviews numerical algorithms that can be successfully used to calculate aggregate loss distributions. In particular, Monte Carlo, Panjer recursion and Fourier transform methods are presented and compared. Several closed-form approximations based on moment matching and asymptotic results for heavy-tailed distributions are also reviewed.
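The Panjer recursion mentioned above takes a particularly compact form for a Poisson frequency. A minimal sketch, assuming a Poisson(λ) claim count and a severity distribution supported on the positive integers (both inputs are illustrative placeholders, not the paper's calibrated models):

```python
import math

def panjer_compound_poisson(lam, severity, n_max):
    """Aggregate loss pmf g[s] = P(S = s), where S is a sum of N i.i.d.
    integer severities, N ~ Poisson(lam), and severity[j] = P(loss = j)
    with severity[0] == 0.  Panjer recursion for the Poisson case:
        g[s] = (lam / s) * sum_{j=1}^{min(s,m)} j * severity[j] * g[s - j]
    """
    m = len(severity) - 1
    g = [0.0] * (n_max + 1)
    g[0] = math.exp(-lam)  # P(no losses at all)
    for s in range(1, n_max + 1):
        g[s] = (lam / s) * sum(j * severity[j] * g[s - j]
                               for j in range(1, min(s, m) + 1))
    return g
```

A sanity check: with all severities equal to 1, the aggregate loss is just the Poisson claim count, so the recursion must reproduce the Poisson pmf.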

    Multilevel Monte Carlo methods for applications in finance

    Since Giles introduced the multilevel Monte Carlo path simulation method [18], there has been rapid development of the technique for a variety of applications in computational finance. This paper surveys the progress so far, highlights the key features in achieving a high rate of multilevel variance convergence, and suggests directions for future research.
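The core of the path-simulation method is the coupled correction term at each level: fine and coarse paths are driven by the same Brownian increments, so the variance of the difference decays with level. A minimal sketch for E[S_T] under geometric Brownian motion with an Euler scheme and refinement factor 2 (the payoff, scheme and parameters are illustrative, not from the survey):

```python
import math
import random

def mlmc_level_estimator(level, num_samples, T=1.0, mu=0.05,
                         sigma=0.2, s0=1.0, seed=0):
    """One MLMC correction term, E[P_l - P_{l-1}], for the payoff P = S_T
    under GBM dS = mu*S dt + sigma*S dW, Euler scheme with 2**level steps.
    Fine and coarse paths reuse the same Brownian increments (the coarse
    step sums two fine increments), which drives the multilevel variance
    decay. At level 0 the plain single-level estimate E[P_0] is returned."""
    rng = random.Random(seed + level)
    nf = 2 ** level        # number of fine time steps
    hf = T / nf            # fine step size
    acc = 0.0
    for _ in range(num_samples):
        sf = sc = s0
        dw_coarse = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += mu * sf * hf + sigma * sf * dw      # fine Euler step
            dw_coarse += dw
            if level > 0 and step % 2 == 1:           # one coarse step per two fine
                sc += mu * sc * (2 * hf) + sigma * sc * dw_coarse
                dw_coarse = 0.0
        acc += sf - (sc if level > 0 else 0.0)
    return acc / num_samples
```

Summing these correction terms over levels, with sample counts chosen level by level, gives the full multilevel estimator.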

    Multilevel Richardson-Romberg and Importance Sampling in Derivative Pricing

    In this paper, we propose and analyze a novel combination of the multilevel Richardson-Romberg (ML2R) method and an importance sampling algorithm, with the aim of reducing overall computational time while achieving a desired root-mean-square error in pricing. We construct a Monte Carlo estimator that accommodates a parametric change of measure, relying on the Robbins-Monro algorithm with projection to approximate the optimal change-of-measure parameter at each level of resolution in our multilevel algorithm. Furthermore, we propose incorporating discretization schemes with higher-order strong convergence to simulate the underlying stochastic differential equations (SDEs), thereby achieving better accuracy. To this end, we establish a central limit theorem for the general multilevel algorithm and study the asymptotic behavior of our estimator, proving a strong law of large numbers. Finally, we present numerical results to substantiate the efficacy of the developed algorithm.
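The Robbins-Monro scheme with projection used here to tune the change-of-measure parameter can be sketched in one dimension: iterate theta against a noisy observation of the target function and project back onto a compact set. The target function and interval below are illustrative stand-ins, not the paper's variance criterion:

```python
import random

def robbins_monro(noisy_f, theta0, n_iter=20000, c=1.0,
                  lo=-10.0, hi=10.0, seed=0):
    """Robbins-Monro with projection: seek theta with E[noisy_f(theta)] = 0
    via  theta_{n+1} = Proj_[lo,hi]( theta_n - gamma_n * noisy_f(theta_n) ),
    with diminishing steps gamma_n = c / (n + 1)."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(n_iter):
        gamma = c / (n + 1)
        theta -= gamma * noisy_f(theta, rng)
        theta = min(max(theta, lo), hi)  # projection onto [lo, hi]
    return theta
```

In the multilevel setting, one such recursion runs per resolution level, so each level gets its own (approximately) optimal change-of-measure parameter.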

    A Continuation Multilevel Monte Carlo algorithm

    We propose a novel Continuation Multilevel Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator, which justifies our error estimate and allows prescribing both the required accuracy and the confidence in the final result. Numerical results substantiate the above and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients.