Optimization of mesh hierarchies in Multilevel Monte Carlo samplers
We perform a general optimization of the parameters in the Multilevel Monte
Carlo (MLMC) discretization hierarchy based on uniform discretization methods
with general approximation orders and computational costs. We optimize
hierarchies with geometric and non-geometric sequences of mesh sizes and show
that geometric hierarchies, when optimized, are nearly optimal and have the
same asymptotic computational complexity as non-geometric optimal hierarchies.
We discuss how enforcing constraints on parameters of MLMC hierarchies affects
the optimality of these hierarchies. These constraints include an upper and a
lower bound on the mesh size or enforcing that the number of samples and the
number of discretization elements are integers. We also discuss the optimal
tolerance splitting between the bias and the statistical error contributions
and its asymptotic behavior. To provide numerical grounds for our theoretical
results, we apply these optimized hierarchies together with the Continuation
MLMC Algorithm. The first example considers a three-dimensional elliptic
partial differential equation with random inputs. Its space discretization is
based on continuous piecewise trilinear finite elements and the corresponding
linear system is solved by either a direct or an iterative solver. The second
example considers a one-dimensional Itô stochastic differential equation
discretized by a Milstein scheme.
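The Milstein discretization used in the second example can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the callable arguments and the geometric Brownian motion test case are assumptions made for the sketch:

```python
import math
import random

def milstein_path(x0, a, b, db, T, n_steps, rng=random.Random(0)):
    """Simulate one path of dX = a(X) dt + b(X) dW with the Milstein scheme.

    db is the derivative b'(x); the extra 0.5*b*b'*(dW^2 - h) correction
    raises the strong convergence order from 1/2 (Euler-Maruyama) to 1.
    """
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))
        x = x + a(x) * h + b(x) * dw + 0.5 * b(x) * db(x) * (dw * dw - h)
    return x

# Illustrative SDE: geometric Brownian motion dX = mu*X dt + sigma*X dW.
mu, sigma = 0.05, 0.2
x_T = milstein_path(1.0, lambda x: mu * x, lambda x: sigma * x,
                    lambda x: sigma, T=1.0, n_steps=256)
```

For this diffusion the Milstein update factor is a quadratic in the increment with negative discriminant, so the simulated path stays positive, matching the exact lognormal solution.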
Discrepancy Bounds for Mixed Sequences
A mixed sequence is a sequence in the s-dimensional unit cube
which one obtains by concatenating a d-dimensional low-discrepancy
sequence with an (s-d)-dimensional random sequence.
We discuss some probabilistic bounds on the star discrepancy of
mixed sequences.
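Concretely, a mixed sequence can be built by pairing a deterministic low-discrepancy construction with i.i.d. uniform coordinates. A minimal sketch, assuming a Halton-type construction (van der Corput in distinct prime bases) for the low-discrepancy part; the abstract does not fix a particular construction, and the function names are illustrative:

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def mixed_sequence(n_points, d_low, d_rand, rng=random.Random(0)):
    """First n_points of a mixed sequence in the (d_low + d_rand)-dim cube:
    a d_low-dimensional Halton point concatenated with d_rand i.i.d.
    uniform coordinates (d_low <= 6 here, limited by the prime list)."""
    primes = [2, 3, 5, 7, 11, 13][:d_low]
    pts = []
    for i in range(1, n_points + 1):
        low = [van_der_corput(i, p) for p in primes]
        rand = [rng.random() for _ in range(d_rand)]
        pts.append(low + rand)
    return pts

pts = mixed_sequence(100, d_low=2, d_rand=3)
```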
Calculation of aggregate loss distributions
Estimation of the operational risk capital under the Loss Distribution
Approach requires evaluation of aggregate (compound) loss distributions which
is one of the classic problems in risk theory. Closed-form solutions are not
available for the distributions typically used in operational risk. However
with modern computer processing power, these distributions can be calculated
virtually exactly using numerical methods. This paper reviews numerical
algorithms that can be successfully used to calculate the aggregate loss
distributions. In particular, Monte Carlo, Panjer recursion and Fourier
transformation methods are presented and compared. Also, several closed-form
approximations based on moment matching and asymptotic results for heavy-tailed
distributions are reviewed.
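Of the methods compared, Panjer recursion is the easiest to sketch for a Poisson frequency with a severity distribution discretized on an integer grid. A minimal illustration (the function name and the example parameters are assumptions, not taken from the paper):

```python
import math

def panjer_poisson(lam, f, n_max):
    """Aggregate loss pmf g for a compound Poisson(lam) sum of i.i.d.
    severities with pmf f on {0, 1, ...}, computed on the grid
    {0, ..., n_max} via Panjer's recursion:
        g[0] = exp(-lam * (1 - f[0]))
        g[n] = (lam / n) * sum_{j=1..n} j * f[j] * g[n - j]
    """
    g = [0.0] * (n_max + 1)
    g[0] = math.exp(-lam * (1.0 - f[0]))
    for n in range(1, n_max + 1):
        s = sum(j * f[j] * g[n - j]
                for j in range(1, min(n, len(f) - 1) + 1))
        g[n] = lam * s / n
    return g

# Illustrative severity: uniform on {1, 2, 3}, frequency Poisson(2).
f = [0.0, 1 / 3, 1 / 3, 1 / 3]
g = panjer_poisson(2.0, f, 30)
```

On a grid wide enough to capture essentially all the mass, the recursion reproduces the full aggregate distribution to machine precision, which is the "virtually exact" behavior the abstract refers to.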
Multilevel Monte Carlo methods for applications in finance
Since Giles introduced the multilevel Monte Carlo path simulation method
[18], there has been rapid development of the technique for a variety of
applications in computational finance. This paper surveys the progress so far,
highlights the key features in achieving a high rate of multilevel variance
convergence, and suggests directions for future research.
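The method being surveyed rests on the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], where the level-l correction couples a fine and a coarse path through the same Brownian increments. A hedged sketch for a European call under geometric Brownian motion (the model, payoff, and fixed per-level sample sizes are illustrative choices, not from the survey):

```python
import math
import random

def mlmc_gbm_call(L, N, s0=1.0, K=1.0, r=0.05, sigma=0.2, T=1.0,
                  M=2, rng=random.Random(0)):
    """Multilevel Monte Carlo estimate of E[exp(-rT) * max(S_T - K, 0)]
    for GBM, Euler-Maruyama with M**l steps on level l.  Fine and coarse
    paths on each level share Brownian increments; this coupling is what
    drives the multilevel variance decay."""
    est = 0.0
    for l in range(L + 1):
        nf = M ** l                 # fine steps on this level
        hf = T / nf
        total = 0.0
        for _ in range(N):
            sf = sc = s0
            dwc = 0.0
            for n in range(nf):
                dw = rng.gauss(0.0, math.sqrt(hf))
                sf += r * sf * hf + sigma * sf * dw
                dwc += dw
                if (n + 1) % M == 0:  # one coarse step per M fine steps
                    if l > 0:
                        sc += r * sc * (M * hf) + sigma * sc * dwc
                    dwc = 0.0
            pf = math.exp(-r * T) * max(sf - K, 0.0)
            pc = math.exp(-r * T) * max(sc - K, 0.0) if l > 0 else 0.0
            total += pf - pc
        est += total / N
    return est

price = mlmc_gbm_call(L=4, N=2000)
```

In a practical implementation N would vary per level (proportional to the square root of variance times cost); a fixed N is used here only to keep the sketch short.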
Multilevel Richardson-Romberg and Importance Sampling in Derivative Pricing
In this paper, we propose and analyze a novel combination of the multilevel
Richardson-Romberg (ML2R) and importance sampling algorithms, with the aim of
reducing the overall computational time while achieving a desired
root-mean-squared error in derivative pricing. We construct a Monte Carlo
estimator that accommodates a parametric change of measure, relying on the
Robbins-Monro algorithm with projection to approximate the optimal
change-of-measure parameter at each level of resolution in our multilevel
algorithm. Furthermore, we propose incorporating discretization schemes with
higher-order strong convergence, in order to simulate the underlying stochastic
differential equations (SDEs) thereby achieving better accuracy. In order to do
so, we establish a Central Limit Theorem for the general multilevel algorithm
and study the asymptotic behavior of our estimator, proving a Strong Law of
Large Numbers. Finally, we present numerical results that substantiate the
efficacy of the developed algorithm.
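The projected Robbins-Monro iteration referred to above can be illustrated on a toy root-finding problem. The function names, step sizes, and the one-dimensional target below are assumptions made for the sketch, not the authors' estimator:

```python
import random

def robbins_monro_projected(h, theta0, lo, hi, n_iter=5000,
                            rng=random.Random(0)):
    """Robbins-Monro with projection: approximate theta* solving
    E[H(theta, X)] = 0 via
        theta_{k+1} = Pi_[lo,hi]( theta_k - gamma_k * H(theta_k, X_k) ),
    gamma_k = 1/(k+1).  Projecting onto a compact interval keeps the
    iterates stable, as in projected schemes used to estimate
    change-of-measure parameters."""
    theta = theta0
    for k in range(n_iter):
        x = rng.gauss(0.0, 1.0)
        gamma = 1.0 / (k + 1)
        theta -= gamma * h(theta, x)
        theta = min(max(theta, lo), hi)  # projection onto [lo, hi]
    return theta

# Toy target: solve E[theta - (X + 1)] = 0 for X ~ N(0, 1), so theta* = 1.
theta_star = robbins_monro_projected(lambda t, x: t - (x + 1.0),
                                     theta0=0.0, lo=-5.0, hi=5.0)
```

With the 1/(k+1) step sequence this toy iteration reduces to a running sample mean, so the iterates converge to the root at the usual Monte Carlo rate.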
A Continuation Multilevel Monte Carlo algorithm
We propose a novel Continuation Multilevel Monte Carlo (CMLMC) algorithm for
weak approximation of stochastic models. The CMLMC algorithm solves the given
approximation problem for a sequence of decreasing tolerances, ending when the
required error tolerance is satisfied. CMLMC assumes discretization hierarchies
that are defined a priori for each level and are geometrically refined across
levels. The actual choice of computational work across levels is based on
parametric models for the average cost per sample and the corresponding weak
and strong errors. These parameters are calibrated using Bayesian estimation,
taking particular notice of the deepest levels of the discretization hierarchy,
where only a few realizations are available to produce the estimates. The
resulting CMLMC estimator exhibits a non-trivial splitting between bias and
statistical contributions. We also show the asymptotic normality of the
statistical error in the MLMC estimator and justify in this way our error
estimate that allows prescribing both required accuracy and confidence in the
final result. Numerical results substantiate the above results and illustrate
the corresponding computational savings in examples that are described in terms
of differential equations either driven by random measures or with random
coefficients.
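The continuation idea (solving for a geometrically decreasing sequence of tolerances while carrying calibration data forward) can be sketched schematically. This is not the authors' calibrated CMLMC; `solve` and `state` are placeholders for a full MLMC run and its Bayesian parameter estimates:

```python
def continuation_mlmc(solve, tol_final, r=2.0, n_stages=4):
    """Schematic continuation loop: solve the problem for tolerances
    tol_final * r**(n_stages-1), ..., tol_final * r, tol_final, passing
    each stage's calibration data (an opaque `state`) forward so the
    final, expensive stage starts from well-estimated parameters."""
    state = None
    result = None
    for k in range(n_stages, 0, -1):
        tol = tol_final * r ** (k - 1)
        result, state = solve(tol, state)
    return result

# Dummy solver that just records the tolerance schedule.
calls = []
def dummy_solve(tol, state):
    calls.append(tol)
    return tol, state

out = continuation_mlmc(dummy_solve, tol_final=1e-3)
```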
Entropy, Randomization, Derandomization, and Discrepancy
The star discrepancy is a measure of how uniformly distributed a finite point set is in the d-dimensional unit cube. It is related to high-dimensional numerical integration of certain function classes as expressed by the Koksma-Hlawka inequality. A sharp version of this inequality states that the worst-case error of approximating the integral of functions from the unit ball of some Sobolev space by an equal-weight cubature is exactly the star discrepancy of the set of sample points. In many applications, as, e.g., in physics, quantum chemistry or finance, it is essential to approximate high-dimensional integrals. Thus, with regard to the Koksma-Hlawka inequality, the following three questions are very important: (i) What are good bounds with explicitly given dependence on the dimension d for the smallest possible discrepancy of any n-point set for moderate n? (ii) How can we construct point sets efficiently that satisfy such bounds? (iii) How can we calculate the discrepancy of given point sets efficiently? We discuss these questions and survey and explain some approaches to tackle them, relying on metric entropy, randomization, and derandomization.
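Question (iii) is computationally hard in general, but for tiny instances a brute-force evaluation is feasible: the supremum over anchored boxes [0, y) is attained on the grid spanned by the point coordinates (plus 1), provided both open- and closed-box counts are checked at each corner. A sketch under those assumptions (the function name and the 1D example are illustrative):

```python
from itertools import product

def star_discrepancy(points):
    """Brute-force star discrepancy of a finite point set in [0,1)^d.

    Enumerates the grid spanned by the point coordinates (plus 1) and,
    at each candidate corner y, takes both the open-box and closed-box
    local discrepancies.  Cost grows like n^d grid corners times n*d
    work each: only viable for very small instances."""
    n = len(points)
    d = len(points[0])
    grids = [sorted(set(p[j] for p in points) | {1.0}) for j in range(d)]
    disc = 0.0
    for y in product(*grids):
        vol = 1.0
        for yj in y:
            vol *= yj
        a_open = sum(all(p[j] < y[j] for j in range(d)) for p in points)
        a_closed = sum(all(p[j] <= y[j] for j in range(d)) for p in points)
        disc = max(disc, abs(vol - a_open / n), abs(a_closed / n - vol))
    return disc

# 1D check: the centered lattice {1/(2n), 3/(2n), ...} attains the
# optimal one-dimensional star discrepancy 1/(2n).
n = 8
pts = [((2 * i + 1) / (2 * n),) for i in range(n)]
```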