We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which
combines the higher-order bias cancellation of the Multistep Richardson-Romberg
method introduced in [Pa07] and the variance control resulting from the
stratification introduced in the Multilevel Monte Carlo (MLMC) method (see
[Hei01, Gi08]). Thus, in standard frameworks like discretization schemes of
diffusion processes, the root mean squared error (RMSE) ε > 0 can
be achieved with our MLRR estimator with a global complexity of
ε^{-2} log(1/ε) instead of ε^{-2} (log(1/ε))^2 with the standard MLMC method, at least when the weak
error E[Y_h] − E[Y_0] of the biased implemented estimator
Y_h can be expanded at any order in h and ∥Y_h − Y_0∥_2 = O(h^{1/2}). The MLRR estimator is then halfway between a regular MLMC
and a virtual unbiased Monte Carlo. When the strong error ∥Y_h − Y_0∥_2 = O(h^{β/2}), β < 1, the gain of MLRR over MLMC becomes even
more striking. We carry out numerical simulations to compare these estimators
in two settings: vanilla and path-dependent option pricing by Monte Carlo
simulation and the less classical Nested Monte Carlo simulation.
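The two ingredients combined above can be sketched on a toy problem. The following is a minimal illustration, not the paper's calibrated estimator: it uses the Euler scheme for an Ornstein-Uhlenbeck process (where E[Y_0] is known in closed form, so the bias is directly observable), a standard MLMC telescoping sum with coupled Brownian increments, and a two-step Richardson-Romberg combination with weights (−1, 2) that cancels the leading O(h) term of the weak error. All function names are illustrative assumptions.

```python
import numpy as np

# Toy problem (illustrative, not from the paper): dX = -X dt + dW on [0, 1],
# X_0 = 0, payoff Y = X_1^2. The exact value E[Y_0] = Var(X_1) is known, so
# the O(h) weak bias of the Euler scheme can be observed directly.
rng = np.random.default_rng(42)
EXACT = (1.0 - np.exp(-2.0)) / 2.0  # Var(X_1) for this OU process

def euler_payoff(n_steps, n_paths, dW):
    """Euler scheme driven by given Brownian increments dW of shape (n_steps, n_paths)."""
    h = 1.0 / n_steps
    x = np.zeros(n_paths)
    for k in range(n_steps):
        x = x - x * h + dW[k]
    return x ** 2  # biased payoff Y_h, h = 1/n_steps

def plain_level(n_steps, n_paths):
    """Independent crude Monte Carlo samples of Y_h."""
    h = 1.0 / n_steps
    dW = np.sqrt(h) * rng.standard_normal((n_steps, n_paths))
    return euler_payoff(n_steps, n_paths, dW)

def level_difference(n_coarse, n_paths):
    """Coupled correction Y_{h/2} - Y_h: both schemes share one Brownian path
    (the MLMC variance-control trick)."""
    h_fine = 1.0 / (2 * n_coarse)
    dW_fine = np.sqrt(h_fine) * rng.standard_normal((2 * n_coarse, n_paths))
    dW_coarse = dW_fine[0::2] + dW_fine[1::2]  # aggregate fine increments
    y_fine = euler_payoff(2 * n_coarse, n_paths, dW_fine)
    y_coarse = euler_payoff(n_coarse, n_paths, dW_coarse)
    return y_fine - y_coarse

def mlmc_estimate(L, n0, n_paths):
    """Telescoping sum E[Y_{h_0}] + sum_l E[Y_{h_l} - Y_{h_{l-1}}]:
    only the bias of the finest level h_L remains."""
    est = plain_level(n0, n_paths).mean()
    n = n0
    for _ in range(L):
        est += level_difference(n, n_paths).mean()
        n *= 2
    return est

def rr2_estimate(n_steps, n_paths):
    """Two-step Richardson-Romberg: 2 E[Y_{h/2}] - E[Y_h] cancels the
    leading O(h) term of the weak error expansion."""
    y_h = plain_level(n_steps, n_paths).mean()
    y_h2 = plain_level(2 * n_steps, n_paths).mean()
    return 2.0 * y_h2 - y_h

crude = plain_level(4, 200_000).mean()          # biased, h = 1/4
ml = mlmc_estimate(L=3, n0=4, n_paths=200_000)  # bias of the h = 1/32 level only
rr = rr2_estimate(4, 200_000)                   # leading bias term cancelled
print(f"exact={EXACT:.5f}  crude={crude:.5f}  mlmc={ml:.5f}  rr={rr:.5f}")
```

The MLRR estimator of the paper merges these two devices: the extrapolation weights are applied across all refinement levels at once, so the bias is killed at higher order while the coupled levels keep the variance (and hence the complexity) under control.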