
Multilevel Richardson-Romberg extrapolation

Abstract

We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which combines the higher-order bias cancellation of the Multistep Richardson-Romberg method introduced in [Pa07] with the variance control resulting from the stratification introduced in the Multilevel Monte Carlo (MLMC) method (see [Hei01, Gi08]). Thus, in standard frameworks like discretization schemes of diffusion processes, a root mean squared error (RMSE) $\varepsilon > 0$ can be achieved with our MLRR estimator at a global complexity of $\varepsilon^{-2} \log(1/\varepsilon)$ instead of $\varepsilon^{-2} (\log(1/\varepsilon))^2$ with the standard MLMC method, at least when the weak error $\mathbf{E}[Y_h] - \mathbf{E}[Y_0]$ of the biased implemented estimator $Y_h$ can be expanded at any order in $h$ and $\|Y_h - Y_0\|_2 = O(h^{\frac{1}{2}})$. The MLRR estimator thus lies halfway between a regular MLMC estimator and a virtual unbiased Monte Carlo estimator. When the strong error satisfies $\|Y_h - Y_0\|_2 = O(h^{\frac{\beta}{2}})$ with $\beta < 1$, the gain of MLRR over MLMC becomes even more striking. We carry out numerical simulations to compare these estimators in two settings: vanilla and path-dependent option pricing by Monte Carlo simulation, and the less classical nested Monte Carlo simulation.

Comment: 38 pages
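The bias-cancellation mechanism the abstract refers to rests on the weak error admitting an expansion in powers of $h$: when $\mathbf{E}[Y_h] = \mathbf{E}[Y_0] + c_1 h + O(h^2)$, the combination $2\,\mathbf{E}[Y_{h/2}] - \mathbf{E}[Y_h]$ eliminates the first-order term. The following sketch is only an illustration of this Richardson-Romberg idea on a deterministic toy problem (an Euler scheme for the ODE $y' = y$, whose error expands in $h$), not the authors' multilevel estimator; the function names are our own.

```python
import math

def euler(n_steps):
    """Euler scheme for y' = y, y(0) = 1, on [0, 1]; returns y(1), which
    approximates e with bias c1*h + c2*h^2 + ... for step h = 1/n_steps."""
    h = 1.0 / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * y
    return y

def richardson(n_steps):
    """First-order Richardson-Romberg combination 2*A(h/2) - A(h):
    cancels the O(h) bias term, leaving an O(h^2) error."""
    return 2.0 * euler(2 * n_steps) - euler(n_steps)

exact = math.e
err_plain = abs(euler(100) - exact)       # O(h) bias
err_rr = abs(richardson(100) - exact)     # O(h^2) after extrapolation
```

With `n_steps = 100`, the extrapolated value is roughly two orders of magnitude closer to $e$ than the plain Euler approximation, matching the jump from first- to second-order bias.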
