
    Pricing and Risk Management with High-Dimensional Quasi Monte Carlo and Global Sensitivity Analysis

    We review and apply Quasi Monte Carlo (QMC) and Global Sensitivity Analysis (GSA) techniques to pricing and risk management (greeks) of representative financial instruments of increasing complexity. We compare QMC vs. standard Monte Carlo (MC) results in great detail, using high-dimensional Sobol' low-discrepancy sequences, different discretization methods, and specific analyses of convergence, performance, speed-up, stability, and error optimization for finite-difference greeks. We find that QMC outperforms MC in most cases, including the highest-dimensional simulations and greeks calculations, showing faster and more stable convergence to exact or almost exact results. Using GSA, we are able to fully explain our findings in terms of the reduced effective dimension of our QMC simulation, achieved in most cases, but not always, by Brownian bridge discretization. We conclude that, beyond pricing, QMC is a very promising technique also for computing risk figures, greeks in particular, as it allows one to reduce the computational effort of the high-dimensional Monte Carlo simulations typical of modern risk management.

    Comment: 43 pages, 21 figures, 6 tables
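    The QMC-vs-MC comparison described above can be sketched in a one-dimensional toy setting (a plain European call under geometric Brownian motion, not the paper's instruments; all parameters below are hypothetical), using SciPy's scrambled Sobol' generator as the low-discrepancy source:

    ```python
    import numpy as np
    from scipy.stats import norm, qmc

    # Hypothetical contract and model parameters (illustration only)
    S0, K, r, sigma, T = 100.0, 100.0, 0.01, 0.2, 1.0

    # Black-Scholes closed form, used as the exact reference price
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    exact = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    def price(z):
        # Discounted payoff average over standard normal draws z
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

    n = 2**14  # a power of two, as Sobol' balance properties require

    # Standard MC: pseudo-random normals
    mc = price(np.random.default_rng(0).standard_normal(n))

    # QMC: scrambled Sobol' points mapped to normals by inverse transform
    u = qmc.Sobol(d=1, scramble=True, seed=0).random(n)
    qmc_price = price(norm.ppf(u))

    print(abs(mc - exact), abs(qmc_price - exact))
    ```

    In one dimension the inverse-transform QMC estimate typically converges much faster than MC at the same sample size; the paper's point is that this advantage can persist in high dimensions when the effective dimension is reduced (e.g. by Brownian bridge discretization).
    
    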

    Greedy vector quantization

    We investigate the greedy version of the $L^p$-optimal vector quantization problem for an $\mathbb{R}^d$-valued random vector $X \in L^p$. We show the existence of a sequence $(a_N)_{N\ge 1}$ such that $a_N$ minimizes $a \mapsto \big\|\min_{1\le i\le N-1}|X-a_i| \wedge |X-a|\big\|_{L^p}$ (the $L^p$-mean quantization error at level $N$ induced by $(a_1,\ldots,a_{N-1},a)$). We show that this sequence produces $L^p$-rate-optimal $N$-tuples $a^{(N)}=(a_1,\ldots,a_N)$ (i.e. the $L^p$-mean quantization error at level $N$ induced by $a^{(N)}$ goes to $0$ at rate $N^{-\frac{1}{d}}$). Greedy optimal sequences also satisfy, under natural additional assumptions, the distortion mismatch property: the $N$-tuples $a^{(N)}$ remain rate optimal with respect to the $L^q$-norms, $p\le q<p+d$. Finally, we propose optimization methods to compute greedy sequences, adapted from the usual Lloyd's I and Competitive Learning Vector Quantization procedures, either in their deterministic (implementable when $d=1$) or stochastic versions.

    Comment: 31 pages, 4 figures, few typos corrected (now an extended version of an eponym paper to appear in Journal of Approximation
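    The greedy construction above can be illustrated empirically (this is a brute-force sketch for $X \sim \mathcal{N}(0,1)$ in $d=1$ with $p=2$, not the paper's Lloyd's I or CLVQ procedures; sample size and candidate grid are hypothetical choices):

    ```python
    import numpy as np

    # Empirical stand-in for the law of X, and a candidate grid for new points
    rng = np.random.default_rng(1)
    X = rng.standard_normal(50_000)
    grid = np.linspace(-4.0, 4.0, 401)

    a = []                               # the greedy sequence a_1, a_2, ...
    dist = np.full(X.shape, np.inf)      # min_i |X - a_i| for the current tuple
    errors = []                          # L^2-mean quantization error per level
    for N in range(1, 11):
        # Greedy step: a_1..a_{N-1} stay fixed; a_N minimizes the level-N error
        errs = [np.mean(np.minimum(dist, np.abs(X - c)) ** 2) for c in grid]
        a_N = grid[int(np.argmin(errs))]
        a.append(a_N)
        dist = np.minimum(dist, np.abs(X - a_N))
        errors.append(np.sqrt(np.mean(dist ** 2)))

    print(a[:3], errors[0], errors[-1])
    ```

    The first greedy point lands near the mean (the $L^2$-optimal one-point quantizer), and the error decreases monotonically as points are added, consistent with the rate-optimality result stated in the abstract.
    
    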