
    Local Volatility Calibration by Optimal Transport

    The calibration of volatility models from observable option prices is a fundamental problem in quantitative finance. The most common approach among industry practitioners is based on the celebrated Dupire formula [6], which requires the knowledge of vanilla option prices for a continuum of strikes and maturities that can only be obtained via some form of price interpolation. In this paper, we propose a new local volatility calibration technique using the theory of optimal transport. We formulate a time-continuous martingale optimal transport problem, which seeks a martingale diffusion process that matches the known densities of an asset price at two different dates, while minimizing a chosen cost function. Inspired by the seminal work of Benamou and Brenier [1], we formulate the problem as a convex optimization problem, derive its dual formulation, and solve it numerically via an augmented Lagrangian method and the alternating direction method of multipliers (ADMM) algorithm. The solution effectively reconstructs the dynamics of the asset price between the two dates by recovering the optimal local volatility function, without requiring any time interpolation of the option prices.
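    For context, in the simplest setting (zero interest rates and dividends), the Dupire formula [6] mentioned above recovers the local volatility directly from the call price surface C(T, K). The following is the standard textbook form of that formula, stated here for reference rather than taken from the paper:

```latex
% Dupire's formula, zero-rate and zero-dividend case:
% local variance = calendar spread / butterfly, from call prices C(T, K).
\sigma_{\mathrm{loc}}^{2}(T, K)
  = \frac{\partial_T C(T, K)}{\tfrac{1}{2}\, K^{2}\, \partial_{KK} C(T, K)}
```

    Any noise or interpolation error in C feeds into the second strike derivative in the denominator, which is one reason interpolation-based calibration is delicate and why the paper avoids time interpolation of the prices altogether.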

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓ^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such ℓ^p-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions, we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
    Comment: 30 pages, 3 figures; this is version 2. Changes with respect to v1: small correction in the proof (but not the statement) of Lemma 3.15; description of Besov spaces in the introduction and Appendix A clarified (and corrected); smaller point size (making 30 instead of 38 pages).
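    For the ℓ^1 case (p = 1), the iteration described above reduces to a Landweber step followed by componentwise soft thresholding. Below is a minimal NumPy sketch under the assumptions that the operator is a matrix A with operator norm below 1 and that the coefficients are taken in the standard basis; the names ista and soft_threshold are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise soft shrinkage: S_tau(x) = sign(x) * max(|x| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, tau, n_iter=500):
    # Landweber iteration with thresholding for the l^1 penalty:
    # minimizes ||A x - y||^2 + 2 * tau * ||x||_1, assuming ||A|| < 1
    # so that the plain (unit-step) Landweber update is non-expansive.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x), tau)
    return x
```

    With a general orthonormal basis, the same update applies to the expansion coefficients; only the analysis and synthesis steps change.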

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have since emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
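    As one concrete instance of the reweighted ℓ2 techniques surveyed here, the ℓ1 norm admits the variational bound |w| = min over η > 0 of (w²/η + η)/2, attained at η = |w|, so the lasso can be solved by alternating a per-coordinate weighted ridge step with a closed-form η update. A minimal sketch, assuming a dense design matrix X and hypothetical function names:

```python
import numpy as np

def reweighted_l2_lasso(X, y, lam, n_iter=50, eps=1e-8):
    # Alternating scheme for 0.5 * ||y - X w||^2 + lam * ||w||_1, using
    # the bound |w_j| = min_{eta_j > 0} (w_j**2 / eta_j + eta_j) / 2.
    n_features = X.shape[1]
    eta = np.ones(n_features)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # Weighted ridge step: solve (X'X + lam * diag(1/eta)) w = X'y.
        w = np.linalg.solve(XtX + lam * np.diag(1.0 / eta), Xty)
        # Closed-form eta update; eps keeps the linear system well posed.
        eta = np.abs(w) + eps
    return w
```

    Each iteration requires only a linear solve, which is one reason such reweighting schemes are attractive when efficient ridge solvers are available.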