
    Revisiting the fitting of the Nelson–Siegel and Svensson models

    The Nelson–Siegel and the Svensson models are two widely used models for the term structure of interest rates. These models are quite simple and intuitive, but fitting them to market data is numerically challenging, and various difficulties have been reported. In this paper, we provide a novel mathematical analysis of the fitting problem based on parametric optimization. We formulate the fitting problem as a separable nonlinear least-squares problem in which the linear parameters can be eliminated. We provide a thorough discussion of the conditioning of the inner part of the reformulated problem and show that many of the difficulties reported when solving it are inherent to the problem formulation itself and cannot be tackled by choosing a particular optimization algorithm. Our stability analysis provides novel insights that we use to show that some of the ill-conditioning can be avoided, and that a suitably chosen penalty approach can be used to address the remaining ill-conditioning. Numerical results indicate that this approach has the expected impact while being independent of any choice of a particular optimization algorithm. We further establish smoothness properties of the reduced objective function, putting global optimization methods for the reduced problem on a sound mathematical basis.
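The separable structure mentioned in the abstract can be sketched with a variable-projection-style fit: for each candidate decay parameter tau, the linear coefficients beta are eliminated by ordinary least squares, and only tau is searched over. This is an illustrative sketch, not the paper's penalty approach; the grid search and all names are assumptions.

```python
import numpy as np

def ns_basis(t, tau):
    # Nelson–Siegel factor loadings (level, slope, curvature) for maturities t
    x = t / tau
    f1 = (1.0 - np.exp(-x)) / x
    f2 = f1 - np.exp(-x)
    return np.column_stack([np.ones_like(t), f1, f2])

def fit_ns(t, y, taus):
    """For each candidate tau, eliminate the linear parameters beta by
    ordinary least squares and keep the pair with the smallest residual."""
    best = None
    for tau in taus:
        B = ns_basis(t, tau)
        beta, *_ = np.linalg.lstsq(B, y, rcond=None)
        r = y - B @ beta
        sse = float(r @ r)
        if best is None or sse < best[0]:
            best = (sse, tau, beta)
    return best

# Synthetic noiseless curve generated at tau = 1.8 (illustrative values only)
t = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
true_beta = np.array([0.04, -0.02, 0.01])
y = ns_basis(t, 1.8) @ true_beta
sse, tau_hat, beta_hat = fit_ns(t, y, np.linspace(0.5, 5.0, 91))
```

With exact synthetic data, the outer search recovers the generating tau, and the eliminated linear parameters come out of a single well-understood least-squares solve per tau.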


    A radial basis function method for noisy global optimisation

    We present a novel response surface method for global optimisation of an expensive and noisy (black-box) objective function, where error bounds on the deviation of the observed noisy function values from their true counterparts are available. The method is based on Gutmann’s well-established RBF method for minimising an expensive and deterministic objective function, which has become popular from both a theoretical and a practical perspective. To construct suitable radial basis function approximants to the objective function and to determine new sample points for successive evaluation of the expensive noisy objective, the method uses a regularised least-squares criterion. In particular, new points are defined by means of a target value, analogous to the original RBF method. We provide essential convergence results and give a numerical illustration of the method on a simple test problem.
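A minimal one-dimensional illustration of the regularised least-squares idea: instead of interpolating noisy observations exactly, the RBF coefficients are penalised so the surrogate smooths the noise out. This is only a sketch of the fitting step, not the paper's actual criterion or its target-value point-selection machinery; the Gaussian kernel, penalty weight, and test function are assumptions.

```python
import numpy as np

def rbf_matrix(x, centers, eps=1.0):
    # Gaussian radial basis function matrix phi(|x_i - c_j|)
    d = np.abs(np.asarray(x)[:, None] - np.asarray(centers)[None, :])
    return np.exp(-(eps * d) ** 2)

def fit_regularised_rbf(X, y, mu):
    """Regularised least-squares RBF approximant: mu > 0 shrinks the
    coefficients and smooths observation noise (mu = 0 would interpolate)."""
    Phi = rbf_matrix(X, X)
    lam = np.linalg.solve(Phi.T @ Phi + mu * np.eye(len(X)), Phi.T @ y)
    return lambda x: rbf_matrix(np.atleast_1d(x), X) @ lam

rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 25)
y = np.sin(X) + 0.05 * rng.standard_normal(25)  # noisy observations
s = fit_regularised_rbf(X, y, mu=1e-2)          # smoothing surrogate
```

The regularisation parameter plays the role that the error bounds control in the actual method: looser bounds justify more smoothing.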

    Global optimisation of noisy grey-box functions with financial applications

    Financial derivatives of both plain vanilla and exotic type are at the core of today’s financial industry. For the valuation of these derivatives, mathematical pricing models are used that rely on different approaches such as (semi-)analytical transform methods, PDE approximations or Monte Carlo simulations. The calibration of the models to market prices, i.e. the estimation of appropriate model parameters, is a crucial procedure for making them applicable to real markets. Due to the inherent complexity of the models, this typically results in a nonconvex optimisation problem that is hard to solve, thus requiring advanced techniques.

    In this thesis, we study the general case of financial model calibration where model prices are approximated by standard Monte Carlo methods. We distinguish between two possibilities for employing Monte Carlo estimators along a calibration procedure: the simpler sample average approximation (SAA) strategy, which uses the same random sample for all function evaluations, and the more sophisticated variable sample average approximation (VSAA) strategy, which for each evaluation uses a different random sample with variable size. Both strategies have in common that they lead to (not prohibitively) expensive optimisation procedures providing approximate solutions to the original but unknown optimisation problem. Yet, whereas the former strategy results in a self-contained deterministic problem instance that may be fully solved by a suitable algorithm, the latter has to be considered together with a sequential sampling method that incorporates the strategy to select new evaluation points, which amounts to minimising a noisy objective function.

    For both strategies, we initially establish essential convergence properties for the (optimal) estimators in the almost sure sense. Specifically, in the case of the SAA strategy, we complement the well-established strong consistency of optimal estimators with their almost sure rates of convergence. This, in turn, allows us to draw several useful conclusions on their asymptotic bias and other notions of convergence. In the case of the VSAA strategy, we give conditions for the strong uniform consistency of the objective function estimators and provide corresponding uniform sample path bounds. Both results may be used to show convergence of a sequential sampling method adopting the VSAA scheme.

    We then address the global optimisation within both considered procedures, and first present a novel modification of Gutmann’s radial basis function (RBF) method for expensive and deterministic objective functions that is better suited for deterministic calibration problems. This modification exploits the particular data-fitting structure of these problems and additionally enhances the inherent search mechanism of the original method with an extended local search. We show convergence of the modified method and demonstrate its effectiveness on relevant test problems and by calibrating the Hull-White interest rate model under the SAA strategy in a real-world setting. Moreover, as the method may be applied equally well to similar data-fitting problems that are not necessarily expensive, we also demonstrate its practicability by fitting the Nelson-Siegel and Svensson models to market zero rates.

    Based on the RBF method, we further present a novel method for the global optimisation of expensive and noisy objective functions, where the level of noise is controlled by means of error bounds. The method uses a regularised least-squares criterion to construct suitable radial basis function approximants, which are then also used to determine new sample points in a similar manner to the original RBF method. We prove convergence of the method, albeit under a simplifying assumption on the error bounds, and evaluate its applicability on relevant test problems and by calibrating the Hull-White interest rate model under the VSAA strategy.
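The distinction between the two sampling strategies can be sketched on a toy Black–Scholes-style calibration of a single volatility parameter. All names, sample sizes, and the one-parameter setup are illustrative assumptions, not the thesis's actual calibration problems.

```python
import numpy as np

def mc_price(sigma, Z, S0=100.0, K=100.0, r=0.0, T=1.0):
    # Monte Carlo estimate of E[max(S_T - K, 0)] under geometric Brownian motion
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

rng = np.random.default_rng(1)
# "Market" price generated at sigma = 0.2 (synthetic target for illustration)
market_price = mc_price(0.2, rng.standard_normal(400_000))

# SAA: ONE fixed sample reused for every evaluation -> a deterministic objective
Z_fixed = rng.standard_normal(50_000)
def saa_objective(sigma):
    return (mc_price(sigma, Z_fixed) - market_price) ** 2

sigmas = np.linspace(0.05, 0.5, 181)
sigma_saa = sigmas[np.argmin([saa_objective(s) for s in sigmas])]

# VSAA: each evaluation draws a fresh sample of (possibly growing) size n,
# so the objective is noisy and must be paired with a sequential sampling method
def vsaa_objective(sigma, n):
    return (mc_price(sigma, rng.standard_normal(n)) - market_price) ** 2
```

The SAA objective can be handed to any deterministic global optimiser as-is, whereas repeated calls to the VSAA objective at the same sigma return different values, which is exactly the noisy setting the noisy RBF method targets.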

    On rates of convergence for sample average approximations in the almost sure sense and in mean

    We study the rates at which optimal estimators in the sample average approximation approach converge to their deterministic counterparts in the almost sure sense and in mean. To be able to quantify these rates, we consider the law of the iterated logarithm in a Banach space setting and first establish, under relatively mild assumptions, almost sure convergence rates for the approximating objective functions, which can then be transferred to the estimators for optimal values and solutions of the approximated problem. By exploiting a characterisation of the law of the iterated logarithm in Banach spaces, we are further able to derive under the same assumptions that the estimators also converge in mean, at a rate which essentially coincides with the one in the almost sure sense. This, in turn, allows us to quantify the asymptotic bias of optimal estimators as well as to draw conclusive insights on their mean squared error and on the estimators for the optimality gap. Finally, we address the notion of convergence in probability to derive rates in probability for the deviation of optimal estimators and (weak) rates of error probabilities without imposing strong conditions on exponential moments. We discuss the possibility of constructing confidence sets for the optimal values and solutions from our obtained results and provide a numerical illustration of the most relevant findings.
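For intuition, the transfer from objective functions to optimal values can be sketched in the classical scalar case, where the law of the iterated logarithm already gives the almost sure rate. This is a standard textbook-style sketch with generic symbols; the paper itself works in a Banach space setting.

```latex
% Pointwise LIL for the sample average \hat f_n(x) = \tfrac{1}{n}\sum_{i=1}^{n} F(x,\xi_i)
% with f(x) = \mathbb{E}[F(x,\xi)] and variance \sigma^2(x):
\limsup_{n\to\infty}
  \frac{\sqrt{n}\,\bigl|\hat f_n(x) - f(x)\bigr|}{\sqrt{2\log\log n}}
  = \sigma(x) \quad \text{a.s.}

% Transfer to optimal values: with \hat v_n = \min_x \hat f_n(x) and v^* = \min_x f(x),
|\hat v_n - v^*| \;\le\; \sup_{x}\bigl|\hat f_n(x) - f(x)\bigr|
  \;=\; O\!\left(\sqrt{\tfrac{\log\log n}{n}}\right) \quad \text{a.s.}
```

The work described above makes this transfer rigorous for function-valued (Banach space) sample averages, where the uniform bound over x is the delicate part.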