    CALLABLE SWAPS, SNOWBALLS AND VIDEOGAMES

    Although economically more meaningful than the alternatives, short rate models have been dismissed for financial engineering applications in favor of market models, as the latter are more flexible and better suited to cluster computing implementations. In this paper, we argue that the paradigm shift toward GPU architectures currently taking place in the high performance computing world can potentially change the situation and tilt the balance back in favor of a new generation of short rate models. We find that operator methods provide a natural mathematical framework for the implementation of realistic short rate models that match features of the historical process such as stochastic monetary policy, calibrate well to liquid derivatives, and provide new insights into complex structures. We show that callable swaps, callable range accruals, target redemption notes (TARNs) and various flavors of snowballs and snowblades can be priced with methods numerically as precise, fast and stable as those based on analytic closed form solutions, by means of BLAS level-3 methods on massively parallel GPU architectures.
    Keywords: interest rate derivatives; stochastic monetary policy; callable swaps; snowballs; GPU programming; operator methods
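
    The paper's operator-method machinery is not reproduced in the abstract, but the computational pattern it points to can be sketched: backward induction on a discretized short-rate state space, where each rollback step is a dense matrix product that maps to BLAS level-3 (cuBLAS on a GPU). A minimal sketch in Python follows; the state grid, transition operator, coupons and exercise values are illustrative placeholders, not the paper's calibrated model.

        import numpy as np

        n_states, n_periods = 200, 40
        rng = np.random.default_rng(0)

        # Illustrative one-period discounted transition operator; a real short
        # rate model would build it from the generator of the rate process
        # (e.g. via a matrix exponential), not at random.
        P = rng.random((n_states, n_states))
        P = 0.99 * P / P.sum(axis=1, keepdims=True)  # row sums < 1 mimic discounting

        coupon = rng.normal(0.0, 0.01, size=n_states)    # net coupon in each state
        exercise = rng.normal(0.0, 0.05, size=n_states)  # illustrative intrinsic value

        V = exercise.copy()               # value at the final exercise date
        for _ in range(n_periods):
            cont = P @ V + coupon         # rollback: a dense matrix-vector product
            V = np.maximum(exercise, cont)  # Bermudan-style exercise decision
        print(V.mean())

    Stacking many deals or calibration scenarios into the rollback turns these matrix-vector products into the level-3 GEMMs that massively parallel architectures execute efficiently.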

    gpusvcalibration: A R Package for Fast Stochastic Volatility Model Calibration Using GPUs

    In this paper we describe the gpusvcalibration R package for accelerating stochastic volatility model calibration on GPUs. The package is designed for use with existing CRAN optimization packages such as DEoptim and nloptr. Stochastic volatility models are used extensively across the capital markets for pricing and risk management of exchange traded financial options. However, calibration poses many challenges, including comparative assessment of the robustness of different models and optimization routines. For example, we observe that when fitted to sub-minute level mid-market quotes, models require recalibration every few minutes and the quality of the fit is sensitive to the choice of routine. The R statistical software environment is popular with quantitative analysts in the financial industry partly because it facilitates application design space exploration. However, a typical R-based implementation of a stochastic volatility model calibration on a CPU does not meet the performance requirements for sub-minute level trading, i.e. mid to high frequency trading. We identified the most computationally intensive part of the calibration process in R and off-loaded it to the GPU. We created a map-reduce interface to the computationally intensive kernel so that it can be easily integrated into a variety of R-based calibration codes using our package. We demonstrate that the new R-based implementation using our package is comparable in performance to a C/C++ GPU-based calibration code.
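
    The package's own API is not shown in the abstract; the sketch below, in Python with scipy's differential_evolution standing in for DEoptim, illustrates the map-reduce calibration pattern it describes: map a candidate parameter set over the quote grid (the part a GPU kernel would off-load), then reduce the per-quote errors to the scalar loss the optimizer sees. The two-parameter pricing function is a hypothetical stand-in for a stochastic volatility pricer.

        import numpy as np
        from scipy.optimize import differential_evolution  # CPU stand-in for DEoptim

        strikes = np.linspace(80.0, 120.0, 41)

        def model_price(params, k):
            # Hypothetical two-parameter pricer; the package would call a
            # GPU-accelerated stochastic volatility kernel here instead.
            a, b = params
            return a * np.exp(-b * (k - 100.0) ** 2 / 100.0)

        quotes = model_price((10.0, 0.5), strikes)  # synthetic mid-market quotes

        def loss(params):
            prices = model_price(params, strikes)         # "map" over the quote grid
            return float(np.sum((prices - quotes) ** 2))  # "reduce" to a scalar loss

        result = differential_evolution(loss, bounds=[(0.1, 50.0), (0.01, 5.0)], seed=1)
        print(result.x)  # should recover roughly (10.0, 0.5)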

    Analysis and numerical methods for stochastic volatility models in valuation of financial derivatives

    The main objective of this thesis is the study of the SABR stochastic volatility model for the underlyings (equity or interest rates) in order to price several market derivatives. When dealing with interest rate derivatives, the SABR model is combined with the LIBOR market model (LMM), currently the most popular interest rate model. In order to price derivatives we take advantage not only of Monte Carlo algorithms but also of the numerical resolution of the partial differential equations (PDEs) associated with these models. The PDEs related to SABR/LIBOR market models are high dimensional in space; in order to cope with the curse of dimensionality we take advantage of sparse grids. Furthermore, a detailed discussion of the calibration of the parameters of these models to market prices is included; to this end the Simulated Annealing global stochastic minimization algorithm is proposed. The above mentioned algorithms involve a high computational cost. In order to price derivatives and calibrate the models in as little time as possible, we make use of high performance computing (HPC) techniques (multicomputers, multiprocessors and GPUs). Finally, we design a novel algorithm based on Least-Squares Monte Carlo (LSMC) in order to approximate the solution of Backward Stochastic Differential Equations (BSDEs).
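
    The thesis's LSMC algorithm for BSDEs is not given in the abstract; as a reference point, here is a minimal sketch of the classical Least-Squares Monte Carlo (Longstaff-Schwartz) scheme for an American put under Black-Scholes dynamics, the building block being generalized. All parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)
        s0, k, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
        n_paths, n_steps = 100_000, 50
        dt = T / n_steps
        disc = np.exp(-r * dt)

        # Simulate geometric Brownian motion paths (times dt, 2*dt, ..., T).
        z = rng.standard_normal((n_paths, n_steps))
        log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                       + sigma * np.sqrt(dt) * z, axis=1)
        s = np.exp(log_s)

        cash = np.maximum(k - s[:, -1], 0.0)   # exercise value at maturity
        for t in range(n_steps - 2, -1, -1):
            cash *= disc                       # discount future cash flows one step
            itm = (k - s[:, t]) > 0.0          # regress on in-the-money paths only
            if itm.any():
                x = s[itm, t]
                coeffs = np.polyfit(x, cash[itm], deg=2)
                continuation = np.polyval(coeffs, x)
                exercise = k - x
                stop = exercise > continuation
                idx = np.where(itm)[0][stop]
                cash[idx] = exercise[stop]
        print(f"LSMC American put ≈ {disc * cash.mean():.3f}")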

    Heterogeneous Agent Model With Real Business Cycle With Application In Optimal Tax Policy And Social Welfare Reform

    In this paper, we develop a dynamic stochastic general equilibrium (DSGE) model with financial frictions and incomplete risk-sharing among overlapping-generations (OLG) heterogeneous households. The economy is endowed with a taxation system and a social security system calibrated to the current U.S. economy and tax policy, as well as elastic labor supply. Our baseline model matches the wealth-income disparity and moment conditions in financial markets as well as macroeconomic variables. In the baseline setting, the mean risk-free rate is 1.36% per year, the unlevered equity premium is 4.08%, and the Gini coefficients for labor earnings and total income are 0.65 and 0.51, respectively. The equity risk premium is driven by incomplete risk sharing among households and a participation barrier to the equity market. Furthermore, our model can act as a workhorse model for policy experiments including debt policy, wealth tax reform, capital income tax reform and social security reform. This paper could help policy makers understand the impact of policy changes on the macroeconomy and on household-level behavior.
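
    As a point of reference for the inequality moments quoted above, here is a minimal sketch of computing a Gini coefficient from a simulated cross-section; the lognormal earnings distribution is illustrative, not the model's.

        import numpy as np

        def gini(x):
            # Closed form for a sorted sample:
            # G = (n + 1 - 2 * sum(cumsum(x)) / sum(x)) / n
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            cum = np.cumsum(x)
            return (n + 1 - 2 * cum.sum() / cum[-1]) / n

        rng = np.random.default_rng(0)
        earnings = rng.lognormal(mean=0.0, sigma=1.2, size=100_000)
        print(gini(earnings))  # ~0.60 for this illustrative distribution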

    Parallel Computing for Linear, Nonlinear and Linear Inverse Problems in Finance

    Handling multidimensional parabolic linear, nonlinear and linear inverse problems is the main objective of this work. It is this multidimensionality that makes the use of Monte Carlo simulation methods virtually inevitable, and that also makes parallel architectures necessary: problems dealing with a large number of assets are major resource consumers, and only parallelization can reduce their execution times. Consequently, the first goal of our work is to propose random number generators appropriate for parallel and massively parallel architectures implemented on CPU/GPU clusters; we quantify the speedup and the energy consumption of the parallel execution of a European option pricing. The second objective is to reformulate the nonlinear problem of pricing American options so as to obtain the same parallelization gains as those obtained for linear problems. Beyond its suitability for parallelization, the proposed method, based on Malliavin calculus, has other practical advantages. Continuing with parallel algorithms, the last part of this work is dedicated to the uniqueness of the solution of some linear inverse problems in finance; this theoretical study enables the use of simple methods based on Monte Carlo.
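
    The thesis's proposed generators are not reproduced in the abstract; as a minimal sketch of the underlying idea, counter-based Philox streams (available in numpy) give statistically independent per-worker random number streams for a parallel European option pricing. The worker count and parameters are illustrative; a GPU version would assign streams to thread blocks rather than processes.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        s0, k, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0

        def price_chunk(seed_seq, n):
            # Each worker gets its own counter-based Philox stream, so chunks
            # are statistically independent without sharing generator state.
            rng = np.random.Generator(np.random.Philox(seed_seq))
            z = rng.standard_normal(n)
            st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
            return float(np.exp(-r * T) * np.maximum(st - k, 0.0).sum()), n

        if __name__ == "__main__":
            streams = np.random.SeedSequence(2024).spawn(8)  # one stream per worker
            with ProcessPoolExecutor() as pool:
                parts = list(pool.map(price_chunk, streams, [250_000] * 8))
            total, count = (sum(p) for p in zip(*parts))
            print(f"European call ≈ {total / count:.4f}")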

    Accelerated Adjoint Algorithmic Differentiation with Applications in Finance

    Adjoint Differentiation's (AD) ability to calculate Greeks efficiently and to machine precision, while scaling in constant time with the number of input variables, is attractive for calibration and hedging where frequent calculations are required. Algorithmic adjoint differentiation tools automatically generate derivative code and provide interesting challenges in both Computer Science and Mathematics. In this dissertation we focus on a manual implementation, with particular emphasis on parallel processing using Graphics Processing Units (GPUs) to accelerate run times. Adjoint differentiation is applied to a Call on Max rainbow option with 3 underlying assets in a Monte Carlo environment. Assets are driven by the Heston stochastic volatility model and implemented using the Milstein discretisation scheme with truncation. The price is calculated along with Deltas and Vegas for each asset, for a total of 6 sensitivities. The application achieves favourable levels of parallelism on all three dimensions exposed by the GPU: Instruction Level Parallelism (ILP), Thread Level Parallelism (TLP), and Single Instruction Multiple Data (SIMD). We estimate that the forward pass of the Milstein discretisation contains an ILP of 3.57, which falls within the typical range of 2-4. Monte Carlo simulations are embarrassingly parallel and are capable of achieving a high level of concurrency. However, in this context a single kernel running at low occupancy can perform better with a combination of shared memory, vectorized data structures and a high register count per thread. A run with 501,760 paths and 360 time steps takes 48.801 seconds on the Intel Xeon CPU; the GTX 950 Maxwell GPU completes in 0.115 seconds, a 422× speedup and a throughput of 13 million paths per second. The K40 is capable of achieving better performance.
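
    A minimal sketch of the Milstein discretisation with truncation for the Heston model follows, single-asset for brevity (the dissertation prices a three-asset Call on Max and adds the adjoint sweep for Greeks, neither of which is reproduced here); all parameters are illustrative.

        import numpy as np

        s0, v0 = 100.0, 0.04
        kappa, theta, xi, rho, r = 2.0, 0.04, 0.5, -0.7, 0.03
        T, n_steps, n_paths = 1.0, 360, 100_000
        dt = T / n_steps

        rng = np.random.default_rng(7)
        s = np.full(n_paths, s0)
        v = np.full(n_paths, v0)
        for _ in range(n_steps):
            z1 = rng.standard_normal(n_paths)
            z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
            vp = np.maximum(v, 0.0)  # truncate negative variance before use
            s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
            # Milstein step for v; the last term is the 0.25*xi^2*(dW^2 - dt)
            # correction that distinguishes Milstein from Euler.
            v = (v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
                 + 0.25 * xi**2 * dt * (z2**2 - 1.0))

        payoff = np.maximum(s - 100.0, 0.0)  # single-asset call leg for brevity
        print(np.exp(-r * T) * payoff.mean())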

    Evolutionary Algorithms and Computational Methods for Derivatives Pricing

    This work aims to provide novel computational solutions to the problem of derivative pricing. To achieve this, a novel hybrid evolutionary algorithm (EA) based on particle swarm optimisation (PSO) and differential evolution (DE) is introduced and applied, along with various other state-of-the-art variants of PSO and DE, to the problem of calibrating the Heston stochastic volatility model. It is found that state-of-the-art DEs provide excellent calibration performance, and that the rudimentary DEs used previously in the literature undervalued these methods. The use of neural networks with EAs for approximating the solution of derivatives pricing models is investigated next. A set of neural networks is trained on Monte Carlo (MC) simulation data to approximate the closed form solution for European, Asian and American style options. The results are comparable to MC pricing, but offline evaluation of the price using the neural networks is orders of magnitude faster and computationally more efficient. Finally, the use of custom hardware for numerical pricing of derivatives is introduced. The solver presented here provides an energy efficient data-flow implementation for pricing derivatives, which has the potential to be incorporated into larger high-speed/low-energy trading systems.
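
    A minimal sketch of the surrogate-pricing idea follows: train a small network on Monte Carlo prices so that later evaluations reduce to a cheap forward pass. The Black-Scholes setup and sklearn's MLPRegressor are illustrative stand-ins; the thesis trains networks for European, Asian and American payoffs and tunes them with EAs.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        def mc_call_price(s0, sigma, k=100.0, r=0.03, T=1.0, n=20_000):
            # Noisy Monte Carlo label for one (spot, vol) training point.
            z = rng.standard_normal(n)
            st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
            return np.exp(-r * T) * np.maximum(st - k, 0.0).mean()

        # Training grid over spot and volatility; labels come from MC simulation.
        X = np.column_stack([rng.uniform(80, 120, 2000), rng.uniform(0.1, 0.5, 2000)])
        y = np.array([mc_call_price(s, sig) for s, sig in X])

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        net.fit(X, y)

        # Offline evaluation: one cheap forward pass instead of a fresh MC run.
        print(net.predict([[100.0, 0.2]]))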

    Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds

    Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance tradeoffs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention under multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it includes the design of resource utilization models, based on step-wise multiple linear regression and artificial neural networks, that support prediction of better performing component compositions; the total number of possible compositions is governed by Bell's number, which yields a combinatorially explosive search space. Second, it includes algorithms to improve VM placement, mitigating resource heterogeneity and contention with a load-aware VM placement scheduler and autonomous detection of under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support the determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with a mean absolute error of only 0.3125 VMs across multiple workloads.
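
    The combinatorial explosion referenced above is easy to make concrete: Bell's number B_n counts the ways n application components can be partitioned across VMs. A minimal sketch computing the sequence via the Bell triangle (the dissertation's search algorithms are not reproduced here):

        def bell_numbers(n):
            # Bell triangle: each row starts with the last entry of the previous
            # row; B_k is the first entry of row k.
            row, bells = [1], [1]
            for _ in range(n - 1):
                nxt = [row[-1]]
                for v in row:
                    nxt.append(nxt[-1] + v)
                row = nxt
                bells.append(row[0])
            return bells

        print(bell_numbers(10))  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147]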