1,430 research outputs found

    IMPLICATIONS OF CROP YIELD AND REVENUE INSURANCE FOR PRODUCER HEDGING

    New types of crop insurance have expanded the set of tools from which crop producers may choose to manage risk, yet little is known about how these products interact with futures and options. This analysis examines optimal futures and put ratios in the presence of four alternative insurance coverages. An analytical model investigates the comparative statics of the relationship between hedging and insurance, and an additional numerical analysis incorporates futures price, basis, and yield variability. Yield insurance is found to have a positive effect on hedging levels, while revenue insurance tends to result in slightly lower hedging demand than would occur given the same level of yield insurance coverage. Keywords: Risk and Uncertainty
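    The interaction between insurance and the futures hedge can be illustrated with a minimal simulation. The sketch below is entirely hypothetical (independent lognormal price and normal yield, a stylised APH-style yield indemnity, invented parameters; the paper's model also covers basis risk and put options) and simply grid-searches the variance-minimising futures ratio:

```python
import numpy as np

def min_variance_hedge(coverage=0.0, n_paths=100_000, seed=0):
    """Grid-search the variance-minimising futures hedge ratio for a
    producer holding stylised (APH-style) yield insurance.  All
    distributions and parameters here are hypothetical."""
    rng = np.random.default_rng(seed)
    f0 = 4.00                                                   # futures price today ($/bu)
    price = f0 * np.exp(rng.normal(-0.01125, 0.15, n_paths))    # lognormal harvest price, E[price] = f0
    yld = np.clip(rng.normal(150.0, 30.0, n_paths), 0.0, None)  # yield (bu/acre)
    exp_yld = 150.0
    # Indemnity: price-election value of any shortfall below coverage x expected yield.
    indemnity = f0 * np.maximum(coverage * exp_yld - yld, 0.0)
    best_h, best_var = 0.0, np.inf
    for h in np.linspace(0.0, 1.2, 61):   # futures sold per unit of expected yield
        revenue = price * yld + indemnity + h * exp_yld * (f0 - price)
        if revenue.var() < best_var:
            best_h, best_var = h, revenue.var()
    return best_h
```

    Under these simplifying assumptions (price independent of yield, minimum-variance objective) the optimal ratio sits near 1.0 with or without coverage; the insurance effects the paper reports come from its richer expected-utility setting.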

    Accelerating Quadrature Methods for Option Valuation


    Accelerating Reconfigurable Financial Computing

    This thesis proposes novel approaches to the design, optimisation, and management of reconfigurable computer accelerators for financial computing. There are three contributions. First, we propose novel reconfigurable designs for derivative pricing using both Monte Carlo and quadrature methods. Such designs involve exploring techniques such as control variate optimisation for Monte Carlo, and multi-dimensional analysis for quadrature methods. Significant speedups and energy savings are achieved using our Field-Programmable Gate Array (FPGA) designs over both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) designs. Second, we propose a framework for distributing computing tasks on multi-accelerator heterogeneous clusters. In this framework, different computational devices, including FPGAs, GPUs, and CPUs, work collaboratively on the same financial problem based on a dynamic scheduling policy. The trade-offs in speed and energy consumption of different accelerator allocations are investigated. Third, we propose a mixed precision methodology for optimising Monte Carlo designs, and a reduced precision methodology for optimising quadrature designs. These methodologies enable us to optimise the throughput of reconfigurable designs by using datapaths with minimised precision, while maintaining the same accuracy of results as in the original designs.
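    The control variate optimisation mentioned above can be sketched in a few lines of software (not the thesis's FPGA implementation). A standard illustration: a European call under Black-Scholes, using the discounted terminal asset price, whose mean is known to equal the spot price, as the control:

```python
import numpy as np

def mc_call_with_cv(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                    n=200_000, seed=1):
    """European call by Monte Carlo, with the discounted terminal price
    (known mean: s0) as a control variate."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    cv = np.exp(-r * t) * st                        # control variate, E[cv] = s0
    beta = np.cov(payoff, cv)[0, 1] / cv.var()      # variance-minimising coefficient
    plain = payoff.mean()
    adjusted = plain - beta * (cv.mean() - s0)      # same mean, smaller variance
    return plain, adjusted
```

    The adjusted estimator is unbiased but far less noisy whenever payoff and control are strongly correlated, which is the effect such designs exploit to cut the number of paths needed.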

    Characteristic functions in the Cheyette Interest Rate Model

    We investigate the characteristic functions of multi-factor Cheyette models and their application to the valuation of interest rate derivatives. The model dynamics can be classified as an affine diffusion process, implying an exponential structure for the characteristic function. The characteristic function is determined by a model-specific system of ODEs that can be solved explicitly for arbitrary Cheyette models. The necessary transform inversion turns out to be numerically stable, as a singularity can be removed. Thus the pricing methodology is reliable, and we use it to calibrate multi-factor Cheyette models to caps. Keywords: Cheyette Model, Characteristic Function, Fourier Transform, Calibration of Multi-Factor Models
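    The pricing route described above, an affine-model characteristic function inverted by Fourier transform with a removable singularity, can be sketched with the simplest affine CF available. The Python sketch below substitutes the Black-Scholes log-price CF for the Cheyette CF and prices an equity call rather than a cap; the Gil-Pelaez-style midpoint integration starts just above u = 0, where the singularity is removable:

```python
import numpy as np

def cf_lognormal(u, s0, r, sigma, t):
    # Characteristic function of ln S_T under Black-Scholes: a simple
    # affine-model CF standing in for the Cheyette CF of the abstract.
    mu = np.log(s0) + (r - 0.5 * sigma**2) * t
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * t)

def call_via_cf(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0):
    """European call via Fourier inversion of the CF: the two
    Gil-Pelaez-style probabilities P1 and P2 are recovered by
    midpoint-rule integration, which never touches u = 0."""
    du = 0.01
    u = np.arange(du / 2, 100.0, du)       # midpoint nodes avoid the singularity
    lnk = np.log(k)
    phi = cf_lognormal(u, s0, r, sigma, t)
    phi1 = cf_lognormal(u - 1j, s0, r, sigma, t) / cf_lognormal(-1j, s0, r, sigma, t)
    p1 = 0.5 + du * np.sum((np.exp(-1j * u * lnk) * phi1 / (1j * u)).real) / np.pi
    p2 = 0.5 + du * np.sum((np.exp(-1j * u * lnk) * phi / (1j * u)).real) / np.pi
    return s0 * p1 - k * np.exp(-r * t) * p2
```

    Swapping in a Cheyette CF would mean replacing `cf_lognormal` with the solution of the model's ODE system; the inversion machinery stays the same.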

    Sequential Monte Carlo pricing of American-style options under stochastic volatility models

    We introduce a new method to price American-style options on underlying investments governed by stochastic volatility (SV) models. The method does not require the volatility process to be observed. Instead, it exploits the fact that the optimal decision functions in the corresponding dynamic programming problem can be expressed as functions of conditional distributions of volatility, given observed data. By constructing statistics summarizing information about these conditional distributions, one can obtain high-quality approximate solutions. Although the required conditional distributions are in general intractable, they can be approximated to arbitrary precision using sequential Monte Carlo schemes. The drawback, as with many Monte Carlo schemes, is potentially heavy computational demand. We present two variants of the algorithm: one closely related to the well-known least-squares Monte Carlo algorithm of Longstaff and Schwartz [The Review of Financial Studies 14 (2001) 113-147], and the other solving the same problem using a "brute force" gridding approach. We estimate an illustrative SV model using Markov chain Monte Carlo (MCMC) methods for three equities. We also demonstrate the use of our algorithm by estimating the posterior distribution of the market price of volatility risk for each of the three equities. Published at http://dx.doi.org/10.1214/09-AOAS286 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
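    The least-squares Monte Carlo algorithm of Longstaff and Schwartz referenced above can be sketched without the stochastic-volatility filtering layer. A minimal constant-volatility (GBM) illustration of the backward-induction regression step, using a quadratic polynomial basis (an arbitrary choice for this sketch):

```python
import numpy as np

def lsm_american_put(s0=36.0, k=40.0, r=0.06, sigma=0.2, t=1.0,
                     steps=50, n_paths=100_000, seed=2):
    """American put via Longstaff-Schwartz least-squares Monte Carlo
    under constant-volatility GBM (a simplification of the SV setting)."""
    rng = np.random.default_rng(seed)
    dt = t / steps
    z = rng.standard_normal((steps, n_paths))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=0))
    cash = np.maximum(k - s[-1], 0.0)                # payoff at maturity
    for i in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                      # discount one step back
        itm = k - s[i] > 0.0                         # regress on in-the-money paths only
        if not itm.any():
            continue
        x = s[i, itm]
        coef = np.polyfit(x, cash[itm], 2)           # quadratic continuation-value fit
        cont = np.polyval(coef, x)
        exercise = np.maximum(k - x, 0.0)
        ex = exercise > cont                         # exercise beats continuing
        cash[np.flatnonzero(itm)[ex]] = exercise[ex]
    return float(np.exp(-r * dt) * cash.mean())
```

    The paper's variants replace the regressors `x` with filtered summary statistics of the volatility's conditional distribution; the regression-and-compare backbone is unchanged.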

    Efficient Numerical Methods for Pricing American Options under Lévy Models

    Two new numerical methods for the valuation of American and Bermudan options are proposed, which admit a large class of asset price models for the underlying. In particular, the methods can be applied to Lévy models that admit jumps in the asset price. These models provide a more realistic description of market prices and lead to better calibration results than the well-known Black-Scholes model. The proposed methods are not based on the indirect approach via partial differential equations, but directly compute option prices as risk-neutral expectations. The expectations are approximated by numerical quadrature methods. While this approach is initially limited to European options, combining it with interpolation methods also allows for the pricing of Bermudan and American options. Two different interpolation methods are used: cubic splines on the one hand, and mesh-free interpolation by radial basis functions on the other. The resulting valuation methods allow for adaptive space discretization and error control. Their numerical properties are analyzed and, finally, the methods are validated and tested against various single-asset and multi-asset options under different market models.
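    The direct quadrature approach can be sketched for the simplest case. The code below prices a Bermudan put under Black-Scholes (not a Lévy model) by midpoint quadrature of the risk-neutral expectation between exercise dates on a log-price grid; for simplicity the grid doubles as the quadrature nodes, so no explicit spline or RBF interpolation step appears, unlike in the thesis's methods:

```python
import numpy as np

def bermudan_put_quad(s0=36.0, k=40.0, r=0.06, sigma=0.2, t=1.0,
                      n_ex=10, m=400):
    """Bermudan put by backward induction: at each exercise date the
    continuation value is a quadrature of the risk-neutral expectation."""
    dt = t / n_ex
    width = 5.0 * sigma * np.sqrt(t)
    x = np.linspace(np.log(s0) - width, np.log(s0) + width, m)  # log-price grid
    h = x[1] - x[0]
    mu, sd = (r - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt)
    # Gaussian transition density of the log-price between exercise dates
    dens = (np.exp(-(x[None, :] - x[:, None] - mu)**2 / (2 * sd * sd))
            / (sd * np.sqrt(2 * np.pi)))
    payoff = np.maximum(k - np.exp(x), 0.0)
    v = payoff.copy()                                 # value at the last exercise date
    for _ in range(n_ex - 1):
        cont = np.exp(-r * dt) * h * (dens @ v)       # quadrature of the expectation
        v = np.maximum(payoff, cont)                  # Bermudan early-exercise check
    d0 = (np.exp(-(x - np.log(s0) - mu)**2 / (2 * sd * sd))
          / (sd * np.sqrt(2 * np.pi)))
    return float(np.exp(-r * dt) * h * np.sum(d0 * v))   # expectation from t = 0
```

    For a Lévy model, `dens` would be replaced by the model's transition density (e.g. recovered from its characteristic function), which is exactly where the quadrature approach earns its generality.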

    Automated optimization of reconfigurable designs

    Currently, the optimization of reconfigurable design parameters is typically done manually and often involves a substantial amount of effort. The main focus of this thesis is to reduce this effort: the designer can focus on implementation and design correctness, leaving the tools to carry out optimization. To address this, the thesis makes three main contributions. First, we present an initial investigation of reconfigurable design optimization with the Machine Learning Optimizer (MLO) algorithm. The algorithm is based on surrogate model technology and particle swarm optimization. By using surrogate models, the long hardware generation time is mitigated and automatic optimization becomes possible. For the first time, to the best of our knowledge, we show how those models can predict both when hardware generation will fail and how well the design will perform. Second, we introduce a new algorithm called Automatic Reconfigurable Design Efficient Global Optimization (ARDEGO), which is based on the Efficient Global Optimization (EGO) algorithm. Compared to MLO, it supports parallelism and uses a simpler optimization loop. As the ARDEGO algorithm uses multiple optimization compute nodes, its optimization speed is greatly improved relative to MLO. Hardware generation time is random in nature: two similar configurations can take vastly different amounts of time to generate, making parallelization complicated. The novelty is the efficient use of the optimization compute nodes, achieved by extending the asynchronous parallel EGO algorithm to constrained problems. Third, we show how the results of design synthesis and benchmarking can be reused when a design is ported to a different platform or when its code is revised. This is achieved through the new Auto-Transfer algorithm. A methodology for making the best use of available synthesis and benchmarking results is a novel contribution to the design automation of reconfigurable systems.
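    The surrogate-model loop behind MLO and ARDEGO can be caricatured in a few lines: evaluate a few designs, fit a cheap surrogate, let the surrogate propose the next expensive evaluation, and repeat. Everything below is a toy (a quadratic least-squares surrogate rather than the thesis's regressors, and a synthetic one-parameter benchmark standing in for hours of synthesis plus benchmarking):

```python
import numpy as np

def expensive_benchmark(p):
    """Stand-in for synthesising and benchmarking one design point
    (hours in reality, instant here); the true optimum is p = 7."""
    return (p - 7.0)**2 + 3.0

def surrogate_optimize(n_init=4, n_iter=8, lo=1.0, hi=16.0, seed=3):
    """MLO-flavoured loop: fit a cheap surrogate to the designs
    evaluated so far and let it pick the next candidate."""
    rng = np.random.default_rng(seed)
    xs = list(rng.uniform(lo, hi, n_init))        # initial random designs
    ys = [expensive_benchmark(x) for x in xs]
    for _ in range(n_iter):
        coef = np.polyfit(xs, ys, 2)              # quadratic surrogate (toy choice)
        cand = np.linspace(lo, hi, 256)
        nxt = float(cand[np.argmin(np.polyval(coef, cand))])
        xs.append(nxt)                            # only the surrogate's pick
        ys.append(expensive_benchmark(nxt))       # pays the expensive cost
    return xs[int(np.argmin(ys))]
```

    The thesis's algorithms additionally model the probability that generation fails and run several such evaluations asynchronously in parallel, which this single-threaded sketch omits.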

    gpusvcalibration: A R Package for Fast Stochastic Volatility Model Calibration Using GPUs

    In this paper we describe the gpusvcalibration R package for accelerating stochastic volatility model calibration on GPUs. The package is designed for use with existing CRAN packages for optimization such as DEoptim and nloptr. Stochastic volatility models are used extensively across the capital markets for the pricing and risk management of exchange-traded financial options. However, there are many challenges to calibration, including the comparative assessment of the robustness of different models and optimization routines. For example, we observe that when fitted to sub-minute-level mid-market quotes, models require recalibration every few minutes, and the quality of the fit is routine-sensitive. The R statistical software environment is popular with quantitative analysts in the financial industry partly because it facilitates exploration of the application design space. However, a typical R-based implementation of stochastic volatility model calibration on a CPU does not meet the performance requirements for sub-minute-level trading, i.e. mid- to high-frequency trading. We identified the most computationally intensive part of the calibration process in R and off-loaded it to the GPU. We created a map-reduce interface to the computationally intensive kernel so that it can be easily integrated into a variety of R-based calibration codes using our package. We demonstrate that the new R-based implementation using our package is comparable in performance to a C/C++ GPU-based calibration code.
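    The map-reduce shape of the offloaded kernel can be sketched in Python with a toy one-parameter model (Black-Scholes standing in for a stochastic volatility pricer; the chunked "map" stage is the part such a package would run on the GPU, here executed serially):

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(s, k, t, r, sigma):
    # Toy pricing kernel (Black-Scholes) standing in for an SV model price.
    d1 = (log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    n = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return s * n(d1) - k * exp(-r * t) * n(d2)

def calibration_rmse(sigma, quotes, s=100.0, r=0.01, n_chunks=4):
    """Map-reduce calibration objective: 'map' prices each chunk of the
    quote grid independently; 'reduce' folds the chunk errors into one
    RMSE that an optimizer such as DEoptim would minimise over sigma."""
    chunks = np.array_split(np.arange(len(quotes)), n_chunks)
    sq_err = 0.0
    for idx in chunks:                         # map: independent, offloadable
        sq_err += sum((bs_call(s, quotes[i][0], quotes[i][1], r, sigma)
                       - quotes[i][2]) ** 2 for i in idx)
    return float(np.sqrt(sq_err / len(quotes)))   # reduce
```

    Each `quotes` entry is a hypothetical `(strike, maturity, mid_price)` triple; because chunks share no state, the map stage parallelises trivially across GPU threads.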
    • …