2,783 research outputs found

    Accelerating the Fourier split operator method via graphics processing units

    Full text link
    Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time-dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of computing fast Fourier transforms much more efficiently than traditional central processing units, and thus make efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude over implementations for traditional central processing units are reached in the solution of the time-dependent Schrödinger equation and the time-dependent Dirac equation.
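    The core of the Fourier split operator method can be sketched in a few lines of NumPy: alternate a half step in the potential (in position space) with a full step in the kinetic term (in Fourier space). This is a minimal CPU sketch with illustrative grid parameters, not the paper's GPU implementation; on a GPU the `np.fft` calls would be replaced by a device FFT.

```python
import numpy as np

# Strang-splitting step for the 1D time-dependent Schrodinger equation
# i dpsi/dt = -(1/2) psi'' + V psi  (units with hbar = m = 1).
# Grid sizes, time step, and initial state below are illustrative choices.

def split_step(psi, V, dx, dt):
    """Advance psi by one time step dt using Strang splitting."""
    N = psi.size
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # angular wavenumber grid
    half_kick = np.exp(-0.5j * dt * V)           # half step in the potential
    drift = np.exp(-0.5j * dt * k**2)            # full step in the kinetic term
    psi = half_kick * psi
    psi = np.fft.ifft(drift * np.fft.fft(psi))   # kinetic part in Fourier space
    return half_kick * psi

# Free Gaussian wave packet on a periodic grid.
x = np.linspace(-20, 20, 1024, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2) * np.exp(1j * x)             # initial state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize
V = np.zeros_like(x)                             # free particle: V = 0

for _ in range(100):
    psi = split_step(psi, V, dx, dt=0.01)

# Each factor is a pure phase and the FFT is unitary, so the norm is conserved.
norm = np.sum(np.abs(psi)**2) * dx
```

    Since every substep is a pointwise complex exponential or an FFT, the method maps directly onto GPU hardware, which is where the reported speedups come from.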

    Pricing of early-exercise Asian options under Lévy processes based on Fourier cosine expansions

    Get PDF
    In this article, we propose a pricing method for Asian options with early-exercise features. It is based on a two-dimensional integration and a backward recursion of the Fourier coefficients, in which several numerical techniques, such as Fourier cosine expansions, Clenshaw–Curtis quadrature and the Fast Fourier Transform (FFT), are employed. Rapid convergence of the pricing method is illustrated by an error analysis. Its performance is further demonstrated by various numerical examples, where we also show the power of an implementation on Graphics Processing Units (GPUs).
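    The flavor of Fourier cosine pricing can be shown on a much simpler case than the paper's Asian options: a plain European call under Black-Scholes, priced with the standard COS expansion (truncation rule, payoff coefficients, and parameter values are textbook choices, not taken from the article).

```python
import numpy as np

def cos_call(S0, K, r, sigma, T, N=128, L=10.0):
    """European call by the Fourier-cosine (COS) expansion under Black-Scholes."""
    c1 = (r - 0.5 * sigma**2) * T            # mean of the log-return
    c2 = sigma**2 * T                        # its variance
    a = c1 - L * np.sqrt(c2)                 # truncation interval [a, b]
    b = c1 + L * np.sqrt(c2)
    k = np.arange(N)
    u = k * np.pi / (b - a)

    # Cosine coefficients of the call payoff K*(e^y - 1)^+ on [0, b].
    chi = (np.cos(u * (b - a)) * np.exp(b) - np.cos(-u * a)
           + u * np.sin(u * (b - a)) * np.exp(b) - u * np.sin(-u * a)) / (1 + u**2)
    psi = np.empty(N)
    psi[0] = b
    psi[1:] = (np.sin(u[1:] * (b - a)) - np.sin(-u[1:] * a)) / u[1:]
    Vk = 2.0 / (b - a) * K * (chi - psi)

    # Characteristic function of the log-return under Black-Scholes.
    cf = np.exp(1j * u * c1 - 0.5 * u**2 * c2)

    x = np.log(S0 / K)
    terms = np.real(cf * np.exp(1j * u * (x - a))) * Vk
    terms[0] *= 0.5                          # first series term is halved
    return np.exp(-r * T) * terms.sum()

price = cos_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

    For smooth densities such as this one, a small N already reproduces the analytic Black-Scholes price to high accuracy, which is the exponential convergence the error analysis refers to.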

    Pricing Early-Exercise and Discrete Barrier Options by Fourier-Cosine Series Expansions

    Get PDF
    We present a pricing method based on Fourier-cosine expansions for early-exercise and discretely-monitored barrier options. The method works well for exponential Lévy asset price models. The error convergence is exponential for processes characterized by very smooth transitional probability density functions. The computational complexity is O((M-1) N log N), with N a (small) number of terms from the series expansion, and M the number of early-exercise/monitoring dates.
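    The exercise/continuation comparison at each of the M dates can be illustrated with a far simpler engine than the COS recursion: an American put on a Cox-Ross-Rubinstein binomial tree. This is only an analogy for the backward recursion over exercise dates, not the paper's Fourier-cosine algorithm, and the parameters are made up.

```python
import numpy as np

def crr_american_put(S0, K, r, sigma, T, M=500):
    """American put by backward induction on a CRR binomial tree."""
    dt = T / M
    u = np.exp(sigma * np.sqrt(dt))      # up factor
    d = 1.0 / u                          # down factor
    p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal asset prices and payoffs.
    j = np.arange(M + 1)
    S = S0 * u**j * d**(M - j)
    V = np.maximum(K - S, 0.0)

    # Backward induction: at each date compare continuation value with
    # immediate exercise, exactly as in the early-exercise recursion.
    for i in range(M - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u**j * d**(i - j)
        V = np.maximum(disc * (p * V[1:] + (1 - p) * V[:-1]), K - S)
    return V[0]

price = crr_american_put(100.0, 100.0, 0.05, 0.2, 1.0)
```

    The tree costs O(M^2) per step count; the point of the paper's method is that the same backward recursion over M dates costs only O((M-1) N log N) when the continuation value is propagated through Fourier-cosine coefficients.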

    Lower Precision calculation for option pricing

    Get PDF
    The problem of option pricing is one of the most critical issues and fundamental building blocks in mathematical finance. The research includes the deployment of lower-precision types in two option-pricing algorithms: Black-Scholes and Monte Carlo simulation. We work under the assumption that the shorter the numbers used in the calculations are (in bits), the more operations can be performed in the same time. The results are examined by comparison to the outputs of single- and double-precision types. The major goal of the study is to indicate whether lower-precision types can be used in financial mathematics. The findings indicate that Black-Scholes provided more precise outputs than the basic implementation of Monte Carlo simulation. A modification of the Monte Carlo algorithm is also proposed. The research shows the limitations and opportunities of lower-precision arithmetic. To benefit from it in terms of calculation time, the improved algorithms can be implemented on GPUs or FPGAs. We conclude that under particular restrictions, lower-precision calculation can be used in mathematical finance.
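    A minimal version of this precision experiment: evaluate the Black-Scholes call formula once in float64 and once in float32 and compare. The normal CDF uses the Abramowitz-Stegun polynomial approximation so the whole evaluation stays in the requested dtype; the parameter values are illustrative, not from the paper.

```python
import numpy as np

def ncdf(x, dtype):
    """Standard normal CDF via the Abramowitz-Stegun 26.2.17 polynomial
    (|error| < 7.5e-8), evaluated entirely in the requested dtype."""
    x = dtype(x)
    b = [dtype(v) for v in (0.319381530, -0.356563782, 1.781477937,
                            -1.821255978, 1.330274429)]
    t = dtype(1.0) / (dtype(1.0) + dtype(0.2316419) * abs(x))
    pdf = np.exp(-x * x / dtype(2.0)) / np.sqrt(dtype(2.0) * dtype(np.pi))
    poly = t * (b[0] + t * (b[1] + t * (b[2] + t * (b[3] + t * b[4]))))
    cdf = dtype(1.0) - pdf * poly
    return cdf if x >= dtype(0.0) else dtype(1.0) - cdf

def bs_call(S, K, r, sigma, T, dtype=np.float64):
    """Black-Scholes European call, computed in the given floating dtype."""
    S, K, r, sigma, T = (dtype(v) for v in (S, K, r, sigma, T))
    sq = sigma * np.sqrt(T)
    d1 = (np.log(S / K) + (r + sigma * sigma / dtype(2.0)) * T) / sq
    d2 = d1 - sq
    return S * ncdf(d1, dtype) - K * np.exp(-r * T) * ncdf(d2, dtype)

p64 = bs_call(100, 100, 0.05, 0.2, 1.0, np.float64)
p32 = bs_call(100, 100, 0.05, 0.2, 1.0, np.float32)
```

    For a single closed-form evaluation the float32 result stays close to float64, which is the kind of observation behind the paper's conclusion that Black-Scholes tolerates reduced precision better than a naive Monte Carlo accumulation.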

    The GPU vs Phi Debate: Risk Analytics Using Many-Core Computing

    Get PDF
    The risk of reinsurance portfolios covering globally occurring natural catastrophes, such as earthquakes and hurricanes, is quantified by employing simulations. These simulations are computationally intensive and require large amounts of data to be processed. The use of many-core hardware accelerators, such as the Intel Xeon Phi and the NVIDIA Graphics Processing Unit (GPU), is desirable for achieving high-performance risk analytics. In this paper, we set out to investigate how accelerators can be employed in risk analytics, focusing on developing parallel algorithms for Aggregate Risk Analysis, a simulation which computes the Probable Maximum Loss of a portfolio taking both primary and secondary uncertainties into account. The key result is that both hardware accelerators are useful in different contexts; without taking data transfer times into account, the Phi had the lowest execution times when used independently, and the GPU along with a host in a hybrid platform yielded the best performance.
    Comment: A modified version of this article is accepted to the Computers and Electrical Engineering Journal; Blesson Varghese, "The Hardware Accelerator Debate: A Financial Risk Case Study Using Many-Core Computing," Computers and Electrical Engineering, 201
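    The serial core of aggregate risk analysis can be sketched as a compound-loss Monte Carlo: draw a random number of catastrophe events per trial year, sum their losses, and read the Probable Maximum Loss (PML) off the tail of the annual-loss distribution. Event rates and severities below are invented for the sketch; the paper's contribution is distributing exactly this per-trial loop across GPU or Xeon Phi threads.

```python
import numpy as np

# Toy aggregate risk analysis: compound Poisson-lognormal annual losses.
# All distribution parameters are illustrative assumptions.

rng = np.random.default_rng(42)
trials = 50_000            # simulated trial years
lam = 3.0                  # mean number of catastrophe events per year

counts = rng.poisson(lam, size=trials)
annual_loss = np.zeros(trials)
for i, n in enumerate(counts):     # embarrassingly parallel across trials
    # n event losses in this trial year, summed to an annual loss.
    annual_loss[i] = rng.lognormal(mean=15.0, sigma=1.0, size=n).sum()

# Probable Maximum Loss at the 1-in-100-year level: the 99th percentile
# of the simulated annual-loss distribution.
pml_99 = np.quantile(annual_loss, 0.99)
```

    Because each trial year is independent, the loop maps one-to-one onto accelerator threads, which is why both the GPU and the Phi handle this workload well.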

    Deep Learning algorithms for solving high dimensional nonlinear Backward Stochastic Differential Equations

    Full text link
    We study deep learning-based schemes for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs). First, we show how to improve the computational time of the scheme proposed in [W. E, J. Han and A. Jentzen, Commun. Math. Stat., 5 (2017), pp. 349-380] by using a single neural network architecture instead of stacked deep neural networks. Furthermore, such schemes can get stuck in poor local minima or diverge, especially for complex solution structures and longer terminal times. To address this, we reformulate the problem to include local losses and exploit Long Short-Term Memory (LSTM) networks, a type of recurrent neural network (RNN). Finally, in order to study numerical convergence and thus illustrate the improved performance of the proposed methods, we provide numerical results for several 100-dimensional nonlinear BSDEs, including nonlinear pricing problems in finance.
    Comment: 21 pages, 5 figures, 16 tables
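    The backward recursion these schemes train can be seen in a stripped-down setting: a one-dimensional BSDE with zero driver, where Y_t = E[g(W_T) | F_t], solved by regressing backward in time. Ordinary polynomial least squares stands in here for the neural networks of the paper; with g(x) = x^2 the exact answer is Y_0 = E[W_T^2] = T.

```python
import numpy as np

# Toy backward scheme for a BSDE with driver f = 0 and terminal condition
# g(x) = x^2, so Y_0 = E[W_T^2] = T. Quadratic regression replaces the
# paper's (stacked or LSTM) neural networks; this illustrates only the
# backward recursion, not the deep BSDE method itself.

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 20, 50_000
dt = T / n_steps

# Simulate Brownian paths; W[:, i] is W at time (i + 1) * dt.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

Y = W[:, -1] ** 2                     # terminal condition g(W_T)
for i in range(n_steps - 2, -1, -1):  # backward in time
    # Conditional expectation E[Y_{i+1} | W_{t_i}] via quadratic regression,
    # the role played by a neural network in the deep BSDE schemes.
    coeffs = np.polyfit(W[:, i], Y, deg=2)
    Y = np.polyval(coeffs, W[:, i])

Y0 = Y.mean()                         # at t_0, W_0 = 0 on every path
```

    In 100 dimensions this regression basis explodes combinatorially, which is precisely why the paper replaces it with neural networks.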

    Option Pricing on the GPU with Backward Stochastic Differential Equation

    Full text link

    Analyzing CUDA workloads using a detailed GPU simulator

    Full text link