
    The GPU vs Phi Debate: Risk Analytics Using Many-Core Computing

    The risk of reinsurance portfolios covering globally occurring natural catastrophes, such as earthquakes and hurricanes, is quantified by employing simulations. These simulations are computationally intensive and require large amounts of data to be processed. The use of many-core hardware accelerators, such as the Intel Xeon Phi and the NVIDIA Graphics Processing Unit (GPU), is desirable for achieving high-performance risk analytics. In this paper, we investigate how accelerators can be employed in risk analytics, focusing on developing parallel algorithms for Aggregate Risk Analysis, a simulation which computes the Probable Maximum Loss of a portfolio taking both primary and secondary uncertainties into account. The key result is that both hardware accelerators are useful in different contexts: without taking data transfer times into account, the Phi had the lowest execution times when used independently, while the GPU along with a host in a hybrid platform yielded the best performance.
    Comment: A modified version of this article is accepted to the Computers and Electrical Engineering Journal under the title "The Hardware Accelerator Debate: A Financial Risk Case Study Using Many-Core Computing"; Blesson Varghese, "The Hardware Accelerator Debate: A Financial Risk Case Study Using Many-Core Computing," Computers and Electrical Engineering, 201
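    As a rough illustration of the simulation structure described above, the sketch below estimates a Probable Maximum Loss by sampling an event count per trial year (primary uncertainty) and a loss per event (secondary uncertainty). The Poisson frequency and lognormal severity, and all parameter values, are illustrative assumptions, not the paper's event catalogue.

```python
import numpy as np

rng = np.random.default_rng(42)

def probable_maximum_loss(n_trials=100_000, event_rate=3.0,
                          loss_mu=14.0, loss_sigma=1.2, quantile=0.99):
    """Estimate the Probable Maximum Loss (PML) of a portfolio (illustrative)."""
    yearly_losses = np.empty(n_trials)
    # primary uncertainty: how many catastrophe events occur in a trial year
    counts = rng.poisson(event_rate, size=n_trials)
    for i, k in enumerate(counts):
        # secondary uncertainty: the loss each event inflicts on the portfolio
        yearly_losses[i] = rng.lognormal(loss_mu, loss_sigma, size=k).sum()
    # the PML at a given exceedance level is a quantile of the yearly losses
    return np.quantile(yearly_losses, quantile)

print(f"99% PML estimate: {probable_maximum_loss():,.0f}")
```

    Because trial years are independent of one another, the loop parallelizes naturally across Phi or GPU threads, which is the property such many-core implementations exploit.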

    Accelerating Reconfigurable Financial Computing

    This thesis proposes novel approaches to the design, optimisation, and management of reconfigurable computer accelerators for financial computing. There are three contributions. First, we propose novel reconfigurable designs for derivative pricing using both Monte-Carlo and quadrature methods. Such designs involve exploring techniques such as control variate optimisation for Monte-Carlo and multi-dimensional analysis for quadrature methods. Significant speedups and energy savings are achieved using our Field-Programmable Gate Array (FPGA) designs over both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) designs. Second, we propose a framework for distributing computing tasks on multi-accelerator heterogeneous clusters. In this framework, different computational devices including FPGAs, GPUs and CPUs work collaboratively on the same financial problem based on a dynamic scheduling policy. The trade-offs in speed and energy consumption of different accelerator allocations are investigated. Third, we propose a mixed-precision methodology for optimising Monte-Carlo designs, and a reduced-precision methodology for optimising quadrature designs. These methodologies enable us to optimise the throughput of reconfigurable designs by using datapaths with minimised precision, while maintaining the same accuracy as the original designs.
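    As a sketch of the control variate technique mentioned above (shown here in plain Python rather than an FPGA datapath, with assumed Black-Scholes parameters), the discounted terminal stock price, whose expectation is known exactly, absorbs much of the Monte-Carlo estimator's variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_call_with_control_variate(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                                 T=1.0, n_paths=100_000):
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    control = np.exp(-r * T) * ST  # discounted S_T; its known mean is S0
    # optimal coefficient: beta = Cov(payoff, control) / Var(control)
    beta = np.cov(payoff, control)[0, 1] / np.var(control, ddof=1)
    adjusted = payoff - beta * (control - S0)
    return payoff.mean(), payoff.std(), adjusted.mean(), adjusted.std()

plain, sd_plain, cv, sd_cv = mc_call_with_control_variate()
print(f"plain MC: {plain:.4f} (sd {sd_plain:.2f}); "
      f"with control variate: {cv:.4f} (sd {sd_cv:.2f})")
```

    The adjusted estimator has the same mean as the plain one but a markedly smaller standard deviation, so fewer paths (and hence less hardware time) achieve the same accuracy.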

    Path Integral Calculations for Option Pricing

    Since the initiation of options trading by the Chicago Board Options Exchange in 1973, the financial markets have experienced substantial growth in options trading. As of 2022, the trading volume reached an astounding 10.32 billion contracts, with the gross market value of over-the-counter derivatives, including options, amounting to $20.7 trillion and a notional value of $618 trillion. This growth underscores the critical role of options trading in modern finance. The pricing of options is a highly mathematical task, influenced by multiple factors such as asset volatility, time until expiration, interest rates, and market unpredictability. Accurate pricing is essential not only for profit maximization but also for mitigating systemic risks, as evidenced by the 2007-2008 financial crisis, where mispriced mortgage derivatives played a significant role. Consequently, there is an increasing demand for more detailed and computationally efficient pricing methodologies. This study explores the application of the quantum mechanical path integral method introduced by R. Feynman to option pricing. This approach combines the probabilistic foundations of quantum mechanics with financial modeling. Traditionally used in physics to calculate particle transition probabilities with astonishing accuracy, path integrals also offer a method to model the paths of asset prices as a function of time. Numerical integration of path integrals with Monte Carlo simulations provides an interesting multidisciplinary method for simulating the complex processes inherent in financial markets. A significant aspect of this research is the comparison of the quantum mechanical path integral Monte Carlo simulation framework with traditional option pricing methods. The results indicate that the path integral formalism can replicate well-known results and can easily be extended to value more complicated options. Furthermore, the results of this research clarify the quantum mechanical aspects of option pricing and present both the theoretical framework and efficient numerical solutions in a comprehensible manner. Through this, the study aims to contribute to the advancement of financial modeling and risk management strategies, marking a step forward in the intersection of quantum physics and financial economics.
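    A minimal sketch of the numerical approach described above, under assumed Black-Scholes dynamics and a flat rate: each sampled log-price trajectory plays the role of one path in the discretized path integral, and the option value is the discounted average payoff over paths. Because full paths are sampled, the same code values a path-dependent (Asian) option, illustrating how the method extends to more complicated contracts.

```python
import numpy as np

rng = np.random.default_rng(1)

def path_integral_price(payoff, S0=100.0, r=0.05, sigma=0.2, T=1.0,
                        n_steps=252, n_paths=50_000):
    dt = T / n_steps
    # Gaussian transition kernel of the discretized path integral for log S
    steps = ((r - 0.5 * sigma ** 2) * dt
             + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)))
    paths = S0 * np.exp(np.cumsum(steps, axis=1))
    # discounted average payoff over sampled paths approximates the integral
    return np.exp(-r * T) * payoff(paths).mean()

K = 100.0
european_call = lambda paths: np.maximum(paths[:, -1] - K, 0.0)
asian_call = lambda paths: np.maximum(paths.mean(axis=1) - K, 0.0)
print("European call:", path_integral_price(european_call))
print("Asian call:   ", path_integral_price(asian_call))
```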

    Lower Precision calculation for option pricing

    The problem of option pricing is one of the most critical issues and fundamental building blocks in mathematical finance. This research deploys lower-precision number types in two option pricing algorithms: the Black-Scholes formula and Monte Carlo simulation. We assume that the fewer bits a number used in calculations occupies, the more operations we are able to perform in the same time. The results are examined by comparison with the outputs of single- and double-precision types. The major goal of the study is to indicate whether lower-precision types can be used in financial mathematics. The findings indicate that Black-Scholes provided more precise outputs than the basic implementation of Monte Carlo simulation. A modification of the Monte Carlo algorithm is also proposed. The research shows the limitations and opportunities of lower-precision type usage. To benefit from the application in terms of calculation time, the improved algorithms can be implemented on a GPU or FPGA. We conclude that, under particular restrictions, lower-precision calculation can be used in mathematical finance.
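    The sketch below mirrors the study's comparison on a small scale (with assumed option parameters; not the paper's exact setup): the Black-Scholes call formula evaluated with operands stored in half, single, and double precision, measured against the double-precision result.

```python
import numpy as np
from scipy.special import erf  # erf upcasts low-precision inputs internally;
                               # the dtype of the stored operands still limits
                               # the effective precision of the computation

def bs_call(S, K, r, sigma, T, dtype):
    S, K, r, sigma, T = (np.asarray(x, dtype=dtype) for x in (S, K, r, sigma, T))
    d1 = (np.log(S / K) + (r + sigma * sigma / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    # standard normal CDF via the error function
    N = lambda x: 0.5 * (1.0 + erf(x / np.sqrt(np.asarray(2.0, dtype=dtype))))
    return (S * N(d1) - K * np.exp(-r * T) * N(d2)).astype(dtype)

ref = float(bs_call(100.0, 100.0, 0.05, 0.2, 1.0, np.float64))
for dtype in (np.float16, np.float32, np.float64):
    price = float(bs_call(100.0, 100.0, 0.05, 0.2, 1.0, dtype))
    print(f"{np.dtype(dtype).name}: price={price:.6f} "
          f"abs error={abs(price - ref):.2e}")
```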

    Massively Parallelized Monte Carlo Simulation and Its Applications in Finance

    In this paper, we propose, develop and implement a tool that increases the computational speed of exotic derivatives pricing at a fraction of the cost of traditional methods. Our paper focuses on investigating the computing efficiencies of GPU systems. We utilize the GPU’s natural parallelization capabilities to price financial instruments. We outline our implementation, solutions to practical complications arising during implementation, and how much faster GPU systems are. Each step that we explore has a significant impact on the efficiency and performance of GPU pricing. Rather than speaking in theoretical, abstract terms, we detail each step to give the reader a clear sense of what is going on. Efficiency is one of the pillars of financial calculations. With the volume of risk calculations mandated by prudent risk management practices, even moderate improvements in calculation efficiency can translate into material changes in trading limits or savings in regulatory capital. This can make the difference between a growing, successful trading operation and an also-ran. Unfortunately, a decent algorithm written in VBA cannot calculate option prices at the same speed as a farm of computers, particularly if we must price the trade in less than 150 milliseconds using 10 million simulation paths. Fast forward from one trade to a book of several hundred thousand trades, many of which are exotic products. Not only is it necessary to price each trade, but we must do so in each of thousands of different market scenarios in order to calculate even basic risk measures such as Greeks and Value-at-Risk (VaR). At the end of the paper, we discuss how GPUs are currently used in the industry and their various advantages, including cost, time, accuracy and calculation frequency. In addition, we discuss the implementation challenges of GPU systems and the attention to detail that is required for memory allocation.
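    The core of the parallelization idea is that every simulation path is independent, so one GPU thread can own one path. A minimal sketch of that idea (illustrative, not the paper's implementation): the vectorized program below runs on the CPU with NumPy, and the same code can run on a GPU by swapping in a NumPy-compatible GPU array library such as CuPy.

```python
import math
import numpy as np

xp = np  # swap in `import cupy as xp` to run the identical code on a GPU

def mc_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_paths=10_000_000, seed=7):
    rng = xp.random.default_rng(seed)
    # one independent normal draw per simulation path
    z = rng.standard_normal(n_paths, dtype=xp.float32)
    ST = S0 * xp.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    payoff = xp.maximum(ST - K, 0.0)
    return math.exp(-r * T) * float(payoff.mean())

print(mc_european_call())
```

    The 10 million paths here echo the workload quoted in the abstract; on a GPU, each array operation maps to a kernel launched across thousands of threads, which is where the speedup comes from.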

    Estimating the Counterparty Risk Exposure by using the Brownian Motion Local Time

    In recent years, the counterparty credit risk measure, namely the default risk in Over The Counter (OTC) derivatives contracts, has received great attention from banking regulators, specifically within the frameworks of Basel II and Basel III. More explicitly, to obtain the related risk figures, one is first obliged to compute intermediate output functionals related to the Mark-to-Market (MtM) position at a given time t ∈ [0, T], T being a positive and finite time horizon. The latter implies that an enormous amount of computational effort is needed, with highly time-consuming procedures to be carried out, resulting in significant costs. To overcome this issue, we propose a smart exploitation of the properties of the (local) time spent by the Brownian motion close to a given value.
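    A sketch of the quantity the authors exploit, via the standard occupation-time approximation of local time (the level, bandwidth, and grid here are illustrative assumptions, and the paper's actual exposure estimator is more elaborate): the local time of a Brownian path at level a is approximated by the time the path spends within eps of a, scaled by 1/(2*eps).

```python
import numpy as np

rng = np.random.default_rng(3)

def local_time_estimate(a=0.0, eps=0.05, T=1.0, n_steps=10_000, n_paths=1_000):
    dt = T / n_steps
    # simulate standard Brownian paths on a uniform time grid
    B = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
    time_near_level = (np.abs(B - a) < eps).sum(axis=1) * dt
    return time_near_level / (2.0 * eps)   # occupation-time approximation

L = local_time_estimate()
# Sanity check: for a = 0 and T = 1, the expected local time is sqrt(2/pi)
print(f"mean local time at 0: {L.mean():.4f} (theory {np.sqrt(2 / np.pi):.4f})")
```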