
    Acceleration-as-a-Service: Exploiting Virtualised GPUs for a Financial Application

    'How can GPU acceleration be obtained as a service in a cluster?' This question has become increasingly significant because installing GPUs on every node of a cluster is inefficient. The research reported in this paper addresses it by employing rCUDA (remote CUDA), a framework that facilitates Acceleration-as-a-Service (AaaS) so that the nodes of a cluster can request acceleration from a set of remote GPUs on demand. The rCUDA framework exploits virtualisation and ensures that multiple nodes can share the same GPU. In this paper we test the feasibility of the rCUDA framework on a real-world application from the financial risk industry that can benefit from AaaS in a production setting. The results confirm the feasibility of rCUDA and highlight that it achieves performance similar to CUDA, provides consistent results, and, more importantly, allows a single application to benefit from all the GPUs available in the cluster without losing efficiency. Comment: 11th IEEE International Conference on eScience (IEEE eScience) - Munich, Germany, 201
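
    As a minimal sketch of the kind of unmodified CUDA program this model targets, the code below simply spreads work over every device the CUDA runtime reports; when the runtime calls are served by remote GPU servers, as in rCUDA's acceleration-as-a-service model, those devices may live on other cluster nodes. The kernel, problem size, and work split are illustrative assumptions, not the paper's financial application.

        // Sketch: an unmodified CUDA program that uses every device the runtime
        // exposes. With remote GPU virtualisation (e.g. rCUDA) the runtime calls
        // may be forwarded to GPUs on other nodes, so a single application can
        // use GPUs spread across the cluster without code changes.
        #include <cstdio>
        #include <vector>
        #include <cuda_runtime.h>

        __global__ void scale(float *x, float a, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] *= a;
        }

        int main() {
            int ndev = 0;
            cudaGetDeviceCount(&ndev);        // local GPUs, or remote ones under rCUDA
            const int n = 1 << 20;
            std::vector<float> h(n, 1.0f);

            for (int d = 0; d < ndev; ++d) {  // one chunk of work per visible GPU
                cudaSetDevice(d);
                float *dx = nullptr;
                cudaMalloc(&dx, n * sizeof(float));
                cudaMemcpy(dx, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
                scale<<<(n + 255) / 256, 256>>>(dx, 2.0f, n);
                cudaMemcpy(h.data(), dx, n * sizeof(float), cudaMemcpyDeviceToHost);
                cudaFree(dx);
                printf("device %d done, h[0] = %f\n", d, h[0]);
            }
            return 0;
        }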

    Lower Precision calculation for option pricing

    The problem of option pricing is one of the most critical issues and fundamental building blocks in mathematical finance. This research deploys lower-precision number formats in two option pricing algorithms: the Black-Scholes formula and Monte Carlo simulation. We assume that the fewer bits a number format uses, the more operations can be performed in the same time. The results are evaluated by comparison against single- and double-precision outputs. The major goal of the study is to indicate whether lower-precision types can be used in financial mathematics. The findings indicate that Black-Scholes provided more precise outputs than the basic implementation of Monte Carlo simulation. A modification of the Monte Carlo algorithm is also proposed. The research shows the limitations and opportunities of lower-precision arithmetic. To translate these gains into shorter calculation times, the improved algorithms can be implemented on GPUs or FPGAs. We conclude that, under particular restrictions, lower-precision calculation can be used in mathematical finance.
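
    The following is a minimal sketch of the kind of comparison described: the Black-Scholes call price evaluated in double precision, single precision, and a single-precision path whose intermediates are rounded through half precision to emulate a narrower datapath. The market parameters are illustrative, and the half-precision rounding is only a stand-in for whatever lower-precision formats the study examines.

        // Sketch: Black-Scholes call price in double, float, and a float path whose
        // intermediates are rounded through half precision (cuda_fp16) to emulate a
        // narrower datapath. Parameters (S, K, r, sigma, T) are illustrative.
        #include <cstdio>
        #include <cuda_runtime.h>
        #include <cuda_fp16.h>

        __device__ float to_half_and_back(float x) {   // emulate fp16 rounding
            return __half2float(__float2half(x));
        }

        __global__ void bs_call(double *pd, float *pf, float *ph,
                                double S, double K, double r, double sigma, double T) {
            // double-precision reference
            double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T));
            double d2 = d1 - sigma * sqrt(T);
            *pd = S * normcdf(d1) - K * exp(-r * T) * normcdf(d2);

            // single precision
            float Sf = (float)S, Kf = (float)K, rf = (float)r, sf = (float)sigma, Tf = (float)T;
            float d1f = (logf(Sf / Kf) + (rf + 0.5f * sf * sf) * Tf) / (sf * sqrtf(Tf));
            float d2f = d1f - sf * sqrtf(Tf);
            *pf = Sf * normcdff(d1f) - Kf * expf(-rf * Tf) * normcdff(d2f);

            // single precision with intermediates rounded to half
            float d1h = to_half_and_back((logf(Sf / Kf) + (rf + 0.5f * sf * sf) * Tf) / (sf * sqrtf(Tf)));
            float d2h = to_half_and_back(d1h - sf * sqrtf(Tf));
            *ph = to_half_and_back(Sf * normcdff(d1h) - Kf * expf(-rf * Tf) * normcdff(d2h));
        }

        int main() {
            double *pd; float *pf, *ph;
            cudaMallocManaged(&pd, sizeof(double));
            cudaMallocManaged(&pf, sizeof(float));
            cudaMallocManaged(&ph, sizeof(float));
            bs_call<<<1, 1>>>(pd, pf, ph, 100.0, 105.0, 0.02, 0.25, 1.0);
            cudaDeviceSynchronize();
            printf("double %.10f  float %.10f  half-rounded %.10f\n", *pd, *pf, *ph);
            return 0;
        }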

    Accelerating Reconfigurable Financial Computing

    This thesis proposes novel approaches to the design, optimisation, and management of reconfigurable computer accelerators for financial computing. There are three contributions. First, we propose novel reconfigurable designs for derivative pricing using both Monte-Carlo and quadrature methods. Such designs involve exploring techniques such as control variate optimisation for Monte-Carlo, and multi-dimensional analysis for quadrature methods. Significant speedups and energy savings are achieved using our Field-Programmable Gate Array (FPGA) designs over both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) designs. Second, we propose a framework for distributing computing tasks on multi-accelerator heterogeneous clusters. In this framework, different computational devices including FPGAs, GPUs and CPUs work collaboratively on the same financial problem based on a dynamic scheduling policy. The trade-off in speed and in energy consumption of different accelerator allocations is investigated. Third, we propose a mixed precision methodology for optimising Monte-Carlo designs, and a reduced precision methodology for optimising quadrature designs. These methodologies enable us to optimise the throughput of reconfigurable designs by using datapaths with minimised precision, while maintaining the same accuracy of the results as in the original designs.
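
    As a minimal illustration of the control variate idea mentioned above (not the thesis' FPGA design), the sketch below prices a European call by plain Monte Carlo in CUDA and uses the terminal asset price, whose risk-neutral expectation S0*exp(rT) is known in closed form, as the control. The market parameters, path count, and random seed are illustrative assumptions.

        // Sketch: Monte-Carlo pricing of a European call with a control variate.
        // The terminal price S_T serves as the control because E[S_T] = S0*exp(rT)
        // is known exactly; the control coefficient beta is estimated from the
        // same sample on the host.
        #include <cstdio>
        #include <cmath>
        #include <cuda_runtime.h>
        #include <curand_kernel.h>

        __global__ void simulate(float *payoff, float *sT, int n,
                                 float S0, float K, float r, float sigma, float T,
                                 unsigned long long seed) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            curandState state;
            curand_init(seed, i, 0, &state);
            float z  = curand_normal(&state);                 // standard normal draw
            float ST = S0 * expf((r - 0.5f * sigma * sigma) * T + sigma * sqrtf(T) * z);
            sT[i]     = ST;
            payoff[i] = fmaxf(ST - K, 0.0f);                  // call payoff
        }

        int main() {
            const int n = 1 << 20;
            const float S0 = 100.f, K = 105.f, r = 0.02f, sigma = 0.25f, T = 1.f;
            float *payoff, *sT;
            cudaMallocManaged(&payoff, n * sizeof(float));
            cudaMallocManaged(&sT,     n * sizeof(float));
            simulate<<<(n + 255) / 256, 256>>>(payoff, sT, n, S0, K, r, sigma, T, 1234ULL);
            cudaDeviceSynchronize();

            // Host-side reduction: plain estimator and control-variate adjustment
            double sumY = 0, sumX = 0, sumXY = 0, sumXX = 0;
            for (int i = 0; i < n; ++i) {
                sumY += payoff[i]; sumX += sT[i];
                sumXY += (double)payoff[i] * sT[i]; sumXX += (double)sT[i] * sT[i];
            }
            double meanY = sumY / n, meanX = sumX / n;
            double beta  = (sumXY / n - meanY * meanX) / (sumXX / n - meanX * meanX);
            double EX    = S0 * exp(r * T);                   // exact E[S_T]
            double plain = exp(-r * T) * meanY;
            double cv    = exp(-r * T) * (meanY - beta * (meanX - EX));
            printf("plain MC: %.4f   control variate: %.4f\n", plain, cv);
            return 0;
        }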

    Path Integral Calculations for Option Pricing

    Since the initiation of options trading by the Chicago Board Options Exchange in 1973, the financial markets have experienced substantial growth in options trading. As of 2022, the trading volume reached an astounding 10.32 billion contracts, with the gross market value of over-the-counter derivatives, including options, amounting to $20.7 trillion and a notional value of $618 trillion. This growth underscores the critical role of options trading in modern finance. The pricing of options is a highly mathematical task, influenced by multiple factors such as asset volatility, time until expiration, interest rates, and market unpredictability. Accurate pricing is essential not only for profit maximization but also for mitigating systemic risks, as evidenced by the 2007-2008 financial crisis, in which mispriced mortgage derivatives played a significant role. Consequently, there is an increasing demand for more detailed and computationally efficient pricing methodologies. This study explores the application of the quantum mechanical path integral method introduced by R. Feynman to option pricing. This approach combines the probabilistic foundations of quantum mechanics with financial modeling. Traditionally used in physics to calculate particle transition probabilities with astonishing accuracy, path integrals also offer a method to model the paths of asset prices as a function of time. Numerical integration of path integrals with Monte Carlo simulations provides an interesting multidisciplinary method for simulating the complex processes inherent in financial markets. A significant aspect of this research is the comparison of the quantum mechanical path integral Monte Carlo simulation framework with traditional option pricing methods. The results indicate that the path integral formalism can replicate well-known results and can be easily extended to value more complicated options. Furthermore, the results of this research clarify the quantum mechanical aspects of option pricing and present both the theoretical framework and efficient numerical solutions in a comprehensible manner. Through this, the study aims to contribute to the advancement of financial modeling and risk management strategies, marking a step forward in the intersection of quantum physics and financial economics.
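
    For orientation, a compact sketch of the standard formulation this approach builds on, assuming geometric Brownian motion with risk-free rate r and volatility sigma: the log-price x = ln S has risk-neutral drift mu = r - sigma^2/2, its transition kernel can be written as a path integral with a quadratic Lagrangian (and evaluated in closed Gaussian form), and the European call price with strike K follows by integrating the discounted payoff against that kernel.

        % Path-integral form of the log-price propagator under GBM
        \[
          K_{\text{prop}}(x_T, T \mid x_0, 0)
            = \int \mathcal{D}x(t)\,
              \exp\!\left(-\int_0^T
                \frac{\bigl(\dot{x}(t) - \mu\bigr)^2}{2\sigma^2}\, dt\right),
          \qquad \mu = r - \tfrac{1}{2}\sigma^2 ,
        \]
        % which evaluates to the Gaussian transition density
        \[
          K_{\text{prop}}(x_T, T \mid x_0, 0)
            = \frac{1}{\sqrt{2\pi\sigma^2 T}}
              \exp\!\left(-\frac{\bigl(x_T - x_0 - \mu T\bigr)^2}{2\sigma^2 T}\right),
        \]
        % and the European call price is the discounted payoff integrated against it
        \[
          C(S_0, T)
            = e^{-rT} \int_{-\infty}^{\infty}
              \max\!\bigl(e^{x_T} - K,\, 0\bigr)\,
              K_{\text{prop}}\bigl(x_T, T \mid x_0 = \ln S_0, 0\bigr)\, dx_T .
        \]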

    Improving Performance and Energy Efficiency of Heterogeneous Systems with rCUDA

    Thesis by compendium. In the last decade the use of GPGPU (General Purpose computing in Graphics Processing Units) has become extremely popular in data centers around the world. GPUs (Graphics Processing Units) have been established as computational accelerators that are used alongside CPUs to form heterogeneous systems. The massively parallel nature of GPUs, traditionally intended for graphics computing, makes it possible to perform numerical operations on data arrays at high speed, thanks to the large number of cores GPUs integrate and their high memory bandwidth. Consequently, applications from all kinds of fields, such as chemistry, physics, engineering, artificial intelligence, and materials science, that present this type of computational pattern benefit by drastically reducing their execution time. In general, the computing acceleration provided by GPUs has meant a step forward and a revolution, but it is not without problems, such as poor energy efficiency, low GPU utilization, and high acquisition and maintenance costs. In this PhD thesis we aim to analyze the main shortcomings of these heterogeneous systems and propose solutions based on the use of remote GPU virtualization. To that end, we have used the rCUDA middleware, developed at Universitat Politècnica de València, which many publications support as the most advanced remote GPU virtualization framework available today. The results obtained in this PhD thesis show that the use of rCUDA in Cloud Computing environments increases the degree of freedom of the system, as it allows virtual instances of the physical GPUs to be created, fully tailored to the needs of each of the virtual machines. In HPC (High Performance Computing) environments, rCUDA also provides a greater degree of flexibility in the use of GPUs throughout the computing cluster, as it allows the CPU part of applications to be completely decoupled from the GPU part. In addition, GPUs can be on any node in the cluster, regardless of the node on which the CPU part of the application is running. In general, both for Cloud Computing and for HPC, this greater degree of flexibility translates into an up to 2x increase in system-wide throughput while reducing energy consumption by approximately 15%. Finally, we have also developed a job migration mechanism for the GPU part of applications that has been integrated within the rCUDA middleware. This migration mechanism has been evaluated and the results clearly show that, in exchange for a small overhead of about 400 milliseconds in the execution time of the applications, it is a powerful tool with which, again, we can increase productivity and reduce the energy footprint of the computing system. In summary, this PhD thesis analyzes the main problems arising from the use of GPUs as computing accelerators, both in HPC and Cloud Computing environments, and demonstrates how, thanks to the use of the rCUDA middleware, these problems can be addressed. 
In addition, a powerful GPU job migration mechanism has been developed which, integrated within the rCUDA framework, becomes a key tool for future job schedulers in heterogeneous clusters. This work was jointly supported by the Fundación Séneca (Agencia Regional de Ciencia y Tecnología, Región de Murcia) under grants 20524/PDC/18, 20813/PI/18 and 20988/PI/18 and by the Spanish MEC and European Commission FEDER under grants TIN2015-66972-C5-3-R, TIN2016-78799-P and CTQ2017-87974-R (AEI/FEDER, UE). We also thank NVIDIA for hardware donation under GPU Educational Center 2014-2016 and Research Center 2015-2016. The authors thankfully acknowledge the computer resources at CTE-POWER and the technical support provided by Barcelona Supercomputing Center - Centro Nacional de Supercomputación (RES-BCV-2018-3-0008). Furthermore, researchers from Universitat Politècnica de València are supported by the Generalitat Valenciana under Grant PROMETEO/2017/077. The authors are also grateful for the generous support provided by Mellanox Technologies Inc. Prof. Pradipta Purkayastha, from the Department of Chemical Sciences, Indian Institute of Science Education and Research (IISER) Kolkata, is acknowledged for kindly providing the initial ligand and DNA structures. Prades Gasulla, J. (2021). Improving Performance and Energy Efficiency of Heterogeneous Systems with rCUDA [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/168081