
    Automatic generation of high-throughput systolic tree-based solvers for modern FPGAs

    Tree-based models are a class of numerical methods widely used in financial option pricing, with a computational complexity that is quadratic in the solution accuracy. Previous research has employed reconfigurable computing with small degrees of parallelism to provide faster hardware solutions than general-purpose software designs. However, due to the nature of their vector hardware architectures, these designs cannot scale their compute resources efficiently, leaving them with pricing latencies that are quadratic in the problem size, and hence in the solution accuracy. Their solutions are also not productive, as they require hardware engineering effort and can solve only one type of tree problem, the standard American option. This thesis presents a novel methodology in the form of a high-level design framework which can capture any common tree-based problem and automatically generate high-throughput field-programmable gate array (FPGA) solvers based on proposed scalable hardware architectures. The thesis makes three main contributions. First, systolic architectures are proposed for solving binomial and trinomial trees which, thanks to their custom systolic data-movement mechanisms, can scale their compute resources efficiently to provide linear latency scaling for medium-size trees and improved quadratic latency scaling for large trees. Using the proposed systolic architectures, throughput speed-ups of up to 5.6X and 12X over previous vector designs were achieved on modern FPGAs for medium and large trees, respectively. Second, a productive high-level design framework is proposed that can capture any common binomial and trinomial tree problem, together with a methodology to generate high-throughput systolic solvers with custom data precision, requiring no hardware design effort from the end user. Third, a fully automated tool-chain methodology is proposed that, compared to previous tree-based solvers, improves user productivity by removing the manual engineering effort of applying the design framework to option pricing problems. Using the productive design framework, high-throughput systolic FPGA solvers have been automatically generated from simple end-user C descriptions for several tree problems, such as American, Bermudan, and barrier options.
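The quadratic cost the abstract refers to comes from backward induction over a recombining binomial tree: the number of nodes grows as the square of the number of time steps. A minimal software sketch of the standard American-option baseline (a CRR binomial tree, not the thesis's systolic hardware architecture) makes this concrete:

```python
import math

def binomial_american_put(S0, K, r, sigma, T, n):
    """Price an American put on a CRR recombining binomial tree.

    Backward induction touches every node of the tree, so work is
    O(n^2) in the number of time steps n -- the quadratic latency
    scaling that vector FPGA designs inherit and the proposed
    systolic architectures improve on.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up-move factor
    d = 1.0 / u                           # down-move factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)              # one-step discount factor

    # Option values at maturity: n+1 terminal nodes.
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

    # Roll back through the tree, comparing continuation vs. early exercise.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

price = binomial_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
```

Doubling n (and thus the accuracy of the price) quadruples the node count, which is why the linear latency scaling achieved by the systolic designs for medium-size trees is significant.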

    A parallel and pipelined implementation of a Pascal-simplex based two asset option pricer on FPGA using OpenCL

    With the resurgence of hardware for financial technology, several methods for accelerating financial option pricing have been investigated. This paper presents the first architecture and implementation of a two-asset option pricer based on Pascal’s simplex, which takes advantage of the parallelism and pipelining offered by FPGA technology. The architecture is built on a recombining multinomial tree approach, which in turn generalizes the binomial tree model. Furthermore, we show that while efficiently maintaining the intermediate values required for the computation is a significant difficulty, a solution exists in the form of FIFOs. Our implementation, on an Intel Stratix 10 GX FPGA, is based on the OpenCL framework and can compute 6,250 two-asset option prices per second for 100 time steps, and pipelining the option value computation shows a 25X improvement when a 50-step pipeline is created.
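The recombining multinomial recursion underlying the pricer can be sketched in software. The following is a minimal illustration, not the paper's Pascal-simplex FPGA architecture: it prices a European call on the maximum of two assets on a 2-D recombining lattice, assuming uncorrelated assets for simplicity (the paper's model and payoff may differ). Each interior node combines four successors, which is exactly the intermediate-value reuse pattern the paper's FIFOs maintain in hardware:

```python
import math

def two_asset_max_call(S1, S2, K, r, sigma1, sigma2, T, n):
    """European call on max(S1, S2) on a 2-D recombining lattice.

    Illustrative sketch assuming zero correlation between the assets.
    The terminal layer is an (n+1) x (n+1) grid of node values,
    rolled back one time step at a time.
    """
    dt = T / n
    disc = math.exp(-r * dt)
    u1 = math.exp(sigma1 * math.sqrt(dt)); d1 = 1.0 / u1
    u2 = math.exp(sigma2 * math.sqrt(dt)); d2 = 1.0 / u2
    p1 = (math.exp(r * dt) - d1) / (u1 - d1)  # asset-1 up probability
    p2 = (math.exp(r * dt) - d2) / (u2 - d2)  # asset-2 up probability

    # Terminal payoffs over the (n+1) x (n+1) grid of up-move counts.
    V = [[max(max(S1 * u1**i * d1**(n - i),
                  S2 * u2**j * d2**(n - j)) - K, 0.0)
          for j in range(n + 1)] for i in range(n + 1)]

    # Backward induction: each node discounts its four successors.
    for step in range(n - 1, -1, -1):
        V = [[disc * (p1 * p2 * V[i + 1][j + 1]
                      + p1 * (1 - p2) * V[i + 1][j]
                      + (1 - p1) * p2 * V[i][j + 1]
                      + (1 - p1) * (1 - p2) * V[i][j])
              for j in range(step + 1)] for i in range(step + 1)]
    return V[0][0]

price = two_asset_max_call(S1=100, S2=100, K=100, r=0.05,
                           sigma1=0.2, sigma2=0.2, T=1.0, n=100)
```

With n time steps the lattice holds O(n^2) nodes per layer and O(n^3) total work, which motivates offloading the rollback to a deeply pipelined FPGA datapath.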

    Estimating lifetime effects of child development for economic evaluation: An exploration of methods and their application to a population screen for postnatal depression

    Background: Early health interventions affecting child development can subsequently influence lifetime health and economic outcomes. These lifetime effects may be excluded from economic evaluation as empirical evidence covering the required time horizon is rarely available. One example is screening for postnatal depression where current guidelines do not account for lifetime effects despite evidence of a detrimental association between maternal depression and child development. Aims: To develop a methodological approach to estimate lifetime effects for economic evaluation and determine their influence on an evaluation assessing the cost-effectiveness of postnatal depression screening. Methods: Lifetime effects are estimated by linking results from two empirical studies. Firstly, growth curve models establish the effects of postnatal depression on development measures for children aged 3-11 using data from the Millennium Cohort Study. Secondly, child development measures are entered as explanatory variables in linear regression models predicting effects on lifetime health and economic outcomes using data from the 1970 British Cohort Study. An economic evaluation is conducted for scenarios which exclude/include lifetime effects to determine their influence on cost-effectiveness results. Findings: Postnatal depression was detrimentally associated with children’s cognitive and socioemotional development up to age 11. Detrimental changes in cognitive and socioemotional development were negatively associated with lifetime outcomes. Postnatal depression exposure was predicted to reduce children’s lifetime Quality Adjusted Life Years, increase healthcare and crime costs, and generate fewer monetary returns in education and employment. Cost-effectiveness results changed when including lifetime effects, leading to the recommendation of a screening strategy which treats a greater proportion of depressed mothers. 
Conclusions: Lifetime effects can influence cost-effectiveness results, and their exclusion risks providing only a partial analysis. This research demonstrates methods to estimate and include lifetime effects in economic evaluation. Similar approaches could be applied elsewhere to provide additional evidence for the economic evaluation of other childhood interventions.
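The mechanism by which lifetime effects shift the conclusion can be illustrated with the standard incremental cost-effectiveness ratio (ICER). All figures below are hypothetical, chosen only to show the direction of the effect described in the abstract, not results from the study:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Screening vs. no screening, excluding lifetime effects
# (illustrative numbers only).
base = icer(cost_new=1200.0, cost_old=400.0,
            qaly_new=10.05, qaly_old=10.00)

# Including lifetime effects: treating more depressed mothers also
# averts child QALY losses and downstream healthcare/crime costs,
# so incremental costs fall and incremental QALYs rise.
lifetime = icer(cost_new=1200.0 - 300.0, cost_old=400.0,
                qaly_new=10.05 + 0.04, qaly_old=10.00)
```

A lower ICER under the lifetime-effects scenario is what makes a more intensive screening strategy (treating a greater proportion of depressed mothers) fall below the cost-effectiveness threshold.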

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus in bringing together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems, software design, and deployment projects who are interested in employing these advanced methods to address real-world problems.
