70 research outputs found
Pricing options and computing implied volatilities using neural networks
This paper proposes a data-driven approach, by means of an Artificial Neural
Network (ANN), to value financial options and to calculate implied volatilities
with the aim of accelerating the corresponding numerical methods. Since ANNs
are universal function approximators, this method trains an optimized ANN on
a data set generated by a sophisticated financial model and runs the trained
ANN as a fast, efficient surrogate for the original solver. We test
this approach on three different types of solvers: the analytic
solution for the Black-Scholes equation, the COS method for the Heston
stochastic volatility model and Brent's iterative root-finding method for the
calculation of implied volatilities. The numerical results show that the ANN
solver can reduce the computing time significantly.
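The pipeline the paper accelerates can be sketched in plain Python: the Black-Scholes analytic formula is the forward model, and a root-finding loop inverts it for implied volatility. The sketch below uses bisection as a simple stand-in for Brent's method; all parameter values are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes analytic price of a European call
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    # invert bs_call for sigma by bisection (a stand-in for Brent's
    # method); the call price is monotonically increasing in sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
sigma = implied_vol(price, 100.0, 100.0, 1.0, 0.05)
```

An ANN surrogate would replace the repeated evaluation of `bs_call` (or of the COS method for Heston) inside such a loop with a single cheap forward pass.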
Schnelle Löser für partielle Differentialgleichungen (Fast Solvers for Partial Differential Equations)
[no abstract available]
Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years, computer architecture research has moved toward more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds keys to unlocking humanity's Grand Challenges. Acting on that belief, they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between the layers of computer architecture (applications, algorithms, architectures, microarchitectures, circuits, and devices) have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon: ones that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history, and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal of an analog accelerator is efficiency and performance, but it comes with limitations in accuracy and problem size that we have to work around.
The first problem is how to express problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing focused mostly on differential equations, with algebraic equations playing only a minor role. The key to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable. The algebraic equations that underlie most workloads can be solved as differential equations,
and differential equations are naturally solvable in the analog accelerator chip. A hybrid analog-digital computer architecture can focus on solving linear and nonlinear algebra problems to support many workloads.
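The algebraic-to-differential reformulation can be illustrated with a gradient flow: a linear system Ax = b becomes the ODE dx/dt = -(Ax - b), whose steady state is the solution. The sketch below integrates it with forward Euler in plain Python; the matrix, step size, and iteration count are illustrative, and an analog accelerator would perform this integration in continuous time rather than in discrete steps.

```python
# Solve A x = b by integrating the gradient flow dx/dt = -(A x - b);
# for a symmetric positive definite A the flow converges to the solution.
A = [[3.0, 1.0], [1.0, 2.0]]   # illustrative SPD matrix
b = [5.0, 5.0]
x = [0.0, 0.0]                 # initial condition
h = 0.01                       # Euler step size

for _ in range(5000):
    # residual r = A x - b drives the dynamics
    r = [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
         A[1][0] * x[0] + A[1][1] * x[1] - b[1]]
    x = [x[0] - h * r[0], x[1] - h * r[1]]

# x approaches the exact solution [1.0, 2.0]
```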
The second problem is how to get accurate solutions using hybrid analog-digital computing. The analog computation model gives less accurate solutions because it gives up representing numbers as digital binary numbers, instead using the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy-efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital hardware for precise solutions. This thesis gives the novel insight that the trick to doing so is to solve nonlinear problems, where low-precision guesses are useful starting points for conventional digital algorithms.
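The low-precision-guess idea can be sketched as a two-stage solve: a coarse value (standing in for an imprecise analog output) seeds Newton's method, which a digital processor then drives to full double precision in a few iterations. The function and starting point here are illustrative.

```python
def newton_sqrt2(x0, iters=6):
    # Newton's method for f(x) = x^2 - 2, i.e. refining sqrt(2)
    x = x0
    for _ in range(iters):
        x = x - (x * x - 2.0) / (2.0 * x)   # Newton step
    return x

# a quantized, low-precision guess stands in for the analog result
coarse = round(2 ** 0.5, 2)       # 1.41, only ~3 significant digits
refined = newton_sqrt2(coarse)    # full double-precision answer
```

Because Newton's method converges quadratically near the root, even a 2-digit analog guess reaches machine precision in a handful of cheap digital steps.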
The third problem is how to solve large problems using hybrid analog-digital computing. The analog computation model cannot handle large problems because it gives up step-by-step discrete-time operation, instead allowing variables to evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end-to-end. During computation, analog data flows through the hardware with no control-logic or memory-access overheads. The downside is that the required hardware grows with the problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit the constraints of digital computers, this thesis is a first attempt to treat these divide-and-conquer algorithms as an essential tool for using the analog model of computation.
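The divide-and-conquer point can be illustrated with a block Jacobi iteration: a 4x4 system is split into two 2x2 subproblems (sized, say, to fit the analog hardware), each solved directly while the coupling terms use the previous iterate. The matrix and block sizes below are illustrative.

```python
def solve2(M, v):
    # direct solve of a 2x2 system by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * v[0] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

# 4x4 system partitioned into 2x2 blocks [[A11, A12], [A21, A22]]
A11 = [[4.0, 1.0], [1.0, 4.0]]
A12 = [[0.5, 0.0], [0.0, 0.5]]
A21 = [[0.5, 0.0], [0.0, 0.5]]
A22 = [[4.0, 1.0], [1.0, 4.0]]
b1, b2 = [1.0, 2.0], [3.0, 4.0]

x1, x2 = [0.0, 0.0], [0.0, 0.0]
for _ in range(60):
    # each small subproblem is solved exactly (as the accelerator would);
    # the coupling terms use the previous iterate
    r1 = [b1[i] - A12[i][0] * x2[0] - A12[i][1] * x2[1] for i in range(2)]
    r2 = [b2[i] - A21[i][0] * x1[0] - A21[i][1] * x1[1] for i in range(2)]
    x1, x2 = solve2(A11, r1), solve2(A22, r2)
```

For block-diagonally-dominant systems like this one the iteration converges quickly, so a fixed-size solver can handle a problem larger than itself.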
As we enter the post-Moore’s law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show that these unconventional architectures will soon have broad adoption. In this thesis I show that another specialized, unconventional architecture is to use analog accelerators to solve problems in scientific computing. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the discovery process, implementation, and evaluation of how an unconventional architecture supports specialized workloads.
G-CSC Report 2010
The present report gives a short summary of the research of the Goethe Center for Scientific Computing (G-CSC) of the Goethe University Frankfurt. The G-CSC aims at developing and applying methods and tools for modelling and numerical simulation of problems from empirical science and technology. In particular, fast solvers for partial differential equations (PDEs), such as robust, parallel, and adaptive multigrid methods, and numerical methods for stochastic differential equations are developed. These methods are highly advanced and make it possible to solve complex problems.
The G-CSC is organised in departments and interdisciplinary research groups. Departments are located directly at the G-CSC, while the task of the interdisciplinary research groups is to bridge disciplines and to bring scientists from different departments together. Currently, the G-CSC consists of the department Simulation and Modelling and the interdisciplinary research group Computational Finance.
Boosting the performance of remote GPU virtualization using InfiniBand Connect-IB and PCIe 3.0
© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
[EN] A clear trend has emerged involving the acceleration of scientific applications by using GPUs. However, the capabilities of these devices are still generally underutilized. Remote GPU virtualization techniques can help increase GPU utilization rates, while reducing acquisition and maintenance costs. The overhead of using a remote GPU instead of a local one is introduced mainly by the difference in performance between the internode network and the intranode PCIe link. In this paper we show how using the new InfiniBand Connect-IB network adapters (attaining similar throughput to that of the most recently emerged GPUs) boosts the performance of remote GPU virtualization, reducing the overhead to a mere 0.19% in the application tested.
This work was funded by the Generalitat Valenciana under Grant PROMETEOII/2013/009 of the PROMETEO program phase II. This material is based upon work supported by the U. S. Department of Energy, Office of Science, Advanced Scientific Computing Research (SC-21), under Contract No. DE-AC02-06CH11357. Authors from the Universitat Politècnica de València and Universitat Jaume I are grateful for the generous support provided by Mellanox Technologies.
Reaño González, C.; Silla Jiménez, F.; Peña Monferrer, AJ.; Shainer, G.; Schultz, S.; Castelló Gimeno, A.; Quintana Orti, ES.... (2014). Boosting the performance of remote GPU virtualization using InfiniBand Connect-IB and PCIe 3.0. In 2014 IEEE International Conference on Cluster Computing (CLUSTER). IEEE. 266-267. doi:10.1109/CLUSTER.2014.6968737
Analysis and numerical methods for stochastic volatility models in valuation of financial derivatives
[Abstract]
The main objective of this thesis concerns the study of the SABR stochastic volatility
model for the underlyings (equity or interest rates) in order to price several market
derivatives. When dealing with interest rate derivatives, the SABR model is combined
with the LIBOR market model (LMM), currently the most popular interest rate model.
In order to price derivatives we take advantage not only of Monte Carlo
algorithms but also of the numerical resolution of the partial differential equations
(PDEs) associated with these models. The PDEs related to SABR/LIBOR market
models are high dimensional in space. In order to cope with the curse of dimensionality
we take advantage of sparse grids. Furthermore, a detailed discussion of
the calibration of the parameters of these models to market prices is included. To this
end, the Simulated Annealing global stochastic minimization algorithm is proposed.
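A minimal sketch of the Simulated Annealing calibration loop, with a one-parameter toy objective standing in for the model-to-market pricing error; the objective, proposal width, and cooling schedule are all illustrative choices, not the thesis's actual settings.

```python
import random
from math import exp

random.seed(0)

def objective(x):
    # toy stand-in for the squared error between model and market prices
    return (x - 2.0) ** 2

x, fx = 10.0, objective(10.0)   # deliberately poor starting parameter
best, fbest = x, fx
T = 1.0                         # initial temperature

for _ in range(20000):
    cand = x + random.gauss(0.0, 0.5)        # random local proposal
    fc = objective(cand)
    # always accept improvements; accept worse moves with prob exp(-dF/T)
    if fc < fx or random.random() < exp(-(fc - fx) / T):
        x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    T *= 0.9995                              # geometric cooling
```

The occasional acceptance of worse moves at high temperature is what lets the method escape local minima of a real (non-convex) calibration surface.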
The above mentioned algorithms involve a high computational cost. In order
to price derivatives and calibrate the models as quickly as possible, we make use
of high performance computing (HPC) techniques (multicomputers, multiprocessors
and GPUs).
Finally, we design a novel algorithm based on Least-Squares Monte Carlo (LSMC)
to approximate the solution of Backward Stochastic Differential Equations
(BSDEs).
Meshless Methods for Option Pricing and Risks Computation
In this thesis we price several financial derivatives by means of radial basis functions. Our main contribution consists in extending the use of these numerical methods to the pricing of more complex derivatives, such as American and basket options with barriers, and in computing the associated risks. First, we derive the mathematical expressions for the prices and the Greeks of the given options; next, we implement the corresponding numerical algorithms in MATLAB and calculate the results. We compare our results to the most common techniques applied in practice, such as Finite Differences and Monte Carlo methods. We mostly use real data as input for our examples. We conclude that radial basis functions offer a valid alternative to current pricing methods, especially because of the efficiency deriving from the free, direct calculation of risks during the pricing process. Finally, we suggest directions for future research, such as applying radial basis functions to implied volatility surface reconstruction.
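A minimal Python sketch of the radial basis function machinery behind such meshless methods: a call payoff is interpolated on scattered nodes with Gaussian RBFs by solving the collocation system. The nodes, strike, and shape parameter are illustrative, and a real pricer would collocate the pricing PDE rather than just the payoff.

```python
from math import exp

def rbf(r, eps=0.05):
    # Gaussian radial basis function with shape parameter eps
    return exp(-(eps * r) ** 2)

def solve(M, v):
    # Gaussian elimination with partial pivoting (pure Python)
    n = len(v)
    M = [row[:] for row in M]
    v = v[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            v[i] -= f * v[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (v[i] - s) / M[i][i]
    return x

nodes = [80.0, 90.0, 100.0, 110.0, 120.0]    # scattered, mesh-free nodes
K = 100.0
vals = [max(s - K, 0.0) for s in nodes]      # call payoff at the nodes

# collocation matrix Phi[i][j] = rbf(|node_i - node_j|), then Phi w = vals
Phi = [[rbf(abs(a - c)) for c in nodes] for a in nodes]
w = solve(Phi, vals)

def interp(s):
    # meshfree interpolant: weighted sum of RBFs centred at the nodes
    return sum(wj * rbf(abs(s - c)) for wj, c in zip(w, nodes))
```

Because the interpolant is smooth and analytic in `s`, derivatives (the Greeks) fall out of the same expansion by differentiating the basis functions, which is the "free" risk calculation the abstract refers to.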
Parallel algorithms for modeling the filtration consolidation process under the action of a two-component solution
The paper considers the problem of modeling filtration consolidation processes in soils under the influence of a two-component solution. The one-dimensional mathematical model of such processes is extended to the three-dimensional case, and a set of parallel algorithms for solving the associated problems is proposed, including a multithreaded algorithm and algorithms for cluster systems and graphics processors (GPUs).
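A minimal sketch of the multithreaded idea, assuming a 1-D explicit finite-difference step for the consolidation (diffusion) equation u_t = c·u_xx: the interior of the grid is split into chunks updated concurrently, and the result matches the serial update exactly. Grid size, coefficient, and chunking are illustrative, and in CPython the threads illustrate the domain decomposition rather than a true speedup.

```python
from concurrent.futures import ThreadPoolExecutor

C = 0.2   # c * dt / dx^2, kept below the explicit stability limit 0.5

def step_chunk(u, lo, hi):
    # explicit finite-difference update of interior points lo..hi-1
    return [u[i] + C * (u[i - 1] - 2.0 * u[i] + u[i + 1])
            for i in range(lo, hi)]

def serial_step(u):
    # one time step on the whole grid, fixed boundary values
    return [u[0]] + step_chunk(u, 1, len(u) - 1) + [u[-1]]

def parallel_step(u, nchunks=4):
    # split the interior 1..n-2 into nchunks contiguous pieces and
    # update each piece in its own thread
    n = len(u)
    cuts = [1 + (n - 2) * k // nchunks for k in range(nchunks + 1)]
    with ThreadPoolExecutor(max_workers=nchunks) as ex:
        parts = ex.map(step_chunk, [u] * nchunks, cuts[:-1], cuts[1:])
        interior = [x for part in parts for x in part]
    return [u[0]] + interior + [u[-1]]

u0 = [0.0] * 21
u0[10] = 1.0   # initial excess pore-pressure spike
```

Each chunk reads only the previous time level, so no synchronization is needed within a step; the same decomposition carries over to MPI ranks on a cluster or thread blocks on a GPU.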