
    Deep calibration of rough stochastic volatility models

    Sparked by Alòs, León and Vives (2007); Fukasawa (2011, 2017); and Gatheral, Jaisson and Rosenbaum (2018), so-called rough stochastic volatility models such as the rough Bergomi model of Bayer, Friz and Gatheral (2016) constitute the latest evolution in option price modeling. Unlike standard bivariate diffusion models such as Heston (1993), these non-Markovian models with fractional volatility drivers make it possible to parsimoniously recover key stylized facts of market implied volatility surfaces, such as the exploding power-law behaviour of the at-the-money volatility skew as time to maturity goes to zero. Standard model calibration routines rely on the repeated evaluation of the map from model parameters to Black-Scholes implied volatility, rendering calibration of many (rough) stochastic volatility models prohibitively expensive, since the map can often only be approximated by costly Monte Carlo (MC) simulations (Bennedsen, Lunde & Pakkanen, 2017; McCrickerd & Pakkanen, 2018; Bayer et al., 2016; Horvath, Jacquier & Muguruza, 2017). As a remedy, we propose to combine a standard Levenberg-Marquardt calibration routine with neural network regression, replacing expensive MC simulations with cheap forward runs of a neural network trained to approximate the implied volatility map. Numerical experiments confirm the high accuracy and speed of our approach.
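
    The two-step idea can be sketched compactly: train a regressor offline on simulated (parameter, implied volatility) pairs, then calibrate online with Levenberg-Marquardt against the cheap surrogate. The sketch below is illustrative only: the toy mc_implied_vols function stands in for a real rough-Bergomi Monte Carlo pricer, the grid, parameter ranges and network size are invented, and scikit-learn's MLPRegressor replaces whatever architecture the paper actually uses.

```python
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for the expensive pricer: maps (H, eta, rho) to implied vols
# on a fixed 4-point log-moneyness grid. In the real pipeline this is a
# costly rough-Bergomi Monte Carlo simulation; this toy smile is not.
def mc_implied_vols(p):
    H, eta, rho = p
    k = np.array([-0.2, -0.1, 0.0, 0.1])
    return 0.2 + eta * (0.05 - H) - 0.3 * rho * k

# 1) Offline: train the surrogate on (parameter, implied vol) pairs.
X = rng.uniform([0.05, 1.0, -0.9], [0.2, 2.5, -0.3], size=(5000, 3))
y = np.array([mc_implied_vols(p) for p in X])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

# 2) Online: Levenberg-Marquardt against cheap forward runs of the net.
market = mc_implied_vols([0.1, 1.9, -0.7])       # synthetic "market" quotes
residuals = lambda p: net.predict(p.reshape(1, -1))[0] - market
fit = least_squares(residuals, x0=[0.12, 1.5, -0.5], method="lm")
print("calibrated (H, eta, rho):", fit.x)
```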

    Modelling the joint behaviour of interest rates and foreign exchange rates

    Exposures to interest rate term structures in different currencies and their respective exchange rates are a challenge for risk management. In this paper we address this problem by extending the arbitrage-free Nelson-Siegel model, an affine term structure model, to a multi-currency setting, integrating exchange rate dynamics so that both interest rates and exchange rates can be forecast. We review the current state of research in term structure modelling and establish the case for using a three-factor model on interbank interest rates. We then provide the theoretical background for the dynamics of the state variables and the dependence of the exchange rate on the market risk premium. To test the model empirically, we establish an estimation framework using a Kalman filter. We show empirical results for different extensions of the arbitrage-free Nelson-Siegel model. It is apparent that the forecasting performance is highly sensitive to the robustness of the estimation process.
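
    As a rough illustration of the estimation framework, the sketch below filters latent level/slope/curvature factors through a plain linear Kalman filter under the Nelson-Siegel loading structure. It is a single-currency simplification under assumed names (ns_loadings, kalman_filter) and omits the arbitrage-free yield adjustment term and the exchange-rate dynamics of the paper's full model.

```python
import numpy as np

def ns_loadings(maturities, lam):
    """Nelson-Siegel loadings for the level, slope and curvature factors."""
    m = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * m)) / (lam * m)
    return np.column_stack([np.ones_like(m), slope, slope - np.exp(-lam * m)])

def kalman_filter(y, B, Phi, Q, R, x0, P0):
    """Linear Kalman filter for x_t = Phi x_{t-1} + w_t, y_t = B x_t + v_t,
    returning filtered factor paths and the log-likelihood (up to an
    additive constant) used for parameter estimation."""
    x, P, loglik, states = x0, P0, 0.0, []
    for yt in y:
        x, P = Phi @ x, Phi @ P @ Phi.T + Q               # predict
        S = B @ P @ B.T + R                               # innovation cov.
        K = P @ B.T @ np.linalg.inv(S)                    # Kalman gain
        e = yt - B @ x                                    # forecast error
        x = x + K @ e
        P = (np.eye(len(x)) - K @ B) @ P
        loglik -= 0.5 * (np.log(np.linalg.det(S)) + e @ np.linalg.solve(S, e))
        states.append(x)
    return np.array(states), loglik

# Example: filter synthetic yields observed on six maturities (in years).
rng = np.random.default_rng(0)
B = ns_loadings([0.25, 1, 2, 5, 10, 30], lam=0.6)
Phi, Q, R = 0.95 * np.eye(3), 1e-4 * np.eye(3), 1e-6 * np.eye(6)
x_true, ys = np.zeros(3), []
for _ in range(200):
    x_true = Phi @ x_true + rng.multivariate_normal(np.zeros(3), Q)
    ys.append(B @ x_true + rng.multivariate_normal(np.zeros(6), R))
factors, ll = kalman_filter(ys, B, Phi, Q, R, np.zeros(3), np.eye(3))
print("log-likelihood:", ll)
```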

    Plataforma para negociação FOREX

    Master's in Computer and Telematics Engineering. The growing democratization of financial markets, fueled by new technologies, opened the door to new investors and researchers. Markets changed, continuous trading began, and the number of financial orders rose exponentially. With the increase in flexibility and accessibility to markets, algorithmic trading grew at the retail level, with traders starting to implement their own algorithms in trading strategies. The use of machine learning algorithms and time series analysis became widely popular, adding complexity to trading strategies. In order to create and test profitable algorithms, there are rules that must be followed. This dissertation presents the development of a new generation of research and trading system that aims to help researchers and traders be more productive and efficient. It was developed as an event-driven backtesting and live trading system with an innovative approach to sharing backtest reports. In addition, by merging technical-analysis-based trading with new techniques, and complying with the backtesting paradigm, the aim is to provide a richer environment to users.
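
    A minimal sketch of the event-driven pattern such a system is built around: market events are pushed onto a queue, a strategy turns them into signal events, and the same loop can serve backtesting (replayed prices) or live trading (a real data feed). All class and method names here are hypothetical, not the dissertation's actual API.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class MarketEvent:          # a new tick has arrived
    price: float

@dataclass
class SignalEvent:          # a strategy decision
    direction: int          # +1 long, -1 short

class SmaCrossStrategy:
    """Toy technical-analysis strategy: long when a short moving average
    is above a long one, short otherwise. Illustrative only."""
    def __init__(self, fast=5, slow=20):
        self.prices, self.fast, self.slow = [], fast, slow
    def on_market(self, event):
        self.prices.append(event.price)
        if len(self.prices) < self.slow:
            return None
        f = sum(self.prices[-self.fast:]) / self.fast
        s = sum(self.prices[-self.slow:]) / self.slow
        return SignalEvent(+1 if f > s else -1)

def run(prices, strategy):
    """Event loop shared by backtesting and live trading: in a backtest
    the price series is replayed; live, ticks come from a data feed."""
    events, position, pnl, last = deque(), 0, 0.0, None
    for p in prices:
        events.append(MarketEvent(p))
        while events:
            ev = events.popleft()
            if isinstance(ev, MarketEvent):
                if last is not None:
                    pnl += position * (ev.price - last)   # mark to market
                last = ev.price
                sig = strategy.on_market(ev)
                if sig is not None:
                    events.append(sig)
            elif isinstance(ev, SignalEvent):
                position = ev.direction                   # fill immediately
    return pnl

prices = [1.10 + 0.001 * ((i * 7) % 13 - 6) for i in range(500)]  # toy ticks
print("PnL:", run(prices, SmaCrossStrategy()))
```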

    Deep learning for trading and hedging in financial markets

    Deep learning has achieved remarkable results in many areas, from image classification and language translation to question answering. Deep neural network models have proved to be good at processing large amounts of data and capturing complex relationships embedded in the data. In this thesis, we use deep learning methods to solve trading and hedging problems in financial markets. We show that our solutions, which consist of various deep neural network models, can achieve better accuracy and efficiency than many conventional mathematical methods. We use a Technical Analysis Neural Network (TANN) to process high-frequency tick data from the foreign exchange market. Various technical indicators are calculated from the market data and fed into the neural network model. The model generates a classification label, which indicates the direction of the FX rate's movement in the short term. Our solution surpasses many well-known machine learning algorithms in classification accuracy. Deep Hedging models the relationship between the underlying asset and the prices of option contracts. We upgrade the pipeline by removing the restriction on trading frequency. With different levels of risk tolerance, the modified deep hedging model can propose various hedging solutions. These solutions form the Efficient Hedging Frontier (EHF), on which their associated risk levels and returns are directly observable. We also show that combining a Deep Hedging model with a prediction algorithm ultimately improves hedging performance. Implied volatility is the critical parameter for evaluating many financial derivatives. We propose a novel PCA Variational Auto-Encoder model to encode three independent features of implied volatility surfaces from the European stock markets. This novel encoding brings various benefits to generating and extrapolating implied volatility surfaces. It also enables the transformation of implied volatility surfaces from a stock index to a single stock, significantly improving the efficiency of derivatives pricing.
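
    A minimal sketch of the TANN idea, assuming nothing about the thesis's actual indicator set or architecture: compute a few technical indicators from a (here synthetic) tick series, label each point by the direction of the rate H ticks ahead, and train a small classifier. Indicator choices, horizon, and network size below are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
ticks = np.cumsum(rng.normal(0, 1e-4, 10_000)) + 1.10  # synthetic FX mid

def indicators(p, i):
    """A few simple technical indicators at tick i (illustrative set)."""
    win = p[i - 50:i]
    return [p[i - 1] - win.mean(),     # distance from 50-tick SMA
            p[i - 1] - p[i - 10],      # 10-tick momentum
            win.std()]                 # rolling volatility

H = 20                                           # prediction horizon in ticks
idx = np.arange(50, len(ticks) - H)
X = np.array([indicators(ticks, i) for i in idx])
y = (ticks[idx + H] > ticks[idx]).astype(int)    # up/down direction label

split = len(X) * 3 // 4                          # chronological split
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300)
clf.fit(X[:split], y[:split])
print("directional accuracy:", clf.score(X[split:], y[split:]))
```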

    A Study of Adaptation Mechanisms for Simulation Algorithms

    The performance of a program can sometimes greatly improve if the features of the input it is supposed to process, the actual operating parameters it is supposed to work with, or the specific environment it is to run on are known in advance. However, this information is typically not available until too late in the program's operation to take advantage of it. This is especially true for simulation algorithms, which are sensitive to this late-arriving information, and whose role in the solution of decision-making, inference and valuation problems is crucial. To overcome this limitation we need to give a program the flexibility to adapt its behaviour to late-arriving information once it becomes available. In this thesis, I study three adaptation mechanisms: run-time code generation, model-specific (quasi) Monte Carlo sampling, and dynamic computation offloading, and evaluate their benefits for Monte Carlo algorithms. First, run-time code generation is studied in the context of Monte Carlo algorithms for time-series filtering, in the form of the Input-Adaptive Kalman filter, a dynamically generated state estimator for non-linear, non-Gaussian dynamic systems. The second adaptation mechanism is the application of the functional-ANOVA decomposition to generate model-specific QMC samplers, which can then be used to improve Monte Carlo-based integration. The third adaptation mechanism treated here, dynamic computation offloading, is applied to wireless communication management, where network conditions are assessed via option valuation techniques to determine whether a program should offload computations or carry them out locally in order to achieve higher run-time (and correspondingly battery-usage) efficiency. This ability makes the program well suited for operation in mobile environments. At their core, all these applications carry out or make use of (quasi) Monte Carlo simulations on dynamic Bayesian networks (DBNs). The DBN formalism and its associated simulation-based algorithms are of great value in the solution of problems with a large uncertainty component. This characteristic makes adaptation techniques like those studied here likely to gain relevance in a world where computers are endowed with perception capabilities and are expected to deal with an ever-increasing stream of sensor and time-series data.
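
    The value of QMC sampling is easiest to see against plain Monte Carlo. The sketch below is a generic illustration, not the thesis's model-specific functional-ANOVA construction: it estimates a smooth expectation with i.i.d. uniform draws and with a scrambled Sobol sequence, then compares the errors.

```python
import numpy as np
from scipy.stats import qmc

# Smooth test integrand on [0,1]^6 with known mean 1 (each factor has
# expectation 1 and the dimensions are independent).
d = 6
f = lambda u: np.prod(1 + 0.5 * (u - 0.5), axis=1)

n = 2 ** 12                                    # power of two for Sobol
rng = np.random.default_rng(0)

mc_est = f(rng.random((n, d))).mean()          # plain Monte Carlo

# Scrambled Sobol points cover the cube far more evenly than i.i.d.
# draws, pushing the integration error from O(n^-1/2) toward O(n^-1)
# for smooth integrands of low effective dimension.
sobol = qmc.Sobol(d, scramble=True, seed=0)
qmc_est = f(sobol.random(n)).mean()

print(f"MC  error: {abs(mc_est - 1):.2e}")
print(f"QMC error: {abs(qmc_est - 1):.2e}")
```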

    Lecture slides for the course "Finanza computazionale" (Computational Finance)
