11 research outputs found

    Forecasting bitcoin volatility: Exploring the potential of deep learning

    This study aims to evaluate the forecasting properties of classic methodologies (ARCH and GARCH models) in comparison with deep learning methodologies (MLP, RNN, and LSTM architectures) for predicting Bitcoin's volatility. As a new asset class with unique characteristics, Bitcoin's high volatility and structural breaks make forecasting challenging. Based on 2753 observations from 08-09-2014 to 01-05-2022, the study focuses on Bitcoin logarithmic returns. The results show that the deep learning methodologies have advantages in terms of forecast quality, although at significant computational cost. While both the MLP and RNN models produce smoother forecasts with less fluctuation, they fail to capture large spikes; the LSTM architecture, on the other hand, reacts strongly to such movements and adjusts its forecast accordingly. MAPE and MAE metrics are used to compare forecasting accuracy at different horizons, and Diebold-Mariano tests are conducted to compare the forecasts, confirming the superiority of the deep learning methodologies. Overall, the study suggests that deep learning methodologies could provide a promising tool for forecasting Bitcoin returns (and therefore volatility), especially over short-term horizons.
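
    The abstract contrasts GARCH-type forecasts with neural ones and scores them with MAE/MAPE and Diebold-Mariano tests. A minimal Python sketch of that evaluation loop, assuming `returns` is a pandas Series of Bitcoin log returns and that LSTM forecasts are already available as `lstm_forecasts` (hypothetical names, not the authors' code):

    import numpy as np
    import pandas as pd
    from arch import arch_model
    from scipy import stats

    def garch_one_step_forecasts(returns: pd.Series, n_test: int) -> pd.Series:
        """Expanding-window one-step-ahead GARCH(1,1) volatility forecasts."""
        preds = []
        for t in range(len(returns) - n_test, len(returns)):
            res = arch_model(returns.iloc[:t] * 100, vol="GARCH", p=1, q=1).fit(disp="off")
            var_next = res.forecast(horizon=1).variance.iloc[-1, 0]
            preds.append(np.sqrt(var_next) / 100)  # back to return scale
        return pd.Series(preds, index=returns.index[-n_test:])

    def diebold_mariano(e1: np.ndarray, e2: np.ndarray):
        """DM statistic and p-value on squared-error loss differentials (no small-sample correction)."""
        d = e1 ** 2 - e2 ** 2
        dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
        return dm, 2 * (1 - stats.norm.cdf(abs(dm)))

    # Usage (hypothetical): score both models against realized absolute returns.
    # realized = returns.tail(n_test).abs()
    # garch_err = (garch_one_step_forecasts(returns, n_test) - realized).to_numpy()
    # lstm_err = (lstm_forecasts - realized).to_numpy()
    # print(np.mean(np.abs(garch_err)), np.mean(np.abs(lstm_err)))  # MAE of each model
    # print(diebold_mariano(garch_err, lstm_err))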

    Examination of the performance of predictive models in the period of Covid-19 and the Russian-Ukrainian war

    In our study, we seek to answer the question of how effectively artificial intelligence can be used to forecast stock market trends on the world's leading equity markets between 01/01/2010 and 09/16/2022. Covid-19 and the Russian-Ukrainian war made their impact strongly felt on the capital markets as well, so the analysis was carried out in an extremely volatility-intensive environment. In the analysis, we applied two machine learning algorithms of different complexity (decision tree, LSTM) and one parametric statistical model (linear regression) over three time intervals. The results obtained were evaluated on the basis of the mean absolute percentage error (MAPE). In our study, we show that the predictive models can perform better than linear regression in the period of elevated volatility. Another important result is that the forecasting models performed better in the period following the Russian-Ukrainian war than after the outbreak of Covid-19. Price forecasting can play an important role in both fundamental and technical analysis and can be incorporated into the decision criteria of algorithmic trading, but it may even be suitable on its own for automating trading.
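
    The evaluation metric referenced here is the mean absolute percentage error; a minimal Python sketch of how it is typically computed (generic, not the authors' code):

    import numpy as np

    def mape(actual, forecast) -> float:
        """Mean absolute percentage error in percent; assumes no zero actual values."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)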

    Predicting the state of synchronization of financial time series using cross recurrence plots

    Cross-correlation analysis is a powerful tool for understanding the mutual dynamics of time series. This study introduces a new method for predicting the future state of synchronization of the dynamics of two financial time series. To this end, we use cross recurrence plot analysis as a nonlinear method for quantifying the multidimensional coupling in the time domain of two time series and for determining their state of synchronization. We adopt a deep learning framework to methodologically address the prediction of the synchronization state based on features extracted from dynamically sub-sampled cross recurrence plots. We provide extensive experiments on several stocks, major constituents of the S&P 100 index, to empirically validate our approach. We find that the task of predicting the state of synchronization of two time series is in general rather difficult, but for certain pairs of stocks it is attainable with very satisfactory performance (84% F1-score, on average).
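
    Cross recurrence plots threshold the pairwise distances between the embedded states of two series; a minimal Python sketch of that construction (generic, with hypothetical parameter choices, not the authors' implementation):

    import numpy as np

    def embed(x: np.ndarray, dim: int = 3, delay: int = 1) -> np.ndarray:
        """Time-delay embedding: rows are state vectors [x_t, x_{t+delay}, ...]."""
        n = len(x) - (dim - 1) * delay
        return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

    def cross_recurrence_plot(x, y, dim: int = 3, delay: int = 1, eps: float = 0.1) -> np.ndarray:
        """Binary CRP: entry [i, j] is 1 where ||X_i - Y_j|| <= eps (Euclidean norm)."""
        X = embed(np.asarray(x, dtype=float), dim, delay)
        Y = embed(np.asarray(y, dtype=float), dim, delay)
        dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
        return (dists <= eps).astype(np.uint8)

    # Summary features of such plots (e.g. recurrence rate, diagonal-line statistics)
    # would then feed a classifier of the future synchronization state, as described above.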

    Analysis of the performance of predictive models during Covid-19 and the Russian-Ukrainian war

    In our paper, we investigate how effectively artificial intelligence can be used to predict stock market trends in the world's leading equity markets over the period 01/01/2010 to 09/16/2022. Covid-19 and the Russian-Ukrainian war have had a strong impact on the capital markets, so the study was conducted in a highly volatile environment. The analysis was performed on three time intervals, using two machine learning algorithms of different complexity (decision tree, LSTM) and a parametric statistical model (linear regression). The results were evaluated using the mean absolute percentage error (MAPE). We show that the predictive models can perform better than linear regression in periods of high volatility. Another important finding is that the predictive models performed better in the post-Russian-Ukrainian-war period than after the outbreak of Covid-19. Stock market price forecasting can play an important role in fundamental and technical analysis, can be incorporated into the decision criteria of algorithmic trading, or can be used on its own to automate trading.
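
    A minimal sketch of the kind of comparison described above, using a synthetic random-walk price series as a placeholder input; the LSTM branch is omitted and this is not the authors' pipeline:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_percentage_error
    from sklearn.tree import DecisionTreeRegressor

    # Placeholder price series (random walk); the study would use real index closes.
    rng = np.random.default_rng(0)
    prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_normal(2000)))

    def lagged_matrix(prices: np.ndarray, n_lags: int = 5):
        """Build features [p_{t-n_lags}, ..., p_{t-1}] and target p_t."""
        X = np.column_stack([prices[i: len(prices) - n_lags + i] for i in range(n_lags)])
        return X, prices[n_lags:]

    X, y = lagged_matrix(prices)
    split = int(0.8 * len(y))  # chronological hold-out, no shuffling
    X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

    for name, model in [("linear regression", LinearRegression()),
                        ("decision tree", DecisionTreeRegressor(max_depth=5))]:
        model.fit(X_tr, y_tr)
        print(name, mean_absolute_percentage_error(y_te, model.predict(X_te)))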

    Forecasting Nonlinear Systems with LSTM: Analysis and Comparison with EKF

    This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors. Certain difficulties in path forecasting and filtering problems stem from the initial hypotheses of estimation and filtering techniques. Common hypotheses include that the system can be modeled as linear, Markovian, Gaussian, or all at once. Although in many cases there are strategies that yield very good results, the associated engineering process can become highly complex, requiring a great deal of time or even becoming intractable. Tools that can tackle complex problems without starting from such prior hypotheses, while still solving classic challenges and sharpening the implementation of estimation and filtering systems, are of high scientific interest. This paper addresses the forecast-filter problem from a deep learning perspective, with a neural network architecture inspired by natural language processing techniques and data structures. Unlike the Kalman filter, which requires separate prediction and update phases, the proposal performs prediction and filtering in a single phase. We propose three study cases of increasing conceptual difficulty. The experimentation is divided into five parts: the effect of standardization on raw data, validation of the proposal, filtering, loss of measurements (forecasting), and, finally, robustness. The results are compared with a Kalman filter, showing that the proposal is comparable in terms of error in the linear case, with improved performance when facing non-linear systems. This research was funded by public research projects of the Spanish Ministry of Economy and Competitiveness (MINECO), reference TEC2017-88048-C2-2-R, and by the Madrid Government under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M17), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
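
    A minimal sketch of the forecast-and-filter idea in a single pass (an illustration with hypothetical window length and state dimension, not the paper's architecture): an LSTM maps a window of noisy measurements directly to the next filtered state, replacing the Kalman predict/update split.

    import numpy as np
    import tensorflow as tf

    WINDOW, STATE_DIM = 20, 2  # hypothetical window length and state dimension

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(WINDOW, STATE_DIM)),
        tf.keras.layers.Dense(STATE_DIM),  # filtered/forecast state at the next step
    ])
    model.compile(optimizer="adam", loss="mse")

    def make_windows(measurements: np.ndarray, true_states: np.ndarray, window: int = WINDOW):
        """Sliding windows of (standardized) noisy measurements paired with the true next state."""
        X = np.stack([measurements[t - window:t] for t in range(window, len(measurements))])
        return X, true_states[window:]

    # measurements and true_states are assumed arrays of shape (T, STATE_DIM).
    # X, y = make_windows(measurements, true_states)
    # model.fit(X, y, epochs=50, batch_size=64, validation_split=0.1)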

    Volatility forecasting with GARCH models and recurrent neural networks

    The three main ways to estimate future volatility are the implied volatility of option prices, time-series volatility models, and neural network models. This project investigates whether there are economically meaningful differences between those approaches. Seminal time-series models such as GARCH, as well as recurrent neural network models such as the LSTM, are used to forecast volatility. The project looks for a possible informational advantage over the market's expectation of future volatility, as embodied in implied volatility. Through trading strategies involving options, as well as investment vehicles that emulate the VIX, it then attempts to trade volatility profitably.
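
    A minimal sketch of how a model forecast might be set against implied volatility to motivate a volatility trade (made-up thresholds and numbers, not the project's strategy):

    import numpy as np

    TRADING_DAYS = 252

    def annualized_vol(daily_variance_forecast: float) -> float:
        """Annualize a daily variance forecast (returns assumed to be daily log returns)."""
        return float(np.sqrt(daily_variance_forecast * TRADING_DAYS))

    def volatility_signal(model_vol: float, implied_vol: float, band: float = 0.02) -> int:
        """+1 = buy volatility (model above implied), -1 = sell, 0 = no trade inside the band."""
        if model_vol > implied_vol + band:
            return 1
        if model_vol < implied_vol - band:
            return -1
        return 0

    # Example with made-up numbers: a 25.4% model forecast vs. 22% implied -> buy volatility.
    print(volatility_signal(annualized_vol(0.000256), 0.22))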

    Four essays on quantitative economics applications to volatility analysis in Emerging Markets and renewable energy projects

    Financial decisions can be divided into investment decisions and financing decisions. For investment decisions, uncertainty about the future dynamics of financial and economic variables plays a central role, since the returns expected by firms and investors can be affected by adverse movements in financial markets and by high levels of volatility. Consequently, adequate analysis and modeling of volatility is crucial for the firm's financial decision-making process and for the design of investment and hedging strategies by investors. In this regard, the study of volatility has become one of the most interesting topics in finance research, and it has gained further relevance in recent years given the scenario of high volatility and uncertainty faced by markets globally. This document addresses four central issues related to financial volatility as a research area: volatility transmission and spillovers in Emerging Markets, the calibration of the volatility surface for renewable energy projects, and the forecasting of energy asset returns and of volatility spillovers through machine learning techniques. In the first chapter, the volatility transmission effects between an energy index and a financial index for Emerging Markets are examined. Using a DCC model, it is shown that the volatility transmission effects between these indices differed between the subprime crisis and the COVID-19 crisis, since the former originated in the financial sector and then spread to the rest of the economy, while the latter originated in the real sector and subsequently affected the rest of the economy. Considering that the relationship between market volatilities is time-varying, the second chapter develops a dynamic analysis of volatility spillovers between commodities, Bitcoin, and an Emerging Markets index. Employing the methodology proposed by Diebold and Yilmaz (2012), it is concluded that the volatility spillover effects between the analyzed assets are not constant in direction and intensity over time; in particular, in crisis periods such as the COVID-19 pandemic, there are reversals in the direction of volatility spillovers depending on the sector in which the crisis originated. This chapter also exploits the dynamic nature of volatility spillovers: it is proposed that the volatility spillover index of Diebold and Yilmaz can be used as a measure for forecasting periods of high turbulence, which is done with both traditional econometric models and machine learning techniques. In the third chapter, a model for predicting carbon and oil price returns is proposed. A hybrid model is developed that combines the forecasts obtained from different machine learning techniques and traditional econometric models, and the results show the advantages of such hybrid models over relying exclusively on machine learning techniques to forecast financial variables. Finally, chapter four presents a methodology for estimating volatility in the valuation of renewable energy projects through real options. In this methodology, which is an extension of the implied volatility approach used for financial options, the volatility of a project is the implied volatility obtained from the volatility surface of comparable firms, for a given valuation date and the debt-to-equity ratio of the renewable energy project. The stochastic 'alpha-beta-rho' (SABR) model is used to calibrate the volatility surface for real option valuation. The conclusions derived from these chapters are presented at the end of the document, together with some recommendations for future research.
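
    The Diebold-Yilmaz total spillover index mentioned above summarizes a forecast error variance decomposition (FEVD); a minimal Python sketch of that final step, assuming `fevd` is an N x N array whose entry [i, j] is the share of asset i's H-step forecast error variance attributable to shocks in asset j (generic, not the thesis code):

    import numpy as np

    def total_spillover_index(fevd: np.ndarray) -> float:
        """Share (in %) of forecast error variance due to cross-asset shocks."""
        fevd = fevd / fevd.sum(axis=1, keepdims=True)  # normalize each row to sum to 1
        cross = fevd.sum() - np.trace(fevd)            # off-diagonal (spillover) contributions
        return float(100.0 * cross / fevd.sum())

    # Example with a made-up 3-asset decomposition.
    fevd_example = np.array([[0.80, 0.15, 0.05],
                             [0.20, 0.70, 0.10],
                             [0.10, 0.25, 0.65]])
    print(total_spillover_index(fevd_example))  # about 28.3

    Rolling this computation over a moving estimation window yields the kind of time-varying spillover series that the second chapter proposes as a gauge of high-turbulence periods.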

    Contributions to improve the technologies supporting unmanned aircraft operations

    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The environment is perceived through sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows the limits of deterministic models to be extended to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller includes a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and to be estimated in the absence of observations. State Space Models (SSM) are developed on the basis of a set of hypotheses for modeling the world: the models of the world must be linear and Markovian, and the error of these models must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state, and there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then estimation and filtering problems are addressed from a deep learning approach; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from a deep learning perspective, where the error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data, completing the two main fields in deep learning, regression and classification, where the error is considered as a probability function of class membership. I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.
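
    A minimal sketch of the linear Kalman filter predict/update cycle referred to above (a generic textbook form with made-up matrices, not the thesis implementation):

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle. x: state, P: covariance, z: measurement."""
        # Predict: propagate state and covariance through the (assumed linear) model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: blend prediction and measurement via the Kalman gain K.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Example: constant-velocity model with position-only measurements (made-up values).
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])             # measure position only
    Q = 0.01 * np.eye(2)                   # process noise covariance
    R = np.array([[0.5]])                  # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, np.array([1.2]), F, H, Q, R)  # one cycle with a made-up measurement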