Evaluating the Accuracy of Time-varying Beta. The Evidence from Poland
This paper empirically investigates several approaches to modelling time-varying systematic risk on the Polish capital market. Many methods have been examined for developed markets, where the Kalman filter is usually identified as the best approach for estimating time-varying beta; for emerging markets, however, a gap remains in the literature. We use weekly data on fifteen stocks listed on the Warsaw Stock Exchange from the banking and IT sectors. The sample runs from the beginning of 2001 to 2015, covering the turbulent crisis period. We estimate beta with several competing approaches: two MGARCH models (BEKK and DCC), an unobserved component model, and a static beta from linear regression. All beta estimates are compared within the security market line framework. We find that the unobserved component beta and the DCC beta have higher predictive accuracy than the BEKK or static betas. The beta estimates are positively correlated within an industry and negatively correlated across sectors. Finally, beta forecasts are more accurate for banking stocks than for IT companies.
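The unobserved component beta that performs best in this study is typically estimated with a scalar Kalman filter under a random-walk law of motion for beta. A minimal sketch of that recursion follows; the noise variances `q` and `r` and the initial values are illustrative, not the paper's estimates:

```python
def kalman_beta(stock, market, q=1e-4, r=1e-3, b0=1.0, p0=1.0):
    """Random-walk time-varying beta via a scalar Kalman filter.

    State:       beta_t = beta_{t-1} + w_t,   w_t ~ N(0, q)
    Observation: r_t    = beta_t * m_t + e_t, e_t ~ N(0, r)

    stock, market: return series of equal length.
    Returns the filtered beta path.
    """
    b, p = b0, p0
    betas = []
    for y, m in zip(stock, market):
        p = p + q                   # predict: state variance grows by q
        s = m * m * p + r           # innovation variance
        k = p * m / s               # Kalman gain
        b = b + k * (y - m * b)     # update beta with the forecast error
        p = (1.0 - k * m) * p       # update state variance
        betas.append(b)
    return betas
```

On a noiseless series with a constant true beta, the filtered path converges to that beta, which is a useful sanity check before applying the filter to real returns.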
Forward-Looking Volatility Estimation for Risk-Managed Investment Strategies during the COVID-19 Crisis
Under the combined pressure of rising credit stress and the low returns characterizing developed economies, investment levels have declined in recent years, and the turbulence caused by the COVID-19 crisis has accelerated this process. In this setting we consider the Volatility Target (VolTarget) strategy. In particular, we estimate the volatility of a risky asset to run a VolTarget simulation over two horizons: a 20-year period from January 2000 to January 2020, and the last 12 months, to highlight the effects of the COVID-19 virus's diffusion. We propose a hybrid algorithm that combines a GARCH model with a neural network (NN). As an alternative to standard allocation methods based on realized, backward-looking volatilities, we exploit an innovative forward-looking estimation process based on a machine learning (ML) solution. Our approach yields more accurate volatility estimates, allowing us to derive an effective investor risk-return profile during market crises. Moreover, we show that a forward-looking VolTarget strategy using an ML-based prediction as input makes the average outcome of a drawdown investment plan more sustainable while providing an efficient risk-control solution for long-horizon investments.
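The abstract gives no formulas; a minimal sketch of the two standard ingredients it combines, a GARCH(1,1) variance recursion and a volatility-target position size, might look like this. All parameter values are illustrative, and the paper's hybrid replaces the backward-looking GARCH forecast with an ML-based forward-looking one:

```python
def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Conditional variance path from a GARCH(1,1):
       sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    Parameters are illustrative, not fitted."""
    sigma2 = omega / (1.0 - alpha - beta)   # seed with unconditional variance
    path = []
    for r in returns:
        path.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return path

def voltarget_weight(sigma_hat, sigma_target=0.10, max_leverage=1.5):
    """Risky-asset weight of a volatility-target strategy: scale exposure
    inversely to forecast volatility, capped at max_leverage."""
    return min(sigma_target / sigma_hat, max_leverage)
```

For example, a 20% volatility forecast against a 10% target gives a 0.5 weight, while very low forecasts hit the leverage cap instead of growing without bound.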
Data analytics enhanced component volatility model
Volatility modelling and forecasting have attracted much attention in both finance and computing. Recent advances in machine learning allow us to build complex volatility-forecasting models, but ML algorithms have so far been used merely as add-ons to existing econometric models; hybrid models that specifically capture the characteristics of volatility data have not yet been developed. We propose a new hybrid model built from a low-pass filter, an autoregressive neural network, and an autoregressive model. The low-pass filter decomposes the volatility series into long- and short-term components, which are then modelled by the autoregressive neural network and the autoregressive model, respectively; the final forecast aggregates the two models' outputs. Experimental evaluation on one-hour and one-day realized volatility for four major foreign exchange rates shows that the proposed model significantly outperforms the component GARCH, EGARCH, and neural-network-only models at all forecasting horizons.
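As a rough illustration of the decomposition idea only, not the paper's code: a simple moving average stands in for its low-pass filter, and an AR(1) fit stands in for the autoregressive neural network on the long-term component:

```python
def moving_average(x, window):
    """Trailing moving average as a crude low-pass filter."""
    out = []
    for i in range(len(x)):
        lo = max(0, i - window + 1)
        out.append(sum(x[lo:i + 1]) / (i + 1 - lo))
    return out

def fit_ar1(x):
    """Least-squares AR(1) slope: x_t ~ phi * x_{t-1} (no intercept)."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den if den else 0.0

def component_forecast(vol, window=22):
    """Split volatility into long-/short-term components via the low-pass
    filter, forecast each with AR(1), and aggregate the two forecasts."""
    long_c = moving_average(vol, window)
    short_c = [v - l for v, l in zip(vol, long_c)]
    return fit_ar1(long_c) * long_c[-1] + fit_ar1(short_c) * short_c[-1]
```

On a constant volatility series the short-term component vanishes and the aggregate forecast reproduces the level, which checks that the decomposition and re-aggregation are consistent.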
Systematic Literature Review: Forecasting Models for the Standard and Poor's 500 (S&P 500) Index
The Standard and Poor's 500 is the most studied stock index in the literature, since it represents the industrial sector of the United States. The aim of this paper is a systematic literature review of the bulk of the articles whose objective is forecasting this index: it collects the most commonly used techniques, performance criteria, and validation tests in order to group and classify them and to propose a methodology that allows faster progress in this area of knowledge, and it also seeks to establish which forecasting models give the best results. We find that many of the criteria used to evaluate forecasting models are not appropriate for financial series, and we conclude that comparison across authors is quite difficult.
Realized Volatility Forecasting with Neural Networks
In the last few decades, a broad strand of the finance literature has used artificial neural networks as a forecasting method. The major advantage of this approach is its ability to approximate both linear and nonlinear behaviour without knowledge of the data-generating process, which makes it suitable for forecasting time series that exhibit long memory and nonlinear dependence, such as conditional volatility. In this paper, I compare the predictive performance of feed-forward and recurrent neural networks (RNN), focusing in particular on the recently developed Long Short-Term Memory (LSTM) and NARX networks, with traditional econometric approaches. The results show that recurrent neural networks outperform all the traditional econometric methods. In addition, capturing long-range dependence through the LSTM and NARX models appears to improve forecasting accuracy even in highly volatile regimes.
Market volatility : can machine learning methods enhance volatility forecasting?
This dissertation tests whether machine learning (ML) techniques can improve volatility forecasting accuracy; more specifically, whether they can beat the best econometric model, the Heterogeneous Autoregressive model of Realized Volatility (HAR-RV). Using S&P 500 Index data from May 2007 to August 2022, the superiority of HAR-RV was first tested and confirmed against the competing econometric models EWMA and GARCH(1,1). Next, the performance of the ML artificial-neural-network algorithms Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) is compared with that of the econometric models, with five different variable sets tested for the ML models. While both ML models beat the EWMA and GARCH(1,1) models by a significant margin, the HAR-RV model still outperforms LSTM and GRU.
Moreover, the models' predictions over the period corresponding to the COVID-19 crisis are analysed; the results show no evidence that ML methods have a particular advantage when predicting during high-volatility events.
Finally, a plausible cause that could undermine the qualities of ML methods for volatility forecasting is discussed: the rigorous set of conditions required for the proper setup of ML models is very difficult to meet with financial data, which hinders the aptitude of ML for this purpose.
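The HAR-RV benchmark that the dissertation's ML models fail to beat is simple to state: tomorrow's realized volatility is regressed on daily, weekly (5-day average), and monthly (22-day average) realized volatility. A minimal sketch follows; in practice the coefficients are estimated by OLS, and the values in the test below are purely illustrative:

```python
def har_features(rv, t):
    """HAR-RV regressors at day t: daily RV, weekly (5-day) average RV,
    and monthly (22-day) average RV. Requires t >= 21."""
    daily = rv[t]
    weekly = sum(rv[t - 4:t + 1]) / 5
    monthly = sum(rv[t - 21:t + 1]) / 22
    return daily, weekly, monthly

def har_predict(rv, t, c, bd, bw, bm):
    """One-step-ahead HAR-RV forecast:
       RV_{t+1} = c + bd*RV_daily + bw*RV_weekly + bm*RV_monthly"""
    d, w, m = har_features(rv, t)
    return c + bd * d + bw * w + bm * m
```

With a flat volatility history, any coefficient set summing to one (with zero intercept) reproduces the level, which is a quick consistency check on the feature construction.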