    Forecasting Long-Term Government Bond Yields: An Application of Statistical and AI Models

    This paper evaluates several artificial intelligence and classical algorithms on their ability to forecast the monthly yield of the US 10-year Treasury bond from a set of four economic indicators. Due to the complexity of the prediction problem, the task represents a challenging test for the algorithms under evaluation. At the same time, the study is of particular significance given the important and paradigmatic role played by the US market in the world economy. Four data-driven artificial intelligence approaches are considered, namely a manually built fuzzy logic model, a machine-learned fuzzy logic model, a self-organising map model and a multi-layer perceptron model. Their performance is compared with that of two classical approaches, namely a statistical ARIMA model and an econometric error correction model. The algorithms are evaluated on a complete series of end-month US 10-year Treasury bond yields and economic indicators from 1986:1 to 2004:12. In terms of prediction accuracy and reliability of the modelling procedure, the best results are obtained by the three parametric regression algorithms, namely the econometric, the statistical and the multi-layer perceptron models. Due to the sparseness of the learning data samples, the manual and the automatic fuzzy logic approaches fail to follow the range of variation of the US 10-year Treasury bond yields with adequate precision. For similar reasons, the self-organising map model gives an unsatisfactory performance. Analysis of the results indicates that the econometric model has a slight edge over the statistical and the multi-layer perceptron models. This suggests that pure data-driven induction may not fully capture the complicated mechanisms ruling the changes in interest rates. Overall, the prediction accuracy of the best models is only marginally better than that of a basic one-step-lag predictor. This result highlights the difficulty of the modelling task and, in general, the difficulty of building reliable predictors for financial markets.
    Keywords: interest rates; forecasting; neural networks; fuzzy logic.
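
    Since even the best models only marginally beat the one-step-lag benchmark, it is worth making that benchmark explicit. The following Python sketch compares a naive one-step-lag forecast with a simple ARIMA fit; the synthetic yield series, the (1, 1, 0) order and the use of statsmodels are illustrative assumptions, not the paper's specification.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical end-of-month 10-year yield series, 1986:1-2004:12 (228 months).
        rng = np.random.default_rng(0)
        yields = pd.Series(5.0 + np.cumsum(rng.normal(0, 0.15, 228)),
                           index=pd.period_range("1986-01", periods=228, freq="M"))

        train, test = yields[:-24], yields[-24:]

        # One-step-lag benchmark: next month's forecast is simply this month's value.
        naive = yields.shift(1)[-24:]
        naive_rmse = np.sqrt(((test - naive) ** 2).mean())

        # Statistical benchmark: a plain ARIMA fitted on the training window.
        arima = ARIMA(train.to_timestamp(), order=(1, 1, 0)).fit()
        arima_rmse = np.sqrt(((test.values - arima.forecast(steps=24).values) ** 2).mean())

        print(f"one-step-lag RMSE: {naive_rmse:.4f}   ARIMA RMSE: {arima_rmse:.4f}")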

    Option Pricing With Modular Neural Networks

    This paper investigates a nonparametric modular neural network (MNN) model to price S&P-500 European call options. The modules are based on the time to maturity and moneyness of the options. The option price function of interest is homogeneous of degree one with respect to the underlying index price and the strike price. When compared with an array of parametric and nonparametric models, the MNN method consistently exhibits superior out-of-sample pricing performance. We conclude that modularity improves the generalization properties of standard feedforward neural network option pricing models (with and without the homogeneity hint).
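
    As an illustration of the modular idea under the homogeneity hint, a sub-network can be fitted per moneyness/maturity bucket on the scaled inputs (S/K, time to maturity) with target C/K, so the predicted price rescales with the strike. Below is a minimal Python sketch with scikit-learn; the toy data, bucket boundaries and network sizes are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        n = 2000
        moneyness = rng.uniform(0.8, 1.2, n)               # S / K
        tau = rng.uniform(0.05, 1.0, n)                    # time to maturity (years)
        c_over_k = np.maximum(moneyness - 1, 0) + 0.1 * np.sqrt(tau)  # toy target C / K

        def module_id(m, t):
            """Route an option to a module by moneyness and maturity bucket."""
            return (0 if m < 0.97 else 1 if m <= 1.03 else 2, 0 if t < 0.4 else 1)

        X = np.column_stack([moneyness, tau])
        keys = [module_id(m, t) for m, t in zip(moneyness, tau)]
        modules = {}
        for key in set(keys):
            mask = np.array([k == key for k in keys])
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
            modules[key] = net.fit(X[mask], c_over_k[mask])

        # Price a new option: route by (S/K, tau), then rescale the output by the strike.
        s, k, t = 105.0, 100.0, 0.5
        call_price = modules[module_id(s / k, t)].predict([[s / k, t]])[0] * k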

    Longevity risk management through Machine Learning: state of the art

    Longevity risk management is an area of the life insurance business where the use of Artificial Intelligence is still underdeveloped. The paper retraces the main results of the recent actuarial literature on the topic to draw attention to the potential of Machine Learning in predicting mortality and consequently improving longevity risk quantification and management, with practical implications for the pricing of life products with long-term duration and lifelong guaranteed options embedded in pension contracts or health insurance products. The application of AI methodologies to mortality forecasting improves both the fit and the forecasts of the models traditionally used. In particular, the paper presents the Classification and Regression Tree (CART) framework and the Neural Network algorithm applied to mortality data. The literature results are discussed, focusing on the forecasting performance of the Machine Learning techniques relative to the classical models. Finally, a reflection on both the great potential of using Machine Learning in longevity management and its drawbacks is offered.
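
    As a concrete illustration of the tree-based approach mentioned above, a regression tree can be fitted to log central death rates indexed by age and calendar year. The sketch below uses synthetic Gompertz-like data and arbitrary hyperparameters; it is an assumption-laden toy example, not a result from the cited literature.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(2)
        ages = np.arange(40, 100)
        years = np.arange(1980, 2020)
        grid = np.array([(a, y) for a in ages for y in years], dtype=float)

        # Toy Gompertz-like log mortality surface with a mild improvement trend over time.
        log_mx = (-9.5 + 0.085 * grid[:, 0]
                  - 0.012 * (grid[:, 1] - 1980)
                  + rng.normal(0, 0.05, len(grid)))

        tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20, random_state=0)
        tree.fit(grid, log_mx)

        # Estimated central death rate for a 70-year-old in 2019.
        m_70 = np.exp(tree.predict([[70.0, 2019.0]])[0])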

    Forecasting and modelling the VIX using Neural Networks

    This study investigates the volatility forecasting ability of neural network models. In particular, we focus on the performance of Multi-layer Perceptron (MLP) and Long Short-Term Memory (LSTM) neural networks in predicting the CBOE Volatility Index (VIX). The inputs into these models include the VIX, GARCH(1,1) fitted values and various financial and macroeconomic explanatory variables, such as S&P 500 returns and the oil price. In addition, this study segments the data into two sub-periods, namely a Calm and a Crisis Period in the financial market. The segmentation of the periods caters for changes in the predictive power of the aforementioned models under different market conditions. When forecasting the VIX, we show that the best-performing model is found in the Calm Period. In addition, we show that the MLP has more predictive power than the LSTM.
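
    Here is a minimal sketch of the input pipeline described above, feeding GARCH(1,1) fitted volatility together with lagged VIX and returns into an MLP. The arch and scikit-learn packages, the synthetic series and the network size are illustrative assumptions, not the authors' specification.

        import numpy as np
        import pandas as pd
        from arch import arch_model
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        returns = pd.Series(rng.normal(0, 1.0, 1500))       # toy daily S&P 500 returns (%)
        vix = 15 + 5 * np.abs(rng.normal(0, 1.0, 1500))     # toy VIX level to predict

        # GARCH(1,1) fitted conditional volatility, one of the explanatory inputs.
        garch = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
        sigma = garch.conditional_volatility.to_numpy()

        # Features at time t: VIX, return and GARCH volatility; target: VIX at t+1.
        X = np.column_stack([vix[:-1], returns.to_numpy()[:-1], sigma[:-1]])
        y = vix[1:]

        split = 1200
        mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
        mlp.fit(X[:split], y[:split])
        print("out-of-sample R^2:", mlp.score(X[split:], y[split:]))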

    Quantitative Analyses on Non-Linearities in Financial Markets

    "The brief market plunge was just a small indicator of how complex and chaotic, in the formal sense, these systems have become. Our financial system is so complicated and so interactive [...]. What happened in the stock market is just a little example of how things can cascade or how technology can interact with market panic" (Ben Bernanke, IHT, May 17, 2010).

    One of the most important issues in economics is modeling and forecasting the fluctuations that characterize both financial and real markets, such as interest rates, commodity and stock prices, output growth, unemployment, or exchange rates. There are mainly two opposite views of these economic fluctuations. According to the first one, which was the predominant thought in the 1930s, the economic system is mainly linear and stable, only randomly hit by exogenous shocks. Ragnar Frisch, Eugen Slutsky and Jan Tinbergen, to cite a few, are important exponents of this view, and they demonstrated that the fluctuations observed in the real business cycle may be produced in a stable linear system subject to an external sequence of random shocks. This view has been criticized from the 1940s and the 1950s onwards, since it was not able to provide a strong economic explanation of the observed fluctuations. Richard Goodwin, John Hicks and Nicholas Kaldor introduced a nonlinear view of the economy, showing that fluctuations might arise even in the absence of external shocks. Economists then suggested an alternative within the exogenous approach, at first through stochastic real business cycle models (Finn E. Kydland and Edward C. Prescott, 1982) and, more recently, through the New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models widely adopted by the most important institutions and central banks. These models, however, have also been criticized for the assumption of rationality in agents' behaviour, since rational expectations have been found to be systematically wrong over the business cycle. Expectations are of fundamental importance in economics and finance, since agents' decisions about the future depend upon their expectations and beliefs. It is in fact very unlikely that agents are perfect foresighters with rational expectations in a complex world, characterized by an irregular pattern of prices and quantities dealt in financial markets, in which sophisticated financial instruments are widespread.

    In the first chapter of this dissertation, I will address machine learning techniques, nonlinear tools used for better fitting, forecasting and clustering of different financial time series and of the information available in financial markets. In particular, I will present a collection of three different applications of these techniques, adapted from three joint works: "Yield curve estimation under extreme conditions: do RBF networks perform better?", joint with Pier Giuseppe Giribone, Marco Neelli and Marina Resta, published in Anna Esposito, Marcos Faundez-Zanuy, Carlo Francesco Morabito and Eros Pasero (Eds.), Multidisciplinary Approaches to Neural Computing, Vol. 69, WIRN 2017, and as Chapter 22 of the book "Neural Advances in Processing Nonlinear Dynamic Signals", Springer; "Interest rates term structure models and their impact on actuarial forecasting", joint with Pier Giuseppe Giribone and Marina Resta, presented at the XVIII Quantitative Finance Workshop, University of Roma 3, January 2018; and "Applications of Kohonen Maps in financial markets: design of an automatic system for the detection of pricing anomalies", joint with Pier Giuseppe Giribone and published in Risk Management Magazine, 3-2017.

    In the second chapter, I will present the study "A financial market model with confirmation bias", in which nonlinearity is present as a result of the formation of heterogeneous expectations. This work is joint with Fabio Tramontana and has been presented at the X MDEF (Dynamic Models in Economics and Finance) Workshop at the University of Urbino Carlo Bo.

    Finally, the third chapter is a reworking of another joint paper, "The effects of negative nominal risk rates on the pricing of American Calls: some theoretical and numerical insights", with Pier Giuseppe Giribone and Marina Resta, published in Modern Economy 8(7), July 2017, pp. 878-887. The problem of quantifying the value of early exercise in an option written on equity is a complex mathematical issue that deals with continuous optimal control. In order to solve the continuous dynamic optimization problem, which involves a high degree of nonlinearity in the state variables, we adopt a discretization scheme based on a stochastic trinomial tree. This methodology proves more reliable and flexible than traditional approaches based on approximate quasi-closed formulas in a context where financial markets are characterized by strong anomalies such as negative interest rates.
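
    On the third chapter's theme, the early-exercise value of an American call can be approximated on a trinomial tree that remains well defined when the risk-free rate is negative. The sketch below uses a Boyle-style parameterisation and arbitrary inputs; it is a minimal illustration, not the authors' implementation.

        import numpy as np

        def american_call_trinomial(s0, k, r, sigma, t, steps=200):
            """Price an American call on a Boyle trinomial tree; r may be negative."""
            dt = t / steps
            u = np.exp(sigma * np.sqrt(2 * dt))
            a, b = np.exp(r * dt / 2), np.exp(sigma * np.sqrt(dt / 2))
            pu = ((a - 1 / b) / (b - 1 / b)) ** 2       # up-move probability
            pd_ = ((b - a) / (b - 1 / b)) ** 2          # down-move probability
            pm = 1 - pu - pd_                           # middle-move probability
            disc = np.exp(-r * dt)

            # Terminal payoffs on the 2*steps + 1 final nodes.
            j = np.arange(-steps, steps + 1)
            values = np.maximum(s0 * u ** j - k, 0.0)

            # Backward induction with an early-exercise check at every node.
            for n in range(steps - 1, -1, -1):
                j = np.arange(-n, n + 1)
                cont = disc * (pu * values[2:] + pm * values[1:-1] + pd_ * values[:-2])
                values = np.maximum(cont, s0 * u ** j - k)
            return values[0]

        # Example with a negative rate, the anomaly discussed in the chapter.
        price = american_call_trinomial(s0=100.0, k=100.0, r=-0.005, sigma=0.2, t=1.0)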

    Clustering and Classification in Option Pricing

    This paper reviews the recent option pricing literature and investigates how clustering and classification can assist option pricing models. Specifically, we consider non-parametric modular neural network (MNN) models to price the S&P-500 European call options. The focus is on decomposing and classifying options data into a number of sub-models across moneyness and maturity ranges that are processed individually. The fuzzy learning vector quantization (FLVQ) algorithm we propose generates decision regions (i.e., option classes) divided by ‘intelligent’ classification boundaries. Such an approach improves the generalization properties of the MNN model and thereby increases its pricing accuracy.
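
    FLVQ itself is not available in mainstream Python libraries, but the idea of learned option classes, rather than fixed moneyness/maturity cut-offs, can be approximated with an ordinary vector quantizer. The sketch below uses k-means as a stand-in for FLVQ on toy data; it illustrates the clustering-then-pricing pipeline, not the paper's algorithm.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        moneyness = rng.uniform(0.8, 1.2, 3000)
        tau = rng.uniform(0.05, 1.5, 3000)
        c_over_k = np.maximum(moneyness - 1, 0) + 0.1 * np.sqrt(tau)   # toy target C / K

        # Learn option classes from the data instead of imposing fixed buckets.
        X = np.column_stack([moneyness, tau])
        classes = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

        # One sub-model per learned class, trained only on its own options.
        submodels = {
            c: MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(X[classes.labels_ == c],
                                                c_over_k[classes.labels_ == c])
            for c in range(6)
        }

        x_new = np.array([[1.05, 0.5]])
        pred = submodels[classes.predict(x_new)[0]].predict(x_new)[0]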

    Deep Curve-dependent PDEs for affine rough volatility

    We introduce a new deep-learning-based algorithm to evaluate options in affine rough stochastic volatility models. We show that the pricing function is the solution to a curve-dependent PDE (CPDE), depending on forward curves rather than the whole path of the process, for which we develop a numerical scheme based on deep learning techniques. Numerical simulations suggest that the latter is extremely efficient, and provides a good alternative to classical Monte Carlo simulations.
    Comment: 20 pages, 4 figures, 4 tables
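
    The CPDE scheme of the paper is specific to the affine rough volatility setting, but the underlying idea, training a network so that its derivatives satisfy a PDE residual, can be shown generically. The PyTorch sketch below fits a small network to a one-dimensional heat equation as a stand-in; it is only an illustration of deep-learning PDE solving under assumed toy conditions, not the authors' algorithm.

        import torch

        torch.manual_seed(0)
        net = torch.nn.Sequential(
            torch.nn.Linear(2, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 1),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        for step in range(2000):
            # Collocation points (t, x) in the unit square; enforce u_t = 0.5 * u_xx.
            tx = torch.rand(256, 2, requires_grad=True)
            u = net(tx)
            du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
            u_t, u_x = du[:, 0], du[:, 1]
            u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1]
            pde_loss = ((u_t - 0.5 * u_xx) ** 2).mean()

            # Initial condition u(0, x) = sin(pi * x); boundary terms omitted for brevity.
            x0 = torch.rand(256, 1)
            u0 = net(torch.cat([torch.zeros_like(x0), x0], dim=1)).squeeze(1)
            ic_loss = ((u0 - torch.sin(torch.pi * x0.squeeze(1))) ** 2).mean()

            opt.zero_grad()
            (pde_loss + ic_loss).backward()
            opt.step()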