63 research outputs found

    Non-Parametric Extraction of Implied Asset Price Distributions

    Extracting the risk-neutral density (RND) function from option prices is well defined in principle, but very sensitive to errors in practice. For risk management, knowledge of the entire RND provides more information for Value-at-Risk (VaR) calculations than implied volatility alone [1]. Typically, RNDs are deduced from option prices by making a distributional assumption, or by relying on implied volatility [2]. We present a fully non-parametric method for extracting RNDs from observed option prices. The aim is to obtain a continuous, smooth, monotonic, and convex pricing function that is twice differentiable, thereby reducing irregularities such as negative probabilities that afflict many existing RND estimation techniques. Our method employs neural networks to obtain a smoothed pricing function, and a central finite difference approximation to the second derivative to extract the required gradients. This novel technique was successfully applied to a large set of FTSE 100 daily European exercise (ESX) put options data, and as an Ansatz to the corresponding set of American exercise (SEI) put options. Paired t-tests showed significant differences between RNDs extracted from ESX and SEI option data, reflecting the distorting impact of the early exercise possibility for the latter. In particular, the results for skewness and kurtosis suggested different shapes for the RNDs implied by the two types of put options. However, both ESX and SEI data gave an unbiased estimate of the realised FTSE 100 closing price on the options' expiration date. We confirmed that volatility estimates from the RNDs of both types of option were biased estimates of the realised volatility at expiration, but less so than the LIFFE tabulated at-the-money implied volatility. Comment: Paper based on an Application of Physics in Financial Analysis (APFA5) conference presentation, Torino, Italy. 11.5 pages.
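    The extraction step rests on the Breeden-Litzenberger relation q(K) = e^{rT} d²P/dK², approximated by a central finite difference. In the sketch below a closed-form Black-Scholes put stands in for the network-smoothed pricing function, and all parameter values (S0, r, sigma, T, the strike grid) are illustrative, not taken from the paper:

```python
import math

def bs_put(S, K, r, sigma, T):
    """Black-Scholes European put; here it only plays the role of a smooth,
    convex, twice-differentiable pricing function such as a network would fit."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * N(-d2) - S * N(-d1)

def rnd_from_puts(strikes, prices, r, T):
    """Breeden-Litzenberger: q(K) = exp(rT) * d^2 P / dK^2, approximated
    with a central finite difference on a uniform strike grid."""
    h = strikes[1] - strikes[0]
    ks, qs = [], []
    for i in range(1, len(strikes) - 1):
        d2 = (prices[i - 1] - 2.0 * prices[i] + prices[i + 1]) / h ** 2
        ks.append(strikes[i])
        qs.append(math.exp(r * T) * d2)
    return ks, qs

S0, r, sigma, T = 100.0, 0.02, 0.2, 0.5           # illustrative values
strikes = [50.0 + k for k in range(111)]           # uniform grid, h = 1
prices = [bs_put(S0, K, r, sigma, T) for K in strikes]
ks, q = rnd_from_puts(strikes, prices, r, T)
```

    Because the pricing function is convex, every second difference is non-negative, so the extracted density has no negative probabilities and integrates to approximately one over the grid.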

    Neural network interpolation of the magnetic field for the LISA Pathfinder Diagnostics Subsystem

    LISA Pathfinder is a science and technology demonstrator of the European Space Agency within the framework of its LISA mission, which aims to be the first space-borne gravitational wave observatory. The payload of LISA Pathfinder is the so-called LISA Technology Package, designed to measure relative accelerations between two test masses in nominal free fall. Its disturbances are monitored and dealt with by the diagnostics subsystem. This subsystem consists of several modules; one of these is the magnetic diagnostics system, which includes a set of four tri-axial fluxgate magnetometers intended to measure with high precision the magnetic field at the positions of the test masses. However, since the magnetometers are located far from the test masses, the magnetic field at their positions must be interpolated. It has recently been shown that, because there are not enough magnetic channels, classical interpolation methods fail to derive reliable measurements at the positions of the test masses, whereas neural network interpolation can provide the required measurements at the desired accuracy. In this paper we expand these studies and assess the reliability and robustness of the neural network interpolation scheme under variations of the locations and possible offsets of the magnetometers, as well as under changes in environmental conditions. We find that neural networks are robust enough to derive accurate measurements of the magnetic field at the positions of the test masses in most circumstances.
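    As a toy illustration of why a small network can serve as the interpolator, the sketch below trains a one-hidden-layer tanh network (pure Python, plain per-sample gradient descent) to map four simulated "magnetometer" readings to the field at a "test mass" position. The dipole-like 1/d³ source model, the geometry, and every number are invented for illustration and have nothing to do with the real LTP layout:

```python
import math, random

random.seed(0)

# Invented toy geometry: four "magnetometer" readings generated by two
# dipole-like sources with 1/d^3 falloff; the target is the field at a
# "test mass" position. Nothing here matches the actual LTP configuration.
mag_dists = [[2.0, 3.5], [2.5, 3.0], [3.0, 2.5], [3.5, 2.0]]
tm_dists = [1.5, 1.5]

def sample():
    s = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]  # source strengths
    x = [sum(si / d ** 3 for si, d in zip(s, ds)) for ds in mag_dists]
    y = sum(si / d ** 3 for si, d in zip(s, tm_dists))
    return x, y

train = [sample() for _ in range(200)]

# One-hidden-layer tanh network trained with plain SGD.
H, lr = 8, 0.2
W1 = [[random.gauss(0.0, 0.5) for _ in range(4)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0.0, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return h, sum(w * hi for w, hi in zip(W2, h)) + b2

def mse(data):
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

mse_before = mse(train)
for _ in range(300):
    for x, y in train:
        h, out = forward(x)
        err = out - y                       # gradient of 0.5 * (out - y)^2
        for j in range(H):
            g = err * W2[j] * (1.0 - h[j] ** 2)
            for i in range(4):
                W1[j][i] -= lr * g * x[i]
            b1[j] -= lr * g
            W2[j] -= lr * err * h[j]
        b2 -= lr * err
mse_after = mse(train)
```

    After training, the network's fitting error is a small fraction of its initial error; the published study of course uses real magnetometer data and a far more careful robustness analysis.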

    Non-linear versus non-gaussian volatility models

    One of the most challenging topics in financial time series analysis is the modeling of conditional variances of asset returns. Although conditional variances are not directly observable, there are numerous approaches in the literature to overcome this problem and to predict volatilities on the basis of historical asset returns. The most prominent is the class of GARCH models, where conditional variances are governed by a linear autoregressive process of past squared returns and variances. Recent research in this field, however, has focused on modeling asymmetries of conditional variances by means of non-linear models. While there is evidence that such an approach improves the fit to empirical asset returns, most non-linear specifications assume conditional normal distributions and ignore the importance of alternative models. Concentrating on the distributional assumptions is essential, however, since asset returns are characterized by excess kurtosis and hence fat tails that cannot be fully explained by heteroskedasticity alone. In this paper we take up the issue of return distributions and contrast it with the specification of non-linear GARCH models. We use daily returns for the Dow Jones Industrial Average over a long period and evaluate the predictive power of different linear and non-linear volatility specifications under alternative distributional assumptions. Our empirical analysis suggests that while non-linearities do play a role in explaining the dynamics of conditional variances, the predictive power of the models also depends on the distributional assumptions. (author's abstract) Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
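    The contrast the abstract draws, heteroskedasticity versus the shape of the innovation distribution, can be illustrated by simulating a GARCH(1,1) process with gaussian and with fat-tailed (Student-t) innovations. The parameter values below are illustrative only, not estimates from the paper:

```python
import math, random

random.seed(1)

def garch_returns(n, omega, alpha, beta, innov):
    """Simulate GARCH(1,1): h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
    r_t = sqrt(h_t) * z_t, with z_t drawn from `innov`."""
    h = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    rets = []
    for _ in range(n):
        r = math.sqrt(h) * innov()
        rets.append(r)
        h = omega + alpha * r ** 2 + beta * h
    return rets

def t5_unit():
    """Student-t(5) innovation rescaled to unit variance (fat tails)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(5))
    return (z / math.sqrt(chi2 / 5.0)) / math.sqrt(5.0 / 3.0)

def kurtosis(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 4 for x in xs) / len(xs) / v ** 2

n = 20000                                  # illustrative sample size
r_gauss = garch_returns(n, 0.05, 0.1, 0.85, lambda: random.gauss(0.0, 1.0))
r_t = garch_returns(n, 0.05, 0.1, 0.85, t5_unit)
kurt_gauss, kurt_t = kurtosis(r_gauss), kurtosis(r_t)
```

    Conditional heteroskedasticity alone already lifts the unconditional kurtosis above the gaussian value of 3, but fat-tailed innovations push it much further, which is the distributional effect the paper argues cannot be ignored.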

    Primer on using neural networks for forecasting market variables

    Author's Original
    Ability to forecast market variables is critical to analysts, economists and investors. Neural networks, which are used in many disciplines to map complex relationships, are gaining popularity for forecasting market variables. We present a primer on using neural networks for forecasting market variables in general and, in particular, for forecasting the volatility of S&P 500 Index futures prices. We compare volatility forecasts from neural networks with implied volatility from S&P 500 Index futures options, obtained using the Barone-Adesi and Whaley (BAW) model for pricing American options on futures. Forecasts from neural networks outperform implied volatility forecasts. Volatility forecasts from neural networks are not found to be significantly different from realized volatility, whereas implied volatility forecasts are found to be significantly different from realized volatility in two of three cases. A revised version of this paper has since been published; please cite that version: Hamid, S. A., & Iqbal, Z. (2004). Using Neural Networks for Forecasting Volatility of S&P 500 Index Futures Prices. Journal of Business Research, 57(10), 1116-1125.
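    For reference, the benchmark such forecasts are judged against, realized volatility, is typically computed as the annualized standard deviation of log returns over a window. A minimal sketch (the 252-trading-day annualization is a common convention, not necessarily the paper's exact definition):

```python
import math

def realized_vol(prices, trading_days=252):
    """Annualized realized volatility over a window of closing prices:
    sample standard deviation of daily log returns scaled by sqrt(252)."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    m = sum(rets) / len(rets)
    var = sum((r - m) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var * trading_days)
```

    A perfectly steady price path has zero realized volatility; any variation in day-to-day returns makes it positive.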

    Risk-neutral density extraction from option prices. Improved pricing with mixture density networks.

    One of the central goals in finance is to find better models for pricing and hedging financial derivatives such as call and put options. We present a semi-nonparametric approach to risk-neutral density extraction from option prices which is based on an extension of the concept of mixture density networks. The central idea is to model the shape of the risk-neutral density in a flexible, non-linear way as a function of the time horizon. Thereby, stylized facts such as negative skewness and excess kurtosis are captured. The approach is applied to a very large set of intraday options data on the FTSE 100 recorded at LIFFE. It is shown to yield significantly better results in terms of out-of-sample pricing in comparison to the basic Black-Scholes model and to an extended model adjusting the skewness and kurtosis terms. From the perspective of risk management, the extracted risk-neutral densities provide valuable information about market expectations. (author's abstract) Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
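    One reason mixture representations are attractive here: if the risk-neutral density is a mixture of lognormals, the option price is simply the weighted sum of single-component (Black-Scholes-style) prices. The sketch below shows this generic mixture identity with illustrative parameters; it is not the paper's exact network parameterization:

```python
import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(F, K, sigma, T, r):
    """Call price for a single lognormal component with forward F
    (Black-1976-style formula)."""
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return math.exp(-r * T) * (F * N(d1) - K * N(d2))

def mixture_call(weights, forwards, sigmas, K, T, r):
    """When the risk-neutral density is a mixture of lognormals, the call
    price is the weighted sum of single-component prices."""
    return sum(w * bs_call(F, K, s, T, r)
               for w, F, s in zip(weights, forwards, sigmas))

# Illustrative two-component mixture at the money:
c_low = bs_call(100.0, 100.0, 0.1, 0.5, 0.02)
c_high = bs_call(100.0, 100.0, 0.4, 0.5, 0.02)
c_mix = mixture_call([0.5, 0.5], [100.0, 100.0], [0.1, 0.4], 100.0, 0.5, 0.02)
```

    Shifting weight between components with different widths and centers is what lets the mixture reproduce negative skewness and excess kurtosis while keeping pricing in closed form.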

    Volatility prediction with mixture density networks

    Despite the lack of a precise definition of volatility in finance, the estimation and prediction of volatility is an important problem. In this paper we compare the performance of standard volatility models with that of a class of neural models, namely mixture density networks (MDNs). First experimental results indicate the importance of long-term memory in the models, as well as the benefit of using non-gaussian probability densities in practical applications. (author's abstract) Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
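    The core object of an MDN is a conditional density represented as a gaussian mixture, trained by maximum likelihood. A minimal sketch of the density and its negative log-likelihood loss; the mixture parameters are fixed by hand here, whereas an MDN would emit them as a function of the input:

```python
import math

def mixture_pdf(y, weights, mus, sigmas):
    """Density of a gaussian mixture: a weighted sum of components whose
    parameters an MDN would produce per input; here they are hand-picked."""
    return sum(
        w * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
        for w, mu, s in zip(weights, mus, sigmas)
    )

def nll(samples, weights, mus, sigmas):
    """Negative log-likelihood, the loss minimized when training an MDN."""
    return -sum(math.log(mixture_pdf(y, weights, mus, sigmas)) for y in samples)
```

    The mixture integrates to one like any density, and observations near a mode incur a lower loss than observations far in the tails, which is exactly what gradient training exploits.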

    Identifying stochastic processes with mixture density networks

    In this paper we investigate the use of mixture density networks (MDNs) for identifying complex stochastic processes. Regular multilayer perceptrons (MLPs), widely used in time series processing, assume a gaussian conditional noise distribution with constant variance, which is unrealistic in many applications such as financial time series (which are known to be heteroskedastic). MDNs extend this concept to the modeling of time-varying probability density functions (pdfs), describing the noise as a mixture of gaussians whose parameters depend on the input. We apply this method to identifying the process underlying daily ATX (Austrian stock exchange index) data. The results indicate that MDNs modeling a non-gaussian conditional pdf tend to be significantly better than traditional linear methods of estimating variance (ARCH), and also better than merely assuming a conditional gaussian distribution. (author's abstract) Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
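    A convenient by-product of the mixture representation is that a time-varying conditional mean and variance fall out in closed form. A minimal sketch, with hand-picked parameters standing in for what the network would output per time step:

```python
def mixture_mean_var(weights, mus, sigmas):
    """Mean and variance implied by a gaussian mixture:
    E[y] = sum_i w_i * mu_i
    Var[y] = sum_i w_i * (sigma_i^2 + mu_i^2) - E[y]^2
    These are the level and (time-varying) volatility estimates an MDN yields."""
    m = sum(w * mu for w, mu in zip(weights, mus))
    v = sum(w * (s ** 2 + mu ** 2) for w, s, mu in zip(weights, sigmas, mus)) - m ** 2
    return m, v
```

    Note that a symmetric two-component mixture can have a larger variance than either component alone, which is how the mixture captures heteroskedastic, fat-tailed noise that a single gaussian misses.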

    On non-linear, stochastic dynamics in economic and financial time series

    The search for deterministic chaos in economic and financial time series has attracted much interest over the past decade. However, clear evidence of chaotic structures is usually prevented by large random components in the time series. In the first part of this paper we show that even if a sophisticated algorithm estimating and testing the positivity of the largest Lyapunov exponent is applied to time series generated by a stochastic dynamical system or to a return series of a stock index, the results are difficult to interpret. We conclude that the notion of sensitive dependence on initial conditions, as developed for deterministic dynamics, can hardly be transferred to a stochastic context. Therefore, in the second part of the paper our starting point for measuring dependencies in stochastic dynamics is a distributional characterization of the dynamics, e.g. by heteroskedastic models for economic and financial time series. We adopt a sensitivity measure proposed in the literature, an information-theoretic measure of the distance between probability density functions. This sensitivity measure is well defined for stochastic dynamics, and it can be calculated analytically for classes of stochastic dynamics with conditional normal distributions of constant and state-dependent variance. In particular, heteroskedastic return series models such as ARCH and GARCH are investigated. (author's abstract) Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
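    For the conditional-gaussian case discussed above, information-theoretic distances between densities are available in closed form. The sketch below uses the Kullback-Leibler divergence between two univariate gaussians as one concrete instance of such a measure; it is an illustration, not a reproduction of the paper's exact sensitivity measure:

```python
import math

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form Kullback-Leibler divergence KL(p0 || p1) between
    N(mu0, s0^2) and N(mu1, s1^2):
    log(s1/s0) + (s0^2 + (mu0 - mu1)^2) / (2 * s1^2) - 1/2."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2.0 * s1 ** 2) - 0.5
```

    The divergence vanishes only when the two densities coincide and is asymmetric in its arguments, which is why such quantities are distances in an information-theoretic rather than metric sense.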

    A symbolic dynamics approach to volatility prediction

    We consider the problem of predicting the direction of daily volatility changes in the Dow Jones Industrial Average (DJIA). This is accomplished by quantizing a series of historical volatility changes into a symbolic stream over 2 or 4 symbols. We compare the predictive performance of classical fixed-order Markov models with that of a novel approach to variable-memory-length prediction (called the prediction fractal machine, or PFM), which is able to select very specific deep prediction contexts whenever there is sufficient support for them in the training data. We learn that daily volatility changes of the DJIA exhibit only rather shallow finite-memory structure. On the other hand, a careful selection of quantization cut values can strongly enhance the predictive power of symbolic schemes. Results on 12 non-overlapping epochs of the DJIA strongly suggest that PFMs can outperform both traditional Markov models and (continuous-valued) GARCH models in the task of predicting volatility one time step ahead. (author's abstract) Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
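    The quantization step and the fixed-order Markov baseline can be sketched in a few lines; the PFM's variable-length context selection is beyond this sketch, and the cut value and symbols are illustrative choices:

```python
from collections import Counter

def quantize(changes, cut=0.0):
    """2-symbol quantization of volatility changes: 'u' if above the cut,
    'd' otherwise. The paper stresses that the cut value matters a lot."""
    return ''.join('u' if c > cut else 'd' for c in changes)

def markov_predict(train, context):
    """Fixed-order Markov prediction: the most frequent symbol following
    the given length-k context in the training stream (None if unseen)."""
    k = len(context)
    counts = Counter(train[i + k] for i in range(len(train) - k)
                     if train[i:i + k] == context)
    return counts.most_common(1)[0][0] if counts else None
```

    A PFM generalizes this by keeping short contexts where data is sparse and extending to deep contexts only where the training stream supports them.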