A Multiple Indicators Model for Volatility Using Intra-Daily Data
Many ways exist to measure and model financial asset volatility. In principle, as the frequency of the data increases, the quality of forecasts should improve. Yet, there is no consensus about a 'true' or 'best' measure of volatility. In this paper we propose to jointly consider absolute daily returns, daily high-low range and daily realized volatility to develop a forecasting model based on their conditional dynamics. As all are non-negative series, we develop a multiplicative error model that is consistent and asymptotically normal under a wide range of specifications for the error density function. The estimation results show significant interactions between the indicators. We also show that one-month-ahead forecasts match well (both in and out of sample) the market-based volatility measure provided by an average of implied volatilities of index options as measured by VIX.
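The multiplicative error structure underlying the abstract can be illustrated with a minimal univariate simulation; the parameter values and the unit-mean gamma error density below are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Univariate MEM sketch: x_t = mu_t * eps_t, with a GARCH-like recursion
# for the conditional scale mu_t and a unit-mean gamma innovation.
omega, alpha, beta = 0.1, 0.2, 0.7   # hypothetical parameter values
shape = 4.0                          # gamma shape; scale 1/shape => E[eps] = 1

T = 1000
x = np.empty(T)
mu = np.empty(T)
mu[0] = omega / (1.0 - alpha - beta)  # unconditional mean as a start value
x[0] = mu[0] * rng.gamma(shape, 1.0 / shape)

for t in range(1, T):
    mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    x[t] = mu[t] * rng.gamma(shape, 1.0 / shape)

# The simulated series is non-negative and clusters, like realized volatility.
print(x.min() >= 0.0, round(x.mean(), 3))
```

The same recursion applies to any of the non-negative indicators in the abstract; the paper's model couples several such equations through their conditional dynamics.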
Vector Multiplicative Error Models: Representation and Inference
The Multiplicative Error Model introduced by Engle (2002) for positive valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, by taking into consideration the possibility that the vector innovation process be contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate positive valued random variables. We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. Empirical applications on volatility indicators are used to illustrate the gains over the equation-by-equation procedure.
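The copula idea can be sketched in a few lines: correlated Gaussians are pushed through the normal CDF and then through a positive marginal quantile function. The Gaussian copula, the unit-mean exponential marginals (chosen only because their quantile function has a closed form) and the correlation value are assumptions made for illustration, not the paper's specification:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Draw contemporaneously correlated positive innovations via a Gaussian copula.
rho = 0.6
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.normal(size=(5000, 2)) @ L.T                 # correlated N(0,1) pairs
phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
u = phi(z)                                           # uniforms with Gaussian dependence
eps = -np.log1p(-u)                                  # exponential(1) marginals, E[eps] = 1

# The innovations are positive, have mean about one, and are correlated,
# as required for the error vector of a vector MEM.
print(round(eps.mean(), 2), round(np.corrcoef(eps[:, 0], eps[:, 1])[0, 1], 2))
```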
Flexible Tool for Model Building: the Relevant Transformation of the Inputs Network Approach (RETINA)
A new method, called Relevant Transformation of the Inputs Network Approach (RETINA), is proposed as a tool for model building. It is designed around flexibility (with nonlinear transformations of the predictors of interest), selective search within the range of possible models, out-of-sample forecasting ability and computational simplicity. In tests on simulated data, it shows both a high rate of successful retrieval of the DGP, which increases with the sample size, and good performance relative to alternative procedures. A telephone service demand model is built to show how the procedure applies to real data.
Keywords: Relevant Transformation of the Inputs Network Approach (RETINA), economics models
Copycats and Common Swings: The Impact of the Use of Forecasts in Information Sets
This paper presents evidence, using data from Consensus Forecasts, that there is an "attraction" to conform to the mean forecasts; in other words, views expressed by other forecasters in the previous period influence individuals' current forecast. The paper then discusses--and provides further evidence on--two important implications of this finding. The first is that the forecasting performance of these groups may be severely affected by the detected imitation behavior and lead to convergence to a value that is not the "right" target. Second, since the forecasts are not independent, the common practice of using the standard deviation from the forecasts' distribution, as if they were standard errors of the estimated mean, is not warranted. Copyright 2002, International Monetary Fund
A Flexible Tool for Model Building: the Relevant Transformation of the Inputs Network Approach (RETINA)
A new method, called relevant transformation of the inputs network approach (RETINA), is proposed as a tool for model building and selection. It is designed to improve some of the shortcomings of neural networks. It has the flexibility of neural network models, the concavity of the likelihood in the weights of the usual likelihood models, and the ability to identify a parsimonious set of attributes that are likely to be relevant for predicting out-of-sample outcomes. RETINA expands the range of models by considering transformations of the original inputs; it splits the sample into three disjoint subsamples, sorts the candidate regressors by a saliency feature, chooses the models in subsample 1, uses subsample 2 for parameter estimation and subsample 3 for cross-validation. It is modular, can be used as a data exploratory tool and is computationally feasible on personal computers. In tests on simulated data, it achieves high rates of success when the sample size or the R2 are large enough. As our experiments show, it is superior to alternative procedures such as the non-negative garrote and forward and backward stepwise regression.
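The sample-splitting logic described in the abstract can be sketched on toy data. The transformation set (pairwise products), the saliency measure (absolute correlation with the outcome) and the greedy selection rule below are simplified stand-ins for RETINA's actual choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the outcome depends nonlinearly on two of three inputs.
n = 300
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] * X[:, 1] + rng.normal(scale=0.5, size=n)

# Flexibility: expand the inputs with pairwise products, a simple
# stand-in for RETINA's transformations of the original inputs.
cols = [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
Z = np.column_stack([X] + cols)

# Split the sample into three disjoint subsamples.
i1, i2, i3 = np.split(rng.permutation(n), [n // 3, 2 * n // 3])

# Saliency: rank candidate regressors by |correlation with y| on subsample 1.
sal = [abs(np.corrcoef(Z[i1, k], y[i1])[0, 1]) for k in range(Z.shape[1])]
order = np.argsort(sal)[::-1]

# Grow the model greedily: estimate on subsample 2, cross-validate on subsample 3.
def cv_mse(keep):
    A2 = np.column_stack([np.ones(len(i2)), Z[np.ix_(i2, keep)]])
    b, *_ = np.linalg.lstsq(A2, y[i2], rcond=None)
    A3 = np.column_stack([np.ones(len(i3)), Z[np.ix_(i3, keep)]])
    return np.mean((y[i3] - A3 @ b) ** 2)

best, best_mse = [], np.inf
for k in order:
    trial = best + [k]
    mse = cv_mse(trial)
    if mse < best_mse:        # keep a regressor only if it helps out of sample
        best, best_mse = trial, mse

print(best, round(best_mse, 3))
```

The point of the split is that selection, estimation and validation each see different data, which is what gives the procedure its out-of-sample orientation.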
The impact of the use of forecasts in information sets
We analyze the properties of multiperiod forecasts which are formulated by a number of companies for a fixed horizon ahead which moves each month one period closer, and are collected and diffused each month by some polling agency. Some descriptive evidence and a formal model suggest that knowing the views expressed by other forecasters in the previous period influences individual current forecasts, in the form of an attraction to conform to the mean forecast. There are two implications: one is that the forecasts polled in a multiperiod framework cannot be seen as independent from one another, and hence the practice of using standard deviations from the forecasts' distribution as if they were standard errors of the estimated mean is not warranted. The second is that the forecasting performance of these groups may be severely affected by the detected imitation behavior and lead to convergence to a value which is not the right target (either the first available figure or some final values available at a later time).
Keywords: multistep forecast, consensus forecast, preliminary data
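A stylized simulation of the attraction-to-the-mean mechanism illustrates both implications; the mixing weight, the common bias and all numbers below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each forecaster mixes a noisy private signal about the target with the
# previous period's consensus (mean) forecast.
n_forecasters, horizon = 30, 12
target = 2.0          # the "right" value, unknown to forecasters
lam = 0.7             # attraction to the previous consensus
bias = 0.5            # common error in the signals (e.g. preliminary data)

f = rng.normal(target + bias, 1.0, n_forecasters)   # initial forecasts
spread = [f.std()]
for _ in range(horizon):
    signal = rng.normal(target + bias, 1.0, n_forecasters)
    f = lam * f.mean() + (1 - lam) * signal
    spread.append(f.std())

# The cross-section shrinks (forecasts are not independent), so its standard
# deviation understates uncertainty, while the consensus settles near the
# biased value rather than the true target.
print(round(spread[0], 2), round(spread[-1], 2), round(f.mean(), 2))
```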
Semiparametric vector MEM
In financial time series analysis we encounter several instances of non-negative valued processes (volumes, trades, durations, realized volatility, daily range, and so on) which exhibit clustering and can be modeled as the product of a vector of conditionally autoregressive scale factors and a multivariate iid innovation process (vector Multiplicative Error Model).
Two novel points are introduced in this paper relative to previous suggestions: a more general specification which sets this vector MEM apart from an equation-by-equation specification; and the adoption of a GMM-based approach which bypasses the complicated issue of specifying a general multivariate non-negative valued innovation process. A vMEM for volumes, number of trades and realized volatility reveals empirical support for a dynamically interdependent pattern of relationships among the variables on a number of NYSE stocks.
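The reason GMM can bypass a full distributional specification can be sketched as follows; the notation here is assumed for illustration ($x_t$ the vector of indicators, $\mu_t$ the conditional scale factors, $\odot$ and $\oslash$ elementwise product and division), not taken verbatim from the paper:

```latex
x_t = \mu_t \odot \varepsilon_t, \qquad
\mathbb{E}\left[\varepsilon_t \mid \mathcal{F}_{t-1}\right] = \mathbf{1}
\;\Rightarrow\;
u_t \equiv x_t \oslash \mu_t - \mathbf{1}, \qquad
\mathbb{E}\left[u_t \mid \mathcal{F}_{t-1}\right] = \mathbf{0},
```

so sample averages of $u_t$ interacted with $\mathcal{F}_{t-1}$-measurable instruments yield moment conditions that identify the scale-factor parameters with no density assumed for $\varepsilon_t$.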