Forecasting in the presence of recent structural change
We examine how to forecast after a recent break. We consider monitoring for change and then combining forecasts from models that do and do not use data from before the change, as well as robust methods, namely rolling regressions, forecast averaging over different windows, and exponentially weighted moving average (EWMA) forecasting. We derive analytical results for the performance of the robust methods relative to a full-sample recursive benchmark. For a location model subject to stochastic breaks, the relative MSFE ranking is EWMA < rolling regression < forecast averaging. No clear ranking emerges under deterministic breaks. In Monte Carlo experiments, forecast averaging improves performance in many cases, with little penalty when changes are small or infrequent. Similar results emerge when we examine a large number of UK and US macroeconomic series.
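A minimal sketch of the competing schemes for a pure location model, simulated here with a single deterministic break; the discount factor, window lengths, and break point are illustrative assumptions, not the paper's settings:

```python
# Compare one-step-ahead MSFEs of a recursive (expanding) mean, a rolling
# mean, forecast averaging over windows, and an EWMA forecast.
import numpy as np

rng = np.random.default_rng(0)

# Location model y_t = mu_t + e_t with a mean shift at t = 300 (assumed).
T = 400
mu = np.where(np.arange(T) < 300, 0.0, 1.0)
y = mu + rng.normal(size=T)

def forecast_errors(y, start=50):
    """Mean squared one-step-ahead errors for each scheme, from `start` on."""
    lam = 0.9                      # EWMA discount (illustrative)
    windows = [20, 40, 80, 160]    # averaging windows (illustrative)
    errs = {"recursive": [], "rolling": [], "averaging": [], "ewma": []}
    for t in range(start, len(y) - 1):
        past = y[: t + 1]
        errs["recursive"].append((y[t + 1] - past.mean()) ** 2)
        errs["rolling"].append((y[t + 1] - past[-40:].mean()) ** 2)
        # Windows longer than the available sample just use all data early on.
        avg = np.mean([past[-w:].mean() for w in windows])
        errs["averaging"].append((y[t + 1] - avg) ** 2)
        w = lam ** np.arange(len(past))[::-1]   # most recent obs weighted most
        errs["ewma"].append((y[t + 1] - np.average(past, weights=w)) ** 2)
    return {k: np.mean(v) for k, v in errs.items()}

print(forecast_errors(y))
```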
Forecasting Exchange Rates with a Large Bayesian VAR
Models based on economic theory have serious difficulty forecasting exchange rates more accurately than simple univariate driftless random walk models, especially at short horizons. Multivariate time series models suffer from the same problem. In this paper, we propose to forecast exchange rates with a large Bayesian VAR (BVAR), using a panel of 33 exchange rates vis-a-vis the US Dollar. Since exchange rates tend to co-move, a large set of them can contain useful information for forecasting. In addition, we adopt a driftless random walk prior, so that cross-dynamics matter for forecasting only if there is strong evidence of them in the data. We produce forecasts for all 33 exchange rates in the panel and show that our model produces systematically better forecasts than a random walk for most of the countries, at any forecast horizon, including 1-step ahead.
Keywords: Exchange Rates, Forecasting, Bayesian VAR
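A stylized sketch of the key idea: shrink the VAR coefficients toward a driftless random walk (identity on own first lag, zeros elsewhere), so cross-dynamics enter only when the data insist. This ridge-style posterior mean is a simplification of the paper's Bayesian prior, and the panel size and tightness are assumptions:

```python
# VAR(1) posterior mean shrunk toward a driftless random walk prior.
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 5                                    # 5 rates instead of 33
y = np.cumsum(rng.normal(size=(T, N)), axis=0)   # random-walk-like log rates

Y, X = y[1:], y[:-1]        # VAR(1) without intercept (rates in logs)
B0 = np.eye(N)              # prior mean: y_t = y_{t-1} + e_t (driftless RW)
lam = 0.2                   # overall prior tightness (assumed)

# Posterior mean under a N(B0, lam^2 I) coefficient prior, unit error variance:
K = X.T @ X + np.eye(N) / lam**2
B_post = np.linalg.solve(K, X.T @ Y + B0 / lam**2)

forecast = y[-1] @ B_post   # 1-step-ahead forecast of all rates
print(forecast)
```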
Forecasting Government Bond Yields with Large Bayesian VARs
We propose a new approach to forecasting the term structure of interest rates, which allows us to efficiently extract the information contained in a large panel of yields. In particular, we use a large Bayesian Vector Autoregression (BVAR) with an optimal amount of shrinkage towards univariate AR models. Focusing on the U.S., we provide an extensive study of the forecasting performance of our proposed model relative to most of the existing alternative specifications. While most of the existing evidence focuses on statistical measures of forecast accuracy, we also evaluate the performance of the alternative forecasts when used within trading schemes or as a basis for portfolio allocation. We extensively check the robustness of our results via subsample analysis and via a data-based Monte Carlo simulation. We find that: i) our proposed BVAR approach produces forecasts systematically more accurate than the random walk forecasts, though the gains are small; ii) some models beat the BVAR for a few selected maturities and forecast horizons, but they perform much worse than the BVAR in the remaining cases; iii) predictive gains with respect to the random walk have decreased over time; iv) different loss functions (i.e., "statistical" vs "economic") lead to different rankings of specific models; v) modelling time variation in term premia is important and useful for forecasting.
Keywords: Bayesian methods, Forecasting, Term Structure
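The paper selects the tightness optimally; as a rough stand-in, the sketch below shrinks toward univariate AR(1) coefficients and picks the tightness from a grid by pseudo out-of-sample 1-step MSFE. The grid, split point, and data are illustrative assumptions:

```python
# Grid search over the shrinkage parameter for a BVAR centered on
# univariate AR(1) models (no cross-dynamics in the prior mean).
import numpy as np

def posterior_mean(Y, X, B0, lam):
    """Ridge-style posterior mean shrinking OLS toward the prior mean B0."""
    N = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(N) / lam**2,
                           X.T @ Y + B0 / lam**2)

def select_lambda(y, grid=(0.05, 0.1, 0.2, 0.5, 1.0), split=0.8):
    Y, X = y[1:], y[:-1]
    # Prior mean: each yield follows its own AR(1), estimated by OLS.
    phi = [np.dot(X[:, i], Y[:, i]) / np.dot(X[:, i], X[:, i])
           for i in range(y.shape[1])]
    B0 = np.diag(phi)
    cut = int(split * len(Y))
    scores = {}
    for lam in grid:
        B = posterior_mean(Y[:cut], X[:cut], B0, lam)
        scores[lam] = np.mean((Y[cut:] - X[cut:] @ B) ** 2)
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=(300, 4)), axis=0) * 0.1  # toy yield panel
print(select_lambda(y))
```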
A Generalised Fractional Differencing Bootstrap for Long Memory Processes
A bootstrap methodology, first proposed in a restricted form by Kapetanios and Papailias (2011) and suitable for use with stationary and nonstationary fractionally integrated time series, is further developed in this paper. The resampling algorithm involves estimating the degree of fractional integration, applying the fractional differencing operator, resampling the resulting approximation to the underlying short memory series and, finally, cumulating to obtain a resample of the original fractionally integrated process. While a similar approach based on differencing has been independently proposed in the literature for stationary fractionally integrated processes using the sieve bootstrap by Poskitt, Grose and Martin (2015), we extend it to allow for general bootstrap schemes including blockwise bootstraps. Further, we show that it can also be validly used for nonstationary fractionally integrated processes. We establish asymptotic validity results for the general method and provide simulation evidence which highlights a number of favourable aspects of its finite sample performance, relative to other commonly used bootstrap methods.
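A minimal sketch of the four steps (estimate d, fractionally difference, block-resample the short-memory series, re-integrate). The GPH log-periodogram estimator, bandwidth, and block length below are simplified choices for illustration, not the paper's exact specification:

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1 - L)^d via its binomial expansion, truncated at sample size."""
    n = len(x)
    pi = np.ones(n)
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([np.dot(pi[: t + 1], x[t::-1]) for t in range(n)])

def gph_estimate(x):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    n = len(x)
    m = int(n ** 0.5)                     # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1 : m + 1]) ** 2 / (2 * np.pi * n)
    slope = np.polyfit(np.log(4 * np.sin(freqs / 2) ** 2), np.log(I), 1)[0]
    return -slope

def fdb_resample(x, rng, block=10):
    d_hat = gph_estimate(x)
    u = frac_diff(x, d_hat)               # approximately short-memory series
    n = len(u)                            # moving block bootstrap of u:
    starts = rng.integers(0, n - block + 1, size=n // block + 1)
    u_star = np.concatenate([u[s : s + block] for s in starts])[:n]
    return frac_diff(u_star, -d_hat)      # re-integrate to get x*

rng = np.random.default_rng(3)
x = frac_diff(rng.normal(size=500), -0.3)   # simulate an I(0.3) series
print(gph_estimate(fdb_resample(x, rng)))   # d re-estimated on a resample
```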
A factor approach to realized volatility forecasting in the presence of finite jumps and cross-sectional correlation in pricing errors
There is a growing literature on forecasting the realized volatility (RV) of asset returns using high-frequency data. We explore the possibility of forecasting RV with factor analysis, once significant jumps are taken into account. An application to real high-frequency financial data suggests that the factor-based approach is of significant potential interest and novelty.
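A minimal sketch of the factor-based step: extract the first principal component from a panel of log-RVs and use it as an extra regressor in a per-asset autoregressive forecast. The simulated panel and the one-factor AR(1) specification are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 300, 10
f = np.abs(np.cumsum(rng.normal(size=T)))            # common volatility level
log_rv = 0.5 * f[:, None] + rng.normal(size=(T, N))  # toy log-RV panel

# First principal component of the standardized panel:
Z = (log_rv - log_rv.mean(0)) / log_rv.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
factor = Z @ Vt[0]                                   # estimated common factor

# Factor-augmented AR(1) forecast for asset 0, fitted by OLS:
y = log_rv[1:, 0]
X = np.column_stack([np.ones(T - 1), log_rv[:-1, 0], factor[:-1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta @ [1.0, log_rv[-1, 0], factor[-1]])       # 1-step-ahead forecast
```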
A stochastic variance factor model for large datasets and an application to S&P data
The aim of this paper is to consider multivariate stochastic volatility models for large dimensional datasets. For this purpose we use a common factor approach along the lines of Harvey, Ruiz, and Shephard (1994). More recently, Bayesian estimation methods, relying on Markov Chain Monte Carlo, have been put forward by Chib, Nardari, and Shephard (2006) to estimate relatively large multivariate stochastic volatility models. However, computational constraints can be binding when dealing with very large datasets such as, e.g., the S&P 500 constituents. For instance, the Bayesian modelling approach put forward by Chib, Nardari, and Shephard (2006) is illustrated by modelling a dataset of only 20 series of stock returns. Recently, Stock and Watson (2002) have shown that principal component estimates of the common factor underlying large datasets can be used successfully in forecasting conditional means. We propose the use of principal component estimation for the volatility processes of large datasets. A Monte Carlo study and an application to the modelling of the volatilities of the S&P constituents illustrate the usefulness of our approach.
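A sketch of the core idea: apply principal components not to the returns but to a volatility proxy, here log squared demeaned returns, to recover the common factor driving the volatilities. The data-generating process and the single-factor choice are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
T, N = 500, 50
h = np.cumsum(rng.normal(scale=0.1, size=T))            # common log-variance
r = np.exp(0.5 * h)[:, None] * rng.normal(size=(T, N))  # factor-SV returns

proxy = np.log((r - r.mean(0)) ** 2 + 1e-12)            # log squared returns
Z = (proxy - proxy.mean(0)) / proxy.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
vol_factor = Z @ Vt[0]                                  # first PC of proxies

# The estimated factor tracks the true common log-variance path (up to sign):
print(abs(np.corrcoef(vol_factor, h)[0, 1]))
```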
Forecasting financial crises and contagion in Asia using dynamic factor analysis
In this paper we use principal components analysis to obtain vulnerability indicators able to predict financial turmoil. Probit modelling through principal components and also stochastic simulation of a Dynamic Factor model are used to produce the corresponding probability forecasts regarding the currency crisis events affecting a number of East Asian countries during the 1997-1998 period. The principal components model improves upon a number of competing models in terms of out-of-sample forecasting performance.
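A minimal sketch of the probit-on-principal-components step: compress a panel of vulnerability indicators into a few PCs, then fit a probit for a binary crisis event. The data are simulated and the names are illustrative placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T, N = 200, 15
common = rng.normal(size=T)                          # latent vulnerability
X = common[:, None] + rng.normal(size=(T, N))        # indicator panel
crisis = (common + 0.5 * rng.normal(size=T) > 1.0).astype(int)

Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
pcs = Z @ Vt[:3].T                                   # first three PCs

res = sm.Probit(crisis, sm.add_constant(pcs)).fit(disp=0)
prob = res.predict(sm.add_constant(pcs))             # in-sample crisis probs
print(prob[-5:])                                     # most recent probabilities
```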
A UK financial conditions index using targeted data reduction: forecasting and structural identification
A financial conditions index (FCI) is designed to summarise the state of financial markets. We construct two with UK data. The first is the first principal component (PC) of a set of financial indicators. The second comes from a new approach taking information from a large set of macroeconomic variables weighted by the joint covariance with a subset of the financial indicators (a set of spreads), using multivariate partial least squares, again using the first factor. The resulting FCIs are broadly similar. They both have some forecasting power for monthly GDP in a quasi-real-time recursive evaluation from 2011-2014 and outperform an FCI produced by Goldman Sachs. A second factor, which may be interpreted as a monetary conditions index, adds further forecast power, while third factors have a mixed effect on performance. The FCIs are used to improve identification of credit supply shocks in an SVAR. Relative to an SVAR excluding an FCI, the main effects on the (adverse) credit shock IRFs are to make the positive impact on inflation more precise and to reveal a larger positive impact on spreads.
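A sketch of the two constructions: (i) the first PC of the financial indicators and (ii) a targeted factor from multivariate partial least squares, weighting a large macro panel by its covariance with a set of spreads. The data are simulated and the variable names are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
T = 150
fin = rng.normal(size=(T, 8))        # financial indicators (incl. spreads)
macro = rng.normal(size=(T, 40))     # large macroeconomic panel
spreads = fin[:, :3]                 # targeting variables for PLS

fci_pc = PCA(n_components=1).fit_transform(fin)[:, 0]    # first FCI

pls = PLSRegression(n_components=1).fit(macro, spreads)  # targeted reduction
fci_pls = pls.x_scores_[:, 0]                            # second FCI

print(np.corrcoef(fci_pc, fci_pls)[0, 1])
```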
Hierarchical Time-Varying Estimation of Asset Pricing Models
This paper presents a new hierarchical methodology for estimating multi-factor dynamic asset pricing models. The approach is loosely based on the sequential Fama–MacBeth approach and developed in a kernel regression framework. However, the methodology uses a very flexible bandwidth selection method which is able to emphasize recent data and information to derive the most appropriate estimates of risk premia and factor loadings at each point in time. The bandwidths and weighting schemes are chosen by a cross-validation procedure; this leads to consistent estimators of the risk premia and factor loadings. Additionally, an out-of-sample forecasting exercise indicates that the hierarchical method leads to a statistically significant improvement in forecast loss function measures, independently of the type of factor considered.
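The paper's hierarchical machinery is considerably richer than anything that fits here, but a stripped-down sketch of locally weighted two-pass estimation conveys the idea. A single factor, a fixed Gaussian bandwidth (standing in for the paper's cross-validated choice), and all names below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
T, N = 250, 20
f = rng.normal(size=T)                               # factor realizations
beta_true = 1.0 + 0.5 * np.sin(np.linspace(0, 6, T))[:, None] \
            + 0.3 * rng.normal(size=N)               # slowly varying betas
r = beta_true * f[:, None] + 0.5 * rng.normal(size=(T, N))

def local_betas(r, f, t, h=20.0):
    """Kernel-weighted OLS beta of each asset at time t (no intercept)."""
    w = np.exp(-0.5 * ((np.arange(len(f)) - t) / h) ** 2)
    fw = w * f
    return (fw @ r) / (fw @ f)

premia = []
for t in range(50, T):
    b = local_betas(r, f, t)                          # first pass: N betas
    X = np.column_stack([np.ones(N), b])
    gamma, *_ = np.linalg.lstsq(X, r[t], rcond=None)  # second pass: premium
    premia.append(gamma[1])
print(np.mean(premia))                                # average risk premium
```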
Forecasting using Bayesian and information theoretic model averaging: an application to UK inflation
In recent years there has been increasing interest in forecasting methods that utilise large datasets, driven partly by the recognition that policymaking institutions need to process large quantities of information. Factor analysis is one popular way of doing this. Forecast combination is another, and it is on this that we concentrate. Bayesian model averaging methods have been widely advocated in this area, but a neglected frequentist approach is to use information theoretic based weights. We consider the use of model averaging in forecasting UK inflation with a large dataset from this perspective. We find that an information theoretic model averaging scheme can be a powerful alternative both to the more widely used Bayesian model averaging scheme and to factor models.
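A minimal sketch of information-theoretic weighting: combine forecasts from competing models with Akaike weights, w_m proportional to exp(-AIC_m / 2). The AR models and toy series below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
y = 0.2 + np.convolve(rng.normal(size=201), [1, 0.5])[:201]  # toy series

def fit_ar(y, p):
    """OLS AR(p): returns the 1-step forecast and the model's AIC."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))]
                        + [y[p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / len(Y)
    aic = len(Y) * np.log(sigma2) + 2 * (p + 1)
    x_new = np.concatenate([[1.0], y[-1:-p - 1:-1]])  # most recent p lags
    return x_new @ beta, aic

forecasts, aics = zip(*(fit_ar(y, p) for p in (1, 2, 4)))
w = np.exp(-0.5 * (np.array(aics) - min(aics)))       # Akaike weights
w /= w.sum()
print(np.dot(w, forecasts))                           # combined forecast
```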