
    Forecasting Government Bond Yields with Large Bayesian VARs

    We propose a new approach to forecasting the term structure of interest rates which allows us to efficiently extract the information contained in a large panel of yields. In particular, we use a large Bayesian Vector Autoregression (BVAR) with an optimal amount of shrinkage towards univariate AR models. Focusing on the U.S., we provide an extensive study of the forecasting performance of our proposed model relative to most of the existing alternative specifications. While most of the existing evidence focuses on statistical measures of forecast accuracy, we also evaluate the performance of the alternative forecasts when used within trading schemes or as a basis for portfolio allocation. We extensively check the robustness of our results via subsample analysis and via a data-based Monte Carlo simulation. We find that: i) our proposed BVAR approach produces forecasts systematically more accurate than the random walk forecasts, though the gains are small; ii) some models beat the BVAR for a few selected maturities and forecast horizons, but they perform much worse than the BVAR in the remaining cases; iii) predictive gains with respect to the random walk have decreased over time; iv) different loss functions (i.e., "statistical" vs "economic") lead to different rankings of specific models; v) modelling time variation in term premia is important and useful for forecasting. Keywords: Bayesian methods, Forecasting, Term Structure.
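    The shrinkage scheme described above can be illustrated with a ridge-type posterior that pulls the VAR coefficients towards univariate AR models. Below is a minimal sketch, assuming a single shrinkage scalar `lam` and a prior mean of `prior_own_lag` on each variable's own first lag; the function names and prior layout are illustrative, not the paper's exact specification.

```python
# Minimal sketch of a Bayesian VAR whose prior shrinks each equation
# towards a univariate AR model, in the spirit of a Minnesota prior.
# `lam` and `prior_own_lag` are illustrative assumptions.
import numpy as np

def bvar_posterior_mean(Y, p=1, lam=0.1, prior_own_lag=1.0):
    """Posterior mean of VAR(p) coefficients under a normal prior centred
    on univariate AR models (own first lag = prior_own_lag, all other
    coefficients = 0). lam controls the amount of shrinkage:
    lam -> 0 recovers the univariate ARs, lam -> inf recovers OLS."""
    T, n = Y.shape
    # Regressors: [1, y_{t-1}, ..., y_{t-p}] for t = p, ..., T-1
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j - 1:T - j - 1] for j in range(p)])
    y = Y[p:]
    k = X.shape[1]
    B0 = np.zeros((k, n))                     # prior mean
    B0[1:n + 1] = prior_own_lag * np.eye(n)   # own first lag only
    prior_prec = np.eye(k) / lam              # tighter prior as lam falls
    # Ridge-type posterior mean shrinking OLS towards B0
    B_post = np.linalg.solve(X.T @ X + prior_prec,
                             X.T @ y + prior_prec @ B0)
    return B_post

def forecast_one_step(Y, B_post, p=1):
    """One-step-ahead forecast of all yields from the posterior mean."""
    x = np.concatenate([[1.0]] + [Y[-j - 1] for j in range(p)])
    return x @ B_post
```

    As `lam` shrinks towards zero the posterior collapses to the univariate AR prior, while a large `lam` approaches equation-by-equation OLS; that trade-off is what the abstract's "optimal amount of shrinkage" governs.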

    A Generalised Fractional Differencing Bootstrap for Long Memory Processes

    A bootstrap methodology, first proposed in a restricted form by Kapetanios and Papailias (2011), suitable for use with stationary and nonstationary fractionally integrated time series is further developed in this paper. The resampling algorithm involves estimating the degree of fractional integration, applying the fractional differencing operator, resampling the resulting approximation to the underlying short-memory series and, finally, cumulating to obtain a resample of the original fractionally integrated process. While a similar approach based on differencing has been independently proposed in the literature for stationary fractionally integrated processes using the sieve bootstrap by Poskitt, Grose and Martin (2015), we extend it to allow for general bootstrap schemes, including blockwise bootstraps. Further, we show that it can also be validly used for nonstationary fractionally integrated processes. We establish asymptotic validity results for the general method and provide simulation evidence which highlights a number of favourable aspects of its finite sample performance relative to other commonly used bootstrap methods.
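    The resampling algorithm lends itself to a compact sketch: estimate d, fractionally difference, block-resample the (approximately) short-memory series, and cumulate back. In the Python sketch below, the GPH log-periodogram estimator of d and a moving-block resampler are common stand-ins, not necessarily the exact choices studied in the paper.

```python
# Minimal sketch of the differencing-based bootstrap for a fractionally
# integrated series. The GPH estimator and the moving-block scheme are
# illustrative choices; block length 20 is arbitrary.
import numpy as np

def frac_diff(x, d):
    """Apply (1 - L)^d via its binomial expansion coefficients."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([pi[:t + 1] @ x[t::-1] for t in range(n)])

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    n = len(x)
    m = m or int(n ** 0.5)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(X, np.log(I), 1)[0]
    return -slope

def frac_diff_bootstrap(x, block_len=20, rng=None):
    """One bootstrap resample of a fractionally integrated series."""
    rng = rng or np.random.default_rng()
    d_hat = gph_estimate(x)
    u = frac_diff(x, d_hat)              # approximately short memory
    n = len(u)
    # Moving-block bootstrap of the short-memory residuals
    starts = rng.integers(0, n - block_len + 1, size=n // block_len + 1)
    u_star = np.concatenate([u[s:s + block_len] for s in starts])[:n]
    return frac_diff(u_star, -d_hat)     # cumulate back to an I(d) path
```

    Replacing the moving-block step with any other valid resampler for short-memory series gives the "general bootstrap schemes" the abstract refers to.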

    A stochastic variance factor model for large datasets and an application to S&P data

    The aim of this paper is to consider multivariate stochastic volatility models for large dimensional datasets. For this purpose we use a common factor approach along the lines of Harvey, Ruiz, and Shephard (1994). More recently, Bayesian estimation methods relying on Markov Chain Monte Carlo have been put forward by Chib, Nardari, and Shephard (2006) to estimate relatively large multivariate stochastic volatility models. However, computational constraints can be binding when dealing with very large datasets such as the S&P 500 constituents. For instance, the Bayesian modelling approach put forward by Chib, Nardari, and Shephard (2006) is illustrated by modelling a dataset of only 20 series of stock returns. Recently, Stock and Watson (2002) have shown that principal component estimates of the common factor underlying large datasets can be used successfully in forecasting conditional means. We propose the use of principal component estimation for the volatility processes of large datasets. A Monte Carlo study and an application to modelling the volatilities of the S&P constituents illustrate the usefulness of our approach.
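    A minimal sketch of the proposal, assuming the usual Harvey-Ruiz-Shephard linearisation log r_t^2 = h_t + log eps_t^2: apply principal components to the panel of log squared returns and read the leading components as common log-volatility factors. The small offset guarding against log(0) and the single-factor default are illustrative choices.

```python
# Minimal sketch: principal-component estimation of common volatility
# factors from a large panel of returns. The offset and n_factors are
# illustrative assumptions, not the paper's exact settings.
import numpy as np

def pc_volatility_factors(returns, n_factors=1, offset=1e-8):
    """returns: (T, N) panel of returns. Returns (T, n_factors) factor
    estimates and (N, n_factors) loadings, from PCA on the demeaned
    log squared returns (the linearised SV measurement equation)."""
    y = np.log(returns ** 2 + offset)
    y = y - y.mean(axis=0)
    # Principal components via SVD of the demeaned panel
    U, S, Vt = np.linalg.svd(y, full_matrices=False)
    factors = U[:, :n_factors] * S[:n_factors]   # common log-vol factors
    loadings = Vt[:n_factors].T
    return factors, loadings
```

    Because the factors are extracted by SVD rather than MCMC, the cost grows gently with the number of series, which is the point of the approach for panels as large as the S&P 500 constituents.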

    Forecasting financial crises and contagion in Asia using dynamic factor analysis

    In this paper we use principal components analysis to obtain vulnerability indicators able to predict financial turmoil. Probit modelling through principal components and also stochastic simulation of a Dynamic Factor model are used to produce the corresponding probability forecasts regarding the currency crisis events affecting a number of East Asian countries during the 1997-1998 period. The principal components model improves upon a number of competing models in terms of out-of-sample forecasting performance.
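    The probit-through-principal-components step can be sketched as follows: standardise the indicator panel, extract the leading components, and fit a probit of the crisis dummy on them. The statsmodels Probit class is used for illustration; the number of components and the variable names are assumptions.

```python
# Minimal sketch of a crisis-probability probit on principal components.
# n_pcs=2 and all names are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

def crisis_probit(indicators, crisis, n_pcs=2):
    """indicators: (T, N) panel of vulnerability indicators;
    crisis: (T,) array of 0/1 crisis events.
    Returns a fitted probit of crisis events on the leading PCs."""
    X = (indicators - indicators.mean(0)) / indicators.std(0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_pcs] * S[:n_pcs]
    model = sm.Probit(crisis, sm.add_constant(pcs))
    return model.fit(disp=0)   # .predict() gives crisis probabilities
```

    An out-of-sample exercise of the kind evaluated in the paper would re-estimate the components and the probit recursively, predicting one period ahead each time.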

    A UK financial conditions index using targeted data reduction: forecasting and structural identification

    A financial conditions index (FCI) is designed to summarise the state of financial markets. We construct two with UK data. The first is the first principal component (PC) of a set of financial indicators. The second comes from a new approach taking information from a large set of macroeconomic variables weighted by their joint covariance with a subset of the financial indicators (a set of spreads), using multivariate partial least squares, again taking the first factor. The resulting FCIs are broadly similar. Both have some forecasting power for monthly GDP in a quasi-real-time recursive evaluation from 2011 to 2014 and outperform an FCI produced by Goldman Sachs. A second factor, which may be interpreted as a monetary conditions index, adds further forecasting power, while third factors have a mixed effect on performance. The FCIs are used to improve identification of credit supply shocks in an SVAR. Relative to an SVAR excluding an FCI, the main effects on the IRFs of an (adverse) credit shock are to make the positive impact on inflation more precise and to reveal an increased positive impact on spreads.
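    The targeted-reduction step can be sketched with off-the-shelf multivariate partial least squares: the macro panel is weighted by its covariance with the block of spreads, and the first PLS factor serves as the FCI. sklearn's PLSRegression stands in here for whatever exact PLS variant the paper uses, and the variable names are assumptions.

```python
# Minimal sketch of the targeted-reduction FCI via multivariate PLS.
# sklearn's PLSRegression (NIPALS) is an illustrative stand-in.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def targeted_fci(macro_panel, spreads, n_factors=1):
    """macro_panel: (T, N) array of macroeconomic variables;
    spreads: (T, K) array of financial spreads used as targets.
    Returns the (T, n_factors) PLS scores; column 0 is the FCI."""
    pls = PLSRegression(n_components=n_factors, scale=True)
    pls.fit(macro_panel, spreads)
    return pls.transform(macro_panel)
```

    Setting n_factors=2 would also return the second factor, the one the abstract suggests behaves like a monetary conditions index.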

    Hierarchical Time-Varying Estimation of Asset Pricing Models

    This paper presents a new hierarchical methodology for estimating multi-factor dynamic asset pricing models. The approach is loosely based on the sequential Fama–MacBeth approach and developed in a kernel regression framework. However, the methodology uses a very flexible bandwidth selection method which is able to emphasize recent data and information to derive the most appropriate estimates of risk premia and factor loadings at each point in time. The choice of bandwidths and weighting schemes is made by a cross-validation procedure; this leads to consistent estimators of the risk premia and factor loadings. Additionally, an out-of-sample forecasting exercise indicates that the hierarchical method leads to a statistically significant improvement in forecast loss function measures, independently of the type of factor considered.
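    A minimal sketch of one kernel-weighted Fama–MacBeth pass, assuming a one-sided exponential kernel: weighted time-series regressions give each asset's loadings at time t from data up to t-1, a cross-sectional regression of time-t returns on those loadings gives the period-t premia, and the bandwidth is picked by cross-validating out-of-sample pricing errors over a small grid. The kernel, the grid, and the burn-in are assumptions, not the paper's hierarchical specification.

```python
# Minimal sketch of a kernel-weighted, time-varying Fama-MacBeth pass.
# The exponential kernel and the bandwidth grid are illustrative.
import numpy as np

def kernel_betas(R, F, t, h):
    """Weighted time-series regression of asset returns R (T, N) on
    factors F (T, K) using data up to time t, with one-sided
    exponential weights so that recent observations count more."""
    w = np.exp(-(t - np.arange(t + 1)) / h)
    X = np.hstack([np.ones((t + 1, 1)), F[:t + 1]]) * np.sqrt(w)[:, None]
    Y = R[:t + 1] * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef[1:].T                    # (N, K) loadings at time t

def risk_premia_at_t(R, F, t, h):
    """Cross-sectional regression of time-t returns on kernel betas
    estimated from data up to t-1."""
    B = kernel_betas(R, F, t - 1, h)
    X = np.hstack([np.ones((B.shape[0], 1)), B])
    lam, *_ = np.linalg.lstsq(X, R[t], rcond=None)
    return lam[1:]                       # (K,) period-t risk premia

def cv_bandwidth(R, F, grid=(10, 20, 40, 80), t0=60):
    """Pick h by minimising out-of-sample squared pricing errors."""
    def loss(h):
        sse = 0.0
        for t in range(t0, R.shape[0]):
            B = kernel_betas(R, F, t - 1, h)
            X = np.hstack([np.ones((B.shape[0], 1)), B])
            lam, *_ = np.linalg.lstsq(X, R[t], rcond=None)
            resid = R[t] - X @ lam
            sse += resid @ resid
        return sse
    return min(grid, key=loss)
```

    The cross-validation step is what makes the bandwidth data-driven: a short bandwidth tracks fast-moving loadings, a long one reduces noise, and the pricing-error criterion arbitrates between the two.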