    Examining the Nelson-Siegel Class of Term Structure Models

    In this paper I examine various extensions of the Nelson and Siegel (1987) model with the purpose of fitting and forecasting the term structure of interest rates. As expected, I find that using more flexible models leads to a better in-sample fit of the term structure. However, I show that the out-of-sample predictability improves as well. The four-factor model, which adds a second slope factor to the three-factor Nelson-Siegel model, forecasts particularly well. Especially with a one-step state-space estimation approach, the four-factor model produces accurate forecasts and outperforms competitor models across maturities and forecast horizons. Subsample analysis shows that this outperformance is also consistent over time.
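
    A minimal sketch of the yield curve implied by these models, assuming the standard Nelson and Siegel (1987) loadings; the four-factor variant below is only an illustrative guess at what "a second slope factor" with its own decay parameter could look like, since the abstract does not spell out the exact specification.

        import numpy as np

        def nelson_siegel(tau, beta1, beta2, beta3, lam):
            """Three-factor Nelson-Siegel (1987) yield at maturity tau (in years).
            beta1: level, beta2: slope, beta3: curvature, lam: decay parameter."""
            x = lam * tau
            slope = (1 - np.exp(-x)) / x
            curvature = slope - np.exp(-x)
            return beta1 + beta2 * slope + beta3 * curvature

        # Hypothetical four-factor variant: a second slope factor with its own
        # decay rate lam2 (not necessarily the paper's exact loading).
        def four_factor(tau, beta1, beta2, beta3, beta4, lam1, lam2):
            x1, x2 = lam1 * tau, lam2 * tau
            slope1 = (1 - np.exp(-x1)) / x1
            slope2 = (1 - np.exp(-x2)) / x2
            curvature = slope1 - np.exp(-x1)
            return beta1 + beta2 * slope1 + beta3 * curvature + beta4 * slope2

        maturities = np.array([0.25, 1, 2, 5, 10, 30])
        print(nelson_siegel(maturities, 4.0, -2.0, 1.5, 0.6))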

    Predicting the term structure of interest rates incorporating parameter uncertainty, model uncertainty and macroeconomic information

    We forecast the term structure of U.S. Treasury zero-coupon bond yields by analyzing a range of models that have been used in the literature. We assess the relevance of parameter uncertainty by examining the added value of using Bayesian inference compared to frequentist estimation techniques, and model uncertainty by combining forecasts from individual models. Following the current literature, we also investigate the benefits of incorporating macroeconomic information in yield curve models. Our results show that adding macroeconomic factors is very beneficial for improving the out-of-sample forecasting performance of individual models. Despite this, the predictive accuracy of models varies considerably over time, irrespective of using the Bayesian or frequentist approach. We show that mitigating model uncertainty by combining forecasts leads to substantial gains in forecasting performance, especially when applying Bayesian model averaging.
    Keywords: Term structure of interest rates; Nelson-Siegel model; Affine term structure model; forecast combination; Bayesian analysis.
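
    As a rough illustration of the forecast combination idea in this abstract, the sketch below averages yield forecasts from several models with weights proportional to exponentiated predictive log scores, a BMA-style weighting; the models, scores and weighting rule are assumptions for illustration, not the paper's exact procedure.

        import numpy as np

        def combine_forecasts(forecasts, log_scores=None):
            """Combine model forecasts of a yield.
            forecasts: array of shape (n_models,), one forecast per model.
            log_scores: optional array of cumulative log predictive likelihoods;
            if given, weights are proportional to exp(log_scores) (a BMA-style
            weighting); otherwise equal weights are used."""
            forecasts = np.asarray(forecasts, dtype=float)
            if log_scores is None:
                weights = np.full(len(forecasts), 1.0 / len(forecasts))
            else:
                s = np.asarray(log_scores, dtype=float)
                s = s - s.max()                      # stabilize the exponentials
                weights = np.exp(s) / np.exp(s).sum()
            return weights @ forecasts, weights

        # Hypothetical example: three models forecasting the 10-year yield (percent).
        yhat, w = combine_forecasts([4.1, 3.8, 4.4], log_scores=[-10.2, -9.5, -11.0])
        print(yhat, w)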

    Modeling and Forecasting Stock Return Volatility and the Term Structure of Interest Rates

    This dissertation consists of a collection of studies on two topics: stock return volatility and the term structure of interest rates. _Part A_ consists of three studies and contributes to the literature on modeling and forecasting financial market volatility. In this part we first discuss how to apply CUSUM tests to identify structural changes in the level of volatility. The main focus of Part A is, however, on the use of high-frequency intraday return data to measure the volatility of individual asset returns as well as the correlations between asset returns. A nonlinear long-memory model for realized volatility is developed and shown to accurately forecast future volatility. Furthermore, we show that daily covariance matrix estimates based on intraday return data are of economic significance to an investor. We investigate the optimal intraday sampling frequency for constructing estimates of the daily covariance matrix and find that it is substantially lower than the commonly used 5-minute frequency. _Part B_ consists of two studies and investigates the modeling and forecasting of the term structure of interest rates. In the first study we examine the class of Nelson-Siegel models for their in-sample fit and out-of-sample forecasting performance. We show that a four-factor model performs well in both areas. In the second study we analyze the forecasting performance of a panel of term structure models. We show that the performance varies substantially across models and subperiods. To mitigate model uncertainty we therefore analyze forecast combination techniques and find that combined forecasts are consistently accurate over time.
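
    A minimal sketch of the realized-variance idea underlying Part A, assuming intraday log prices recorded on a regular grid; the grid length, sampling steps and simulated prices are purely illustrative.

        import numpy as np

        def realized_variance(log_prices, step):
            """Daily realized variance from intraday log prices.
            log_prices: 1-D array of intraday log prices for one trading day,
            recorded on a regular grid (e.g., one-minute marks).
            step: sampling interval in grid units (e.g., step=5 for 5-minute
            returns on a one-minute grid). Returns the sum of squared returns."""
            sampled = log_prices[::step]
            returns = np.diff(sampled)
            return np.sum(returns ** 2)

        # Illustrative data: 391 one-minute log prices for a single day.
        rng = np.random.default_rng(0)
        p = np.cumsum(rng.normal(0, 0.0005, 391))
        print(realized_variance(p, step=5))    # 5-minute sampling
        print(realized_variance(p, step=30))   # 30-minute sampling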

    Testing for changes in volatility in heteroskedastic time series - a further examination

    We consider tests for sudden changes in the unconditional volatility of conditionally heteroskedastic time series based on cumulative sums of squares. When applied to the original series, these tests suffer from severe size distortions: the correct null hypothesis of no volatility change is rejected much too frequently. Applying the tests to standardized residuals from an estimated GARCH model results in good size and reasonable power properties when testing for a single break in the variance. The tests also appear to be robust to different types of misspecification. An iterative algorithm is designed to test sequentially for the presence of multiple changes in volatility. An application to emerging markets stock returns clearly illustrates the properties of the different test statistics.
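
    A minimal sketch of a CUSUM-of-squares statistic in the spirit of the tests discussed here (an Inclan-Tiao-type statistic applied, as the abstract recommends, to standardized GARCH residuals); the critical value quoted in the comment and the simulated data are illustrative assumptions.

        import numpy as np

        def cusum_of_squares(series):
            """CUSUM-of-squares statistic for a change in the unconditional variance.
            series: residuals (ideally standardized residuals from a fitted GARCH
            model). Returns the test statistic and the most likely break index."""
            e2 = np.asarray(series, dtype=float) ** 2
            T = len(e2)
            C = np.cumsum(e2)
            k = np.arange(1, T + 1)
            D = C / C[-1] - k / T
            stat = np.sqrt(T / 2.0) * np.max(np.abs(D))
            return stat, int(np.argmax(np.abs(D))) + 1

        # Illustrative series with a variance break halfway through.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
        stat, k_hat = cusum_of_squares(x)
        print(stat, k_hat)   # compare stat with the asymptotic 5% value of about 1.36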

    Predicting the Daily Covariance Matrix for S&P 100 Stocks Using Intraday Data - But Which Frequency To Use?

    This paper investigates the merits of high-frequency intraday data when forming minimum variance portfolios and minimum tracking error portfolios with daily rebalancing from the individual constituents of the S&P 100 index. We focus on the issue of determining the optimal sampling frequency, which strikes a balance between the variance and the bias in covariance matrix estimates caused by market microstructure effects such as non-synchronous trading and bid-ask bounce. The optimal sampling frequency typically ranges between 30 and 65 minutes, considerably lower than the popular five-minute frequency. We also examine how bias-correction procedures, based on the addition of leads and lags and on scaling, and a variance-reduction technique, based on subsampling, affect performance.
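
    A minimal sketch of the building blocks discussed here: a realized covariance matrix from intraday returns sampled at a chosen frequency, and the global minimum-variance weights computed from it. The asset count, return grid and simulated data are assumptions for illustration, and the bias corrections and subsampling mentioned in the abstract are not shown.

        import numpy as np

        def realized_covariance(intraday_returns):
            """Realized covariance matrix from intraday returns.
            intraday_returns: array of shape (n_intervals, n_assets) holding
            returns sampled at the chosen frequency (e.g., 30- to 65-minute
            intervals, the range the paper finds optimal).
            Returns the (n_assets, n_assets) sum of outer products."""
            r = np.asarray(intraday_returns, dtype=float)
            return r.T @ r

        def min_variance_weights(cov):
            """Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)."""
            ones = np.ones(cov.shape[0])
            x = np.linalg.solve(cov, ones)
            return x / x.sum()

        # Illustrative data: 13 half-hour return vectors for 5 assets on one day.
        rng = np.random.default_rng(2)
        r = rng.normal(0, 0.002, size=(13, 5))
        S = realized_covariance(r)
        print(min_variance_weights(S))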

    Gibbs sampling in econometric practice

    We present a road map for the effective application of Bayesian analysis to a class of well-known dynamic econometric models by means of the Gibbs sampling algorithm. Members of this class are the Cochrane-Orcutt model for serial correlation, the Koyck distributed lag model, the Unit Root model and, as Hierarchical Linear Mixed Models, the State-Space model and the Panel Data model. We discuss issues involved when drawing Bayesian inference on equation parameters and variance components, and show that one should carefully scan the shape of the criterion function for irregularities before applying the Gibbs sampler. Analytical, graphical and empirical results are used along the way.
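
    A minimal sketch of a Gibbs sampler for one member of this class, the Cochrane-Orcutt regression with AR(1) errors, assuming flat priors on the regression and autocorrelation parameters and a 1/sigma^2 prior on the error variance; these priors and the simulated data are assumptions for illustration, not the paper's exact setup.

        import numpy as np

        def gibbs_ar1_regression(y, X, n_draws=5000, seed=0):
            """Gibbs sampler for y_t = x_t' beta + u_t with AR(1) errors
            u_t = rho * u_{t-1} + eps_t, eps_t ~ N(0, sigma2)."""
            rng = np.random.default_rng(seed)
            rho, sigma2 = 0.0, 1.0
            draws = []
            for _ in range(n_draws):
                # 1. beta | rho, sigma2: GLS on the quasi-differenced data.
                ys = y[1:] - rho * y[:-1]
                Xs = X[1:] - rho * X[:-1]
                XtX = Xs.T @ Xs
                mean = np.linalg.solve(XtX, Xs.T @ ys)
                beta = rng.multivariate_normal(mean, sigma2 * np.linalg.inv(XtX))
                # 2. rho | beta, sigma2: regression of u_t on u_{t-1}.
                u = y - X @ beta
                denom = np.sum(u[:-1] ** 2)
                rho_hat = np.sum(u[1:] * u[:-1]) / denom
                rho = rng.normal(rho_hat, np.sqrt(sigma2 / denom))
                # 3. sigma2 | beta, rho: inverse gamma from the AR(1) residuals.
                eps = u[1:] - rho * u[:-1]
                sigma2 = 1.0 / rng.gamma(len(eps) / 2.0, 2.0 / np.sum(eps ** 2))
                draws.append((beta, rho, sigma2))
            return draws

        # Illustrative data generated from the model itself.
        rng = np.random.default_rng(3)
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        u = np.zeros(n)
        for t in range(1, n):
            u[t] = 0.7 * u[t - 1] + rng.normal(scale=0.5)
        y = X @ np.array([1.0, 2.0]) + u
        out = gibbs_ar1_regression(y, X, n_draws=2000)
        print(np.mean([d[1] for d in out[500:]]))   # posterior mean of rho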

    The Liquidity Effects of Official Bond Market Intervention

    Bayesian near-boundary analysis in basic macroeconomic time series models

    Several lessons learnt from a Bayesian analysis of basic macroeconomic time series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random-walk models and to clustering of several economic series into a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time series models, such as unit root and error correction models as well as state space and panel data models, are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
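
    A minimal sketch of the model-averaging idea for a parameter near the boundary, assuming an AR(1) with rho uniform on [0, 1) averaged against the unit-root model using approximate marginal likelihoods; the plug-in error variance, the grid approximation and the simulated series are simplifications for illustration, not the paper's procedure.

        import numpy as np

        def ar1_loglik(y, rho, sigma2):
            """Log likelihood of y_t = rho*y_{t-1} + eps_t, eps_t ~ N(0, sigma2),
            conditioning on the first observation."""
            e = y[1:] - rho * y[:-1]
            n = len(e)
            return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(e ** 2) / sigma2

        def near_boundary_bma(y, sigma2, grid_size=2001):
            """Average a stationary AR(1) (rho uniform on [0, 1)) and the unit-root
            model (rho = 1) with weights from approximate marginal likelihoods;
            sigma2 is plugged in rather than integrated out, a simplification.
            Returns the combined one-step forecast and the stationary model weight."""
            rho_grid = np.linspace(0.0, 1.0, grid_size, endpoint=False)
            loglik = np.array([ar1_loglik(y, r, sigma2) for r in rho_grid])
            # Log marginal likelihood of the stationary model: grid average of the likelihood.
            shift = loglik.max()
            log_ml_stat = shift + np.log(np.mean(np.exp(loglik - shift)))
            log_ml_unit = ar1_loglik(y, 1.0, sigma2)
            p_stat = 1.0 / (1.0 + np.exp(log_ml_unit - log_ml_stat))   # equal prior model odds
            post_rho = np.exp(loglik - shift)
            post_rho /= post_rho.sum()
            forecast_stat = np.sum(post_rho * rho_grid) * y[-1]
            return p_stat * forecast_stat + (1.0 - p_stat) * y[-1], p_stat

        # Illustrative near-unit-root series.
        rng = np.random.default_rng(4)
        y = np.zeros(300)
        for t in range(1, 300):
            y[t] = 0.97 * y[t - 1] + rng.normal(scale=0.5)
        print(near_boundary_bma(y, sigma2=0.25))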