10 research outputs found

    Examining the Nelson-Siegel Class of Term Structure Models

    In this paper I examine various extensions of the Nelson and Siegel (1987) model for fitting and forecasting the term structure of interest rates. As expected, I find that more flexible models lead to a better in-sample fit of the term structure; I show, however, that out-of-sample predictability improves as well. The four-factor model, which adds a second slope factor to the three-factor Nelson-Siegel model, forecasts particularly well. In particular, when estimated with a one-step state-space approach, the four-factor model produces accurate forecasts and outperforms competing models across maturities and forecast horizons. Subsample analysis shows that this outperformance is also consistent over time.
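
    For reference, the three-factor Nelson-Siegel curve underlying these extensions writes the yield at maturity tau as a level, a slope and a curvature component. A minimal sketch of the fitted curve (function and parameter names are illustrative, not taken from the paper):

        import numpy as np

        def nelson_siegel_yield(tau, beta1, beta2, beta3, lam):
            """Three-factor Nelson-Siegel yield at maturity tau (in years).

            beta1: level, beta2: slope, beta3: curvature, lam: decay parameter.
            """
            slope_loading = (1 - np.exp(-lam * tau)) / (lam * tau)
            curvature_loading = slope_loading - np.exp(-lam * tau)
            return beta1 + beta2 * slope_loading + beta3 * curvature_loading

        # Example: an upward-sloping curve evaluated at several maturities.
        maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
        print(nelson_siegel_yield(maturities, beta1=0.05, beta2=-0.02, beta3=0.01, lam=0.7))

    The four-factor model studied here adds a second slope factor with its own loading; its exact specification is given in the paper.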

    Modeling and Forecasting Stock Return Volatility and the Term Structure of Interest Rates

    This dissertation consists of a collection of studies on two topics: stock return volatility and the term structure of interest rates. _Part A_ consists of three studies and contributes to the literature on modeling and forecasting financial market volatility. In this part we first discuss how to apply CUSUM tests to identify structural changes in the level of volatility. The main focus of Part A, however, is the use of high-frequency intraday return data to measure the volatility of individual asset returns as well as the correlations between asset returns. A nonlinear long-memory model for realized volatility is developed and shown to forecast future volatility accurately. Furthermore, we show that daily covariance matrix estimates based on intraday return data are of economic significance to an investor. We investigate the optimal intraday sampling frequency for constructing estimates of the daily covariance matrix and find that it is substantially lower than the commonly used 5-minute frequency. _Part B_ consists of two studies and investigates the modeling and forecasting of the term structure of interest rates. In the first study we examine the class of Nelson-Siegel models for their in-sample fit and out-of-sample forecasting performance, and show that a four-factor model performs well in both respects. In the second study we analyze the forecasting performance of a panel of term structure models and show that performance varies substantially across models and subperiods. To mitigate model uncertainty we therefore analyze forecast combination techniques, and we find that combined forecasts are consistently accurate over time.

    Testing for changes in volatility in heteroskedastic time series - a further examination

    We consider tests for sudden changes in the unconditional volatility of conditionally heteroskedastic time series based on cumulative sums of squares. When applied to the original series, these tests suffer from severe size distortions: the correct null hypothesis of no volatility change is rejected much too frequently. Applying the tests to standardized residuals from an estimated GARCH model results in good size and reasonable power properties when testing for a single break in the variance. The tests also appear to be robust to different types of misspecification. An iterative algorithm is designed to test sequentially for the presence of multiple changes in volatility. An application to emerging markets stock returns clearly illustrates the properties of the different test statistics.
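
    One standard member of this family is the Inclan-Tiao cumulative-sum-of-squares statistic; a minimal sketch, best applied to the GARCH standardized residuals mentioned above (the paper's exact statistics and critical values may differ):

        import numpy as np

        def inclan_tiao_statistic(series):
            """Inclan-Tiao CUSUM-of-squares statistic for a single variance break.

            Returns sqrt(T/2) * max_k |D_k| and the 0-based index of the most
            likely break point, where D_k = C_k / C_T - k / T and C_k is the
            cumulative sum of squared observations up to k.
            """
            sq = np.asarray(series, dtype=float) ** 2
            T = sq.size
            C = np.cumsum(sq)
            k = np.arange(1, T + 1)
            D = C / C[-1] - k / T
            stat = np.sqrt(T / 2.0) * np.max(np.abs(D))
            return stat, int(np.argmax(np.abs(D)))

        # Simulated series whose volatility doubles halfway through.
        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
        stat, k_hat = inclan_tiao_statistic(x)
        print(stat, k_hat)  # statistic well above the ~1.36 asymptotic 5% critical value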

    Predicting the Term Structure of Interest Rates: Incorporating parameter uncertainty, model uncertainty and macroeconomic information

    We forecast the term structure of U.S. Treasury zero-coupon bond yields by analyzing a range of models that have been used in the literature. We assess the relevance of parameter uncertainty by examining the added value of Bayesian inference over frequentist estimation techniques, and of model uncertainty by combining forecasts from individual models. Following the current literature, we also investigate the benefits of incorporating macroeconomic information in yield curve models. Our results show that adding macroeconomic factors is very beneficial for the out-of-sample forecasting performance of individual models. Nevertheless, the predictive accuracy of individual models varies considerably over time, irrespective of whether the Bayesian or frequentist approach is used. We show that mitigating model uncertainty by combining forecasts leads to substantial gains in forecasting performance, especially when applying Bayesian model averaging.
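
    As a rough illustration of the combination step, Bayesian-model-averaging weights can be taken proportional to each model's (exponentiated) predictive likelihood; a sketch under that simplifying assumption, not the paper's implementation:

        import numpy as np

        def bma_combine(forecasts, log_predictive_likelihoods):
            """Combine point forecasts with weights proportional to each model's
            exponentiated log predictive likelihood.

            forecasts: shape (n_models,), one point forecast per model.
            log_predictive_likelihoods: shape (n_models,), e.g. sums of log
            predictive densities over a hold-out window.
            """
            log_w = np.asarray(log_predictive_likelihoods, dtype=float)
            log_w -= log_w.max()      # stabilize before exponentiating
            w = np.exp(log_w)
            w /= w.sum()
            return float(np.dot(w, forecasts)), w

        # Three hypothetical yield forecasts and their recent predictive records.
        combined, weights = bma_combine([0.042, 0.045, 0.039], [-10.2, -9.8, -12.5])
        print(combined, weights)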

    Modeling and Forecasting S&P 500 Volatility: Long Memory, Structural Breaks and Nonlinearity

    The sum of squared intraday returns provides an unbiased and almost error-free measure of ex-post volatility. In this paper we develop a nonlinear Autoregressive Fractionally Integrated Moving Average (ARFIMA) model for realized volatility, which accommodates level shifts, day-of-the-week effects, leverage effects and volatility level effects. Applying the model to realized volatilities of the S&P 500 stock index and three exchange rates produces forecasts that clearly improve upon those obtained from a linear ARFIMA model and from conventional time-series models based on daily returns, which treat volatility as a latent variable.
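
    Concretely, the realized-volatility measure referred to in the first sentence is the sum of squared intraday log returns over the trading day; a minimal sketch:

        import numpy as np

        def realized_variance(intraday_prices):
            """Daily realized variance from one day's intraday price path:
            the sum of squared intraday log returns."""
            log_p = np.log(np.asarray(intraday_prices, dtype=float))
            returns = np.diff(log_p)
            return float(np.sum(returns ** 2))

        # Example: 79 five-minute prices spanning a 6.5-hour trading day.
        rng = np.random.default_rng(1)
        prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 79)))
        rv = realized_variance(prices)
        print(rv, np.sqrt(rv))  # realized variance and realized volatility

    The (log) realized volatility series constructed this way is then the observable input to which the nonlinear ARFIMA model is fitted.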

    Predicting the Daily Covariance Matrix for S&P 100 Stocks Using Intraday Data - But Which Frequency To Use?

    This paper investigates the merits of high-frequency intraday data when forming minimum variance portfolios and minimum tracking error portfolios with daily rebalancing from the individual constituents of the S&P 100 index. We focus on determining the optimal sampling frequency, which strikes a balance between the variance and the bias in covariance matrix estimates, the latter induced by market microstructure effects such as non-synchronous trading and bid-ask bounce. The optimal sampling frequency typically ranges between 30 and 65 minutes, considerably lower than the popular five-minute frequency. We also examine how bias-correction procedures, based on the addition of leads and lags and on scaling, and a variance-reduction technique, based on subsampling, affect performance.
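
    To make the frequency trade-off concrete: the realized covariance matrix at a given sampling interval sums outer products of intraday return vectors, and lowering the frequency reduces microstructure bias at the cost of a noisier estimate. A minimal sketch on a regular price grid (simulated data, illustrative names):

        import numpy as np

        def realized_covariance(log_prices, step):
            """Realized covariance matrix from a (T x N) grid of intraday log
            prices, sampled every `step` observations (e.g. step=5 turns a
            1-minute grid into a 5-minute grid)."""
            sampled = log_prices[::step]
            returns = np.diff(sampled, axis=0)  # (T' x N) sampled returns
            return returns.T @ returns          # sum of outer products

        # Compare 5-minute and 30-minute estimates on a simulated 1-minute grid.
        rng = np.random.default_rng(2)
        log_p = np.cumsum(rng.normal(0, 0.0005, size=(391, 3)), axis=0)
        print(realized_covariance(log_p, step=5))   # 5-minute sampling
        print(realized_covariance(log_p, step=30))  # 30-minute sampling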

    Gibbs sampling in econometric practice

    We present a road map for the effective application of Bayesian analysis to a class of well-known dynamic econometric models by means of the Gibbs sampling algorithm. Members of this class are the Cochrane-Orcutt model for serial correlation, the Koyck distributed lag model, the Unit Root model and, as Hierarchical Linear Mixed Models, the State-Space model and the Panel Data model. We discuss issues involved in drawing Bayesian inference on equation parameters and variance components, and show that one should carefully scan the shape of the criterion function for irregularities before applying the Gibbs sampler. Analytical, graphical and empirical results are used along the way.
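
    As a reminder of the mechanics involved, a Gibbs sampler cycles through draws from each parameter block's full conditional distribution; a minimal sketch for a normal linear regression with a flat prior on the coefficients and a Jeffreys-type prior on the variance (an illustration, not the paper's code):

        import numpy as np

        def gibbs_regression(y, X, n_draws=2000, seed=0):
            """Gibbs sampler for y = X beta + e, e ~ N(0, sigma2 I), with a flat
            prior on beta and p(sigma2) proportional to 1/sigma2.

            Alternates beta | sigma2, y ~ Normal and sigma2 | beta, y ~ Inverse-Gamma.
            """
            rng = np.random.default_rng(seed)
            T, k = X.shape
            XtX_inv = np.linalg.inv(X.T @ X)
            beta_hat = XtX_inv @ X.T @ y
            sigma2 = 1.0
            draws_beta, draws_sigma2 = [], []
            for _ in range(n_draws):
                # beta | sigma2: Normal(beta_hat, sigma2 * (X'X)^{-1})
                beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
                # sigma2 | beta: Inverse-Gamma(T/2, RSS/2), drawn via 1/Gamma
                resid = y - X @ beta
                sigma2 = 1.0 / rng.gamma(T / 2.0, 2.0 / (resid @ resid))
                draws_beta.append(beta)
                draws_sigma2.append(sigma2)
            return np.array(draws_beta), np.array(draws_sigma2)

        # Simulated example with an intercept and one regressor.
        rng = np.random.default_rng(3)
        X = np.column_stack([np.ones(200), rng.normal(size=200)])
        y = X @ np.array([1.0, 0.5]) + rng.normal(0, 0.3, 200)
        b, s2 = gibbs_regression(y, X)
        print(b[500:].mean(axis=0), s2[500:].mean())  # posterior means after burn-in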

    On the Practice of Bayesian Inference in Basic Economic Time Series Models using Gibbs Sampling

    Several lessons learned from a Bayesian analysis of basic economic time series models by means of the Gibbs sampling algorithm are presented. The models include the Cochrane-Orcutt model for serial correlation, the Koyck distributed lag model, the Unit Root model, the Instrumental Variables model and, as Hierarchical Linear Mixed Models, the State-Space model and the Panel Data model. We discuss issues involved in drawing Bayesian inference on regression parameters and variance components, in particular when some parameters have substantial posterior probability near the boundary of the parameter region, and show that one should carefully scan the shape of the posterior density function. Analytical, graphical and empirical results are used along the way.

    A method to measure flag performance for the shipping industry

    Measuring the performance of ship registries has been a topic of regional policy discussions in recent years due to the recast of the European Union (EU) port state control (PSC) directive, which introduces incentives for better-performing flags. The current method used in the EU region entails some shortcomings and has therefore been the subject of substantial scrutiny. Furthermore, the International Maritime Organization (IMO) has developed a set of performance indicators which, however, lack the ability to measure compliance as set out in one of its strategic directions towards fostering global compliance. In this article, we develop and test a methodology for measuring flag state performance which can be applied at the regional or global level and to other areas of legislative interest (e.g. recognized organizations, Document of Compliance companies). Our proposed methodology overcomes some of the shortcomings of the present method and offers a more refined, less biased approach to measuring performance. To demonstrate its usefulness, we apply it to a sample of 207,821 observations over a three-year time frame and compare it to the best-known current method in the industry.

    Bayesian near-boundary analysis in basic macroeconomic time series models

    Several lessons learned from a Bayesian analysis of basic macroeconomic time series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random-walk models, and to clustering of several economic series into a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time series models, such as unit root and error correction models and further state-space and panel data models, are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
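
    To illustrate what posterior mass near a boundary can look like in the simplest dynamic case, the sketch below evaluates an AR(1) profile log likelihood (with the variance concentrated out; under a flat prior this is a rough proxy for the posterior shape) on a grid approaching the unit-root boundary. A toy illustration, not the paper's procedure:

        import numpy as np

        def ar1_profile_loglik(y, rho_grid):
            """Profile log likelihood of y_t = rho * y_{t-1} + e_t over a grid
            of rho values, with the error variance concentrated out."""
            y = np.asarray(y, dtype=float)
            y_lag, y_cur = y[:-1], y[1:]
            T = y_cur.size
            out = []
            for rho in rho_grid:
                rss = np.sum((y_cur - rho * y_lag) ** 2)
                out.append(-0.5 * T * np.log(rss / T))
            return np.array(out)

        # A near-random-walk series: the curve piles up close to rho = 1.
        rng = np.random.default_rng(4)
        y = np.zeros(300)
        for t in range(1, 300):
            y[t] = 0.98 * y[t - 1] + rng.normal()
        grid = np.linspace(0.80, 1.05, 26)
        ll = ar1_profile_loglik(y, grid)
        print(grid[np.argmax(ll)])  # maximizer typically close to the boundary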