
    Essays in Time Series Econometrics

    In economics, predicting the future state of the economy is a key issue for the decision making of economic agents and policy makers alike. Econometric methods that accurately measure the current state of the economy and that predict and quantify the future distributions of key macroeconomic variables therefore play an important role in the toolkit of national and international policy institutions and statistical agencies. This dissertation applies and develops new time series methods to determine the current state of the economy in real time and to predict the future path of the economy together with the associated risks in the form of density forecasts. The first chapter, which is joint work with Yves Schüler and Frieder Mokinski, shows that one should not use the one-sided Hodrick-Prescott filter (HP-1s) as the real-time version of the two-sided Hodrick-Prescott filter (HP-2s): First, in terms of the extracted cyclical component, HP-1s fails to remove low-frequency fluctuations to the same extent as HP-2s. Second, HP-1s dampens fluctuations at all frequencies -- even those it is meant to extract. As a remedy, we propose two small adjustments to HP-1s that align its properties closely with those of HP-2s: (1) a lower value of the smoothing parameter and (2) a multiplicative rescaling of the extracted cyclical component. For example, for HP-2s with smoothing parameter lambda = 1,600, the adjusted one-sided HP filter uses lambda* = 650 and rescales the extracted cyclical component by a factor of 1.1513. Using simulated and empirical data, we illustrate the relevance of these adjustments. For instance, financial cycles may appear to be 70% more volatile than business cycles when in fact the volatilities differ only marginally. The second chapter is joint work with Till Strohsal. We show that revisions to the German national accounts are biased, large and predictable.
Moreover, using filtering techniques designed for data subject to revisions, the real-time forecasting performance of initial releases can be improved by up to 23%. For total real GDP growth, however, the initial release is an optimal forecast. Yet, given the results for disaggregated variables, the averaging-out of biases and inefficiencies at the aggregate GDP level appears to be good luck rather than good forecasting. The third chapter proposes a Skewed Stochastic Volatility (SSV) model to estimate asymmetric macroeconomic tail risks in the spirit of Adrian et al.'s seminal paper "Vulnerable Growth". In contrast to their semi-parametric approach, the SSV model captures the evolution of the conditional density of future US GDP growth in a parametric, non-linear, non-Gaussian state space model. This makes it possible to test statistically the effect of exogenous variables on the different moments of the conditional distribution, and it provides a law of motion with which to predict future values of volatility and skewness. The model is estimated using a particle MCMC algorithm. To increase estimation accuracy, I use a tempered particle filter that takes the time-varying volatility and asymmetry of the densities into account. I find that financial conditions affect the mean, variance and skewness of the conditional distribution of future US GDP growth. With a Bayes ratio of 1612.18, the SSV model is strongly favored by the data over a symmetric Stochastic Volatility (SV) model. The fourth chapter is joint work with Carlos Montes-Galdón and proposes a new and robust methodology for obtaining conditional density forecasts based on information not contained in an initial econometric model. The methodology allows one to condition on expected marginal densities for a selection of variables in the model, rather than only on future paths, as is usually done in the conditional forecasting literature.
The proposed algorithm, which is based on tempered importance sampling, adapts the model-based density forecasts to target distributions the researcher has access to. As an example, the paper shows how to implement the algorithm by conditioning the forecast densities of a BVAR and a DSGE model on information about the marginal densities of future oil prices. The results show that increased asymmetric upside risks to oil prices translate into upside risks to inflation as well as higher core inflation over the considered forecasting horizon. Finally, a real-time forecasting exercise shows that introducing market-based information on the oil price improves inflation and GDP forecasts during crisis times such as the COVID pandemic.
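As an illustration of the first chapter's adjustment, the sketch below implements a plain two-sided HP filter and its expanding-window one-sided counterpart, using the two numbers reported in the abstract (lambda* = 650 and the rescaling factor 1.1513). This is a reconstruction for exposition only: the function names and the expanding-window recursion are assumptions, not the authors' code.

```python
import numpy as np

def hp_trend(y, lam):
    # Two-sided HP filter: the trend tau solves (I + lam * D'D) tau = y,
    # where D is the (n-2) x n second-difference operator.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def adjusted_one_sided_hp(y, lam_star=650.0, rescale=1.1513):
    # One-sided HP filter: at each date t, filter the expanding sample
    # y[0..t] and keep the endpoint of the cyclical component. The default
    # lam_star and rescale values are the abstract's adjustments for the
    # quarterly lambda = 1,600 benchmark.
    y = np.asarray(y, dtype=float)
    cycle = np.zeros_like(y)
    for t in range(3, len(y)):  # leave the first few points at zero
        window = y[: t + 1]
        cycle[t] = window[-1] - hp_trend(window, lam_star)[-1]
    return rescale * cycle
```

For long samples one would update the filter recursively (e.g. via a state space representation of the HP filter) rather than re-solving the full linear system at every date.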

    Posterior Predictive Analysis for Evaluating DSGE Models

    In this paper, we develop and apply tools to evaluate the strengths and weaknesses of dynamic stochastic general equilibrium (DSGE) models. The paper makes three contributions: first, it argues the need for such tools to evaluate the usefulness of these models; second, it defines these tools, which take the form of prior and, in particular, posterior predictive analysis, and provides illustrations; and third, it justifies the use of these tools in the DSGE context against the standard criticisms of their use. Keywords: prior and posterior predictive analysis; DSGE model evaluation; monetary policy.
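As background, the core of a posterior predictive check can be sketched in a few lines: simulate data from the model at each posterior parameter draw, recompute a statistic of interest, and compare the simulated distribution with the observed value. Everything concrete below (an AR(1) stand-in for the model, uniform stand-in posterior draws, first-order autocorrelation as the statistic) is hypothetical, chosen only to make the sketch runnable, and is not the paper's DSGE setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def autocorr1(x):
    # First-order sample autocorrelation, the feature being checked here.
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

T = 200
y_obs = rng.normal(size=T)            # stand-in for the observed series
t_obs = autocorr1(y_obs)

# For each posterior draw, simulate a replicated dataset from the model
# (here: an AR(1) with coefficient rho) and recompute the statistic.
rho_draws = rng.uniform(0.2, 0.6, size=500)   # stand-in posterior draws
t_rep = np.empty(len(rho_draws))
for i, rho in enumerate(rho_draws):
    y = np.empty(T)
    y[0] = rng.normal()
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    t_rep[i] = autocorr1(y)

# Posterior predictive p-value: share of replicated statistics above the
# observed one. Values near 0 or 1 flag a feature the model fails to capture.
ppp = (t_rep > t_obs).mean()
```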

    Probabilistic quantile factor analysis

    This paper extends quantile factor analysis to a probabilistic variant that incorporates regularization and computationally efficient variational approximations. Synthetic and real data experiments establish that the proposed estimator can achieve, in many cases, better accuracy than a recently proposed loss-based estimator. We contribute to the literature on measuring uncertainty by extracting new indexes of low, medium and high economic policy uncertainty using the probabilistic quantile factor methodology. The medium and high indexes have clear contractionary effects, while the low index is benign for the economy, showing that not all manifestations of uncertainty are the same.
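The quantile machinery underlying both the loss-based and the probabilistic factor estimators rests on the check (pinball) loss, whose minimizer over a constant is the corresponding quantile. The short numpy demonstration below is background only; the simulated data and grid search are illustrative assumptions, not the paper's variational estimator.

```python
import numpy as np

def check_loss(u, tau):
    # Check (pinball) loss: tau * u for u >= 0 and (tau - 1) * u for u < 0.
    return np.where(u >= 0, tau * u, (tau - 1) * u)

# Minimizing the mean check loss over a constant recovers the tau-quantile.
rng = np.random.default_rng(0)
x = rng.normal(size=50_000)
grid = np.linspace(-3.0, 3.0, 601)
losses = np.array([check_loss(x - q, 0.9).mean() for q in grid])
q_hat = grid[np.argmin(losses)]
# q_hat lies close to the standard normal 0.9-quantile (about 1.28).
```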