Priors, posteriors and Bayes factors for a Bayesian analysis of cointegration
Cointegration occurs when the long-run multiplier of a vector autoregressive model exhibits rank reduction. Priors and posteriors of the parameters of the cointegration model are therefore proportional to priors and posteriors of the long-run multiplier given that it has reduced rank. Rank reduction of the long-run multiplier is modelled using a decomposition resulting from its singular value decomposition. It specifies the long-run multiplier matrix as the sum of a matrix that equals the product of the adjustment parameters and the cointegrating vectors, i.e., the cointegration specification, and a matrix that models the deviation from cointegration. Priors and posteriors for the parameters of the cointegration model are obtained by restricting the latter matrix to zero in the prior and posterior of the unrestricted long-run multiplier. The special decomposition of the long-run multiplier results in unique posterior densities. This theory leads to a complete Bayesian framework for cointegration analysis. It includes prior specification, simulation schemes for obtaining posterior distributions and determination of the cointegration rank via Bayes factors. We illustrate the analysis with several simulated series, the UK data of Hendry and Doornik (1994) and the Danish data of Johansen and Juselius (1990).
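The decomposition described above can be sketched numerically. A minimal illustration, assuming a hypothetical 3x3 long-run multiplier with near-reduced rank and an assumed cointegration rank of 2: the singular value decomposition splits the matrix into a cointegration part (product of scaled adjustment parameters and cointegrating vectors) and a deviation-from-cointegration part, which the paper's priors restrict to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 long-run multiplier: rank-2 structure plus a small perturbation
Pi = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 3)) \
     + 1e-3 * rng.standard_normal((3, 3))

U, s, Vt = np.linalg.svd(Pi)
r = 2  # assumed cointegration rank

# Cointegration part: product of "adjustment" and "cointegrating" factors
alpha = U[:, :r] * s[:r]          # adjustment parameters (scaled by singular values)
beta_t = Vt[:r, :]                # cointegrating vectors (transposed)
coint = alpha @ beta_t

# Deviation from cointegration: what remains outside the rank-r part
deviation = Pi - coint

print(np.allclose(Pi, coint + deviation))   # decomposition is exact
print(np.linalg.norm(deviation) < 1e-2)     # small when rank is (near) reduced
```

Restricting `deviation` to zero recovers exactly the reduced-rank cointegration specification.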
Modeling the impact of forecast-based regime switches on macroeconomic time series
Forecasts of key macroeconomic variables may lead to policy changes of governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition autoregressive model where the regime switches depend on the forecast of the time series of interest. This forecast can either be an exogenous expert forecast or an endogenous forecast generated by the model. Results of an application of the model to US inflation show that (i) forecasts lead to regime changes and have an impact on the level of inflation; (ii) a relatively large forecast results in actions which in the end lower the inflation rate; (iii) a counterfactual scenario where forecasts during the oil crises in the 1970s are assumed to be correct leads to lower inflation than observed.
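The mechanism can be sketched in a few lines. This is a toy simulation with made-up parameter values, not the paper's estimated model: the endogenous one-step forecast feeds a logistic transition function, and a large forecast smoothly shifts the autoregressive dynamics toward the second regime.

```python
import numpy as np

def logistic(x, gamma, c):
    # Smooth transition function G in [0, 1]
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

rng = np.random.default_rng(1)

# Hypothetical parameters: regime 1 is persistent, regime 2 mean-reverting
phi1, phi2, gamma, c = 0.9, -0.5, 10.0, 2.0
y = [0.0]
for t in range(1, 200):
    forecast = phi1 * y[-1]               # endogenous one-step-ahead forecast
    G = logistic(forecast, gamma, c)      # regime weight driven by the forecast
    phi = phi1 + G * phi2                 # smooth mix of the two regimes
    y.append(phi * y[-1] + 0.3 * rng.standard_normal())
y = np.asarray(y)

# A large forecast pushes G toward 1, cutting persistence from 0.9 to 0.4
print(logistic(5.0, gamma, c) > 0.99, logistic(0.0, gamma, c) < 0.01)
```

This mirrors the inflation story in the abstract: a high forecast triggers actions (the regime switch) that in the end lower the series.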
Real-time inflation forecasting in a changing world
This paper revisits inflation forecasting using reduced form Phillips curve forecasts, i.e., inflation forecasts using activity and expectations variables. We propose a Phillips curve-type model that results from averaging across different regression specifications selected from a set of potential predictors. The set of predictors includes lagged values of inflation, a host of real activity data, term structure data, nominal data and surveys. In each of the individual specifications we allow for stochastic breaks in regression parameters, where the breaks are described as occasional shocks of random magnitude.
As such, our framework simultaneously addresses structural change and model uncertainty that unavoidably affect Phillips curve forecasts. We use this framework to describe PCE deflator and GDP deflator inflation rates for the United States across the post-WWII period. Over the full
1960-2008 sample the framework indicates several structural breaks across different combinations of activity measures. These breaks often coincide with, amongst others, policy regime changes and oil price shocks. In contrast to many previous studies, we find less evidence for autonomous variance breaks and inflation gap persistence. Through a real-time out-of-sample forecasting exercise we show that our model specification generally provides superior one-quarter and one-year ahead forecasts for quarterly inflation relative to a whole range of forecasting models that are typically used in the literature.
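The idea of averaging across regression specifications can be illustrated with a deliberately crude sketch. All data and the fit-based weighting scheme here are invented for illustration; the paper's actual averaging is Bayesian and also handles the stochastic parameter breaks, which are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical predictor set; each "specification" uses a different subset
T, K = 120, 4
X = rng.standard_normal((T, K))
beta = np.array([0.8, 0.5, 0.0, 0.0])          # only two predictors matter
y = X @ beta + 0.2 * rng.standard_normal(T)

subsets = [[0], [1], [0, 1], [0, 1, 2], [0, 1, 2, 3]]
forecasts, sse = [], []
for cols in subsets:
    Xs = X[:-1, cols]
    b, *_ = np.linalg.lstsq(Xs, y[:-1], rcond=None)
    forecasts.append(X[-1, cols] @ b)           # one-step-ahead forecast
    resid = y[:-1] - Xs @ b
    sse.append(resid @ resid)

# Weight each specification by in-sample fit -- a crude stand-in for
# full Bayesian averaging over specifications
w = np.exp(-0.5 * np.array(sse))
w /= w.sum()
combined = float(np.dot(w, forecasts))
print(np.isfinite(combined))
```

The combined forecast leans on the better-fitting specifications instead of committing to a single set of predictors, which is the model-uncertainty point of the abstract.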
Modeling category-level purchase timing with brand-level marketing variables
Purchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from the marketing mix of individual brands. In this paper we discuss two standard approaches suggested in the literature to solve this problem, that is, using individual choice shares as weights to average the marketing mix, and the inclusive value approach. Additionally, we propose three novel alternative solutions, which have fewer limitations than the two standard approaches. The new approaches use brand preferences following from a brand choice model to capture the relevance of the marketing mix of individual brands. One of these approaches integrates the purchase timing model with a brand preference model.
To empirically compare the two standard and the three new approaches, we consider household scanner data in three product categories. One of the main conclusions is that the inclusive value approach performs worse than the other approaches. This holds in-sample as well as out-of-sample. The performance of the individual choice share approach is best unless one allows for unobserved heterogeneity in the brand choice models, in which case the three new approaches based on modeled brand preferences are superior.
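The two standard approaches can be made concrete with toy numbers. The brand-level mix, choice shares, and logit coefficients below are all invented for illustration; the second computation assumes a standard multinomial logit, in which the inclusive value is the log of the summed exponentiated utilities.

```python
import numpy as np

# Hypothetical brand-level marketing mix: rows = brands, cols = (price, promo)
mix = np.array([[2.5, 1.0],
                [3.0, 0.0],
                [2.0, 1.0]])
shares = np.array([0.5, 0.3, 0.2])   # individual brand-choice shares

# Standard approach 1: choice-share-weighted average of the brand mixes
category_mix = shares @ mix

# Standard approach 2: inclusive value from a logit brand-choice model
gamma = np.array([-1.0, 0.8])        # assumed logit coefficients
utilities = mix @ gamma
inclusive_value = np.log(np.exp(utilities).sum())

print(category_mix)                  # one category-level mix vector
print(round(inclusive_value, 3))     # one category-level scalar
```

The share-weighted average keeps one value per mix instrument, while the inclusive value collapses everything into a single scalar, which is part of why the two approaches can perform so differently.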
Random-coefficient periodic autoregression
We propose a new periodic autoregressive model for seasonally observed time
series, where the number of seasons can potentially be very large. The main
novelty is that we collect the periodic parameters in a second-level stochastic
model. This leads to a random-coefficient periodic autoregression with a
substantial reduction in the number of parameters to be estimated. We discuss
representation, estimation, and inference. An illustration for monthly growth
rates of US industrial production shows the merits of the new model
specification.
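The parameter-reduction idea can be sketched directly. In this toy version (all numbers hypothetical), the S periodic AR coefficients are not free parameters but draws from a second-level model with only two hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(3)

S = 12                      # number of seasons (e.g., months)
mu_phi, tau = 0.5, 0.1      # second-level model for the periodic coefficients

# Instead of S free parameters, draw them from one stochastic model
phi = mu_phi + tau * rng.standard_normal(S)

# Simulate a periodic AR(1): the coefficient cycles with the season
y = [0.0]
for t in range(1, 240):
    s = t % S
    y.append(phi[s] * y[-1] + rng.standard_normal())
y = np.asarray(y)

# Only two hyperparameters (mu_phi, tau) govern all S coefficients
print(phi.shape, np.isfinite(y).all())
```

With a large number of seasons, estimating (mu_phi, tau) instead of S separate coefficients is exactly the "substantial reduction in the number of parameters" the abstract refers to.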
Censored latent effects autoregression, with an application to US unemployment
A new time series model is proposed to describe observed asymmetries in postwar unemployment data. We assume that recession periods, when unemployment increases rapidly, are caused by unobserved positive shocks. The generating mechanism of these latent shocks is a censored regression model, where linear combinations of lagged explanatory variables lead to positive shocks, while otherwise shocks are equal to zero. We apply our censored latent effects autoregression (CLEAR) to monthly US unemployment, where the positive shocks are found to depend on lagged oil prices, industrial production, the term structure of interest rates and a stock market index. The model fits the data well, and its out-of-sample forecasts appear to outperform those from alternative models.
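The censoring mechanism is easy to sketch. This toy simulation uses invented stand-ins for the lagged explanatory variables and made-up coefficients; it only illustrates how one-sided latent shocks generate the asymmetry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-ins for the lagged explanatory variables
T = 300
x = rng.standard_normal((T, 2))
gamma = np.array([1.0, 0.5])

# Censored regression for the latent shocks: positive when the linear
# combination plus noise is positive, exactly zero otherwise
e = rng.standard_normal(T)
shock = np.maximum(0.0, x @ gamma + e)

# Unemployment-style series: persistent AR(1) hit by positive shocks only,
# producing fast increases (recessions) and slow declines
u = [5.0]
for t in range(1, T):
    u.append(0.25 + 0.95 * u[-1] + 0.3 * shock[t])
u = np.asarray(u)

print((shock >= 0).all(), round(float((shock == 0).mean()), 2))
```

Because the shocks are censored at zero, roughly half the periods receive no shock at all, and the series can only jump upward, which is the asymmetry the model is built to capture.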
Bayes estimates of Markov trends in possibly cointegrated series: an application to US consumption and income
Stylized facts show that average growth rates of US per capita consumption and income differ in recession and expansion periods. Since a linear combination of such series does not have to be a constant mean process, standard cointegration analysis between the variables to examine the permanent income hypothesis may not be valid. To model the changing growth rates in both series, we introduce a multivariate Markov trend model, which accounts for different growth rates in consumption and income during expansions and recessions and across variables within both regimes. The deviations from the multivariate Markov trend are modeled by a vector autoregressive model. Bayes estimates of this model are obtained using Markov chain Monte Carlo methods. The empirical results suggest the existence of a cointegration relation between US per capita disposable income and consumption, after correction for a multivariate Markov trend. This result is also obtained when per capita investment is added to the vector autoregression.
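A multivariate Markov trend can be simulated in a few lines. The drifts and transition probabilities below are hypothetical, chosen only to mimic the stylized fact that growth differs across regimes and across the two variables within a regime.

```python
import numpy as np

rng = np.random.default_rng(5)

# Regime-dependent drifts: rows = (expansion, recession),
# cols = (consumption, income) -- hypothetical values
mu = np.array([[0.5, 0.6],
               [-0.2, -0.4]])
P = np.array([[0.95, 0.05],     # expansions are persistent
              [0.20, 0.80]])    # recessions are shorter

T, s = 200, 0
levels = [np.zeros(2)]
for t in range(T):
    s = rng.choice(2, p=P[s])                   # Markov regime switch
    levels.append(levels[-1] + mu[s] + 0.1 * rng.standard_normal(2))
levels = np.asarray(levels)

# Because the drifts differ across variables within each regime, a fixed
# linear combination of the two trends is not a constant-mean process
print(levels.shape)
```

This is why the paper corrects for the Markov trend before testing for cointegration: the regime-dependent drifts would otherwise contaminate the cointegrating combination.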
The Bayesian Score Statistic
We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike highest posterior density regions, the BSS is invariant to reparameterizations. The BSS equals the posterior expectation of the classical score statistic and it provides an exact test procedure, whereas classical tests often rely on asymptotic results. Since the statistic is evaluated under the null hypothesis, it provides the Bayesian counterpart of diagnostic checking. This result extends the similarity of classical sampling densities of maximum likelihood estimators and Bayesian posterior distributions based on Jeffreys' priors towards score statistics. We illustrate the BSS as a diagnostic to test for misspecification in linear and cointegration models.
Explaining individual response using aggregated data
Empirical analysis of individual response behavior is sometimes
limited due to the lack of explanatory variables at the individual
level. In this paper we put forward a new approach to estimate the
effects of covariates on individual response, where the covariates
are unknown at the individual level but observed at some aggregated
level. This situation may, for example, occur if the response
variable is available at the household level but covariates only at
the zip-code level.
We describe the missing individual covariates by a latent variable
model which matches the sample information at the aggregate level.
Parameter estimates can be obtained using maximum likelihood or a
Bayesian approach. We illustrate the approach by estimating the effects
of household characteristics on donating behavior to a Dutch
charity. Donating behavior is observed at the household level, while
the covariates are only observed at the zip-code level.
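The latent-variable idea can be illustrated with a simulated probit. Everything here is invented (sample sizes, the assumed within-zip spread of the covariate, the true coefficient), and the grid-search estimator is a crude stand-in for the paper's maximum likelihood or Bayesian machinery; the point is that integrating the latent individual covariate out of the probit leaves a likelihood that depends only on the observed zip-code means.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(6)

# Hypothetical data: binary response per household, covariate observed
# only as a zip-code mean; within-zip spread assumed known (sd = 0.5)
n_zip, n_hh, beta = 200, 10, 1.2
zip_mean = rng.normal(0.0, 1.0, n_zip)
x = zip_mean[:, None] + 0.5 * rng.standard_normal((n_zip, n_hh))   # latent
y = (beta * x + rng.standard_normal((n_zip, n_hh)) > 0).astype(int)

# Integrating the latent x out of the probit gives
# P(y = 1 | zip mean m) = Phi(b*m / sqrt(1 + 0.25*b^2))
def loglik(b):
    p = np.array([Phi(b * m / np.sqrt(1.0 + 0.25 * b * b)) for m in zip_mean])
    p = np.clip(p, 1e-12, 1 - 1e-12)
    k = y.sum(axis=1)
    return float((k * np.log(p) + (n_hh - k) * np.log(1.0 - p)).sum())

grid = np.linspace(0.2, 3.0, 57)
b_hat = grid[int(np.argmax([loglik(b) for b in grid]))]
print(b_hat)
```

Even though no individual covariate is ever observed, the aggregate information identifies the coefficient, which is the core claim of the abstract.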