26 research outputs found

    Forecasting US real house price returns over 1831–2013 : evidence from copula models

    Get PDF
    Given the existence of non-normality and nonlinearity in the data-generating process of real house price returns over the period 1831-2013, this paper compares the ability of various univariate copula models, relative to standard benchmarks (naive and autoregressive models), to forecast real US house prices over the annual out-of-sample period of 1859-2013, based on an in-sample period of 1831-1873. Overall, our results provide overwhelming evidence in favor of the copula models (Normal, Student's t, Clayton, Frank, Gumbel, Joe and Ali-Mikhail-Haq) relative to the linear benchmarks, and especially for the Student's t copula, which outperforms all other models in terms of both in-sample and out-of-sample predictability. Our results highlight the importance of accounting for non-normality and nonlinearity in the data-generating process of real house price returns for the US economy across nearly two centuries of data.
    http://www.tandfonline.com/loi/raec20 (2017-04-30)
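    As a concrete illustration of one of the copula families the paper evaluates, the sketch below draws dependent uniform pairs from a Clayton copula by the conditional-distribution method and checks the sample Kendall's tau against its theoretical value tau = theta / (theta + 2). The function names, seed and parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sample_clayton(theta, n, seed=0):
    """Draw n pairs (u, v) from a Clayton copula with parameter theta > 0
    via the conditional-distribution (inverse-CDF) method."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u = rng.random()
        w = rng.random()
        # Invert the conditional CDF C(v | u) = w for the Clayton family.
        v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        pairs.append((u, v))
    return pairs

def kendall_tau(pairs):
    """Naive O(n^2) Kendall's tau, used here as a quick dependence check."""
    n = len(pairs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Clayton implies Kendall's tau = theta / (theta + 2), so theta = 2 targets 0.5.
draws = sample_clayton(theta=2.0, n=500)
tau = kendall_tau(draws)
```

    The same conditional-inversion idea extends to the other one-parameter Archimedean families named in the abstract (Frank, Gumbel, Joe), each with its own generator.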

    Reconsidering the welfare cost of inflation in the US : a nonparametric estimation of the nonlinear long-run money-demand equation using projection pursuit regressions

    Get PDF
    This paper first estimates the appropriate (log-log or semi-log) linear long-run money-demand relationship capturing the behavior of US money demand over the period 1980:Q1–2010:Q4, using the standard linear cointegration procedures found in the literature, as well as the corresponding nonparametric version based on projection pursuit regression (PPR) methods. We then compare the resulting welfare costs of inflation obtained from the linear and nonlinear money-demand cointegrating equations. We make the following observations: (i) the appropriate money-demand relationship for the period 1980:Q1–2010:Q4 is captured by a semi-log function; (ii) based on the estimation of semi-log cointegrating equations, the welfare cost of inflation was found to lie at most between 0.0131% of GDP and 0.2186% of GDP for inflation rates between 0 and 10%; and (iii) in comparison, the welfare cost of inflation obtained from the semi-log nonlinear long-run money-demand function, derived using the PPR method, ranges between 0.4930% and 1.9468% of GDP for inflation of 0–10%. However, the standard errors associated with the welfare cost estimates obtained from PPR, relative to the linear models, indicate that the nonlinear money demand provides more precise estimates of the welfare costs primarily for higher rates of inflation.
    http://link.springer.com/journal/181 (2015-04-30)
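    To make the welfare-cost calculation concrete, here is a minimal sketch of the standard Bailey-type consumer-surplus measure for a semi-log money-demand function m(i) = A * exp(-xi * i): the welfare cost at nominal interest rate i is the area under the demand curve from 0 to i minus the rectangle i * m(i). The parameter values A and xi are illustrative assumptions, not the paper's estimates.

```python
import math

def money_demand(i, A=0.3, xi=7.0):
    """Semi-log money demand m(i) = A * exp(-xi * i), with m expressed as a
    share of GDP (A and xi are illustrative, not estimated values)."""
    return A * math.exp(-xi * i)

def welfare_cost_numeric(i, n=100000, **kw):
    """Bailey consumer-surplus measure: area under the demand curve from 0 to i
    minus i * m(i), via a simple midpoint Riemann sum."""
    h = i / n
    area = sum(money_demand((k + 0.5) * h, **kw) * h for k in range(n))
    return area - i * money_demand(i, **kw)

def welfare_cost_closed(i, A=0.3, xi=7.0):
    """Closed form for the semi-log case:
    (A / xi) * (1 - exp(-xi * i)) - i * A * exp(-xi * i)."""
    return (A / xi) * (1.0 - math.exp(-xi * i)) - i * A * math.exp(-xi * i)

cost_at_10pct = welfare_cost_closed(0.10)  # welfare cost at 10% inflation
```

    Agreement between the numeric and closed-form versions is a cheap sanity check before swapping in a nonparametric demand curve, where only the numeric route is available.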

    The role of current account balance in forecasting the US equity premium : evidence from a quantile predictive regression approach

    Get PDF
    The purpose of this paper is to investigate whether the current account balance can help in forecasting the quarterly S&P500-based equity premium out-of-sample. We consider an out-of-sample period of 1970:Q3 to 2014:Q4, with a corresponding in-sample period of 1947:Q2 to 1970:Q2. We employ a quantile predictive regression model. The quantile-based approach is more informative than any linear model, as it investigates the ability of the current account to forecast the entire conditional distribution of the equity premium, rather than being restricted to the conditional mean. In addition, we employ a recursive estimation of both the conditional-mean and quantile predictive regression models over the out-of-sample period, which allows for time-varying parameters in the forecast-evaluation part of the sample for both models. Our results indicate that, unlike the linear (mean-based) predictive regression model, the quantile regression model shows that (changes in) the real current account balance contain significant out-of-sample information, especially when the stock market is performing poorly (below the quantile value of 0.3), but not when the market is in normal to bullish modes (quantile values above 0.3). This result is intuitive in the sense that, when the markets are performing average to well, that is, around the median and above of the conditional distribution of the equity premium, the excess return is inherently a random walk, and hence no information from a predictor (changes in the real current account balance) is useful.
    http://link.springer.com/journal/11079 (2018-02-26)
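    The mechanics behind a quantile predictive regression can be sketched with the check (pinball) loss: the tau-quantile line is the one minimizing average pinball loss. The toy grid-search solver and synthetic data below are illustrative assumptions, not the authors' estimation procedure, which would use a proper linear-programming or interior-point quantile-regression solver.

```python
import random

def pinball_loss(tau, y_true, y_pred):
    """Average check (pinball) loss at quantile level tau."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        e = y - q
        total += tau * e if e >= 0 else (tau - 1.0) * e
    return total / len(y_true)

def fit_quantile_line(tau, x, y, a_grid, b_grid):
    """Coarse grid search for the tau-quantile line y = a + b * x minimizing
    pinball loss -- a toy stand-in for a real quantile-regression solver."""
    best = None
    for a in a_grid:
        for b in b_grid:
            loss = pinball_loss(tau, y, [a + b * xi for xi in x])
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

# Synthetic predictor/response with a true slope of 0.5 and Gaussian noise.
rng = random.Random(1)
x = [rng.uniform(-1.0, 1.0) for _ in range(400)]
y = [0.5 * xi + rng.gauss(0.0, 1.0) for xi in x]
grid = [i / 10 for i in range(-20, 21)]
a_med, b_med = fit_quantile_line(0.5, x, y, grid, grid)  # median regression
```

    Re-fitting the same line for tau = 0.1, 0.2, ..., 0.9 traces out the whole conditional distribution, which is the sense in which the quantile approach is richer than a conditional-mean regression.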

    Predicting stock market movements with a time-varying consumption-aggregate wealth ratio

    Get PDF
    Please read the abstract in the article.
    http://www.elsevier.com/locate/iref (2020-01-01)

    Forecasting Nevada gross gaming revenue and taxable sales using coincident and leading employment indexes

    Get PDF
    This article provides out-of-sample forecasts of Nevada gross gaming revenue (GGR) and taxable sales using a battery of linear and non-linear forecasting models and univariate and multivariate techniques. The linear models include vector autoregressive and vector error-correction models with and without Bayesian priors. The non-linear models include non-parametric and semi-parametric models, smooth transition autoregressive models, and artificial neural network autoregressive models. In addition to GGR and taxable sales, we employ recently constructed coincident and leading employment indexes for Nevada's economy. We conclude that the non-linear models generally outperform …
    http://link.springer.com/journal/181
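    A minimal sketch of the recursive out-of-sample evaluation underlying such horse races: forecast one step ahead with an expanding estimation window, here pitting a naive no-change forecast against a re-estimated AR(1) on simulated data. The models and data are illustrative stand-ins for the much richer set used in the article.

```python
import math
import random

def ar1_ols(series):
    """OLS fit of y_t = c + phi * y_{t-1} on the sample provided."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    phi = sxy / sxx
    return my - phi * mx, phi

def recursive_oos_rmse(series, split):
    """Expanding-window one-step-ahead forecasts over the out-of-sample period,
    comparing a naive no-change forecast with a re-estimated AR(1)."""
    errs_naive, errs_ar = [], []
    for t in range(split, len(series)):
        history = series[:t]
        c, phi = ar1_ols(history)  # re-estimate using data up to t-1 only
        errs_naive.append(series[t] - history[-1])
        errs_ar.append(series[t] - (c + phi * history[-1]))
    rmse = lambda errs: math.sqrt(sum(e * e for e in errs) / len(errs))
    return rmse(errs_naive), rmse(errs_ar)

# Simulated mean-reverting series: AR(1) with phi = 0.5 and unit-variance shocks.
rng = random.Random(3)
y = [0.0]
for _ in range(400):
    y.append(0.5 * y[-1] + rng.gauss(0.0, 1.0))
rmse_naive, rmse_ar = recursive_oos_rmse(y, split=100)
```

    Because the data are genuinely mean-reverting, the AR(1) should beat the no-change benchmark on RMSE; on a near-random-walk series the ranking would typically flip.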

    Was the recent downturn in US real GDP predictable?

    Get PDF
    This article uses a small set of variables – real GDP, the inflation rate and the short-term interest rate – and a rich set of models – atheoretical (time series) and theoretical (structural), linear and nonlinear, as well as classical and Bayesian models – to consider whether we could have predicted the recent downturn of US real GDP. Comparing the performance of the models to the benchmark random-walk model by root mean-square errors, the two structural (theoretical) models, especially the nonlinear model, perform well on average across all forecast horizons in our ex post, out-of-sample forecasts, although at specific forecast horizons certain nonlinear atheoretical models perform best. The nonlinear theoretical model also dominates in our ex ante, out-of-sample forecast of the Great Recession, suggesting that developing forward-looking, micro-founded, nonlinear, dynamic stochastic general equilibrium models of the economy may prove crucial in forecasting turning points.
    http://www.tandfonline.com/loi/raec20 (2017-04-30)

    Forecasting aggregate retail sales : the case of South Africa

    Get PDF
    Forecasting aggregate retail sales may improve portfolio investors' ability to predict movements in the stock prices of retailing chains. Therefore, this paper uses 26 (23 single and 3 combination) forecasting models to forecast South Africa's aggregate seasonal retail sales. We use data from 1970:01–2012:05, with 1987:01–2012:05 as the out-of-sample period. Unlike the previous literature on retail sales forecasting, we not only look at a wider array of linear and nonlinear models, but also generate multi-step-ahead forecasts using a real-time recursive estimation scheme over the out-of-sample period, to better mimic the practical scenario faced by agents making retailing decisions. In addition, we deviate from the uniform symmetric quadratic loss function typically used in forecast evaluation exercises by considering loss functions that overweight forecast errors in booms and recessions. Focusing on the single models alone, results show that their performance differs greatly across forecast horizons and weighting schemes, with no single model performing best across the various scenarios. However, the combination forecast models, especially the discounted mean-square forecast error method, which weights current information more heavily than past information, not only produced better forecasts but were also largely unaffected by business cycles and time horizons. This result, along with the fact that individual nonlinear models performed better than linear models, leads us to conclude that theoretical research on retail sales should look at developing dynamic stochastic general equilibrium models which not only incorporate learning behaviour, but also allow the behavioural parameters of the model to be state-dependent, to account for regime-switching behaviour across alternative states of the economy.
    http://www.elsevier.com/locate/ijpe
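    The discounted mean-square forecast error combination mentioned above can be sketched as weighting each model inversely to its discounted sum of squared forecast errors, with a discount factor below one downweighting older errors. The discount factor and the toy error series below are illustrative assumptions, not the paper's settings.

```python
def discounted_msfe_weights(forecast_errors, delta=0.95):
    """Combination weights inversely proportional to each model's discounted
    mean-squared forecast error; delta < 1 overweights recent errors."""
    scores = []
    for errs in forecast_errors:
        T = len(errs)
        # Most recent error (t = T - 1) gets discount delta**0 = 1.
        scores.append(sum(delta ** (T - 1 - t) * errs[t] ** 2 for t in range(T)))
    inv = [1.0 / s for s in scores]
    total = sum(inv)
    return [w / total for w in inv]

# Model 1 improved recently (small late errors) while model 2 deteriorated;
# with heavy discounting, model 1 earns the larger combination weight.
e1 = [2.0, 2.0, 0.5, 0.5]
e2 = [0.5, 0.5, 2.0, 2.0]
w = discounted_msfe_weights([e1, e2], delta=0.5)
```

    Setting delta = 1 recovers plain inverse-MSFE weighting, which is why the discounted version adapts faster when relative model performance shifts across the business cycle.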

    Do terror attacks predict gold returns? Evidence from a quantile-predictive-regression approach

    Get PDF
    A significant body of research has studied how terror attacks affect financial markets. We contribute to this research by studying whether terror attacks, in addition to standard predictors considered in earlier research, help to predict gold returns. To this end, we use a quantile-predictive-regression (QPR) approach that accounts for model uncertainty and model instability. We find that terror attacks have predictive value for the lower and especially for the upper quantiles of the conditional distribution of gold returns.
    http://www.elsevier.com/locate/qref (2018-08-30)

    Some problems in multivariate spatial and spatio-temporal modeling

    No full text
    This thesis addresses some problems in multivariate spatial and spatio-temporal modeling using a Bayesian approach. The data are point-referenced in a region. The thesis comprises three main parts. The first part discusses problems in spatio-temporal change-point modeling by introducing separable spatio-temporal covariance functions that change with time, thus addressing the change of various features in the model, namely the mean, variance and correlation. The second part of the thesis uses the famous Olcott Chicago land-value data to examine and analyze two years of land values. It also develops distributions of gradients on the surface and uses them to study certain second-order behavior of the response surfaces. The third part develops novel cross-covariance functions by convolving stationary covariance functions to build valid multivariate spatial models. An environmental dataset obtained from the California Air Resources Board is used to analyze and compare the performance of this model with the existing model of coregionalization (Wackernagel, 2003).
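    A separable spatio-temporal covariance factors as the product of a purely spatial and a purely temporal covariance, C((s, t), (s', t')) = Cs(s, s') * Ct(t, t'), so the joint covariance matrix over all site-time pairs is a Kronecker product. The exponential covariance functions and 1-D coordinates below are illustrative assumptions, not the thesis's specification.

```python
import math

def exp_cov(points, range_):
    """Exponential covariance matrix C(d) = exp(-d / range_) over 1-D points."""
    return [[math.exp(-abs(a - b) / range_) for b in points] for a in points]

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists; for a
    separable model this is the joint covariance over all (site, time) pairs."""
    rB, cB = len(B), len(B[0])
    return [[A[i // rB][j // cB] * B[i % rB][j % cB]
             for j in range(len(A[0]) * cB)]
            for i in range(len(A) * rB)]

spatial = exp_cov([0.0, 1.0, 2.5], range_=1.0)  # 3 sites on a line
temporal = exp_cov([0.0, 1.0], range_=2.0)      # 2 time points
C = kron(spatial, temporal)                     # 6 x 6 joint covariance
```

    The Kronecker structure is what makes separable models computationally attractive (determinants and inverses factor), and it is exactly the structure one gives up when letting the covariance change at a change-point.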

    Bivariate Zero-Inflated Regression for Count Data: A Bayesian Approach with Application to Plant Counts

    No full text
    Lately, bivariate zero-inflated (BZI) regression models have been used in many instances in the medical sciences to model excess zeros. Examples include the BZI Poisson (BZIP) and BZI negative binomial (BZINB) models. Such formulations vary in the basic modeling aspect and use the EM algorithm (Dempster, Laird and Rubin, 1977) for parameter estimation. A different modeling formulation in the Bayesian context is given by Dagne (2004). We extend the modeling to a more general setting for multivariate ZIP models for count data with excess zeros as proposed by Li, Lu, Park, Kim, Brinkley and Peterson (1999), focusing on a particular bivariate regression formulation. For the basic formulation in the bivariate case, we assume that the Xi are (latent) independent Poisson random variables with parameters λi, i = 0, 1, 2. A bivariate count response vector (Y1, Y2) follows a mixture of four distributions: p0 stands for the mixing probability of a point mass distribution at (0, 0); p1, the mixing probability that Y2 = 0 while Y1 = X0 + X1; p2, the mixing probability that Y1 = 0 while Y2 = X0 + X2; and finally (1 - p0 - p1 - p2), the mixing probability that Yi = Xi + X0, i = 1, 2. The choice of the parameters {pi, λi, i = 0, 1, 2} ensures that the marginal distributions of the Yi are zero-inflated Poisson(λ0 + λi). All the parameters thus introduced are allowed to depend on covariates through canonical-link generalized linear models (McCullagh and Nelder, 1989). This flexibility allows for a range of real-life applications, especially in the medical and biological fields, where the counts are bivariate in nature (with strong association between the processes) and where there is an excess of zeros in one or both processes. Our contribution in this paper is to employ a fully Bayesian approach consolidating the work of Dagne (2004) and Li et al. (1999), generalizing the modeling and sampling-based methods described by Ghosh, Mukhopadhyay and Lu (2006) to estimate the parameters and obtain posterior credible intervals both when covariates are not available and when they are. In this context, we provide explicit data-augmentation techniques that lend themselves to easier implementation of the Gibbs sampler by giving rise to well-known, closed-form posterior distributions in the bivariate ZIP case. We then use simulations to explore the effectiveness of estimation under the Bayesian BZIP procedure, comparing its performance to the Bayesian and classical ZIP approaches. Finally, we demonstrate the methodology on bivariate plant count data with excess zeros collected on plots in the Phoenix metropolitan area and compare the results with independent ZIP regression models fitted to both processes.
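    The four-component mixture described in the abstract can be simulated directly. The sketch below draws from it, using illustrative mixing probabilities and Poisson rates (not the paper's values), and checks that the Y1 marginal behaves like a zero-inflated Poisson(λ0 + λ1): zero probability p0 + p2 + (1 - p0 - p2) * exp(-(λ0 + λ1)) and mean (1 - p0 - p2) * (λ0 + λ1).

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplication method for Poisson draws (fine for small lam > 0)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def sample_bzip(rng, p0, p1, p2, lam0, lam1, lam2):
    """One draw from the bivariate zero-inflated Poisson mixture: a point mass
    at (0, 0) w.p. p0; (X0 + X1, 0) w.p. p1; (0, X0 + X2) w.p. p2; and
    (X0 + X1, X0 + X2) otherwise, with Xi ~ Poisson(lam_i) independent."""
    u = rng.random()
    x0, x1, x2 = poisson(rng, lam0), poisson(rng, lam1), poisson(rng, lam2)
    if u < p0:
        return (0, 0)
    if u < p0 + p1:
        return (x0 + x1, 0)
    if u < p0 + p1 + p2:
        return (0, x0 + x2)
    return (x0 + x1, x0 + x2)

rng = random.Random(7)
draws = [sample_bzip(rng, 0.2, 0.1, 0.1, lam0=1.0, lam1=2.0, lam2=1.5)
         for _ in range(20000)]
share_y1_zero = sum(1 for y1, _ in draws if y1 == 0) / len(draws)
mean_y1 = sum(y1 for y1, _ in draws) / len(draws)
```

    With these values the Y1 marginal is ZIP with zero-inflation p0 + p2 = 0.3 and Poisson rate λ0 + λ1 = 3, so the zero share should sit near 0.3 + 0.7 * exp(-3) ≈ 0.335 and the mean near 0.7 * 3 = 2.1. Sharing X0 across both components is what induces the positive association between Y1 and Y2.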