
    An Out of Sample Test for Granger Causality

    Granger (1980) summarizes his personal viewpoint on testing for causality and outlines what he considers to be a useful operational version of his original definition of causality (Granger (1969)), which he notes was partially alluded to in Wiener (1958). This operational version is based on a comparison of the 1-step-ahead predictive ability of competing models. However, Granger concludes his discussion by noting that it is common practice to test for Granger causality using in-sample F-tests, and the practice of using in-sample Granger causality tests remains prevalent. In this paper we develop simple (nonlinear) out-of-sample predictive ability tests of the Granger non-causality null hypothesis. In addition, Monte Carlo experiments are used to investigate the finite sample properties of the test. An empirical illustration shows that the choice of in-sample versus out-of-sample Granger causality tests can crucially affect conclusions about the predictive content of money for output.
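
    The flavor of such a test can be illustrated with a small sketch (not the authors' statistic): form recursive 1-step-ahead forecasts from a restricted autoregression and from an unrestricted model that adds lags of the candidate causal variable, then compare mean square forecast errors with a naive Diebold-Mariano-type statistic. The lag length, sample split, and the simulated data in the usage lines are illustrative assumptions, and the variance estimate ignores serial correlation and parameter estimation error, which the paper's tests treat formally.

```python
import numpy as np

def oos_granger_msfe(y, x, p=2, split=0.5):
    """Recursive 1-step-ahead MSFE comparison (illustrative sketch).

    Restricted model:   y_t on a constant and p lags of y
    Unrestricted model: y_t on a constant, p lags of y, and p lags of x
    Under Granger non-causality, adding lags of x should not improve forecasts.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    lags_y = np.column_stack([y[p - j - 1:T - j - 1] for j in range(p)])
    lags_x = np.column_stack([x[p - j - 1:T - j - 1] for j in range(p)])
    target = y[p:]
    Zr = np.column_stack([np.ones(T - p), lags_y])   # restricted regressors
    Zu = np.column_stack([Zr, lags_x])               # unrestricted regressors
    R = int(split * (T - p))                         # first forecast origin
    err_r, err_u = [], []
    for t in range(R, T - p):
        br, *_ = np.linalg.lstsq(Zr[:t], target[:t], rcond=None)
        bu, *_ = np.linalg.lstsq(Zu[:t], target[:t], rcond=None)
        err_r.append(target[t] - Zr[t] @ br)         # out-of-sample errors
        err_u.append(target[t] - Zu[t] @ bu)
    err_r, err_u = np.array(err_r), np.array(err_u)
    d = err_r ** 2 - err_u ** 2                      # loss differential
    dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # naive DM-type statistic
    return (err_r ** 2).mean(), (err_u ** 2).mean(), dm

# Illustrative usage on simulated data where x does Granger-cause y
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()
print(oos_granger_msfe(y, x))
```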

    Real-time datasets really do make a difference: definitional change, data release, and forecasting

    In this paper, the authors empirically assess the extent to which early release inefficiency and definitional change affect prediction precision. In particular, they carry out a series of ex-ante prediction experiments in order to examine: the marginal predictive content of the revision process; the trade-offs associated with predicting different releases of a variable; the importance of particular forms of definitional change, which the authors call "definitional breaks"; and the rationality of early releases of economic variables. An important feature of their rationality tests is that they are based solely on the examination of ex-ante predictions, rather than on in-sample regression analysis, as are many tests in the extant literature. Their findings point to the importance of making real-time datasets available to forecasters, as the revision process has marginal predictive content and predictive accuracy increases when multiple releases of data are used in specifying and estimating prediction models. The authors also present new evidence that early releases of money are rational, whereas early releases of prices and output are not. Moreover, they find that regardless of which release of their price variable is specified as the "target" variable to be predicted, using only "first release" data in model estimation and prediction construction yields mean square forecast error (MSFE) "best" predictions. On the other hand, models estimated and implemented using "latest available release" data are MSFE-best for predicting all releases of money. The authors argue that these contradictory findings are due to the relevance of definitional breaks in the data generating processes of the variables that they examine. In an empirical analysis, they examine the real-time predictive content of money for income and find that vector autoregressions with money do not perform significantly worse than autoregressions when predicting output during the last 20 years.
    Keywords: Economic forecasting; Econometrics
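
    As a rough illustration of the release trade-offs discussed above, the following sketch (not the authors' code) assumes a real-time data matrix with observation dates on the rows and vintage (release) dates on the columns, extracts first-release and latest-available series, and compares recursive AR(1) MSFEs against a chosen target release. The AR(1) specification, the helper names, and the `rtm` object in the usage comments are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def first_release(vintages: pd.DataFrame) -> pd.Series:
    """First announced value for each observation date.

    `vintages` is assumed to be a real-time data matrix: rows indexed by
    observation date, columns by vintage (release) date, NaN where a value
    had not yet been published, and every row has at least one published value.
    """
    return vintages.apply(lambda row: row.dropna().iloc[0], axis=1)

def latest_release(vintages: pd.DataFrame) -> pd.Series:
    """Most recently published value for each observation date."""
    return vintages.apply(lambda row: row.dropna().iloc[-1], axis=1)

def ar1_msfe(series: pd.Series, target: pd.Series, R: int) -> float:
    """Recursive 1-step-ahead MSFE of an AR(1) fit on `series`,
    evaluated against the chosen `target` release."""
    s, tgt = series.values.astype(float), target.values.astype(float)
    errors = []
    for t in range(R, len(s) - 1):
        y, ylag = s[1:t + 1], s[:t]
        slope, intercept = np.polyfit(ylag, y, 1)   # AR(1) by least squares
        forecast = slope * s[t] + intercept
        errors.append(tgt[t + 1] - forecast)
    return float(np.mean(np.square(errors)))

# e.g., with `rtm` a vintages DataFrame and R an initial estimation window:
#   ar1_msfe(first_release(rtm), latest_release(rtm), R)   # first-release data, latest target
#   ar1_msfe(latest_release(rtm), latest_release(rtm), R)  # latest-release data and target
```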

    A Randomized Procedure for Choosing Data Transformation

    Standard unit root and stationarity tests (see e.g. Dickey and Fuller (1979)) assume linearity under both the null and the alternative hypothesis. Violation of this linearity assumption can result in severe size and power distortion, both in finite and large samples. Thus, it is reasonable to address the problem of data transformation before running a unit root test. In this paper we propose a simple randomized procedure, coupled with sample conditioning, for choosing between levels and log-levels specifications in the presence of deterministic and/or stochastic trends. In particular, we add a randomized component to a basic test statistic, proceed by conditioning on the sample, and show that, for all samples except a set of measure zero, the statistic has a χ² limiting distribution under the null hypothesis (log linearity), while it diverges under the alternative hypothesis (level linearity). Once the proper data transformation has been chosen, we are left with the standard problem of testing for a unit root, either in levels or in logs. Monte Carlo findings suggest that the proposed test has good finite sample properties for samples of at least 300 observations. In addition, an examination of the King, Plosser, Stock and Watson (1991) dataset is carried out, and evidence in favor of using logged data is provided.
    Keywords: Deterministic trend, nonlinear transformation, nonstationarity, randomized procedure
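
    The abstract does not reproduce the statistic itself, so the sketch below only illustrates the workflow it describes: a hypothetical, user-supplied randomized, sample-conditioned statistic is compared against a chi-squared critical value, log-linearity is retained or rejected accordingly, and a standard ADF unit root test is then run on the chosen transformation. The degrees of freedom and significance level are placeholders, not values taken from the paper.

```python
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.stattools import adfuller

def choose_transformation_then_test(series, randomized_stat, df=1, alpha=0.05):
    """Decision rule sketched from the abstract (illustrative only).

    `randomized_stat` stands in for the paper's randomized, sample-conditioned
    statistic: chi-squared with `df` degrees of freedom under the null of
    log-linearity, divergent under level linearity.
    """
    series = np.asarray(series, float)
    stat = randomized_stat(series)
    if stat > chi2.ppf(1 - alpha, df):
        transformed, label = series, "levels"        # log-linearity rejected
    else:
        transformed, label = np.log(series), "logs"  # log-linearity retained
    adf_stat, pvalue, *_ = adfuller(transformed)     # standard unit root step
    return label, adf_stat, pvalue
```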

    Bootstrap Specification Tests with Dependent Observations and Parameter Estimation Error

    This paper introduces a parametric specification test for diffusion processes which is based on a bootstrap procedure that accounts for data dependence and parameter estimation error. The proposed bootstrap procedure additionally leads to straightforward generalizations of the conditional Kolmogorov test of Andrews (1997) and the conditional mean test of Whang (2000) to the case of dependent observations. The bootstrap hinges on a twofold extension of the Politis and Romano (1994) stationary bootstrap: first, we provide an empirical process version of this bootstrap, and second, we account for parameter estimation error. One important feature of this new bootstrap is that one need not specify the conditional distribution given the entire history of the process when forming conditional Kolmogorov tests. Hence, the bootstrap, when used to extend Andrews' (1997) conditional Kolmogorov test to the case of data dependence, allows for dynamic misspecification under both hypotheses. An example based on a version of the Cox, Ingersoll and Ross square-root process is outlined and related Monte Carlo experiments are carried out. These experiments suggest that the bootstrap has excellent finite sample properties, even for samples as small as 500 observations, when tests are formed using critical values constructed with as few as 100 bootstrap replications.
    Keywords: Diffusion process, parameter estimation error, specification test, stationary bootstrap
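
    The resampling step that the paper extends is the Politis and Romano (1994) stationary bootstrap, which can be sketched as follows; the block-length parameter and RNG seed are illustrative, and the paper's empirical-process version and parameter estimation error correction are not shown.

```python
import numpy as np

def stationary_bootstrap(x, p=0.1, rng=None):
    """One Politis-Romano (1994) stationary-bootstrap resample.

    Blocks start at uniformly drawn indices and have geometric lengths with
    mean 1/p; the series is wrapped circularly, so the resampled series is
    stationary conditional on the data.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    idx = np.empty(n, dtype=int)
    t = 0
    while t < n:
        start = rng.integers(n)                      # new block start
        length = rng.geometric(p)                    # geometric block length
        block = (start + np.arange(length)) % n      # circular wrapping
        take = min(length, n - t)
        idx[t:t + take] = block[:take]
        t += take
    return x[idx]
```

    Bootstrap critical values for the specification tests are then obtained by recomputing the relevant statistic on many such resamples, with the paper's additional adjustment for parameter estimation error.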

    Predictive density construction and accuracy testing with multiple possibly misspecified diffusion models

    This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, the authors first outline a simple simulation-based framework for constructing predictive densities for one-factor and stochastic volatility models. Then, they construct accuracy assessment tests that are in the spirit of Diebold and Mariano (1995) and White (2000). In order to establish the asymptotic properties of their tests, the authors also develop a recursive variant of the nonparametric simulated maximum likelihood estimator of Fermanian and Salanié (2004). In an empirical illustration, the predictive densities from several models of the one-month federal funds rate are compared.
    Keywords: Econometric models - Evaluation; Stochastic analysis
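
    A minimal sketch of the simulation-based construction for a one-factor model: simulate many short paths of a square-root (CIR-type) diffusion with an Euler scheme and take the empirical distribution of the terminal values as the predictive density. The parameter values, horizon, and the truncation fix for negative rates are illustrative assumptions, not the authors' recursive simulated-likelihood-based construction.

```python
import numpy as np

def cir_predictive_draws(r0, kappa, theta, sigma, horizon=1/12,
                         n_paths=10000, n_steps=30, rng=None):
    """Simulation-based predictive draws for a square-root (CIR) process,
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW, via an Euler scheme with
    truncation to keep the simulated rate nonnegative."""
    rng = np.random.default_rng(rng)
    dt = horizon / n_steps
    r = np.full(n_paths, r0, dtype=float)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        r = r + kappa * (theta - np.maximum(r, 0.0)) * dt \
              + sigma * np.sqrt(np.maximum(r, 0.0)) * dw
    return np.maximum(r, 0.0)

# The empirical distribution of the draws serves as the model's predictive density:
draws = cir_predictive_draws(r0=0.05, kappa=0.3, theta=0.05, sigma=0.1)
pred_cdf = lambda u: np.mean(draws <= u)   # empirical predictive CDF at u
```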

    International Evidence on the Efficacy of New-Keynesian Models of Inflation Persistence

    In this paper we take an agnostic view of the Phillips curve debate and carry out an empirical investigation of the relative and absolute efficacy of Calvo sticky price (SP), sticky information (SI), and sticky price with indexation (SPI) models, with emphasis on their ability to mimic inflationary dynamics. In particular, we look at evidence for a group of 13 OECD countries, and we consider three alternative measures of inflationary pressure: the output gap, labor share, and unemployment. We find that the Calvo SP and the SI models essentially perform no better than a strawman constant-inflation model when used to explain inflation persistence. Indeed, virtually all inflationary dynamics end up being captured by the residuals of the estimated versions of these models. We find that the SPI model is preferable because it captures the type of strong inflationary persistence that has in the past characterized the economies of the countries in our sample. However, two caveats to this conclusion are that the improvement in performance is driven mostly by the time series part of the model (i.e. lagged inflation) and that the SPI model overemphasizes inflationary persistence. Thus, there appears to be room for improvement via either modified versions of the above models or via development of new models that better "track" inflation persistence.
    Keywords: sticky price, sticky information, empirical distribution, model selection
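
    The persistence question can be made concrete with a small reduced-form sketch (not the authors' estimation strategy, which compares the structural SP, SI, and SPI specifications): a hybrid Phillips-curve regression with lagged inflation, an expectations proxy, and a forcing variable, whose residual dynamics can then be inspected. Using OLS with realized future inflation as the expectations proxy is a simplification; the literature typically uses GMM with instruments.

```python
import numpy as np

def hybrid_phillips_ols(inflation, forcing):
    """OLS sketch of a reduced-form hybrid Phillips curve,
        pi_t = c + gb*pi_{t-1} + gf*pi_{t+1} + k*x_t + e_t,
    with realized pi_{t+1} standing in for expected inflation (a crude proxy)."""
    pi = np.asarray(inflation, float)
    x = np.asarray(forcing, float)
    y = pi[1:-1]                                      # pi_t
    Z = np.column_stack([np.ones(len(y)),
                         pi[:-2],                     # pi_{t-1}
                         pi[2:],                      # pi_{t+1} (expectations proxy)
                         x[1:-1]])                    # forcing variable x_t
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return beta, resid   # residual persistence is what the paper scrutinizes
```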

    Let's Get "Real" about Using Economic Data

    We show that using data which are properly available in real time when assessing the sensitivity of asset prices to economic news leads to different empirical findings than when data availability and timing issues are ignored. We do this by focusing on a particular example, namely Chen, Roll and Ross (1986), and examine whether innovations to economic variables can be viewed as risks that are rewarded in asset markets. Our findings support the view that data uncertainty is sufficiently prevalent to warrant careful use of real-time data when forming real-time news measures, and in general when undertaking empirical financial investigations involving macroeconomic data.
    Keywords: Market efficiency, expectations, news, data revision process
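
    A minimal sketch of what "forming real-time news measures" involves, under an assumed real-time data matrix (rows are observation dates, columns are vintage dates, NaN where not yet published): the surprise in each announcement is the announced value minus a forecast built only from the vintage available just before it. The AR(1) forecast and the matrix layout are illustrative assumptions, not the construction in Chen, Roll and Ross (1986) or in this paper.

```python
import numpy as np
import pandas as pd

def real_time_news(vintages: pd.DataFrame) -> pd.Series:
    """Surprise ('news') in each first release, computed using only the
    information that was actually available before the announcement."""
    news = {}
    for j in range(1, vintages.shape[1]):
        prev = vintages.iloc[:, j - 1].dropna().values.astype(float)
        curr = vintages.iloc[:, j].dropna()
        if len(prev) < 3 or len(curr) <= len(prev):
            continue                                  # no new observation announced
        slope, intercept = np.polyfit(prev[:-1], prev[1:], 1)   # AR(1) on prior vintage
        expected = slope * prev[-1] + intercept       # real-time expectation
        announced_date = curr.index[len(prev)]        # first newly published obs
        news[announced_date] = curr.iloc[len(prev)] - expected
    return pd.Series(news)
```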

    Let's Get "Real" about Using Economic Data.

    Get PDF
    We show that using data which are properly available in real time when assessing the sensitivity of asset prices to economic news leads to different empirical findings than when data availability and timing issues are ignored. We do this by focusing on a particular example, namely Chen, Roll and Ross (1986), and examine whether innovations to economic variables can be viewed as risks that are rewarded in asset markets. Our findings support the view that data uncertainty is sufficiently prevalent to warrant careful use of real-time data when forming real-time news measures, and in general when undertaking empirical financial investigations involving macroeconomic data.

    Data Transformation and Forecasting in Models with Unit Roots and Cointegration

    We perform a series of Monte Carlo experiments in order to evaluate the impact of data transformation on forecasting models, and find that vector error-correction models dominate vector autoregressions in differenced data when the correct data transformation is used, but not when the data are incorrectly transformed, even if the true model contains cointegrating restrictions. We argue that one reason for this is the failure of standard unit root and cointegration tests under incorrect data transformation.
    Keywords: Integratedness, cointegratedness, nonlinear transformation
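
    The kind of forecast comparison involved can be sketched as follows (a simplified illustration, not the paper's experimental design): for a T x k array of levels of cointegrated series, compare recursive h-step MSFEs from a vector error-correction model with an assumed cointegrating rank against a VAR estimated on first differences, with the differenced forecasts integrated back to levels. Lag orders, the deterministic-term choice, and the cointegrating rank below are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM
from statsmodels.tsa.api import VAR

def compare_vecm_var(data, h=1, R=100):
    """Recursive h-step MSFE comparison: VECM (coint_rank=1 assumed)
    versus a VAR in first differences, both evaluated in levels."""
    data = np.asarray(data, float)
    e_vecm, e_var = [], []
    for t in range(R, len(data) - h):
        train = data[:t]
        vecm_fc = VECM(train, k_ar_diff=1, coint_rank=1,
                       deterministic="co").fit().predict(steps=h)[-1]
        dtrain = np.diff(train, axis=0)
        var_res = VAR(dtrain).fit(1)
        dfc = var_res.forecast(dtrain[-var_res.k_ar:], steps=h)
        var_fc = train[-1] + dfc.sum(axis=0)        # integrate differences back to levels
        e_vecm.append(data[t + h - 1] - vecm_fc)
        e_var.append(data[t + h - 1] - var_fc)
    return (np.mean(np.square(e_vecm), axis=0),
            np.mean(np.square(e_var), axis=0))
```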

    Predictive density accuracy tests

    This paper outlines a testing procedure for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models, and surveys existing related methods in the area of predictive density evaluation, including methods based on the probability integral transform and the Kullback-Leibler Information Criterion. The procedure is closely related to Andrews' (1997) conditional Kolmogorov test and to White's (2000) reality check approach, and involves comparing square (approximation) errors associated with models $i = 1, \ldots, n$ by constructing weighted averages over $U$ of $E\big[\big(F_i(u \mid Z^t, \theta_i^{\dagger}) - F_0(u \mid Z^t, \theta_0)\big)^2\big]$, where $F_0(\cdot \mid \cdot)$ and $F_i(\cdot \mid \cdot)$ are the true and approximate conditional distributions, $u \in U$, and $U$ is a possibly unbounded set on the real line. Appropriate bootstrap procedures for obtaining critical values for tests constructed using this measure of loss, in conjunction with predictions obtained via rolling and recursive estimation schemes, are developed. We then apply these bootstrap procedures to obtain critical values for our predictive accuracy test. A Monte Carlo experiment comparing our bootstrap methods with methods that do not include location bias adjustment terms is provided, and results indicate coverage improvement when our proposed bootstrap procedures are used. Finally, an empirical example comparing alternative predictive densities for U.S. inflation is given.
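
    A sketch of how this measure of loss can be approximated out of sample (an illustration under stated assumptions, not the paper's full statistic): since the true distribution $F_0$ is unknown, the squared distance is computed against the realized indicator $1\{y_{t+1} \le u\}$, whose conditional expectation equals $F_0(u \mid Z^t)$, and averaged over a grid approximating $U$; the relative accuracy of two models is then the difference of their losses, with bootstrap critical values used for inference as developed in the paper.

```python
import numpy as np

def distributional_loss(cdf_forecasts, realizations, u_grid, weights=None):
    """Weighted average (over a grid approximating U) of the mean squared
    distance between a model's out-of-sample predictive CDF evaluated at u
    and the realized indicator 1{y_{t+1} <= u}.

    `cdf_forecasts[t]` is a callable returning the model's predictive CDF
    at forecast origin t; `realizations[t]` is the corresponding outcome.
    """
    if weights is None:
        weights = np.full(len(u_grid), 1.0 / len(u_grid))
    y = np.asarray(realizations, float)
    loss = 0.0
    for u, w in zip(u_grid, weights):
        F = np.array([cdf(u) for cdf in cdf_forecasts])
        indicator = (y <= u).astype(float)
        loss += w * np.mean((F - indicator) ** 2)
    return loss

# Relative accuracy of model i versus a benchmark is then
#   distributional_loss(model_i_cdfs, y, u_grid) - distributional_loss(benchmark_cdfs, y, u_grid),
# with bootstrap critical values used for inference.
```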