    Inference about predictive ability

    Keywords: estimation; testing; forecasting

    Tests of Conditional Predictive Ability

    We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible.
    Keywords: Forecast Evaluation, Asymptotic Inference, Parameter-reduction Methods
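
    As a rough illustration of the kind of test the abstract describes, the sketch below (Python, assuming one-step-ahead forecasts so no HAC correction is needed) forms the loss differential between two forecasting methods, interacts it with a vector of conditioning instruments, and compares a Wald-type statistic with a chi-square critical value. The function name and the choice of instruments are illustrative assumptions, not the paper's own implementation.

        import numpy as np
        from scipy import stats

        def conditional_pa_test(loss1, loss2, instruments):
            """Wald-type test of H0: E[d_{t+1} | h_t] = 0, where d_{t+1} is the
            loss differential between the two methods (one-step-ahead case, so
            the Z_t = h_t * d_{t+1} sequence is treated as serially uncorrelated)."""
            d = np.asarray(loss1) - np.asarray(loss2)          # loss differential
            h = np.asarray(instruments)                        # (n, q) matrix of instruments h_t
            z = h * d[:, None]                                 # Z_t = h_t * d_{t+1}
            n, q = z.shape
            zbar = z.mean(axis=0)
            omega = np.atleast_2d(np.cov(z, rowvar=False, bias=True))
            stat = n * zbar @ np.linalg.solve(omega, zbar)     # n * Zbar' Omega^{-1} Zbar
            return stat, stats.chi2.sf(stat, df=q)

        # illustrative use: instruments are a constant and the lagged loss differential
        rng = np.random.default_rng(0)
        e1, e2 = rng.normal(size=500), rng.normal(size=500)    # two sets of forecast errors
        d = e1**2 - e2**2
        h = np.column_stack([np.ones(499), d[:-1]])
        print(conditional_pa_test(e1[1:]**2, e2[1:]**2, h))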

    Testing Predictive Ability and Power Robustification

    One approach to comparing forecasts is to test whether the loss from a benchmark prediction is smaller than that of its competitors. The test can be embedded into the general problem of testing functional inequalities using a one-sided Kolmogorov-Smirnov functional. This paper shows that such a test generally suffers from unstable power properties, meaning that the asymptotic power against certain local alternatives can be much smaller than the size. It then proposes a general method to robustify the power properties. This method can also be applied to testing inequalities such as stochastic dominance and moment inequalities. Simulation studies demonstrate that tests based on this paper’s approach perform quite well relative to the existing methods.
    Keywords: Inequality Restrictions, Testing Predictive Ability, One-sided Nonparametric Tests, Power Robustification
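
    For concreteness, here is a minimal sketch (Python) of the plain one-sided Kolmogorov-Smirnov functional the abstract takes as its starting point, written for the two-sample stochastic-dominance case; the paper's contribution, the power robustification, is not implemented here.

        import numpy as np

        def one_sided_ks(x, y):
            """sqrt(n) * sup_t (F_x(t) - F_y(t)) over the pooled sample points.
            Under H0 that x first-order stochastically dominates y, the population
            sup is non-positive, so large positive values speak against H0."""
            x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
            grid = np.concatenate([x, y])
            f_x = np.searchsorted(x, grid, side="right") / x.size   # empirical CDF of x
            f_y = np.searchsorted(y, grid, side="right") / y.size   # empirical CDF of y
            return np.sqrt(x.size) * np.max(f_x - f_y)

        # illustrative use: y is shifted upward, so x does not dominate it
        rng = np.random.default_rng(1)
        print(one_sided_ks(rng.normal(size=300), rng.normal(loc=0.5, size=300)))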

    Predictive Ability of QCD Sum Rules for Excited Baryons

    The masses of octet baryons are calculated by the method of QCD sum rules. Using generalized interpolating fields, three independent sets of QCD sum rules are derived which allow the extraction of low-lying N* states with spin-parity 1/2+, 1/2- and 3/2- in both the non-strange and strange channels. The predictive ability of the sum rules is examined by a Monte Carlo-based analysis procedure in which the three phenomenological parameters (mass, coupling, threshold) are treated as free parameters simultaneously. Realistic uncertainties in these parameters are obtained by simultaneously exploring all uncertainties in the QCD input parameters. Those sum rules with good predictive power are identified and their predictions are compared with experiment where available.
    Comment: 33 pages, 2 figures
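
    The Monte Carlo error analysis described here is, at its core, an uncertainty propagation exercise: draw the QCD input parameters from their error distributions, rebuild the theoretical side of the sum rule, and refit mass, coupling and threshold each time. The sketch below illustrates that idea on a purely hypothetical toy sum rule; the functional forms, input values and errors are invented for illustration and are not the sum rules derived in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical stand-in for the OPE side of a sum rule on a Borel window;
        # c1 and c2 play the role of the QCD input parameters (e.g. condensates)
        # whose uncertainties are being propagated.
        def ope_side(M, c1, c2):
            return c1 * M**2 + c2 / M**2

        # Hypothetical phenomenological side: a pole term (mass m, coupling lam2)
        # plus a constant s0 standing in for the continuum/threshold contribution.
        def phenom_side(M, m, lam2, s0):
            return lam2 * np.exp(-m**2 / M**2) + s0

        rng = np.random.default_rng(2)
        M = np.linspace(1.0, 2.0, 40)                      # Borel window
        draws = []
        for _ in range(1000):                              # Monte Carlo samples of the inputs
            c1 = rng.normal(1.0, 0.1)
            c2 = rng.normal(0.5, 0.05)
            try:
                # mass, coupling and threshold are fitted simultaneously
                p, _ = curve_fit(phenom_side, M, ope_side(M, c1, c2),
                                 p0=[1.0, 1.0, 0.5], maxfev=5000)
                draws.append(p)
            except RuntimeError:                           # discard draws where the fit fails
                continue

        masses = np.array(draws)[:, 0]
        print(f"mass = {masses.mean():.3f} +/- {masses.std():.3f}")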

    Changes in Predictive Ability with Mixed Frequency Data

    This paper proposes a new regression model - a smooth transition mixed data sampling (STMIDAS) approach - that captures recurrent changes in the ability of a high-frequency variable to predict a low-frequency variable. The STMIDAS regression is employed to test for changes in the ability of financial variables to forecast US output growth. Estimating the optimal weights for aggregating weekly data within the quarter improves the measurement of the predictive ability of the yield-curve slope for output growth. Allowing for changes in the impact of the short rate and stock returns on future growth is decisive for finding in-sample and out-of-sample evidence of their predictive ability at horizons longer than one year.
    Keywords: Smooth transition, MIDAS, Predictive ability, Asset prices, Output growth
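
    As a rough sketch of the idea (not the paper's exact specification), the Python snippet below aggregates the within-quarter weekly observations with exponential Almon lag weights and lets the slope move between two regimes through a logistic transition in an observable variable; all parameters are estimated jointly by nonlinear least squares. The variable names, weight scheme and toy data are assumptions made for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        def almon_weights(theta1, theta2, K):
            """Exponential Almon lag weights over K high-frequency lags."""
            k = np.arange(1, K + 1)
            w = np.exp(np.clip(theta1 * k + theta2 * k**2, -50, 50))  # clip for stability
            return w / w.sum()

        def stmidas_residuals(params, y, X, s):
            """y: low-frequency target (T,); X: the K weekly observations for each
            quarter (T, K); s: transition variable (T,)."""
            b0, b1, b2, th1, th2, gamma, c = params
            agg = X @ almon_weights(th1, th2, X.shape[1])   # MIDAS aggregation of weekly data
            G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))      # logistic transition function
            yhat = b0 + (b1 * (1.0 - G) + b2 * G) * agg     # regime-dependent slope
            return y - yhat

        # toy data: 200 quarters, 13 weekly observations per quarter
        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 13))
        s = rng.normal(size=200)                            # e.g. lagged output growth
        w = almon_weights(0.1, -0.05, 13)
        y = 0.5 + (1.0 * (s < 0) + 0.2 * (s >= 0)) * (X @ w) + 0.3 * rng.normal(size=200)

        fit = least_squares(stmidas_residuals, x0=[0.0, 0.5, 0.5, 0.0, -0.01, 1.0, 0.0],
                            args=(y, X, s))
        print(fit.x)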

    Recursive Predictability Tests for Real-Time Data

    We propose a sequential test for predictive ability. The test is designed for recursive regressions in which the researcher is interested in recursively assessing whether some economic variables have predictive or explanatory content for another variable. It is common in the forecasting literature to assess predictive ability by using "one-shot" tests at each estimation period. We show that this practice: (i) leads to size distortions; (ii) selects overfitted models and provides spurious evidence of in-sample predictive ability; (iii) may lower the accuracy of the model selected by the test. The usefulness of the proposed test is shown in well-known empirical applications: the real-time predictive content of money for output, and the selection between linear and non-linear models.
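
    The size-distortion point in (i) is easy to see in a small simulation: apply an ordinary 5% t-test to a useless regressor at every recursive estimation period and count how often at least one test rejects. The snippet below (a sketch of the naive recursive practice the paper warns against, not of its proposed sequential test) shows the effective rejection rate rising far above the nominal 5%.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sims, T, R = 500, 200, 100                 # R = size of the first estimation sample
        rejections = 0
        for _ in range(n_sims):
            y = rng.normal(size=T)                   # under the null, x has no content for y
            x = rng.normal(size=T)
            hit = False
            for t in range(R, T + 1):                # a "one-shot" t-test at each recursion
                Xt = np.column_stack([np.ones(t), x[:t]])
                beta, *_ = np.linalg.lstsq(Xt, y[:t], rcond=None)
                e = y[:t] - Xt @ beta
                s2 = e @ e / (t - 2)
                se = np.sqrt(s2 * np.linalg.inv(Xt.T @ Xt)[1, 1])
                if abs(beta[1] / se) > 1.96:         # nominal 5% two-sided test
                    hit = True
                    break
            rejections += hit
        print("share of simulations with at least one rejection:", rejections / n_sims)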

    In-sample tests of predictive ability: a new approach

    This paper presents analytical, Monte Carlo, and empirical evidence linking in-sample tests of predictive content and out-of-sample forecast accuracy. Our approach focuses on the negative effect that finite-sample estimation error has on forecast accuracy despite the presence of significant population-level predictive content. Specifically, we derive simple-to-use in-sample tests of not only whether a particular variable has predictive content but also whether this content is estimated precisely enough to improve forecast accuracy. Our tests are asymptotically non-central chi-square or non-central normal. We provide a convenient bootstrap method for computing the relevant critical values. In the Monte Carlo and empirical analysis, we compare the effectiveness of our testing procedure with more common testing procedures.
    Keywords: Economic forecasting
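
    The bootstrap idea can be illustrated schematically: compute an in-sample Wald statistic on the candidate predictor, then obtain its critical value by regenerating the target under the null of no predictive content. The sketch below uses a wild (Rademacher) fixed-regressor bootstrap as one such scheme; it is a generic building block, not the paper's own non-central test statistics.

        import numpy as np

        def wald_predictive_content(y, x):
            """In-sample Wald statistic for H0: x has no predictive content for y
            (zero slope in a regression of y_{t+1} on a constant and x_t)."""
            X = np.column_stack([np.ones(y.size - 1), x[:-1]])
            target = y[1:]
            beta, *_ = np.linalg.lstsq(X, target, rcond=None)
            e = target - X @ beta
            s2 = e @ e / (target.size - 2)
            var_b1 = s2 * np.linalg.inv(X.T @ X)[1, 1]
            return beta[1] ** 2 / var_b1

        def bootstrap_critical_value(y, x, level=0.05, B=999, seed=0):
            """Wild fixed-regressor bootstrap: regenerate the target under the null
            of no predictive content and recompute the statistic B times."""
            rng = np.random.default_rng(seed)
            target = y[1:]
            e0 = target - target.mean()                    # residuals under the null model
            stats = []
            for _ in range(B):
                ystar = y.copy()
                ystar[1:] = target.mean() + e0 * rng.choice([-1.0, 1.0], size=e0.size)
                stats.append(wald_predictive_content(ystar, x))
            return np.quantile(stats, 1.0 - level)

        # illustrative use
        rng = np.random.default_rng(5)
        y, x = rng.normal(size=300), rng.normal(size=300)
        print(wald_predictive_content(y, x), bootstrap_critical_value(y, x))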

    Tests of equal predictive ability with real-time data

    This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy applied to direct, multi-step predictions from both non-nested and nested linear regression models. In contrast to earlier work in the literature, our asymptotics take account of the real-time, revised nature of the data. Monte Carlo simulations indicate that our asymptotic approximations yield reasonable size and power properties in most circumstances. The paper concludes with an examination of the real-time predictive content of various measures of economic activity for inflation.
    Keywords: Economic forecasting; Real-time data
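
    For reference, the standard equal-MSE (Diebold-Mariano-type) t-statistic that such comparisons build on can be written in a few lines; a Newey-West variance with horizon-1 lags absorbs the serial correlation created by overlapping direct multi-step forecasts. The paper's contribution, adjusting the asymptotics for data revisions and nested models, is not reflected in this sketch.

        import numpy as np
        from scipy import stats

        def equal_mse_t(err1, err2, horizon):
            """t-statistic for equal mean squared forecast error, with a Bartlett/
            Newey-West long-run variance using horizon-1 lags."""
            d = np.asarray(err1) ** 2 - np.asarray(err2) ** 2   # squared-error differential
            n = d.size
            u = d - d.mean()
            lrv = u @ u / n
            for lag in range(1, horizon):                       # Bartlett kernel weights
                w = 1.0 - lag / horizon
                lrv += 2.0 * w * (u[lag:] @ u[:-lag]) / n
            t = d.mean() / np.sqrt(lrv / n)
            return t, 2.0 * stats.norm.sf(abs(t))

        # illustrative use with 4-step-ahead forecast errors from two models
        rng = np.random.default_rng(6)
        e1, e2 = rng.normal(size=120), 1.1 * rng.normal(size=120)
        print(equal_mse_t(e1, e2, horizon=4))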

    PREMIUMS/DISCOUNTS AND PREDICTIVE ABILITY OF THE SHRIMP FUTURES MARKET

    Seafood futures contracts are a novelty in the derivatives markets, with shrimp as their only representative. Unfortunately, shrimp futures contracts have had a disappointing start. The analyses focus on testing whether premiums/discounts for non-par deliverable shrimp size categories can eliminate cash price differentials, and whether the shrimp futures market can predict cash prices without bias. Results indicate ineffective premiums/discounts and predictive bias. These results are discussed against the momentous changes taking place in the seafood industry to assess the viability of seafood futures contracts.
    Keywords: Agribusiness
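
    The no-bias question is usually cast as an unbiasedness regression: regress the realized cash price at delivery on the earlier futures price and jointly test a zero intercept and a unit slope. The sketch below shows that generic test with simple homoskedastic OLS standard errors; the paper's data, size-category premiums/discounts, and exact specification are not reproduced.

        import numpy as np
        from scipy import stats

        def unbiasedness_test(cash_at_delivery, futures_price):
            """OLS regression cash = a + b * futures + e, with a joint F-test of
            H0: a = 0 and b = 1 (the futures price is an unbiased predictor)."""
            y = np.asarray(cash_at_delivery, dtype=float)
            X = np.column_stack([np.ones(y.size), np.asarray(futures_price, dtype=float)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            e = y - X @ beta
            s2 = e @ e / (y.size - 2)
            V = s2 * np.linalg.inv(X.T @ X)                 # OLS covariance of (a, b)
            r = beta - np.array([0.0, 1.0])                 # deviation from the null values
            F = (r @ np.linalg.solve(V, r)) / 2.0
            return beta, F, stats.f.sf(F, 2, y.size - 2)

        # illustrative use with simulated prices containing a small bias
        rng = np.random.default_rng(7)
        futures = 10.0 + rng.normal(size=80)
        cash = 0.3 + 0.95 * futures + 0.2 * rng.normal(size=80)
        print(unbiasedness_test(cash, futures))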
