
    Critical Values for Cointegration Tests

    This paper provides tables of critical values for some popular tests of cointegration and unit roots. Although these tables are necessarily based on computer simulations, they are much more accurate than those previously available. The results of the simulation experiments are summarized by means of response surface regressions in which critical values depend on the sample size. From these regressions, asymptotic critical values can be read off directly, and critical values for any finite sample size can easily be computed with a hand calculator. Added in 2010 version: A new appendix contains additional results that are more accurate and cover more cases than the ones in the original paper.
    Keywords: unit root test, Dickey-Fuller test, Engle-Granger test, ADF test
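
    To make the use of such a response surface concrete, the sketch below evaluates a critical value at a finite sample size from coefficients of the form cv(T) = b_inf + b1/T + b2/T^2, which is the kind of calculation the abstract describes as feasible on a hand calculator. The coefficients are illustrative placeholders, not the tabulated values from the paper.

```python
# Minimal sketch: evaluate a response-surface critical value at sample
# size T. The coefficients below are placeholders for illustration only,
# NOT the estimates tabulated in the paper.

def response_surface_cv(T, b_inf, b1, b2=0.0):
    """Critical value implied by cv(T) = b_inf + b1/T + b2/T**2."""
    return b_inf + b1 / T + b2 / T**2

b_inf, b1, b2 = -2.86, -2.74, -8.36             # hypothetical 5% coefficients
print(response_surface_cv(100, b_inf, b1, b2))  # finite-sample value at T = 100
print(b_inf)                                    # asymptotic value (T -> infinity)
```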

    Bootstrap Methods in Econometrics

    There are many bootstrap methods that can be used for econometric analysis. In certain circumstances, such as regression models with independent and identically distributed error terms, appropriately chosen bootstrap methods generally work very well. However, there are many other cases, such as regression models with dependent errors, in which bootstrap methods do not always work well. This paper discusses a large number of bootstrap methods that can be useful in econometrics. Applications to hypothesis testing are emphasized, and simulation results are presented for a few illustrative cases.
    Keywords: bootstrap, Monte Carlo test, wild bootstrap, sieve bootstrap, moving block bootstrap
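
    As a point of reference for the simplest case mentioned above, a regression model with i.i.d. errors, here is a minimal sketch of a residual bootstrap. The data are simulated for illustration, and refinements such as rescaling the residuals are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a linear regression y = X @ beta + u with i.i.d. errors.
n, B = 200, 999
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Residual bootstrap: resample residuals with replacement, rebuild y, re-estimate.
boot_betas = np.empty((B, X.shape[1]))
for b in range(B):
    u_star = rng.choice(resid, size=n, replace=True)
    y_star = X @ beta_hat + u_star
    boot_betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

print("bootstrap std. errors:", boot_betas.std(axis=0, ddof=1))
```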

    Heteroskedasticity-robust tests for structural change

    It is remarkably easy to test for structural change, of the type that the classic F or "Chow" test is designed to detect, in a manner that is robust to heteroskedasticity of possibly unknown form. This paper first discusses how to test for structural change in nonlinear regression models by using a variant of the Gauss-Newton regression. It then shows how to make these tests robust to heteroskedasticity of unknown form and discusses several related procedures for doing so. Finally, it presents the results of a number of Monte Carlo experiments designed to see how well the new tests perform in finite samples.
    Keywords: Chow test, HCCME, heteroskedasticity, artificial regression, Gauss-Newton regression, GNR, structural break
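
    The flavour of such a test can be conveyed with a generic heteroskedasticity-robust Wald version of the Chow test: interact the regressors with a post-break dummy and test the interactions using an HCCME. The sketch below uses the basic HC0 estimator and simulated data; it is not the Gauss-Newton-regression construction developed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated regression with a candidate break at a known point and
# heteroskedastic errors.
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))

break_point = n // 2
D = (np.arange(n) >= break_point).astype(float)
Z = np.column_stack([X, D[:, None] * X])      # regressors plus post-break interactions

coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
u = y - Z @ coef

# HC0 (White) covariance matrix estimator.
ZtZ_inv = np.linalg.inv(Z.T @ Z)
meat = Z.T @ (Z * (u**2)[:, None])
V = ZtZ_inv @ meat @ ZtZ_inv

# Wald test that the interaction coefficients (last k) are jointly zero.
R = np.hstack([np.zeros((k, k)), np.eye(k)])
r = R @ coef
W = r @ np.linalg.solve(R @ V @ R.T, r)
print("robust Wald statistic:", W, "p-value:", 1 - stats.chi2.cdf(W, df=k))
```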

    Model Specification Tests Against Non-Nested Alternatives

    Non-nested hypothesis tests provide a way to test the specification of an econometric model against the evidence provided by one or more non-nested alternatives. This paper surveys the recent literature on non-nested hypothesis testing in the context of regression and related models. Much of the purely statistical literature which has evolved from the fundamental work of Cox is discussed briefly or not at all. Instead, emphasis is placed on those techniques which are easy to employ in practice and are likely to be useful to applied workers.
    Keywords: Cox test, nonnested hypotheses, J test, specification tests, nonnested hypothesis test
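
    One widely used procedure of this kind, the J test, simply augments the null model with the fitted values from the non-nested alternative and tests their coefficient with an ordinary t statistic. A minimal sketch on simulated data, purely illustrative, follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two non-nested regression models for the same y:
#   H1: y = X @ beta + u        H2: y = Z @ gamma + v
n = 200
x, z = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)   # data generated under H1

# J test of H1 against H2: augment X with the fitted values from H2
# and test whether their coefficient is zero.
gamma_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
XA = np.column_stack([X, Z @ gamma_hat])

coef, *_ = np.linalg.lstsq(XA, y, rcond=None)
u = y - XA @ coef
s2 = u @ u / (n - XA.shape[1])
V = s2 * np.linalg.inv(XA.T @ XA)
t_stat = coef[-1] / np.sqrt(V[-1, -1])
p_val = 2 * (1 - stats.t.cdf(abs(t_stat), df=n - XA.shape[1]))
print("J test t statistic:", t_stat, "p-value:", p_val)
```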

    Bootstrap Hypothesis Testing

    This paper surveys bootstrap and Monte Carlo methods for testing hypotheses in econometrics. Several different ways of computing bootstrap P values are discussed, including the double bootstrap and the fast double bootstrap. It is emphasized that there are many different procedures for generating bootstrap samples for regression models and other types of model. As an illustration, a simulation experiment examines the performance of several methods of bootstrapping the supF test for structural change with an unknown break point.
    Keywords: bootstrap test, supF test, wild bootstrap, pairs bootstrap, moving block bootstrap, residual bootstrap, bootstrap P value
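
    The common core of all these procedures is the bootstrap P value itself. A minimal sketch for a statistic that rejects in the upper tail is given below; the chi-squared draws merely stand in for statistics computed from real bootstrap samples.

```python
import numpy as np

def bootstrap_p_value(tau_hat, tau_star):
    """One-sided bootstrap P value for an upper-tail test: the proportion
    of bootstrap statistics at least as extreme as the observed one."""
    tau_star = np.asarray(tau_star)
    return np.mean(tau_star >= tau_hat)

# Illustrative call: an observed statistic of 4.1 and B = 999 placeholder
# bootstrap statistics drawn from a chi-squared(1) distribution.
rng = np.random.default_rng(3)
tau_star = rng.chisquare(1, size=999)
print(bootstrap_p_value(4.1, tau_star))
```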

    Bootstrap inference in a linear equation estimated by instrumental variables

    We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that writing all the test statistics—Student's t, Anderson-Rubin, the LM statistic of Kleibergen and Moreira (K), and likelihood ratio (LR)—as functions of six random quantities leads to a number of interesting results about the properties of the tests under weak-instrument asymptotics. We then propose several new procedures for bootstrapping the three non-exact test statistics and also a new conditional bootstrap version of the LR test. These use more efficient estimates of the parameters of the reduced-form equation than existing procedures. When the best of these new procedures is used, both the K and conditional bootstrap LR tests have excellent performance under the null. However, power considerations suggest that the latter is probably the method of choice.
    Keywords: bootstrap, weak instruments, IV estimation
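
    Of the statistics listed, the Anderson-Rubin statistic is the simplest to write down. The sketch below computes it for a simulated equation with a single endogenous regressor and no included exogenous variables; it illustrates the testing problem only and does not implement the bootstrap procedures proposed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simple IV setup: y = beta * x + u, with x endogenous and l instruments W.
n, l, beta = 200, 3, 1.0
W = rng.normal(size=(n, l))
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)          # endogeneity via correlated errors
x = W @ np.full(l, 0.3) + v
y = beta * x + u

def anderson_rubin(beta0, y, x, W):
    """AR statistic for H0: beta = beta0; F(l, n - l) under the null
    with homoskedastic normal errors."""
    n, l = W.shape
    e = y - beta0 * x
    P_e = W @ np.linalg.solve(W.T @ W, W.T @ e)   # projection of e onto W
    ssr_fit = P_e @ P_e
    ssr_res = e @ e - ssr_fit
    return (ssr_fit / l) / (ssr_res / (n - l))

ar = anderson_rubin(1.0, y, x, W)
print("AR statistic:", ar, "p-value:", 1 - stats.f.cdf(ar, l, n - l))
```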

    Moments of IV and JIVE estimators

    We develop a method based on the use of polar coordinates to investigate the existence of moments for instrumental variables and related estimators in the linear regression model. For generalized IV estimators, we obtain familiar results. For JIVE, we obtain the new result that this estimator has no moments at all. Simulation results illustrate the consequences of its lack of moments.
    Keywords: instrumental variables, JIVE, moments of estimators

    Improving the reliability of bootstrap tests with the fast double bootstrap

    Two procedures are proposed for estimating the rejection probabilities of bootstrap tests in Monte Carlo experiments without actually computing a bootstrap test for each replication. These procedures are only about twice as expensive (per replication) as estimating rejection probabilities for asymptotic tests. Then a new procedure is proposed for computing bootstrap P values that will often be more accurate than ordinary ones. This “fast double bootstrap” is closely related to the double bootstrap, but it is far less computationally demanding. Simulation results for three different cases suggest that the fast double bootstrap can be very useful in practice.
    Keywords: bootstrap
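
    The bookkeeping behind a fast double bootstrap P value can be sketched as follows, assuming an upper-tail test, B first-level bootstrap statistics, and one second-level statistic per first-level sample. The placeholder draws at the end stand in for statistics computed from a real bootstrap DGP.

```python
import numpy as np

def fdb_p_value(tau_hat, tau_star, tau_star_star):
    """Fast double bootstrap P value for an upper-tail test.

    tau_hat        observed statistic
    tau_star       B first-level bootstrap statistics
    tau_star_star  B second-level statistics, one per first-level sample
    """
    tau_star = np.asarray(tau_star)
    tau_star_star = np.asarray(tau_star_star)

    # Ordinary first-level bootstrap P value.
    p1 = np.mean(tau_star > tau_hat)

    # (1 - p1) quantile of the second-level statistics, then recompute
    # the P value using that quantile in place of the observed statistic.
    q = np.quantile(tau_star_star, 1.0 - p1)
    return np.mean(tau_star > q)

# Illustrative call with placeholder draws.
rng = np.random.default_rng(5)
print(fdb_p_value(5.0, rng.chisquare(1, 999), rng.chisquare(1, 999)))
```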

    Wild bootstrap tests for IV regression

    We propose a wild bootstrap procedure for linear regression models estimated by instrumental variables. Like other bootstrap procedures that we have proposed elsewhere, it uses efficient estimates of the reduced-form equation(s). Unlike them, it takes account of possible heteroskedasticity of unknown form. We apply this procedure to t tests, including heteroskedasticity-robust t tests, and to the Anderson-Rubin test. We provide simulation evidence that it works far better than older methods, such as the pairs bootstrap. We also show how to obtain reliable confidence intervals by inverting bootstrap tests. An empirical example illustrates the utility of these procedures.
    Keywords: instrumental variables estimation, two-stage least squares, weak instruments, wild bootstrap, pairs bootstrap, residual bootstrap, confidence intervals, Anderson-Rubin test
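
    The key ingredient is the wild bootstrap error term: each residual is multiplied by an independent two-point weight so that the pattern of heteroskedasticity is preserved. The sketch below applies Rademacher weights in a plain OLS regression just to show the mechanism; the paper's procedure for IV regression also involves the reduced-form equation, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(6)

# Heteroskedastic linear regression, estimated by OLS simply to
# illustrate the wild bootstrap mechanism.
n, B = 200, 999
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n) * (1 + np.abs(x))

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_hat

boot_betas = np.empty((B, 2))
for b in range(B):
    # Rademacher weights: +1 or -1 with equal probability, one per observation.
    v = rng.choice([-1.0, 1.0], size=n)
    y_star = X @ beta_hat + u_hat * v      # wild bootstrap sample
    boot_betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

print("wild bootstrap std. errors:", boot_betas.std(axis=0, ddof=1))
```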

    The Power of Bootstrap and Asymptotic Tests

    We introduce the concept of the bootstrap discrepancy, which measures the difference in rejection probabilities between a bootstrap test based on a given test statistic and that of a (usually infeasible) test based on the true distribution of the statistic. We show that the bootstrap discrepancy is of the same order of magnitude under the null hypothesis and under non-null processes described by a Pitman drift. However, complications arise in the measurement of power. If the test statistic is not an exact pivot, critical values depend on which data-generating process (DGP) is used to determine the distribution under the null hypothesis. We propose as the proper choice the DGP which minimizes the bootstrap discrepancy. We also show that, under an asymptotic independence condition, the power of both bootstrap and asymptotic tests can be estimated cheaply by simulation. The theory of the paper and the proposed simulation method are illustrated by Monte Carlo experiments using the logit model.
    Keywords: bootstrap test, bootstrap discrepancy, Pitman drift, drifting DGP, Monte Carlo, test power, asymptotic test