
    Data-driven rate-optimal specification testing in regression models

    We propose new data-driven smooth tests for a parametric regression function. The smoothing parameter is selected through a new criterion that favors a large smoothing parameter under the null hypothesis. The resulting test is adaptive rate-optimal and consistent against Pitman local alternatives approaching the parametric model at a rate arbitrarily close to 1/\sqrt{n}. Asymptotic critical values come from the standard normal distribution, and the bootstrap can be used in small samples. A general formalization allows one to consider a large class of linear smoothing methods, which can be tailored for the detection of additive alternatives.
    Published at http://dx.doi.org/10.1214/009053604000001200 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
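
    As a rough illustration of this kind of procedure, the Python sketch below computes a standardized kernel statistic on the residuals of the parametric fit for each bandwidth in a grid, subtracts a penalty that vanishes only at the largest bandwidth (so that the largest bandwidth is favored under the null), and compares the statistic at the selected bandwidth with a standard normal critical value. The kernel, the standardization, and the penalty sequence are illustrative assumptions, not the paper's exact construction.

        import numpy as np

        def smooth_spec_test(x, resid, bandwidths, penalty_scale=2.0):
            """Data-driven smooth test sketch: pick the bandwidth maximizing a
            penalized standardized statistic, then return the selected statistic
            for comparison with a standard normal critical value."""
            n = len(resid)
            h_max = max(bandwidths)
            candidates = []
            for h in bandwidths:
                # Gaussian kernel weights between design points (scalar regressor x), diagonal removed
                k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
                np.fill_diagonal(k, 0.0)
                num = resid @ k @ resid                         # quadratic form in residuals
                var = 2.0 * np.sum((k ** 2) * np.outer(resid ** 2, resid ** 2))
                t_h = num / np.sqrt(var)                        # standardized statistic
                # The penalty is zero at the largest bandwidth and positive otherwise.
                gamma_h = 0.0 if h == h_max else penalty_scale * np.sqrt(np.log(np.log(n)))
                candidates.append((t_h - gamma_h, t_h))
            _, t_selected = max(candidates, key=lambda c: c[0])
            return t_selected  # reject at level alpha when t_selected exceeds the normal quantile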

    Conditional Moment Models under Semi-Strong Identification

    We consider models defined by conditional moment restrictions under semi-strong identification. Identification strength is defined directly through the conditional moments, which flatten as the sample size increases. The framework allows for different identification strengths across the components of the parameter. We propose a minimum distance estimator that is robust to semi-strong identification and does not rely on a user-chosen parameter, such as the number of instruments or any other smoothing parameter. Our method yields consistent and asymptotically normal estimators of each of the parameter's components. Heteroskedasticity-robust inference is possible through Wald testing without prior knowledge of the identification pattern. In simulations, we find that our estimator is competitive with alternative estimators based on many instruments. In particular, it is well centered, with better coverage rates for confidence intervals.
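
    For orientation only, the sketch below follows the textbook route of converting conditional moment restrictions E[rho(Z, theta) | X] = 0 into unconditional ones through a small, user-chosen set of instrument functions and minimizing a quadratic form in the sample moments. The paper's estimator is specifically designed to avoid such user-chosen instruments, so this code is background rather than the authors' procedure; the instrument choice and function names are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def md_estimate(rho, z, x, theta0):
            """Minimize a quadratic form in the sample moments E_n[rho(Z, theta) a(X)]
            for a small, user-chosen set of instrument functions a(X)."""
            a = np.column_stack([np.ones_like(x), x, x ** 2])   # illustrative instruments in a scalar X

            def sample_moments(theta):
                return (rho(z, theta)[:, None] * a).mean(axis=0)

            def objective(theta):
                m = sample_moments(theta)
                return float(m @ m)                             # identity weighting for simplicity

            return minimize(objective, theta0, method="Nelder-Mead").x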

    One for all and all for one: regression checks with many regressors

    We develop a novel approach to build checks of parametric regression models when many regressors are present, based on a class of sufficiently rich semiparametric alternatives, namely single-index models. We propose an omnibus test based on the kernel method that performs against a sequence of directional nonparametric alternatives as if there were only one regressor, whatever the number of regressors. This test can be viewed as a smooth version of the integrated conditional moment (ICM) test of Bierens. Qualitative information can be easily incorporated into the procedure to enhance power. In an extensive comparative simulation study, we find that our test is not very sensitive to the smoothing parameter and performs well in multidimensional settings. We then apply it to a cross-country growth regression model.
    Keywords: Dimensionality, Hypothesis testing, Nonparametric methods

    One for All and All for One: Regression Checks With Many Regressors

    We develop a novel approach to build checks of parametric regression models when many regressors are present, based on a class of sufficiently rich semiparametric alternatives, namely single-index models. We propose an omnibus test based on the kernel method that performs against a sequence of directional nonparametric alternatives as if there were only one regressor, whatever the number of regressors. This test can be viewed as a smooth version of the integrated conditional moment (ICM) test of Bierens. Qualitative information can be easily incorporated into the procedure to enhance power. Our test is not very sensitive to the smoothing parameter and performs better than several known lack-of-fit tests in multidimensional settings, as illustrated by extensive simulations and an application to a cross-country growth regression.
    Keywords: Dimensionality, Hypothesis testing, Nonparametric methods
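
    As a hedged sketch of the idea behind both versions of this paper, the code below smooths the parametric residuals along candidate one-dimensional indices beta'X and keeps the largest standardized kernel statistic over a user-supplied grid of unit-norm directions. The direction grid, Gaussian kernel, and fixed bandwidth are placeholder choices rather than the paper's construction.

        import numpy as np

        def single_index_check(xmat, resid, directions, h=0.5):
            """Largest standardized kernel statistic of the residuals over a grid
            of candidate single-index directions beta (unit vectors)."""
            best = -np.inf
            for beta in directions:
                idx = xmat @ beta                               # one-dimensional index beta'X
                k = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / h) ** 2)
                np.fill_diagonal(k, 0.0)
                num = resid @ k @ resid
                var = 2.0 * np.sum((k ** 2) * np.outer(resid ** 2, resid ** 2))
                best = max(best, num / np.sqrt(var))
            return best                                         # calibrate by bootstrap in practice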

    Data-Driven Rate-Optimal Specification Testing in Regression Models

    We propose new data-driven smooth tests for a parametric regression function. The smoothing parameter is selected through a new criterion that favors a large smoothing parameter under the null hypothesis. The resulting test is adaptive rate-optimal and consistent against Pitman local alternatives approaching the parametric model at a rate arbitrarily close to 1/\sqrt{n}. Asymptotic critical values come from the standard normal distribution, and the bootstrap can be used in small samples. A general formalization allows one to consider a large class of linear smoothing methods, which can be tailored for the detection of additive alternatives.
    Keywords: Hypothesis testing, nonparametric adaptive tests, selection methods

    Powerful nonparametric checks for quantile regression

    We address the issue of lack-of-fit testing for a parametric quantile regression. We propose a simple test that involves one-dimensional kernel smoothing, so that the rate at which it detects local alternatives is independent of the number of covariates. The test has asymptotically Gaussian critical values, and the wild bootstrap can be applied to obtain more accurate ones in small samples. Our procedure appears to be competitive with existing ones in simulations. We illustrate the usefulness of our test on birthweight data.
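
    The sketch below illustrates the generic shape of such a procedure: the quantile generalized residuals u_i = tau - 1{y_i <= q_hat(x_i)} are smoothed along a one-dimensional index, and wild-bootstrap critical values are obtained by perturbing the residuals with Rademacher weights. The index, kernel, bandwidth, and bootstrap perturbation are illustrative assumptions, not the paper's exact scheme.

        import numpy as np

        def quantile_check_stat(index, u, h=0.5):
            """Standardized kernel quadratic form in the generalized residuals u,
            smoothed along a one-dimensional index."""
            k = np.exp(-0.5 * ((index[:, None] - index[None, :]) / h) ** 2)
            np.fill_diagonal(k, 0.0)
            num = u @ k @ u
            var = 2.0 * np.sum((k ** 2) * np.outer(u ** 2, u ** 2))
            return num / np.sqrt(var)

        def wild_bootstrap_pvalue(index, u, n_boot=499, seed=0):
            """Wild-bootstrap p-value: perturb the residuals with Rademacher signs
            and recompute the statistic."""
            rng = np.random.default_rng(seed)
            t_obs = quantile_check_stat(index, u)
            t_boot = [quantile_check_stat(index, rng.choice([-1.0, 1.0], size=len(u)) * u)
                      for _ in range(n_boot)]
            return float(np.mean(np.array(t_boot) >= t_obs))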

    A Significance Test for Covariates in Nonparametric Regression

    We consider testing the significance of a subset of covariates in a nonparametric regression. These covariates can be continuous and/or discrete. We propose a new kernel-based test that smoothes only over the covariates appearing under the null hypothesis, so that the curse of dimensionality is mitigated. The test statistic is asymptotically pivotal, and the rate at which the test detects local alternatives depends only on the dimension of the covariates under the null hypothesis. We show the validity of the wild bootstrap for the test. In small samples, our test is competitive with existing procedures.
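
    A schematic sketch under stated assumptions: with e the residuals from a pilot nonparametric regression of Y on the covariates W retained under the null, the statistic below smooths over W only (shrinking bandwidth), while the tested covariate Z enters through a fixed-scale weight, so the effective smoothing dimension is that of W. Both weighting choices are assumptions of this sketch, not the authors' exact statistic.

        import numpy as np

        def significance_stat(w, z, e, h=0.5):
            """Kernel statistic smoothing only over w (shrinking bandwidth h), with a
            fixed-scale weight in the tested covariate z; e are pilot residuals."""
            kw = np.exp(-0.5 * ((w[:, None] - w[None, :]) / h) ** 2)   # smoothing in W
            lz = np.exp(-0.5 * (z[:, None] - z[None, :]) ** 2)         # fixed-scale weight in Z
            a = kw * lz
            np.fill_diagonal(a, 0.0)
            num = e @ a @ e
            var = 2.0 * np.sum((a ** 2) * np.outer(e ** 2, e ** 2))
            return num / np.sqrt(var)   # wild bootstrap can be used for critical values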

    Model Equivalence Tests in a Parametric Framework

    In empirical research, one commonly aims to obtain evidence in favor of restrictions on parameters, arising as an economic hypothesis, a consequence of economic theory, or an econometric modeling assumption. I propose a new theoretical framework based on the Kullback-Leibler information to assess the approximate validity of multivariate restrictions in parametric models. I construct tests that are locally asymptotically maximin and locally asymptotically uniformly most powerful invariant. The tests are applied to three different empirical problems.
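
    In equivalence-testing form, writing \Delta(\theta) for the Kullback-Leibler discrepancy of the restrictions and \delta > 0 for a user-chosen tolerance (symbols introduced here only for exposition), the roles of the hypotheses are reversed relative to a classical test:

        H_0 : \Delta(\theta) \ge \delta \quad \text{versus} \quad H_1 : \Delta(\theta) < \delta,

    so that rejecting H_0 provides direct evidence that the restrictions are approximately valid.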

    Assessing the Approximate Validity of Moment Restrictions

    We propose a new theoretical framework to assess the approximate validity of overidentifying moment restrictions. Their validity is evaluated by the divergence between the true probability measure and the closest measure that imposes the moment restrictions of interest. The divergence can be chosen as any member of the Cressie-Read family. The alternative hypothesis under consideration states that the divergence is smaller than some user-chosen tolerance. We construct tests based on the minimum empirical divergence that attain the local semiparametric power envelope of invariant tests. We show how the tolerance can be chosen by reformulating the hypothesis under test as a set of admissible misspecifications. Two empirical applications illustrate the practical usefulness of the new tests in providing evidence on the potential extent of misspecification.
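
    For reference, the Cressie-Read family mentioned here can be written, in one common parametrization indexed by \gamma (normalizations vary across authors), as

        D_\gamma(P, Q) = \frac{1}{\gamma(\gamma + 1)} \int \left[ \left( \frac{dP}{dQ} \right)^{\gamma} - 1 \right] dP,

    which includes the Kullback-Leibler divergence (\gamma \to 0), the reverse Kullback-Leibler divergence associated with empirical likelihood (\gamma \to -1), and the Pearson chi-square discrepancy up to a constant factor (\gamma = 1).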