
    Data-driven rate-optimal specification testing in regression models

    We propose new data-driven smooth tests for a parametric regression function. The smoothing parameter is selected through a new criterion that favors a large smoothing parameter under the null hypothesis. The resulting test is adaptive rate-optimal and consistent against Pitman local alternatives approaching the parametric model at a rate arbitrarily close to $1/\sqrt{n}$. Asymptotic critical values come from the standard normal distribution, and the bootstrap can be used in small samples. A general formalization allows one to consider a large class of linear smoothing methods, which can be tailored for the detection of additive alternatives.
    Comment: Published at http://dx.doi.org/10.1214/009053604000001200 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
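    To make the selection mechanism concrete, here is a minimal numerical sketch, assuming a Gaussian kernel, a dyadic bandwidth grid, and an illustrative penalty that vanishes at the largest bandwidth; the paper's exact criterion and standardization differ, and the function names are hypothetical.

    import numpy as np

    def smooth_test_stat(u, x, h):
        """Standardized kernel U-statistic of the fitted parametric residuals u
        at bandwidth h (Gaussian kernel, diagonal terms removed)."""
        d = (x[:, None] - x[None, :]) / h
        K = np.exp(-0.5 * d**2)
        np.fill_diagonal(K, 0.0)
        num = u @ K @ u                        # sum_{i != j} u_i u_j K_h(x_i - x_j)
        var = 2.0 * (u**2 @ K**2 @ u**2)       # plug-in variance estimate
        return num / np.sqrt(var)

    def adaptive_test(u, x, h_max, n_bw=6, gamma=2.0):
        """Select the bandwidth through a criterion that penalizes small
        bandwidths, so the largest one is favored under the null."""
        grid = [h_max / 2**k for k in range(n_bw)]   # dyadic grid, largest first
        stats = np.array([smooth_test_stat(u, x, h) for h in grid])
        penalty = gamma * np.sqrt(2.0 * np.log(np.arange(1, n_bw + 1)))  # zero at largest h
        k_star = int(np.argmax(stats - penalty))
        return stats[k_star], grid[k_star]

    The selected statistic would then be compared to a standard normal critical value, or to bootstrap critical values in small samples, as the abstract describes.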

    Conditional Moment Models under Semi-Strong Identification

    We consider models defined by conditional moment restrictions under semi-strong identification. Identification strength is defined directly through the conditional moments, which flatten as the sample size increases. The framework allows for different identification strengths across the parameter's components. We propose a minimum distance estimator that is robust to semi-strong identification and does not rely on a user-chosen parameter, such as the number of instruments or any other smoothing parameter. Our method yields consistent and asymptotically normal estimators of each of the parameter's components. Heteroskedasticity-robust inference is possible through Wald testing without prior knowledge of the identification pattern. In simulations, we find that our estimator is competitive with alternative estimators based on many instruments. In particular, it is well centered, with better coverage rates for confidence intervals.
    Asset Markets, Uncertainty, Experimental Economics
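    As a toy illustration of a minimum distance estimator for conditional moment restrictions that avoids any user-chosen smoothing parameter, the sketch below minimizes an integrated-moment quadratic form with a fixed weight kernel; this is a generic construction under simplifying assumptions, not the paper's estimator, and icm_objective is a hypothetical name.

    import numpy as np
    from scipy.optimize import minimize

    def icm_objective(theta, rho, x):
        """Quadratic form of the conditional-moment residuals rho(z_i, theta)
        with a fixed weight kernel in x (no bandwidth to choose)."""
        r = rho(theta)
        w = np.exp(-0.5 * (x[:, None] - x[None, :])**2)
        return r @ w @ r / len(r)**2

    # Toy model: y = theta * x + e with E[e | x] = 0
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 1.5 * x + rng.normal(size=200)
    rho = lambda th: y - th[0] * x
    fit = minimize(icm_objective, x0=np.zeros(1), args=(rho, x))
    print(fit.x)   # close to the true value 1.5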

    One for all and all for one: regression checks with many regressors

    We develop a novel approach to building checks of parametric regression models when many regressors are present, based on a class of sufficiently rich semiparametric alternatives, namely single-index models. We propose an omnibus test, based on the kernel method, that performs against a sequence of directional nonparametric alternatives as if there were only one regressor, whatever the number of regressors. This test can be viewed as a smooth version of the integrated conditional moment (ICM) test of Bierens. Qualitative information can easily be incorporated into the procedure to enhance power. In an extensive comparative simulation study, we find that our test is not very sensitive to the smoothing parameter and performs well in multidimensional settings. We then apply it to a cross-country growth regression model.
    Dimensionality, Hypothesis testing, Nonparametric methods
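    The key mechanism, one-dimensional smoothing along candidate single indices whatever the number of regressors, can be sketched as follows; the Gaussian kernel, the crude random search over directions, and the function names are illustrative assumptions, and critical values would in practice come from a bootstrap.

    import numpy as np

    def directional_stat(u, index, h):
        """Standardized 1-D kernel statistic of residuals u against a scalar index."""
        d = (index[:, None] - index[None, :]) / h
        K = np.exp(-0.5 * d**2)
        np.fill_diagonal(K, 0.0)
        return (u @ K @ u) / np.sqrt(2.0 * (u**2 @ K**2 @ u**2))

    def single_index_check(u, X, h, n_dir=500, seed=0):
        """Maximize the 1-D statistic over random unit directions beta:
        the smoothing stays one-dimensional regardless of X.shape[1]."""
        rng = np.random.default_rng(seed)
        best = -np.inf
        for _ in range(n_dir):
            beta = rng.normal(size=X.shape[1])
            beta /= np.linalg.norm(beta)
            best = max(best, directional_stat(u, X @ beta, h))
        return best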

    One for All and All for One: Regression Checks With Many Regressors

    We develop a novel approach to building checks of parametric regression models when many regressors are present, based on a class of sufficiently rich semiparametric alternatives, namely single-index models. We propose an omnibus test, based on the kernel method, that performs against a sequence of directional nonparametric alternatives as if there were only one regressor, whatever the number of regressors. This test can be viewed as a smooth version of the integrated conditional moment (ICM) test of Bierens. Qualitative information can easily be incorporated into the procedure to enhance power. Our test is not very sensitive to the smoothing parameter and performs better than several known lack-of-fit tests in multidimensional settings, as illustrated by extensive simulations and an application to a cross-country growth regression.
    Dimensionality, Hypothesis testing, Nonparametric methods

    Powerful nonparametric checks for quantile regression

    We address the issue of lack-of-fit testing for a parametric quantile regression. We propose a simple test that involves one-dimensional kernel smoothing, so that the rate at which it detects local alternatives is independent of the number of covariates. The test has asymptotically Gaussian critical values, and the wild bootstrap can be applied to obtain more accurate ones in small samples. Our procedure appears to be competitive with existing ones in simulations. We illustrate the usefulness of our test on birthweight data.
    Comment: 32 pages, 2 figures
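    A minimal sketch of such a test, assuming the one-dimensional smoothing is done along a given scalar index of the covariates and using Rademacher weights for the wild bootstrap; the standardization and index construction are illustrative, not the paper's exact recipe.

    import numpy as np

    def u_stat(psi, index, h):
        """Standardized kernel U-statistic over a one-dimensional index."""
        d = (index[:, None] - index[None, :]) / h
        K = np.exp(-0.5 * d**2)
        np.fill_diagonal(K, 0.0)
        return (psi @ K @ psi) / np.sqrt(2.0 * (psi**2 @ K**2 @ psi**2))

    def quantile_check(y, q_hat, index, tau, h, B=499, seed=0):
        """Lack-of-fit check for a fitted parametric tau-th quantile q_hat, with
        a wild-bootstrap p-value from sign-perturbed quantile residuals."""
        rng = np.random.default_rng(seed)
        psi = tau - (y <= q_hat).astype(float)   # quantile 'residual signs'
        stat = u_stat(psi, index, h)
        boot = [u_stat(psi * rng.choice([-1.0, 1.0], size=len(psi)), index, h)
                for _ in range(B)]
        return stat, (1.0 + np.sum(np.array(boot) >= stat)) / (B + 1.0)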

    A Significance Test for Covariates in Nonparametric Regression

    We consider testing the significance of a subset of covariates in a nonparametric regression. These covariates can be continuous and/or discrete. We propose a new kernel-based test that smooths only over the covariates appearing under the null hypothesis, so that the curse of dimensionality is mitigated. The test statistic is asymptotically pivotal, and the rate at which the test detects local alternatives depends only on the dimension of the covariates under the null hypothesis. We show the validity of the wild bootstrap for the test. In small samples, our test is competitive with existing procedures.
    Comment: 42 pages, 6 figures
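    A minimal sketch of the idea: smooth only over the covariates w kept under the null, and use a fixed, bandwidth-free weight over the tested covariates z. The leave-one-out residuals, the particular weight in z, and the function name are illustrative assumptions.

    import numpy as np

    def significance_stat(y, w, z, h):
        """Test whether z matters given w by kernel-smoothing only over w."""
        # leave-one-out Nadaraya-Watson residuals from regressing y on w alone
        dw = (w[:, None] - w[None, :]) / h
        Kw = np.exp(-0.5 * dw**2)
        np.fill_diagonal(Kw, 0.0)
        u = y - (Kw @ y) / Kw.sum(axis=1)
        # U-statistic: kernel smoothing in w only, fixed weight in z
        A = Kw * np.exp(-0.5 * (z[:, None] - z[None, :])**2)
        return (u @ A @ u) / np.sqrt(2.0 * (u**2 @ A**2 @ u**2))

    Under the null that z is insignificant, the statistic would be compared to Gaussian critical values, with the wild bootstrap as a small-sample refinement, as in the abstract.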

    Data-Driven Rate-Optimal Specification Testing in Regression Models

    We propose new data-driven smooth tests for a parametric regression function. The smoothing parameter is selected through a new criterion that favors a large smoothing parameter under the null hypothesis. The resulting test is adaptive rate-optimal and consistent against Pitman local alternatives approaching the parametric model at a rate arbitrarily close to $1/\sqrt{n}$. Asymptotic critical values come from the standard normal distribution, and the bootstrap can be used in small samples. A general formalization allows one to consider a large class of linear smoothing methods, which can be tailored for the detection of additive alternatives.
    Hypothesis testing, nonparametric adaptive tests, selection methods

    Efficient Learning of Sparse Conditional Random Fields for Supervised Sequence Labelling

    Conditional Random Fields (CRFs) constitute a popular and efficient approach to supervised sequence labelling. CRFs can cope with large description spaces and can integrate some form of structural dependency between labels. In this contribution, we address the issue of efficient feature selection for CRFs, based on imposing sparsity through an L1 penalty. We first show how sparsity of the parameter set can be exploited to significantly speed up training and labelling. We then introduce coordinate descent parameter update schemes for CRFs with L1 regularization. We finally provide empirical comparisons of the proposed approach with state-of-the-art CRF training strategies. In particular, we show that the proposed approach is able to take advantage of sparsity to speed up processing and hence potentially handle higher-dimensional models.
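    The coordinate descent update behind such L1-regularized training is soft thresholding, which drives many weights exactly to zero. The sketch below shows that core update on a penalized least-squares proxy (the CRF objective itself would require forward-backward gradients), so the objective and function names here are illustrative, not the paper's training procedure.

    import numpy as np

    def soft_threshold(a, lam):
        """Closed-form solution of the one-dimensional L1-penalized step."""
        return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

    def coordinate_descent_l1(X, y, lam, n_iter=100):
        """Cyclic coordinate descent for min_w 0.5/n ||y - Xw||^2 + lam ||w||_1.
        Each step touches one coordinate; weights driven to zero can later be
        skipped, which is the kind of sparsity speed-up exploited for CRFs."""
        n, p = X.shape
        w = np.zeros(p)
        r = y.copy()                      # residual y - Xw, updated incrementally
        c = (X**2).sum(axis=0) / n        # per-coordinate curvature
        for _ in range(n_iter):
            for j in range(p):
                a = w[j] + X[:, j] @ r / (n * c[j])
                w_new = soft_threshold(a, lam / c[j])
                if w_new != w[j]:
                    r -= X[:, j] * (w_new - w[j])
                    w[j] = w_new
        return w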