
    Adaptive, Rate-Optimal Hypothesis Testing in Nonparametric IV Models

    We propose a new adaptive hypothesis test for polyhedral cone (e.g., monotonicity, convexity) and equality (e.g., parametric, semiparametric) restrictions on a structural function in a nonparametric instrumental variables (NPIV) model. Our test statistic is based on a modified leave-one-out sample analog of a quadratic distance between the restricted and unrestricted sieve NPIV estimators. We provide computationally simple, data-driven choices of sieve tuning parameters and adjusted chi-squared critical values. Our test adapts to the unknown smoothness of alternative functions in the presence of an unknown degree of endogeneity and unknown strength of the instruments. It attains the adaptive minimax rate of testing in L^2. That is, the sum of its type I error uniformly over the composite null and its type II error uniformly over nonparametric alternative models cannot be improved by any other hypothesis test for NPIV models of unknown regularities. Data-driven confidence sets in L^2 are obtained by inverting the adaptive test. Simulations confirm that our adaptive test controls size and that its finite-sample power greatly exceeds that of existing non-adaptive tests for monotonicity and parametric restrictions in NPIV models. Empirical applications to tests for shape restrictions of differentiated products demand and of Engel curves are presented.
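The core construction above can be illustrated with a toy sieve NPIV fit. The following is a minimal sketch, not the paper's modified leave-one-out statistic: the data-generating process, basis choices, and tuning values (`k_x`, `k_w`) are all invented for illustration, and the distance shown is the plain sample quadratic distance between a restricted (here linear) and an unrestricted sieve 2SLS fit.

```python
import numpy as np

# Toy data: x is endogenous, w is an instrument, true structural function is sin(2x).
rng = np.random.default_rng(0)
n = 500
w = rng.uniform(-1.0, 1.0, n)
x = 0.8 * w + 0.2 * rng.normal(size=n)
y = np.sin(2.0 * x) + 0.3 * rng.normal(size=n)

def sieve(v, k):
    """Polynomial sieve basis 1, v, ..., v^(k-1)."""
    return np.vander(v, k, increasing=True)

def npiv_fit(y, x, w, k_x, k_w):
    """Sieve NPIV (2SLS) fitted values with k_x basis terms in x and k_w in w."""
    psi, b = sieve(x, k_x), sieve(w, k_w)
    proj = b @ np.linalg.pinv(b.T @ b) @ b.T          # projection onto instrument space
    beta = np.linalg.pinv(psi.T @ proj @ psi) @ (psi.T @ proj @ y)
    return psi @ beta

h_unrestricted = npiv_fit(y, x, w, k_x=6, k_w=8)      # flexible sieve fit
h_restricted = npiv_fit(y, x, w, k_x=2, k_w=8)        # restricted: linear in x

# Sample quadratic distance between restricted and unrestricted fits; a large
# value is evidence against the (here linear) restriction.
dist = np.mean((h_unrestricted - h_restricted) ** 2)
```

Because the true structural function is nonlinear, the distance is clearly positive here; the paper's contribution is the adaptive, data-driven choice of the sieve dimensions and critical values, which this sketch does not attempt.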

    DOES CONSISTENT AGGREGATION REALLY MATTER?

    Consistent aggregation ensures that behavioral properties which apply to disaggregate relationships also apply to aggregate relationships. The agricultural economics literature that has tested for consistent aggregation or measured statistical bias and/or inferential errors due to aggregation is reviewed. Tests for aggregation bias and errors of inference are conducted using indices previously tested for consistent aggregation. Failure to reject consistent aggregation in a partition did not entirely mitigate erroneous inference due to aggregation. However, inferential errors due to aggregation were small relative to errors due to incorrect functional form or failure to account for time series properties of the data.

    Quadratic distances on probabilities: A unified foundation

    This work builds a unified framework for the study of quadratic form distance measures as they are used in assessing the goodness of fit of models. Many important procedures have this structure, but the theory for these methods is dispersed and incomplete. Central to the statistical analysis of these distances is the spectral decomposition of the kernel that generates the distance. We show how this determines the limiting distribution of natural goodness-of-fit tests. Additionally, we develop a new notion, the spectral degrees of freedom of the test, based on this decomposition. The degrees of freedom are easy to compute and estimate, and can be used as a guide in the construction of useful procedures in this class. Published in the Annals of Statistics (http://dx.doi.org/10.1214/009053607000000956) by the Institute of Mathematical Statistics (http://www.imstat.org/aos/).
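The spectral decomposition mentioned above can be sketched numerically. This is a hedged illustration, not the paper's exact definition: it computes the empirical spectrum of a Gaussian kernel's Gram matrix and summarizes it with a Satterthwaite-style effective degrees-of-freedom ratio (sum of eigenvalues squared over sum of squared eigenvalues), a common summary for quadratic forms of the type sum_j lambda_j Z_j^2.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)

# Gram matrix of a Gaussian kernel evaluated at the sample points
gram = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

# Empirical spectrum of the kernel (eigenvalues of the scaled Gram matrix)
eigvals = np.linalg.eigvalsh(gram / len(x))
eigvals = eigvals[eigvals > 1e-12]          # drop numerically-zero eigenvalues

# Satterthwaite-style effective degrees of freedom of the quadratic form
# sum_j lambda_j * Z_j^2: (sum lambda)^2 / (sum lambda^2)
dof = eigvals.sum() ** 2 / np.sum(eigvals ** 2)
```

A rapidly decaying spectrum gives a small effective degrees of freedom, reflecting that the quadratic distance is dominated by a few eigendirections of the kernel.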

    An overview of the goodness-of-fit test problem for copulas

    We review the main "omnibus procedures" for goodness-of-fit testing for copulas: tests based on the empirical copula process, on probability integral transformations, on Kendall's dependence function, etc., together with some corresponding dimension-reduction techniques. The problems of finding asymptotically distribution-free test statistics and of calculating reliable p-values are discussed. Some particular cases, such as convenient tests for time-dependent copulas and for Archimedean or extreme-value copulas, are dealt with. Finally, the practical performance of the proposed approaches is briefly summarized.
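The first ingredient of the empirical-copula-process tests mentioned above is the empirical copula itself, evaluated at rank-based pseudo-observations. A minimal sketch (the simulated dependent pair and the evaluation grid are invented for illustration):

```python
import numpy as np

def pseudo_obs(data):
    """Rank-based pseudo-observations in (0, 1), column by column."""
    n = data.shape[0]
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    return ranks / (n + 1)

def empirical_copula(u_grid, pobs):
    """C_n(u) = (1/n) * #{i : pseudo-observation_i <= u componentwise}."""
    return np.mean(np.all(pobs[None, :, :] <= u_grid[:, None, :], axis=2), axis=1)

rng = np.random.default_rng(2)
z = rng.normal(size=(500, 2))
z[:, 1] = 0.7 * z[:, 0] + np.sqrt(1 - 0.7 ** 2) * z[:, 1]  # positively dependent pair

pobs = pseudo_obs(z)
grid = np.array([[0.5, 0.5], [0.9, 0.9]])
c_vals = empirical_copula(grid, pobs)
```

An omnibus test then compares C_n with the fitted parametric copula over such a grid (e.g., via a Cramér-von Mises functional), with p-values typically obtained by bootstrap because the limiting process is not distribution-free.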

    Minimum scoring rule inference

    Proper scoring rules are methods for encouraging honest assessment of probability distributions. Just like the likelihood, a proper scoring rule can be applied to supply an unbiased estimating equation for any statistical model, and the theory of such equations can be applied to understand the properties of the associated estimator. In this paper we develop some basic scoring rule estimation theory, and explore robustness and interval estimation properties by means of theory and simulations. (27 pages, 3 figures.)
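Minimum-score estimation as described above can be sketched with the simplest proper scoring rule, the Brier score, on Bernoulli data (a toy example; the grid search stands in for solving the estimating equation):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.3, size=1000)   # Bernoulli outcomes, true success probability 0.3

def mean_brier(p, y):
    """Empirical mean Brier score (p - y)^2 -- a proper scoring rule."""
    return np.mean((p - y) ** 2)

# Minimum-score estimation: minimise the empirical mean score over a grid of
# candidate forecast probabilities.
grid = np.linspace(0.01, 0.99, 99)
scores = np.array([mean_brier(p, y) for p in grid])
p_hat = grid[np.argmin(scores)]
```

For the Brier score the first-order condition is mean(p - y) = 0, an unbiased estimating equation, so the minimum-score estimator coincides with the sample mean up to the grid resolution; other proper scoring rules trade some efficiency for robustness in the same framework.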