
    Testing a parametric model against a nonparametric alternative with identification through instrumental variables

    This paper is concerned with inference about a function g that is identified by a conditional moment restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test does not require nonparametric estimation of g and is not subject to the ill-posed inverse problem of nonparametric instrumental variables estimation. Under mild conditions, the test is consistent against any alternative model and has asymptotic power advantages over existing tests. Moreover, it has power arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is O(n^{-1/2}), where n is the sample size.
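
    As a reading aid, the identifying restriction and the hypotheses being tested can be written in generic notation (Y the outcome, X the endogenous regressor, W the instrument); this is a sketch of the setup described in the abstract, and the paper's own notation may differ.

    ```latex
    % Conditional moment restriction identifying g (generic notation, assumed
    % rather than copied from the paper):
    \[
      \mathbb{E}\bigl[\, Y - g(X) \mid W \,\bigr] = 0 \quad \text{almost surely.}
    \]
    % Null and alternative hypotheses of the specification test:
    \[
      H_0:\ g(\cdot) = g(\cdot,\theta_0) \ \text{for some } \theta_0 \in \Theta \subset \mathbb{R}^{k},
      \qquad
      H_1:\ g \ \text{admits no such parametric representation.}
    \]
    ```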

    Semiparametric models

    Much empirical research is concerned with estimating conditional mean, median, or hazard functions. For example, labor economists are interested in estimating the mean wages of employed individuals conditional on characteristics such as years of work experience and education. The most frequently used estimation methods assume that the function of interest is known up to a set of constant parameters that can be estimated from data. Models in which the only unknown quantities are a finite set of constant parameters are called parametric. The use of a parametric model greatly simplifies estimation, statistical inference, and interpretation of the estimation results but is rarely justified by theoretical or other a priori considerations. Estimation and inference based on convenient but incorrect assumptions about the form of the conditional mean function can be highly misleading.
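
    The contrast between parametric and nonparametric estimation of a conditional mean can be illustrated with a short simulation. The sketch below is ours, with hypothetical variable names and simulated data; it is not code from the surveyed work, and a Gaussian kernel with a fixed bandwidth stands in for a properly chosen smoother.

    ```python
    # Illustrative only: a linear (parametric) fit versus a Nadaraya-Watson
    # kernel (nonparametric) estimate of E[y | x] on simulated data whose
    # true conditional mean is nonlinear.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(0.0, 10.0, n)                 # e.g. years of experience (simulated)
    y = np.log1p(x) + rng.normal(0.0, 0.3, n)     # true mean is log(1 + x)

    beta = np.polyfit(x, y, deg=1)                # parametric model: mean linear in x

    def kernel_mean(x0, x, y, h=0.8):
        """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
        w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
        return float(np.sum(w * y) / np.sum(w))

    for x0 in (1.0, 5.0, 9.0):
        print(f"x={x0:3.1f}  linear={np.polyval(beta, x0):5.2f}  "
              f"kernel={kernel_mean(x0, x, y):5.2f}  true={np.log1p(x0):5.2f}")
    ```

    Where the true mean curves, the linear fit is systematically off while the kernel estimate tracks it, which is the kind of misspecification risk the passage warns about.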

    Nonparametric methods for inference in the presence of instrumental variables

    We suggest two nonparametric approaches, based on kernel methods and orthogonal series, to estimating regression functions in the presence of instrumental variables. For the first time in this class of problems, we derive optimal convergence rates and show that they are attained by particular estimators. In the presence of instrumental variables, the relation that identifies the regression function also defines an ill-posed inverse problem, the "difficulty" of which depends on the eigenvalues of a certain integral operator determined by the joint density of the endogenous and instrumental variables. We delineate the role played by problem difficulty in determining both the optimal convergence rate and the appropriate choice of smoothing parameter. Published at http://dx.doi.org/10.1214/009053605000000714 in the Annals of Statistics.
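
    As a concrete, deliberately simplified companion to the abstract, the sketch below implements a series-type instrumental-variables regression: a generic sieve two-stage-least-squares construction rather than the kernel and orthogonal-series procedures analysed in the paper. The function names, the cosine basis, and the fixed truncation level k are our choices.

    ```python
    # Generic sieve-2SLS sketch for E[y - g(x) | w] = 0 (illustrative, not the
    # paper's estimator). The truncation level k acts as the smoothing parameter.
    import numpy as np

    def cosine_basis(v, k, lo, hi):
        """First k cosine basis functions on [lo, hi], evaluated at the points v."""
        v01 = np.clip((np.asarray(v, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
        return np.column_stack([np.cos(np.pi * j * v01) for j in range(k)])

    def series_iv_regression(y, x, w, k=5):
        """Return a callable estimate of g in E[y - g(x) | w] = 0."""
        P = cosine_basis(x, k, x.min(), x.max())   # basis approximating g
        Q = cosine_basis(w, k, w.min(), w.max())   # basis spanning the instrument space
        # First stage: project the x-basis onto the space spanned by the instruments.
        Pi = Q @ np.linalg.lstsq(Q, P, rcond=None)[0]
        # Second stage: regress y on the projected basis (2SLS coefficients).
        b = np.linalg.lstsq(Pi, y, rcond=None)[0]
        return lambda xs: cosine_basis(xs, k, x.min(), x.max()) @ b
    ```

    The ill-posedness the abstract refers to shows up here as an increasingly ill-conditioned projected design matrix as k grows: its singular values mirror the decaying eigenvalues of the underlying integral operator, which is why the choice of the smoothing parameter matters so much.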

    Nonparametric Estimation of an Additive Model With a Link Function

    This paper describes an estimator of the additive components of a nonparametric additive model with a known link function. When the additive components are twice continuously differentiable, the estimator is asymptotically normally distributed with a rate of convergence in probability of n^{-2/5}. This is true regardless of the (finite) dimension of the explanatory variable. Thus, in contrast to the existing asymptotically normal estimator, the new estimator has no curse of dimensionality. Moreover, the estimator has an oracle property: the asymptotic distribution of each additive component is the same as it would be if the other components were known with certainty. Published at http://dx.doi.org/10.1214/009053604000000814 in the Annals of Statistics.
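
    In generic notation (ours, not necessarily the paper's), the model and the two properties highlighted in the abstract can be summarised as follows.

    ```latex
    % Additive model with a known link function F; the m_j are the unknown
    % additive components (notation assumed for illustration).
    \[
      \mathbb{E}\bigl[\, Y \mid X = x \,\bigr]
      = F\!\bigl( \mu + m_1(x_1) + \cdots + m_d(x_d) \bigr).
    \]
    % Dimension-free rate: each estimated component satisfies
    \[
      \hat m_j(x_j) - m_j(x_j) = O_p\!\bigl(n^{-2/5}\bigr)
      \quad \text{for any finite } d,
    \]
    % and, by the oracle property, its asymptotic distribution coincides with
    % that of the infeasible estimator computed as if the other components
    % m_l, l != j, were known.
    ```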

    Methodology and convergence rates for functional linear regression

    In functional linear regression, the slope "parameter" is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view. Published at http://dx.doi.org/10.1214/009053606000000957 in the Annals of Statistics.
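
    The PCA-based estimator discussed in the abstract can be sketched for curves observed on a common, equally spaced grid. The code below is an illustrative numpy implementation under those assumptions (the function and variable names are ours); the quadratic-regularisation alternative would replace the truncated spectral inverse with a ridge-type inverse.

    ```python
    # Illustrative FPCA estimator of the slope function b in the functional
    # linear model  Y_i = a + integral of b(t) X_i(t) dt + error_i,
    # with each curve X_i sampled on a common equally spaced grid.
    import numpy as np

    def fpca_slope(Y, X, grid, m=4):
        """Return (a_hat, b_hat_on_grid) using the m leading principal components."""
        dt = grid[1] - grid[0]                    # grid spacing (assumed constant)
        Xc = X - X.mean(axis=0)                   # centred curves, shape (n, p)
        Yc = Y - Y.mean()
        K = Xc.T @ Xc / len(Y) * dt               # discretised covariance operator
        lam, vecs = np.linalg.eigh(K)             # eigenvalues in ascending order
        lam, vecs = lam[::-1], vecs[:, ::-1]      # reorder: largest first
        phi = vecs / np.sqrt(dt)                  # L2-orthonormal eigenfunctions
        scores = Xc @ phi[:, :m] * dt             # principal-component scores
        g = scores.T @ Yc / len(Y)                # cross-covariance with Y
        b_hat = phi[:, :m] @ (g / lam[:m])        # truncated spectral inverse
        a_hat = Y.mean() - dt * X.mean(axis=0) @ b_hat
        return a_hat, b_hat
    ```

    The truncation point m plays the role of the smoothing parameter; keeping too many components means dividing by near-zero eigenvalues, which is the ill-posedness the abstract mentions.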

    Semidefinite Relaxations for Stochastic Optimal Control Policies

    Recent results in the study of the Hamilton-Jacobi-Bellman (HJB) equation have led to the discovery of a formulation of the value function as a linear Partial Differential Equation (PDE) for stochastic nonlinear systems with a mild constraint on their disturbances. This has yielded promising directions for research in the planning and control of nonlinear systems. This work proposes a new method for obtaining approximate solutions to these linear stochastic optimal control (SOC) problems. A candidate polynomial with variable coefficients is proposed as the solution to the SOC problem. A Sum of Squares (SOS) relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. Preprint; accepted to the American Control Conference (ACC) 2014 in Portland, Oregon.
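
    For context, the following sketches one standard "linearly solvable" stochastic optimal control formulation (in the style of Kappen and Todorov) in which the value function satisfies a linear PDE; the paper's exact setup, boundary conditions, and sign conventions may differ.

    ```latex
    % Hedged background sketch, not taken from the paper. Dynamics and cost:
    \[
      dx = f(x)\,dt + G(x)\,u\,dt + B(x)\,d\omega, \qquad
      \text{cost rate } \; q(x) + \tfrac{1}{2}\, u^{\top} R\, u .
    \]
    % "Mild constraint on the disturbances": noise and control act through
    % compatible channels,
    \[
      B(x)B(x)^{\top} = \lambda\, G(x) R^{-1} G(x)^{\top}
      \quad \text{for some } \lambda > 0 .
    \]
    % Then the exponentiated value function \Psi(x) = \exp(-V(x)/\lambda)
    % satisfies a linear PDE,
    \[
      \frac{q(x)}{\lambda}\,\Psi(x)
      = f(x)^{\top}\nabla\Psi(x)
      + \tfrac{1}{2}\operatorname{tr}\!\bigl(B(x)B(x)^{\top}\nabla^{2}\Psi(x)\bigr).
    \]
    % Substituting a polynomial candidate for \Psi and requiring the signed
    % residual of this equation (and the boundary conditions) to be a sum of
    % squares gives semidefinite programs whose solutions bound the value
    % function, as described in the abstract.
    ```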