
    Variable selection in semiparametric regression modeling

    In this paper, we are concerned with how to select significant variables in semiparametric modeling. Variable selection for semiparametric regression models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. Thus, semiparametric variable selection is much more challenging than parametric variable selection (e.g., for linear and generalized linear models) because traditional variable selection procedures, including stepwise regression and best subset selection, now require separate model selection for the nonparametric components for each submodel. This leads to a very heavy computational burden. In this paper, we propose a class of variable selection procedures for semiparametric regression models using nonconcave penalized likelihood. We establish the rate of convergence of the resulting estimate. With proper choices of penalty functions and regularization parameters, we show the asymptotic normality of the resulting estimate and further demonstrate that the proposed procedures perform as well as an oracle procedure. A semiparametric generalized likelihood ratio test is proposed to select significant variables in the nonparametric component. We investigate the asymptotic behavior of the proposed test and demonstrate that its limiting null distribution follows a chi-square distribution that is independent of the nuisance parameters. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed variable selection procedures. Comment: Published at http://dx.doi.org/10.1214/009053607000000604 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
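    The nonconcave-penalized-likelihood idea can be illustrated on the parametric part alone. The sketch below is my own illustration, not the authors' semiparametric procedure: it fits a linear model under the SCAD penalty (the standard nonconcave penalty of this literature) with a generic optimizer, then reads off the selected variables by thresholding. The simulated data, penalty parameters, and choice of optimizer are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, applied elementwise and summed.

    Linear near zero (like a lasso), then tapering off so that
    large coefficients are left essentially unpenalized.
    """
    b = np.abs(beta)
    p = np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )
    return p.sum()

# Illustrative data: 8 candidate predictors, only 3 truly active.
rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

lam = 0.5  # regularization parameter (would be tuned in practice)

def objective(beta):
    resid = y - X @ beta
    return 0.5 * np.mean(resid**2) + scad_penalty(beta, lam)

# Powell is derivative-free, so the kink of the penalty at zero
# is not a problem; this is a pragmatic stand-in for the
# specialized algorithms used in the penalized-likelihood literature.
fit = minimize(objective, np.zeros(p), method="Powell",
               options={"maxiter": 10000})

selected = np.where(np.abs(fit.x) > 0.1)[0]
print(selected)  # indices of coefficients kept after thresholding
```

    Because SCAD is flat for large coefficients, the strong signals are retained essentially unshrunk while the noise coefficients are pushed toward zero, which is the "oracle" behavior the abstract refers to.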

    Bias in parametric estimation: reduction and useful side-effects

    The bias of an estimator is defined as the difference between its expected value and the parameter to be estimated, where the expectation is taken with respect to the model. Loosely speaking, small bias reflects the desire that if an experiment is repeated indefinitely, the average of all the resultant estimates will be close to the parameter value being estimated. The current paper is a review of the still-expanding repository of methods that have been developed to reduce bias in the estimation of parametric models. The review provides a unifying framework in which all those methods are seen as attempts to approximate the solution of a simple estimating equation. Of particular focus is the maximum likelihood estimator, which, despite being asymptotically unbiased under the usual regularity conditions, has finite-sample bias that can result in significant loss of performance of standard inferential procedures. An informal comparison of the methods is made, revealing some useful practical side-effects in the estimation of models that are popular in practice, including: i) shrinkage of the estimators in binomial and multinomial regression models that guarantees finiteness even in cases of data separation, where the maximum likelihood estimator is infinite, and ii) inferential benefits for models that require the estimation of dispersion or precision parameters.
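    A concrete, textbook instance of the finite-sample bias of maximum likelihood is the MLE of a normal variance, which divides by n and is therefore biased downward by a factor of (n-1)/n. The short Monte Carlo sketch below (the sample size, number of replications, and true variance are illustrative choices, not from the paper) makes the definition of bias tangible by averaging the estimator over repeated experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma2 = 10, 20000, 4.0

# Repeat the "experiment" many times: draw n observations, estimate
# the variance, and average the estimates across replications.
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

mle = samples.var(axis=1, ddof=0)        # MLE: divides by n, biased down
corrected = samples.var(axis=1, ddof=1)  # divides by n-1, unbiased

# E[MLE] = (n-1)/n * sigma2, so the bias is -sigma2/n = -0.4 here.
print(mle.mean())        # close to (n-1)/n * 4.0 = 3.6
print(corrected.mean())  # close to 4.0
```

    The same phenomenon, in less transparent forms, is what drives the bias-reduction methods the review surveys: for small n the systematic shortfall of the MLE is not negligible relative to its sampling variability.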

    Functional Structure and Approximation in Econometrics (book front matter)

    This is the front matter from the book, William A. Barnett and Jane Binner (eds.), Functional Structure and Approximation in Econometrics, published in 2004 by Elsevier in its Contributions to Economic Analysis monograph series. The front matter includes the Table of Contents, Volume Introduction, and Section Introductions by Barnett and Binner, and the Preface by W. Erwin Diewert. The volume contains a unified collection and discussion of W. A. Barnett's most important published papers on applied and theoretical econometric modelling. Keywords: consumer demand, production, flexible functional form, functional structure, asymptotics, nonlinearity, systemwide models.

    Point estimation with exponentially tilted empirical likelihood

    Parameters defined via general estimating equations (GEE) can be estimated by maximizing the empirical likelihood (EL). Newey and Smith [Econometrica 72 (2004) 219--255] have recently shown that this EL estimator exhibits desirable higher-order asymptotic properties, namely, that its O(n^{-1}) bias is small and that bias-corrected EL is higher-order efficient. Although EL possesses these properties when the model is correctly specified, this paper shows that, in the presence of model misspecification, EL may cease to be root-n convergent when the functions defining the moment conditions are unbounded (even when their expectations are bounded). In contrast, the related exponential tilting (ET) estimator avoids this problem. This paper shows that the ET and EL estimators can be naturally combined to yield an estimator called exponentially tilted empirical likelihood (ETEL), exhibiting the same O(n^{-1}) bias and the same O(n^{-2}) variance as EL, while maintaining root-n convergence under model misspecification. Comment: Published at http://dx.doi.org/10.1214/009053606000001208 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
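    The tilting step underlying the ET estimator can be sketched for the simplest moment condition, the mean. The snippet below is an illustrative assumption, not the paper's full ETEL estimator: for a candidate parameter value theta0, it computes exponential-tilting weights w_i proportional to exp(lambda * g_i), with the tilt lambda chosen so that the weighted moment condition holds exactly. The data and the moment function g_i = x_i - theta0 are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
x = rng.exponential(2.0, size=100)  # sample whose true mean is 2.0

def tilted_weights(x, theta0):
    """Exponential-tilting weights w_i proportional to exp(lam * g_i),
    where g_i = x_i - theta0 is the moment function for the mean and
    lam is chosen so that the weighted moment sum_i w_i * g_i is zero.
    """
    g = x - theta0

    def weighted_moment(lam):
        w = np.exp(lam * g)
        w /= w.sum()
        return w @ g  # monotone increasing in lam

    # Root-find for the tilt that satisfies the moment condition.
    lam = brentq(weighted_moment, -10.0, 10.0)
    w = np.exp(lam * g)
    return w / w.sum()

# Tilt the empirical distribution so its mean is exactly 1.8.
w = tilted_weights(x, theta0=1.8)
print(w @ x)  # matches the target 1.8 up to solver tolerance
```

    A key point of the abstract is that these exponential weights stay well behaved when the moment functions g_i are unbounded, which is why ET (and hence ETEL) retains root-n convergence under misspecification where plain EL can fail.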