20 research outputs found

    Additive isotone regression

    This paper is about optimal estimation of the additive components of a nonparametric, additive isotone regression model. It is shown that, asymptotically up to first order, each additive component can be estimated as well as it could be by a least squares estimator if the other components were known. The algorithm for the calculation of the estimator uses backfitting. Convergence of the algorithm is shown. Finite sample properties are also compared through simulation experiments. Comment: Published at http://dx.doi.org/10.1214/074921707000000355 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
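The alternating scheme the abstract outlines can be sketched in a few lines: fit each additive component by isotonic least squares (pool adjacent violators) against the partial residuals of the others, and iterate until convergence. This is only an illustrative toy under a two-component model, not the paper's actual estimator; the names `pava` and `backfit_isotone` are hypothetical.

```python
def pava(y):
    # Pool-adjacent-violators: least-squares non-decreasing fit.
    blocks = []  # each block: [mean, count]
    for v in y:
        blocks.append([v, 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    fit = []
    for m, n in blocks:
        fit.extend([m] * n)
    return fit

def backfit_isotone(x1, x2, y, iters=20):
    # Backfitting: alternate isotonic fits of each component
    # to the partial residuals of the other.
    n = len(y)
    f1, f2 = [0.0] * n, [0.0] * n
    for _ in range(iters):
        for x, f, other in ((x1, f1, f2), (x2, f2, f1)):
            resid = [y[i] - other[i] for i in range(n)]
            order = sorted(range(n), key=lambda i: x[i])
            fit = pava([resid[i] for i in order])
            for rank, i in enumerate(order):
                f[i] = fit[rank]
    return f1, f2
```

With no smoothing penalty this interpolates aggressively on small samples; it is meant only to show the structure of the backfitting loop.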

    Smooth backfitting in generalized additive models

    Generalized additive models have been popular among statisticians and data analysts in multivariate nonparametric regression with non-Gaussian responses including binary and count data. In this paper, a new likelihood approach for fitting generalized additive models is proposed. It aims to maximize a smoothed likelihood. The additive functions are estimated by solving a system of nonlinear integral equations. An iterative algorithm based on smooth backfitting is developed from the Newton--Kantorovich theorem. Asymptotic properties of the estimator and convergence of the algorithm are discussed. It is shown that our proposal based on local linear fit achieves the same bias and variance as the oracle estimator that uses knowledge of the other components. Numerical comparison with the recently proposed two-stage estimator [Ann. Statist. 32 (2004) 2412--2443] is also made. Comment: Published at http://dx.doi.org/10.1214/009053607000000596 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
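The paper's smooth backfitting solves a system of integral equations; a much simpler cousin that conveys the same iterate-and-smooth idea is classical local scoring (Hastie--Tibshirani style) for an additive logistic model, swapping in a weighted Nadaraya-Watson smoother. This sketch is that substitute, not the paper's method; all names are hypothetical.

```python
import math

def sigmoid(e):
    # numerically stable logistic function
    if e >= 0:
        return 1.0 / (1.0 + math.exp(-e))
    t = math.exp(e)
    return t / (1.0 + t)

def nw_smooth(x, r, w, h):
    # weighted Nadaraya-Watson smoother, evaluated at the sample points
    out = []
    for x0 in x:
        num = den = 0.0
        for xi, ri, wi in zip(x, r, w):
            k = wi * math.exp(-0.5 * ((xi - x0) / h) ** 2)
            num += k * ri
            den += k
        out.append(num / den)
    return out

def local_scoring_gam(xs, y, h=1.0, iters=8):
    # Additive logistic model by local scoring: form the IRLS working
    # response, then backfit each component with a weighted smoother.
    n = len(y)
    fs = [[0.0] * n for _ in xs]
    for _ in range(iters):
        eta = [sum(f[i] for f in fs) for i in range(n)]
        p = [sigmoid(e) for e in eta]
        w = [max(pi * (1.0 - pi), 1e-6) for pi in p]
        z = [eta[i] + (y[i] - p[i]) / w[i] for i in range(n)]
        for j, xj in enumerate(xs):
            partial = [z[i] - sum(fs[k][i] for k in range(len(xs)) if k != j)
                       for i in range(n)]
            fs[j] = nw_smooth(xj, partial, w, h)
    eta = [sum(f[i] for f in fs) for i in range(n)]
    return [sigmoid(e) for e in eta]
```

Smooth backfitting differs precisely in how the smoother is constructed (projection onto the additive space under the smoothed likelihood), which is what buys the oracle bias and variance claimed in the abstract.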

    Semi-parametric regression: Efficiency gains from modeling the nonparametric part

    It is widely acknowledged that structured nonparametric modeling that circumvents the curse of dimensionality is important in nonparametric estimation. In this paper we show that the same holds for semi-parametric estimation. We argue that estimation of the parametric component of a semi-parametric model can be improved substantially when more structure is put into the nonparametric part of the model. We illustrate this for the partially linear model, and investigate efficiency gains when the nonparametric part of the model has an additive structure. We present the semi-parametric Fisher information bound for estimating the parametric part of the partially linear additive model and provide semi-parametric efficient estimators for which we use a smooth backfitting technique to deal with the additive nonparametric part. We also present the finite sample performances of the proposed estimators and analyze Boston housing data as an illustration. Comment: Published at http://dx.doi.org/10.3150/10-BEJ296 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
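For the plain (non-additive) partially linear model y = x'beta + g(z) + error, the classical way to estimate beta is Robinson's double-residual device: kernel-smooth both y and x on z, then run ordinary least squares on the residuals. The paper replaces the smoother with smooth backfitting to exploit additive structure; the sketch below shows only the simpler double-residual baseline, with hypothetical names and a scalar x.

```python
import math

def nw(z, t, h):
    # Nadaraya-Watson estimate of E[t | z] at each sample point
    out = []
    for z0 in z:
        num = den = 0.0
        for zi, ti in zip(z, t):
            k = math.exp(-0.5 * ((zi - z0) / h) ** 2)
            num += k * ti
            den += k
        out.append(num / den)
    return out

def robinson_beta(x, z, y, h=0.5):
    # Double-residual estimator: regress (y - E[y|z]) on (x - E[x|z]).
    ex = nw(z, x, h)
    ey = nw(z, y, h)
    xr = [xi - e for xi, e in zip(x, ex)]
    yr = [yi - e for yi, e in zip(y, ey)]
    return sum(a * b for a, b in zip(xr, yr)) / sum(a * a for a in xr)
```

The efficiency point of the abstract is that when g is additive in several covariates, smoothing against the full additive structure (rather than an unstructured multivariate smoother) tightens the information bound for beta.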

    Within-group fairness: A guidance for more sound between-group fairness

    Because they have a vital effect on social decision-making, AI algorithms should not only be accurate but also should not be unfair toward certain sensitive groups (e.g., non-white people, women). Various specially designed AI algorithms have been developed to ensure that trained AI models are fair between sensitive groups. In this paper, we raise a new issue: between-group fair AI models can still treat individuals in the same sensitive group unfairly. We introduce a new concept of fairness, called within-group fairness, which requires that AI models be fair for those in the same sensitive group as well as for those in different sensitive groups. We materialize the concept of within-group fairness by proposing corresponding mathematical definitions and developing learning algorithms to control within-group fairness and between-group fairness simultaneously. Numerical studies show that the proposed learning algorithms improve within-group fairness without sacrificing either accuracy or between-group fairness.
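To make the two notions concrete: between-group fairness is typically audited with a quantity like the demographic parity gap, while one illustrative (not the paper's) proxy for within-group unfairness is how often a fairness-constrained model reorders individuals of the same group relative to an unconstrained baseline. Both helpers below are hypothetical names for that illustration.

```python
def between_group_gap(preds, groups):
    # Demographic parity gap: max difference in positive-prediction
    # rates across sensitive groups.
    rates = []
    for g in sorted(set(groups)):
        sel = [p for p, gg in zip(preds, groups) if gg == g]
        rates.append(sum(sel) / len(sel))
    return max(rates) - min(rates)

def within_group_flip_rate(scores, base_scores, groups, g):
    # Fraction of same-group pairs whose relative score order differs
    # from a baseline model's order (an illustrative within-group proxy).
    idx = [i for i, gg in enumerate(groups) if gg == g]
    flips = total = 0
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            i, j = idx[a], idx[b]
            s = scores[i] - scores[j]
            t = base_scores[i] - base_scores[j]
            if s * t != 0:  # skip ties
                total += 1
                if (s > 0) != (t > 0):
                    flips += 1
    return flips / total if total else 0.0
```

A model can drive the between-group gap to zero while the flip rate inside a group is large, which is the phenomenon the abstract flags.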

    Expansion for Moments of Regression Quantiles with Applications to Nonparametric Testing

    We discuss nonparametric tests for parametric specifications of regression quantiles. The test is based on the comparison of parametric and nonparametric fits of these quantiles. The nonparametric fit is a Nadaraya-Watson quantile smoothing estimator. An asymptotic treatment of the test statistic requires the development of new mathematical arguments. An approach that relies only on plugging in a Bahadur expansion of the nonparametric estimator is not satisfactory: it requires overly strong conditions on the dimension and the choice of the bandwidth. Our alternative mathematical approach requires the calculation of moments of Bahadur expansions of Nadaraya-Watson quantile regression estimators. This calculation is done by inverting the problem and applying higher-order Edgeworth expansions. The moments yield bounds on the accuracy of Bahadur expansions for integrals of kernel quantile estimators. Another application of our method gives asymptotic results for the estimation of weighted averages of regression quantiles.
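The Nadaraya-Watson quantile smoother the abstract refers to estimates the conditional tau-quantile at x0 by minimizing a kernel-weighted check loss, whose minimizer is simply a weighted sample quantile. A minimal sketch of that local-constant estimator (hypothetical names, Gaussian kernel assumed):

```python
import math

def check_loss(u, tau):
    # quantile check function rho_tau(u)
    return u * (tau - (1.0 if u < 0 else 0.0))

def local_quantile(x, y, x0, tau, h):
    # Minimize sum_i K_h(x_i - x0) * rho_tau(y_i - q) over q;
    # the minimizer is a weighted tau-quantile of the y_i.
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    pairs = sorted(zip(y, w))
    total = sum(w)
    cum = 0.0
    for yi, wi in pairs:
        cum += wi
        if cum >= tau * total:
            return yi
    return pairs[-1][0]
```

The Bahadur expansion studied in the paper approximates exactly this implicitly defined minimizer by an explicit average, and the paper's contribution is controlling the moments of that approximation error.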
