
    Nonparametric and Varying Coefficient Modal Regression

    In this article, we propose a new nonparametric data analysis tool, which we call nonparametric modal regression, to investigate the relationship among variables of interest based on estimating the mode of the conditional density of a response variable Y given predictors X. Nonparametric modal regression is distinguished from conventional nonparametric regression in that, instead of the conditional average or median, it uses the "most likely" conditional values to measure the center. Better prediction performance and robustness are two important characteristics of nonparametric modal regression compared to traditional nonparametric mean regression and nonparametric median regression. We propose to use local polynomial regression to estimate the nonparametric modal regression, and we investigate the asymptotic properties of the resulting estimator. To broaden the applicability of nonparametric modal regression to high-dimensional data or functional/longitudinal data, we further develop a nonparametric varying coefficient modal regression. A Monte Carlo simulation study and an analysis of health care expenditure data demonstrate that the proposed nonparametric modal regression model outperforms traditional nonparametric mean regression and nonparametric median regression in terms of prediction performance. Comment: 33 pages
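
    For intuition only, the sketch below estimates a conditional mode by a kernel-weighted grid search over y; it is an illustrative stand-in for the paper's local polynomial modal estimator, with made-up bandwidths, function names, and toy data.

    import numpy as np

    def conditional_mode(x0, X, Y, hx=0.5, hy=0.5, n_grid=400):
        # Kernel-weighted grid search for the mode of Y given X = x0.
        # Illustrative stand-in for the paper's local polynomial estimator;
        # hx, hy and the grid size are made-up choices.
        y_grid = np.linspace(Y.min(), Y.max(), n_grid)
        wx = np.exp(-0.5 * ((X - x0) / hx) ** 2)          # kernel weights in X
        dens = np.array([np.sum(wx * np.exp(-0.5 * ((Y - y) / hy) ** 2))
                         for y in y_grid])                # conditional density, up to a constant
        return y_grid[np.argmax(dens)]                    # the "most likely" value of Y at x0

    # Toy usage: right-skewed noise makes the conditional mode, median and mean differ.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 3, 500)
    Y = np.sin(X) + rng.exponential(0.5, size=500)
    print(conditional_mode(1.5, X, Y))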

    Computationally Efficient Nonparametric Importance Sampling

    The variance reduction established by importance sampling strongly depends on the choice of the importance sampling distribution. A good choice is often hard to achieve, especially for high-dimensional integration problems. Nonparametric estimation of the optimal importance sampling distribution (known as nonparametric importance sampling) is a reasonable alternative to parametric approaches. In this article, nonparametric variants of both the self-normalized and the unnormalized importance sampling estimator are proposed and investigated. A common critique of nonparametric importance sampling is the increased computational burden compared to parametric methods. We solve this problem to a large degree by utilizing the linear blend frequency polygon estimator instead of a kernel estimator. Mean square error convergence properties are investigated, leading to recommendations for the efficient application of nonparametric importance sampling. In particular, we show that nonparametric importance sampling asymptotically attains the optimal importance sampling variance. The efficiency of nonparametric importance sampling algorithms relies heavily on the computational efficiency of the employed nonparametric estimator. The linear blend frequency polygon outperforms kernel estimators in terms of criteria such as efficient sampling and evaluation. Furthermore, it is compatible with the inversion method for sample generation, which allows our algorithms to be combined with other variance reduction techniques such as stratified sampling. Empirical evidence for the usefulness of the suggested algorithms is obtained by means of three benchmark integration problems. As an application, we estimate the distribution of the queue length of a spam filter queueing system based on real data. Comment: 29 pages, 7 figures
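
    As a rough illustration of nonparametric importance sampling, the sketch below fits a density estimate to a pilot sample, draws from it by inversion within bins, and forms the self-normalized estimator. The paper uses a linear blend frequency polygon; a plain histogram is substituted here to keep the inversion step trivial, and the function names, bin count, and toy problem are assumptions.

    import numpy as np

    def self_normalized_is(integrand, target_pdf, pilot, n, bins=50):
        # Build a histogram proposal from the pilot sample (the paper uses a
        # linear blend frequency polygon instead), sample from it by inversion
        # within bins, and return the self-normalized importance sampling estimate.
        heights, edges = np.histogram(pilot, bins=bins, density=True)
        widths = np.diff(edges)
        probs = heights * widths
        probs = probs / probs.sum()                        # bin selection probabilities
        idx = np.random.choice(len(probs), size=n, p=probs)
        x = edges[idx] + np.random.uniform(size=n) * widths[idx]
        q = heights[idx]                                   # proposal density at the draws
        w = target_pdf(x) / q                              # importance weights
        return np.sum(w * integrand(x)) / np.sum(w)        # self-normalized estimator

    # Toy usage: E[X^2] = 1 under N(0,1); the pilot sample is drawn from the target.
    # (The proposal's support is limited to the pilot range, acceptable for a sketch.)
    pilot = np.random.normal(size=2000)
    phi = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    print(self_normalized_is(lambda x: x ** 2, phi, pilot, n=10000))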

    Sparse Additive Models

    We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. An analysis of the theoretical properties of SpAM is given. We also study a greedy estimator that is a nonparametric version of forward stepwise regression. Empirical results on synthetic and real data are presented, showing that SpAM can be effective in fitting sparse nonparametric models to high-dimensional data.
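
    A minimal sketch of the SpAM idea (smooth each additive component on its partial residuals, then shrink it toward zero) is given below; the kernel smoother, bandwidth, penalty level, and iteration count are illustrative choices, not the algorithm as specified in the paper.

    import numpy as np

    def kernel_smooth(x, r, h=0.3):
        # Nadaraya-Watson smoother of the residuals r against covariate x.
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        return K @ r / K.sum(axis=1)

    def spam_backfit(X, y, lam=0.1, iters=20, h=0.3):
        # Backfitting loop in the spirit of SpAM: smooth each component on its
        # partial residuals, center it, then shrink it by (1 - lam / ||f_j||)_+.
        n, p = X.shape
        F = np.zeros((n, p))                                # fitted component functions
        for _ in range(iters):
            for j in range(p):
                r = y - y.mean() - F.sum(axis=1) + F[:, j]  # partial residuals for component j
                fj = kernel_smooth(X[:, j], r, h)
                fj -= fj.mean()                             # center each component
                norm = np.sqrt(np.mean(fj ** 2))
                F[:, j] = max(0.0, 1.0 - lam / norm) * fj if norm > 0 else 0.0
        return y.mean(), F                                  # intercept and component fits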

    Pointwise universal consistency of nonparametric linear estimators

    This paper presents sufficient conditions for pointwise universal consistency of nonparametric delta estimators. We demonstrate the applicability of these conditions to several classes of nonparametric estimators.
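
    For context, a nonparametric linear (delta) estimator is one that is linear in the responses, m_hat(x) = sum_i W_{n,i}(x) * Y_i. The snippet below shows the Nadaraya-Watson smoother as a familiar member of this class; it is an illustration only, not taken from the paper.

    import numpy as np

    def nadaraya_watson(x0, X, Y, h=0.5):
        # A linear estimator: the prediction is a weighted average of the Y_i,
        # with kernel weights playing the role of W_{n,i}(x0).
        w = np.exp(-0.5 * ((X - x0) / h) ** 2)
        return np.dot(w, Y) / w.sum()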

    Semi-parametric regression: Efficiency gains from modeling the nonparametric part

    It is widely acknowledged that structured nonparametric modeling that circumvents the curse of dimensionality is important in nonparametric estimation. In this paper we show that the same holds for semi-parametric estimation. We argue that estimation of the parametric component of a semi-parametric model can be improved substantially when more structure is put into the nonparametric part of the model. We illustrate this for the partially linear model and investigate efficiency gains when the nonparametric part of the model has an additive structure. We present the semi-parametric Fisher information bound for estimating the parametric part of the partially linear additive model and provide semi-parametric efficient estimators, for which we use a smooth backfitting technique to deal with the additive nonparametric part. We also present the finite-sample performance of the proposed estimators and analyze Boston housing data as an illustration. Comment: Published at http://dx.doi.org/10.3150/10-BEJ296 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
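
    The sketch below illustrates the partially linear model Y = Z'beta + m(X) + error with a classical double-residual (Robinson-type) fit; it is not the paper's smooth-backfitting efficient estimator, and the additive nonparametric part is collapsed to a single smoother for brevity, with an assumed bandwidth.

    import numpy as np

    def kernel_smooth(x, v, h=0.4):
        # Nadaraya-Watson estimate of E[v | x] at the sample points.
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        return K @ v / K.sum(axis=1)

    def partially_linear_fit(Z, x, y, h=0.4):
        # Double-residual fit of Y = Z beta + m(x) + error: smooth y and each
        # column of Z on x, regress residual on residual to get beta, then
        # smooth what is left to recover the nonparametric part m.
        Zr = Z - np.column_stack([kernel_smooth(x, Z[:, j], h) for j in range(Z.shape[1])])
        yr = y - kernel_smooth(x, y, h)
        beta, *_ = np.linalg.lstsq(Zr, yr, rcond=None)      # parametric component
        m_hat = kernel_smooth(x, y - Z @ beta, h)           # nonparametric component
        return beta, m_hat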