
    A new test for the parametric form of the variance function in nonparametric regression

    In the common nonparametric regression model, the problem of testing for the parametric form of the conditional variance is considered. A stochastic process based on the difference between the empirical processes obtained from the standardized nonparametric residuals under the null hypothesis (of a specific parametric form of the variance function) and under the alternative is introduced, and its weak convergence is established. This result is used for the construction of a Cramér-von Mises type statistic for testing the parametric form of the conditional variance. The finite sample properties of a bootstrap version of this test are investigated by means of a simulation study. In particular, the new procedure is compared with some of the currently available methods for this problem, and its performance is illustrated by means of a data example.
    Keywords: Bootstrap; Kernel estimation; Nonparametric regression; Residual distribution; Testing heteroscedasticity; Testing homoscedasticity
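The construction can be illustrated with a small sketch. The Python code below is a deliberately simplified version: it assumes a homoscedastic null hypothesis (constant variance), uses Nadaraya-Watson smoothers for both the mean and the conditional variance, and compares the two empirical distribution functions of standardized residuals through a crude Cramér-von Mises distance. It follows the spirit of the test rather than the paper's exact processes, and all function names are illustrative.

```python
import numpy as np

def nw_smooth(x, y, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x] at the sample points."""
    d = (x[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * d**2)                          # Gaussian kernel weights
    return (w @ y) / w.sum(axis=1)

def cvm_variance_test(x, y, h=0.1, n_boot=500, seed=0):
    """Cramer-von Mises type test of a constant-variance (homoscedastic) null,
    comparing empirical CDFs of residuals standardized under H0 and under the
    nonparametric alternative.  A sketch only; the paper's processes differ."""
    rng = np.random.default_rng(seed)
    n = len(y)
    m_hat = nw_smooth(x, y, h)
    e = y - m_hat                                    # nonparametric residuals
    sigma0 = np.sqrt(np.mean(e**2))                  # H0: sigma(x) = constant
    sigma_np = np.sqrt(np.clip(nw_smooth(x, e**2, h), 1e-12, None))  # alternative

    def cvm_stat(res0, res1):
        grid = np.sort(np.concatenate([res0, res1]))
        F0 = np.searchsorted(np.sort(res0), grid, side="right") / n
        F1 = np.searchsorted(np.sort(res1), grid, side="right") / n
        return n * np.mean((F0 - F1) ** 2)

    t_obs = cvm_stat(e / sigma0, e / sigma_np)

    # Bootstrap under the null: resample standardized residuals, rebuild y, refit.
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        y_star = m_hat + sigma0 * rng.choice(e / sigma0, size=n, replace=True)
        e_b = y_star - nw_smooth(x, y_star, h)
        s0 = np.sqrt(np.mean(e_b**2))
        s_np = np.sqrt(np.clip(nw_smooth(x, e_b**2, h), 1e-12, None))
        t_boot[b] = cvm_stat(e_b / s0, e_b / s_np)

    return t_obs, np.mean(t_boot >= t_obs)           # statistic and bootstrap p-value
```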

    Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

    We propose an automatic methodology framework for short- and long-term prediction of time series by means of fuzzy inference systems. In this methodology, fuzzy techniques and statistical techniques for nonparametric residual variance estimation are combined in order to build autoregressive predictive models implemented as fuzzy inference systems. Nonparametric residual variance estimation plays a key role in driving the identification and learning procedures. Concrete criteria and procedures within the proposed methodology framework are applied to a number of time series prediction problems. The learn-from-examples method introduced by Wang and Mendel (W&M) is used for identification. The Levenberg–Marquardt (L–M) optimization method is then applied for tuning. The W&M method produces compact and potentially accurate inference systems when applied after a proper variable selection stage. The L–M method yields the best compromise between accuracy and interpretability of results, among a set of alternatives. Delta test based residual variance estimates are used to select the best subset of inputs to the fuzzy inference systems as well as the number of linguistic labels for the inputs. Experimental results on a diverse set of time series prediction benchmarks are compared against those of least-squares support vector machines (LS-SVM), optimally pruned extreme learning machines (OP-ELM), and k-NN based autoregressors. The advantages of the proposed methodology are shown in terms of linguistic interpretability, generalization capability and computational cost. Furthermore, fuzzy models are shown to be consistently more accurate for prediction in the case of time series coming from real-world applications.
    Funding: Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-03674, IAC07-I-0205:33080, IAC08-II-3347:5626
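The delta test that drives the selection stage has a simple nonparametric form: half the mean squared difference between each output and the output of its nearest neighbour in input space estimates the residual (noise) variance. The sketch below shows this estimator together with a greedy forward-selection loop; the greedy loop is a simplified stand-in for the selection procedure of the paper, and the function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def delta_test(X, y):
    """Delta test estimate of the residual variance: half the mean squared
    difference between each output and the output of its nearest neighbour
    in input space.  X is a 2-D array of inputs, y the 1-D output vector."""
    tree = cKDTree(X)
    _, idx = tree.query(X, k=2)          # k=2: the nearest point is the sample itself
    return 0.5 * np.mean((y - y[idx[:, 1]]) ** 2)

def select_inputs(X, y, max_vars=None):
    """Greedy forward selection of input variables that minimises the delta test,
    a simplified stand-in for the selection stage described in the abstract."""
    remaining, chosen, best = list(range(X.shape[1])), [], np.inf
    while remaining and (max_vars is None or len(chosen) < max_vars):
        score, j = min((delta_test(X[:, chosen + [j]], y), j) for j in remaining)
        if score >= best:                # stop when adding a variable no longer helps
            break
        best = score
        chosen.append(j)
        remaining.remove(j)
    return chosen, best
```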

    Effect of mean on variance function estimation in nonparametric regression

    Variance function estimation in nonparametric regression is considered and the minimax rate of convergence is derived. We are particularly interested in the effect of the unknown mean on the estimation of the variance function. Our results indicate that, contrary to the common practice, it is not desirable to base the estimator of the variance function on the residuals from an optimal estimator of the mean when the mean function is not smooth. Instead it is more desirable to use estimators of the mean with minimal bias. On the other hand, when the mean function is very smooth, our numerical results show that the residual-based method performs better, but not substantially better than the first-order-difference-based estimator. In addition, our asymptotic results also correct the optimal rate claimed in Hall and Carroll [J. Roy. Statist. Soc. Ser. B 51 (1989) 3-14].
    Comment: Published at http://dx.doi.org/10.1214/009053607000000901 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
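To see the two estimators being compared side by side, here is a minimal sketch: a first-order-difference-based estimate of the variance function, obtained by kernel-smoothing half squared differences of neighbouring observations so that no mean estimate is needed, and a residual-based estimate that smooths the mean first. Both use a plain Gaussian kernel as an illustrative choice; bandwidths and boundary handling are deliberately naive, and the function names are not from the paper.

```python
import numpy as np

def diff_based_variance(x, y, h=0.1):
    """First-order-difference-based estimate of the variance function sigma^2(x):
    kernel-smooth the pseudo-observations (y_{i+1} - y_i)^2 / 2, whose expectation
    is close to sigma^2 for a smooth mean.  Assumes the design points x are sorted."""
    d2 = 0.5 * np.diff(y) ** 2                 # pseudo-observations of sigma^2
    xm = 0.5 * (x[:-1] + x[1:])                # attach them to interval midpoints
    w = np.exp(-0.5 * ((x[:, None] - xm[None, :]) / h) ** 2)
    return (w @ d2) / w.sum(axis=1)

def residual_based_variance(x, y, h=0.1):
    """Residual-based estimate: smooth the mean first, then kernel-smooth the
    squared residuals (both with a Gaussian kernel, as an illustrative choice)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    m_hat = (w @ y) / w.sum(axis=1)
    return (w @ (y - m_hat) ** 2) / w.sum(axis=1)
```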

    Bootstrap tests for the error distribution in linear and nonparametric regression models

    In this paper we investigate several tests for the hypothesis of a parametric form of the error distribution in the common linear and nonparametric regression model, which are based on empirical processes of residuals. It is well known that tests in this context are not asymptotically distribution-free, and the parametric bootstrap is applied to deal with this problem. The performance of the resulting bootstrap test is investigated from an asymptotic point of view and by means of a simulation study. The results demonstrate that even for moderate sample sizes the parametric bootstrap provides a reliable and easily accessible solution to the problem of goodness-of-fit testing of assumptions regarding the error distribution in linear and nonparametric regression models.
    Keywords: goodness-of-fit; residual process; parametric bootstrap; linear model; analysis of variance; M-estimation; nonparametric regression
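The general recipe, fit the model, compute a distance between the residual distribution and the hypothesized error law, then recalibrate that distance by regenerating data from the fitted null model, can be sketched as follows for a linear model with a normal-error null. A Kolmogorov-Smirnov distance stands in for the paper's empirical-process functionals, so this illustrates the parametric bootstrap idea rather than the exact tests; all names are illustrative.

```python
import numpy as np
from scipy import stats

def bootstrap_error_gof(X, y, n_boot=500, seed=0):
    """Parametric-bootstrap goodness-of-fit test for normal errors in a linear
    model, based on a Kolmogorov-Smirnov statistic of the estimated residuals."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])            # design matrix with intercept

    def fit_and_stat(yy):
        beta, *_ = np.linalg.lstsq(Xd, yy, rcond=None)
        res = yy - Xd @ beta
        sigma = res.std(ddof=Xd.shape[1])
        # distance between the empirical CDF of residuals and the fitted normal law
        return beta, sigma, stats.kstest(res, "norm", args=(0.0, sigma)).statistic

    beta0, sigma0, t_obs = fit_and_stat(y)

    # Bootstrap: regenerate data under the fitted null model and refit each time,
    # so the statistic's null distribution reflects the effect of parameter estimation.
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        y_star = Xd @ beta0 + rng.normal(0.0, sigma0, size=n)
        t_boot[b] = fit_and_stat(y_star)[2]

    return t_obs, np.mean(t_boot >= t_obs)           # statistic and bootstrap p-value
```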

    Optimal variance estimation without estimating the mean function

    We study the least squares estimator in the residual variance estimation context. We show that the mean squared differences of paired observations are asymptotically normally distributed. We further establish that, by regressing the mean squared differences of these paired observations on the squared distances between paired covariates via a simple least squares procedure, the resulting variance estimator is not only asymptotically normal and root-n consistent, but also reaches the optimal bound in terms of estimation variance. We also demonstrate the advantage of the least squares estimator in comparison with existing methods in terms of the second order asymptotic properties.
    Comment: Published at http://dx.doi.org/10.3150/12-BEJ432 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
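A minimal sketch of the pairing-and-regression idea: for a pair of observations, half the squared response difference has expectation close to sigma^2 plus a term that shrinks with the squared covariate distance when the mean is smooth, so the intercept of a simple least squares fit of the former on the latter estimates the residual variance. The pairing scheme and any weighting used in the paper are simplified here, and the function name is illustrative.

```python
import numpy as np

def pairwise_ls_variance(x, y, max_pairs=None, seed=0):
    """Least-squares residual-variance estimate: regress half squared response
    differences (y_i - y_j)^2 / 2 of paired observations on the squared covariate
    distances (x_i - x_j)^2; the fitted intercept estimates sigma^2, since the
    mean-related bias vanishes as the covariate distance goes to zero."""
    rng = np.random.default_rng(seed)
    n = len(y)
    i, j = np.triu_indices(n, k=1)                        # all unordered pairs
    if max_pairs is not None and len(i) > max_pairs:      # subsample pairs if too many
        keep = rng.choice(len(i), size=max_pairs, replace=False)
        i, j = i[keep], j[keep]
    half_sq_diff = 0.5 * (y[i] - y[j]) ** 2               # regression response
    sq_dist = (x[i] - x[j]) ** 2                          # regressor
    design = np.column_stack([np.ones_like(sq_dist), sq_dist])
    coef, *_ = np.linalg.lstsq(design, half_sq_diff, rcond=None)
    return coef[0]                                        # intercept = sigma^2 estimate
```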

    Bayesian semiparametric multi-state models

    Multi-state models provide a unified framework for the description of the evolution of discrete phenomena in continuous time. One particular example is Markov processes, which can be characterised by a set of time-constant transition intensities between the states. In this paper, we will extend such parametric approaches to semiparametric models with flexible transition intensities based on Bayesian versions of penalised splines. The transition intensities will be modelled as smooth functions of time and can further be related to parametric as well as nonparametric covariate effects. Covariates with time-varying effects and frailty terms can be included in addition. Inference will be conducted either in a fully Bayesian way (using Markov chain Monte Carlo simulation techniques) or in an empirical Bayes fashion (based on a mixed model representation). A counting process representation of semiparametric multi-state models provides the likelihood formula and also forms the basis for model validation via martingale residual processes. As an application, we will consider human sleep data with a discrete set of sleep states such as REM and non-REM phases. In this case, simple parametric approaches are inappropriate since the dynamics underlying human sleep vary strongly throughout the night, and individual-specific variation has to be accounted for using covariate information and frailty terms.
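As a rough illustration of the penalised-spline ingredient, the sketch below fits a single smooth transition intensity by penalised counting-process likelihood, i.e. the posterior mode for a fixed smoothing parameter, which corresponds to the mixed-model (empirical Bayes) view mentioned above. The full model with several transition types, covariate effects, frailties and MCMC is not reproduced; the function names, the data layout and the exposure-grid approximation of the cumulative intensity are assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def penalized_intensity_fit(event_times, at_risk_grid, n_basis=12, degree=3, lam=10.0):
    """Penalised-likelihood fit of one transition intensity lambda(t) = exp(B(t) beta)
    under a counting-process likelihood with a second-order difference penalty on beta.
    `event_times`: times of observed transitions of this type.
    `at_risk_grid`: array of (time, exposure) rows approximating the integral of lambda."""
    t_all = np.concatenate([event_times, at_risk_grid[:, 0]])
    t0, t1 = t_all.min(), t_all.max()
    knots = np.concatenate([[t0] * degree,
                            np.linspace(t0, t1, n_basis - degree + 1),
                            [t1] * degree])

    def basis(t):
        # Evaluate all n_basis B-spline basis functions at the times t.
        return BSpline(knots, np.eye(n_basis), degree)(t)

    B_event = basis(event_times)
    B_grid = basis(at_risk_grid[:, 0])
    expo = at_risk_grid[:, 1]
    D2 = np.diff(np.eye(n_basis), n=2, axis=0)            # second-difference matrix
    K = D2.T @ D2                                          # random-walk-type penalty

    def neg_pen_loglik(beta):
        log_lam_events = B_event @ beta                    # log-intensity at event times
        cum_hazard = np.sum(expo * np.exp(B_grid @ beta))  # approximate integral of lambda
        return -(log_lam_events.sum() - cum_hazard) + 0.5 * lam * beta @ K @ beta

    res = minimize(neg_pen_loglik, np.zeros(n_basis), method="BFGS")
    return res.x, lambda t: np.exp(basis(np.atleast_1d(t)) @ res.x)
```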