
    Bandwidth selection for nonparametric kernel testing

    We propose a sound approach to bandwidth selection in nonparametric kernel testing. The main idea is to find an Edgeworth expansion of the asymptotic distribution of the test concerned. Because a kernel bandwidth appears in the leading term of the Edgeworth expansion, we are able to establish closed-form expressions for the leading terms of both the size and power functions, and then determine how the bandwidth should be chosen to meet given requirements on both. For example, when a significance level is given, we can choose the bandwidth such that the power function is maximized while the size function is controlled by the significance level. Both asymptotic theory and methodology are established. In addition, we develop an easy implementation procedure for the practical realization of the established methodology and illustrate it on two simulated examples and a real data example.
    Keywords: choice of bandwidth parameter; Edgeworth expansion; nonparametric kernel testing; power function; size function
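
    The selection rule described above (maximize the power function over bandwidths whose size function stays at the nominal level) can be illustrated with a small grid search. This is only a sketch: size_fn and power_fn stand for hypothetical, user-supplied approximations of the size and power functions (for example, Edgeworth-based ones), and neither these names nor the grid come from the paper.

        import numpy as np

        def select_bandwidth(size_fn, power_fn, alpha=0.05, h_grid=None):
            """Pick the bandwidth with the largest approximate power among
            those whose approximate size does not exceed alpha."""
            if h_grid is None:
                h_grid = np.linspace(0.05, 1.0, 200)
            # keep only bandwidths whose approximate size is controlled by alpha
            admissible = [h for h in h_grid if size_fn(h) <= alpha]
            if not admissible:
                raise ValueError("no bandwidth keeps the size at or below alpha")
            # among those, return the bandwidth maximizing approximate power
            return max(admissible, key=power_fn)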

    Improved kernel estimation of copulas: Weak convergence and goodness-of-fit testing

    We reconsider the existing kernel estimators for a copula function, as proposed in Gijbels and Mielniczuk [Comm. Statist. Theory Methods 19 (1990) 445--464], Fermanian, Radulovič and Wegkamp [Bernoulli 10 (2004) 847--860] and Chen and Huang [Canad. J. Statist. 35 (2007) 265--282]. All of these estimators share the drawback that they can suffer from a corner bias problem. One way to deal with this is to impose rather stringent conditions on the copula, thereby ruling out many classical families of copulas. In this paper, we propose improved estimators that take care of the typical corner bias problem. For Gijbels and Mielniczuk [Comm. Statist. Theory Methods 19 (1990) 445--464] and Chen and Huang [Canad. J. Statist. 35 (2007) 265--282], the improvement involves shrinking the bandwidth with an appropriate functional factor; for Fermanian, Radulovič and Wegkamp [Bernoulli 10 (2004) 847--860], this is done by using a transformation. The theoretical contribution of the paper is a weak convergence result for the three improved estimators under conditions that are met for most copula families. We also discuss the choice of bandwidth parameters, theoretically and practically, and illustrate the finite-sample behaviour of the estimators in a simulation study. The improved estimators are applied to goodness-of-fit testing for copulas.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS666 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
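
    As a rough, hypothetical illustration of the corner-bias idea (not the estimators studied in the paper), the sketch below smooths the empirical copula with a Gaussian kernel applied to pseudo-observations and shrinks the bandwidth by the factor sqrt(t(1-t)) in each margin, so the smoothing vanishes at the corners. The function name and the particular shrinkage factor are assumptions made for this example only.

        import numpy as np
        from scipy.stats import norm, rankdata

        def smoothed_empirical_copula(x, y, u, v, h=0.05):
            """Toy Gaussian-kernel smoothed empirical copula value C(u, v)
            with a bandwidth that shrinks towards the corners."""
            n = len(x)
            U = rankdata(x) / (n + 1.0)          # pseudo-observations
            V = rankdata(y) / (n + 1.0)
            hu = max(h * np.sqrt(u * (1 - u)), 1e-6)   # corner-shrunk bandwidths
            hv = max(h * np.sqrt(v * (1 - v)), 1e-6)
            # kernel-smoothed empirical distribution of the pseudo-observations
            return np.mean(norm.cdf((u - U) / hu) * norm.cdf((v - V) / hv))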

    Robust Estimation of Mean and Dispersion Functions in Extended Generalized Additive Models

    Generalized Linear Models are a widely used method to obtain parametric estimates for the mean function. They have been further extended, via Generalized Additive Models, to allow the relationship between the mean function and the covariates to be more flexible. However, the fixed variance structure can in many cases be too restrictive. The Extended Quasi-Likelihood (EQL) framework allows for estimation of both the mean and the dispersion/variance as functions of the covariates. As with other maximum likelihood methods, however, EQL estimates are not resistant to outliers: we need methods to obtain robust estimates for both the mean and the dispersion function. In this paper we obtain functional estimates for the mean and the dispersion that are both robust and smooth. The performance of the proposed method is illustrated via a simulation study and some real data examples.
    Keywords: dispersion; generalized additive modelling; mean regression function; quasi-likelihood; M-estimation; P-splines; robust estimation
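
    A minimal sketch of the general idea (robust, smooth fits of a mean curve and a dispersion curve), using a B-spline basis from patsy and Huber M-estimation from statsmodels as stand-ins for the paper's robust EQL machinery; no P-spline penalty is applied, and the helper name robust_mean_dispersion is invented for this example.

        import numpy as np
        import statsmodels.api as sm
        from patsy import dmatrix

        def robust_mean_dispersion(x, y, df=10):
            """Rough illustration: robust smooth mean fit, then a robust smooth
            fit of log squared residuals as a positive dispersion curve."""
            B = np.asarray(dmatrix(f"bs(x, df={df}, degree=3, include_intercept=True) - 1",
                                   {"x": x}))
            # Huber M-estimation of the mean on the spline basis
            mean_fit = sm.RLM(y, B, M=sm.robust.norms.HuberT()).fit()
            resid = y - mean_fit.fittedvalues
            # robust smooth fit of the log squared residuals
            disp_fit = sm.RLM(np.log(resid ** 2 + 1e-12), B,
                              M=sm.robust.norms.HuberT()).fit()
            return mean_fit.fittedvalues, np.exp(disp_fit.fittedvalues)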

    Nonparametric estimation of mean and dispersion functions in extended generalized linear models.

    In this paper the interest is in regression analysis for data that possibly show overdispersion or underdispersion. The starting point for modelling is the generalized linear model, in which we no longer require a linear form for the mean regression function, but allow it to be any smooth function of the covariate(s). In view of analyzing overdispersed or underdispersed data, we additionally bring in an unknown dispersion function. The mean regression function and the dispersion function are then estimated using P-splines with a difference-type penalty to prevent overfitting. We discuss two approaches: one based on an extended quasi-likelihood idea and one based on a pseudo-likelihood approach. The choice of smoothing parameters and implementation issues are discussed. The performance of the estimation method is investigated via simulations and its use is illustrated on several data sets, including continuous data, counts and proportions.
    Keywords: double exponential family; extended quasi-likelihood; modelling; overdispersion; pseudo-likelihood; P-splines; regression; variance estimation; underdispersion
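
    The P-spline idea in its simplest Gaussian-response form can be sketched as penalized least squares with a difference penalty on adjacent B-spline coefficients; the extended quasi-likelihood and pseudo-likelihood fits described above would iterate a step of this kind with working responses and weights. The basis size, penalty weight, and function name below are illustrative choices, not the paper's.

        import numpy as np
        from patsy import dmatrix

        def pspline_fit(x, y, n_basis=20, lam=1.0, diff_order=2):
            """P-spline fit in the Gaussian case: B-spline basis plus a
            difference penalty on neighbouring coefficients."""
            B = np.asarray(dmatrix(f"bs(x, df={n_basis}, degree=3, include_intercept=True) - 1",
                                   {"x": x}))
            # difference matrix acting on the spline coefficients
            D = np.diff(np.eye(B.shape[1]), n=diff_order, axis=0)
            # penalized normal equations: (B'B + lam D'D) beta = B'y
            beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
            return B @ beta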

    Robust Forecasting of Non-Stationary Time Series

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression and fitted by a localized MM-estimator, combining high robustness with high efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized least squares estimator. An additional advantage of the MM-estimator is that it provides a robust estimate of the local variability of the time series.
    Keywords: heteroscedasticity; non-parametric regression; prediction; outliers; robustness
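
    A simplified stand-in for a localized robust fit, assuming a Gaussian kernel for localization and Huber reweighting in place of the MM-estimator used in the paper; the function name and tuning constants are assumptions made for this example. Evaluating the local linear fit at the next time point gives a rough one-step-ahead forecast of the level.

        import numpy as np

        def local_huber_forecast(t, y, t0, h=10.0, c=1.345, n_iter=20):
            """Kernel-weighted local linear fit with Huber reweighting (IRLS),
            returning the fitted level at t0."""
            t = np.asarray(t, float)
            y = np.asarray(y, float)
            X = np.column_stack([np.ones_like(t), t - t0])   # local linear design
            kern = np.exp(-0.5 * ((t - t0) / h) ** 2)        # locality weights
            w = np.ones_like(y)                              # robustness weights
            for _ in range(n_iter):
                sw = np.sqrt(kern * w)
                beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
                r = y - X @ beta
                s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
                u = np.abs(r) / (s * c)
                w = np.where(u <= 1.0, 1.0, 1.0 / u)         # Huber weights
            return beta[0]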

    Penalized wavelet monotone regression

    In this paper we focus on nonparametric estimation of a constrained regression function using penalized wavelet regression techniques. This results in a convex optimization problem under linear constraints. Necessary and sufficient conditions for the existence of a unique solution are discussed. The estimator is easily obtained via the dual formulation of the optimization problem. In particular, we investigate a penalized wavelet monotone regression estimator. We establish the rate of convergence of this estimator and illustrate its finite-sample performance via a simulation study. We also compare its performance with that of a recently proposed constrained estimator. An illustration on some real data is given.
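
    The monotone-constrained least-squares core of such a problem (without the wavelet basis or the penalty) can be sketched by reparametrizing the fitted values as a starting level plus nonnegative increments, which turns monotonicity into simple bound constraints. Everything named below is an illustrative assumption, not the paper's estimator.

        import numpy as np
        from scipy.optimize import lsq_linear

        def monotone_ls_fit(y):
            """Nondecreasing least-squares fit of a sequence y via a
            cumulative-sum reparametrization with bound constraints."""
            y = np.asarray(y, float)
            n = len(y)
            C = np.tril(np.ones((n, n)))              # fitted values = C @ coefficients
            lb = np.r_[-np.inf, np.zeros(n - 1)]      # free level, nonnegative increments
            res = lsq_linear(C, y, bounds=(lb, np.full(n, np.inf)))
            return C @ res.x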