
    Estimating invariant laws of linear processes by U-statistics

Suppose we observe an invertible linear process with independent mean-zero innovations and with coefficients depending on a finite-dimensional parameter, and we want to estimate the expectation of some function under the stationary distribution of the process. The usual estimator would be the empirical estimator. It can be improved using the fact that the innovations are centered. We construct an even better estimator using the representation of the observations as infinite-order moving averages of the innovations. The expectation of the function under the stationary distribution can then be written as the expectation of the function of an infinite series in the innovations, and it can be estimated by a U-statistic of increasing order (also called an "infinite-order U-statistic") in terms of the estimated innovations. The estimator can be further improved using the fact that the innovations are centered. This improved estimator is optimal if the coefficients of the linear process are estimated optimally.
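
    As a concrete illustration of the construction, the following sketch estimates the stationary expectation of a function for an AR(1) process, where the moving-average coefficients are the powers of the autoregression parameter. The AR(1) model, the fixed truncation order m, and the Monte Carlo sampling of index tuples are simplifying assumptions made here for illustration; in the paper the order of the U-statistic grows with the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process X_t = theta * X_{t-1} + eps_t; its moving-average
# coefficients are b_j = theta**j (illustrative choice of linear process).
theta, n = 0.5, 2000
X = np.zeros(n + 100)
for t in range(1, n + 100):
    X[t] = theta * X[t - 1] + rng.standard_normal()
X = X[100:]  # drop burn-in

# Estimate theta by least squares and form residuals (estimated innovations),
# centering them to exploit that the innovations have mean zero.
theta_hat = np.sum(X[1:] * X[:-1]) / np.sum(X[:-1] ** 2)
resid = X[1:] - theta_hat * X[:-1]
resid -= resid.mean()

f = lambda x: x ** 2          # function whose stationary expectation we want
emp = f(X).mean()             # the usual empirical estimator

# Truncated U-statistic: average f(sum_j theta_hat**j * eps_{i_j}) over random
# tuples of distinct residual indices.  The fixed order m and the Monte Carlo
# approximation of the U-statistic are simplifications of the construction.
m, B = 10, 5000
coef = theta_hat ** np.arange(m)
vals = np.empty(B)
for b in range(B):
    idx = rng.choice(len(resid), size=m, replace=False)
    vals[b] = f(coef @ resid[idx])

print(f"empirical {emp:.4f}, U-statistic {vals.mean():.4f}, "
      f"truth {1 / (1 - theta ** 2):.4f}")
```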

Uniformly root-n consistent density estimators for weakly dependent invertible linear processes

Convergence rates of kernel density estimators for stationary time series are well studied. For invertible linear processes, we construct a new density estimator that converges, in the supremum norm, at the better, parametric, rate n^{-1/2}. Our estimator is a convolution of two different residual-based kernel estimators. We obtain in particular convergence rates for such residual-based kernel estimators; these results are of independent interest. Comment: Published at http://dx.doi.org/10.1214/009053606000001352 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
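
    A minimal sketch of the convolution idea for an AR(1) process, the simplest invertible linear process: the stationary density of X_t = theta * X_{t-1} + eps_t is the convolution of the innovation density and the density of theta * X_{t-1}, and both factors can be estimated from residuals. The AR(1) model, the Gaussian kernel, and the ad hoc bandwidth are assumptions made for illustration; this is not the paper's estimator or its bandwidth choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) data, the simplest invertible linear process (illustrative model).
theta, n = 0.5, 1000
X = np.zeros(n + 100)
for t in range(1, n + 100):
    X[t] = theta * X[t - 1] + rng.standard_normal()
X = X[100:]

# Residuals eps_hat and the remainder X_t - eps_t = theta * X_{t-1}; the
# stationary density is the convolution of the laws of these two pieces.
theta_hat = np.sum(X[1:] * X[:-1]) / np.sum(X[:-1] ** 2)
eps_hat = X[1:] - theta_hat * X[:-1]
rem_hat = theta_hat * X[:-1]

h = 0.3                                                    # ad hoc bandwidth
K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel

def density_conv(x):
    # Convolution of the two residual-based estimators: one kernel smoother
    # applied to all n^2 sums eps_hat[i] + rem_hat[j].
    pairs = eps_hat[:, None] + rem_hat[None, :]
    return K((x - pairs) / h).mean() / h

grid = np.linspace(-3.0, 3.0, 7)
truth = np.sqrt((1 - theta ** 2) / (2 * np.pi)) \
    * np.exp(-0.5 * grid ** 2 * (1 - theta ** 2))
for x, t in zip(grid, truth):
    print(f"x={x:+.1f}  estimate {density_conv(x):.4f}  truth {t:.4f}")
```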

    Uniform convergence of convolution estimators for the response density in nonparametric regression

We consider a nonparametric regression model Y = r(X) + \varepsilon with a random covariate X that is independent of the error \varepsilon. The density of the response Y is then a convolution of the densities of \varepsilon and r(X). It can therefore be estimated by a convolution of kernel estimators for these two densities, or more generally by a local von Mises statistic. If the regression function has a nowhere vanishing derivative, then the convolution estimator converges at a parametric rate. We show that the convergence holds uniformly, and that the corresponding process obeys a functional central limit theorem in the space C_0(\mathbb{R}) of continuous functions vanishing at infinity, endowed with the sup-norm. The estimator is not efficient. We construct an additive correction that makes it efficient. Comment: Published at http://dx.doi.org/10.3150/12-BEJ451 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
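
    The following sketch implements the convolution estimator as a local von Mises statistic: a kernel smoother applied to all sums r_hat(X_i) + eps_hat_j. The regression function r(x) = x + sin(x) (chosen so its derivative vanishes nowhere on the support), the Nadaraya-Watson estimator for r, and the bandwidths are illustrative assumptions, and no efficiency correction is attempted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical regression model Y = r(X) + eps with r(x) = x + sin(x), which
# has a nowhere vanishing derivative on [-2, 2]; eps is independent of X.
n = 800
X = rng.uniform(-2, 2, n)
Y = X + np.sin(X) + 0.5 * rng.standard_normal(n)

K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def r_hat(x, hx=0.25):
    # Nadaraya-Watson estimate of the regression function (ad hoc bandwidth).
    w = K((x - X) / hx)
    return np.sum(w * Y) / np.sum(w)

r_fit = np.array([r_hat(x) for x in X])
eps_hat = Y - r_fit                                 # residuals

def f_Y(y, hy=0.2):
    # Local von Mises statistic: a kernel smoother applied to all n^2 sums
    # r_hat(X_i) + eps_hat[j]; this convolves the two density estimators.
    sums = r_fit[:, None] + eps_hat[None, :]
    return K((y - sums) / hy).mean() / hy

for y in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"y={y:+.1f}  f_Y(y) ~= {f_Y(y):.4f}")
```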

    Efficient prediction for linear and nonlinear autoregressive models

Conditional expectations given past observations in stationary time series are usually estimated directly by kernel estimators, or by plugging in kernel estimators for transition densities. We show that, for linear and nonlinear autoregressive models driven by independent innovations, appropriate smoothed and weighted von Mises statistics of residuals estimate conditional expectations at better parametric rates and are asymptotically efficient. The proof is based on a uniform stochastic expansion for smoothed and weighted von Mises processes of residuals. We consider, in particular, estimation of conditional distribution functions and of conditional quantile functions. Comment: Published at http://dx.doi.org/10.1214/009053606000000812 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
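
    As a sketch of the plug-in idea for a hypothetical nonlinear AR(1) model, the conditional distribution function is estimated by inserting a kernel estimate of the autoregression function into a smoothed empirical distribution function of the residuals, and the conditional quantile is obtained by inverting it. The statistic below is unweighted, so it illustrates only the residual-based plug-in, not the efficient weighting of the paper; the model and bandwidths are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical nonlinear AR(1): X_t = 0.8 * tanh(X_{t-1}) + 0.5 * eps_t with
# standard normal innovations (an illustrative model, not from the paper).
n = 1500
X = np.zeros(n + 100)
for t in range(1, n + 100):
    X[t] = 0.8 * np.tanh(X[t - 1]) + 0.5 * rng.standard_normal()
X = X[100:]
past, cur = X[:-1], X[1:]

K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def r_hat(x, h=0.2):
    # Kernel estimate of the autoregression function E[X_t | X_{t-1} = x].
    w = K((x - past) / h)
    return np.sum(w * cur) / np.sum(w)

eps_hat = cur - np.array([r_hat(x) for x in past])  # residuals

def F_cond(y, x, h=0.1):
    # Residual-based plug-in: P(X_t <= y | X_{t-1} = x) is estimated by the
    # smoothed empirical df of the residuals, shifted by r_hat(x).
    return norm.cdf((y - r_hat(x) - eps_hat) / h).mean()

def q_cond(p, x):
    # Conditional p-quantile by bisection over the monotone df estimate.
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F_cond(mid, x) < p else (lo, mid)
    return 0.5 * (lo + hi)

print("P(X_t <= 0.5 | X_{t-1} = 1.0) ~=", round(F_cond(0.5, 1.0), 3))
print("median(X_t | X_{t-1} = 1.0)   ~=", round(q_cond(0.5, 1.0), 3))
```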

    Optimality of estimators for misspecified semi-Markov models

Suppose we observe a geometrically ergodic semi-Markov process and have a parametric model for the transition distribution of the embedded Markov chain, for the conditional distribution of the inter-arrival times, or for both. The first two models for the process are semiparametric, and the parameters can be estimated by conditional maximum likelihood estimators. The third model for the process is parametric, and the parameter can be estimated by an unconditional maximum likelihood estimator. We determine heuristically the asymptotic distributions of these estimators and show that they are asymptotically efficient. If the parametric models are not correct, the (conditional) maximum likelihood estimators estimate the parameter that maximizes the Kullback-Leibler information. We show that they remain asymptotically efficient in a nonparametric sense. Comment: To appear in a Special Volume of Stochastics: An International Journal of Probability and Stochastic Processes (http://www.informaworld.com/openurl?genre=journal%26issn=1744-2508) edited by N.H. Bingham and I.V. Evstigneev, which will be reprinted as Volume 57 of the IMS Lecture Notes Monograph Series (http://imstat.org/publications/lecnotes.htm).
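
    A small simulation sketch of the misspecified case, under assumptions chosen here for illustration: a two-state semi-Markov process whose true holding times are Gamma distributed is fitted with exponential holding-time distributions by conditional maximum likelihood. The fitted rates should approach the Kullback-Leibler optimal exponential rates, which for this pair of families match the mean of the true Gamma holding times.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Two-state semi-Markov process (illustrative): the embedded chain flips state
# with probability 0.7; the TRUE holding times are Gamma(2, 1/rate_s), but we
# fit exponential holding times, so the model is misspecified.
n, rates = 3000, {0: 1.0, 1: 2.0}
states = np.zeros(n, dtype=int)
for k in range(1, n):
    states[k] = states[k - 1] ^ (rng.random() < 0.7)
times = rng.gamma(shape=2.0, scale=1.0 / np.array([rates[s] for s in states]))

def neg_cond_loglik(rate, t):
    # Negative conditional log-likelihood of exponential holding times.
    return -(np.log(rate) - rate * t).sum()

for s in (0, 1):
    t_s = times[states == s]
    fit = minimize_scalar(neg_cond_loglik, args=(t_s,),
                          bounds=(1e-3, 50.0), method="bounded")
    # The Kullback-Leibler optimal exponential rate matches the mean 2/rate_s
    # of the Gamma(2, 1/rate_s) truth, i.e. the pseudo-true rate is rate_s/2.
    print(f"state {s}: fitted rate {fit.x:.3f}, KL minimizer {rates[s]/2:.3f}")
```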

    Asymptotically Optimal Estimation in Misspecified Time Series Models

A concept of asymptotically efficient estimation is presented when a misspecified parametric time series model is fitted to a stationary process. Efficiency of several minimum distance estimates is proved, and the behavior of the Gaussian maximum likelihood estimate is studied. Furthermore, the behavior of estimates that minimize the h-step prediction error is discussed briefly. The paper answers, to some extent, the question of what happens when a misspecified model is fitted to time series data and one acts as if the model were true.
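
    To illustrate the pseudo-true parameter in the simplest case, the sketch below fits an AR(1) model to data from an MA(1) process by minimizing the empirical one-step prediction error (least squares). The fit converges to the lag-one autocorrelation beta / (1 + beta^2), the parameter of the best one-step predictor within the misspecified class. The two models and the least-squares criterion are illustrative assumptions, not the paper's general minimum distance framework.

```python
import numpy as np

rng = np.random.default_rng(5)

# True model: MA(1), X_t = eps_t + beta * eps_{t-1}.  We deliberately fit the
# wrong model, an AR(1), by minimizing the empirical one-step prediction error.
beta, n = 0.6, 20000
eps = rng.standard_normal(n + 1)
X = eps[1:] + beta * eps[:-1]

# Least-squares AR(1) fit = minimizer of the empirical one-step prediction
# error sum over t of (X_t - theta * X_{t-1})^2.
theta_hat = np.sum(X[1:] * X[:-1]) / np.sum(X[:-1] ** 2)

# Pseudo-true value: the theta minimizing E(X_t - theta * X_{t-1})^2 is the
# lag-one autocorrelation of the MA(1) process, beta / (1 + beta^2).
theta_star = beta / (1 + beta ** 2)
print(f"fitted AR(1) coefficient {theta_hat:.4f}, pseudo-true {theta_star:.4f}")
```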

    Estimation in Nonparametric Regression with Nonregular Errors

For sufficiently nonregular error distributions with bounded support, the extreme observations converge to the boundary points faster than the root-n rate. In a nonparametric regression model with such a nonregular error distribution, this fact can be used to construct an estimator for the regression function that converges at a faster rate than the Nadaraya-Watson estimator. We explain this in the simplest case, review corresponding results from boundary estimation that are applicable here, and discuss possible improvements in parametric and semiparametric models.
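
    A crude sketch of why extremes help, under an assumed model with uniform errors: the smallest response in a window around x pins down the lower boundary r(x), while the Nadaraya-Watson estimator must average over the full error distribution. The local-minimum estimator below is the uncorrected version; the boundary-estimation refinements the paper reviews are not implemented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Regression with a nonregular, bounded-support error: eps ~ Uniform[0, 1],
# so the lower boundary of Y given X = x is the curve r(x) = x (illustrative).
n = 5000
X = rng.uniform(0, 1, n)
Y = X + rng.uniform(0, 1, n)

def r_min(x, h=0.02):
    # Local-minimum (boundary) estimator: the smallest response in a window
    # around x; extremes settle onto the boundary faster than averages do.
    return Y[np.abs(X - x) <= h].min()

def r_nw(x, h=0.05):
    # Nadaraya-Watson estimate of E[Y | X = x], minus the (here known)
    # error mean 0.5, shown for comparison.
    w = np.exp(-0.5 * ((x - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w) - 0.5

for x in (0.2, 0.5, 0.8):
    print(f"x={x}: boundary estimate {r_min(x):.3f}, "
          f"NW estimate {r_nw(x):.3f}, truth {x:.3f}")
```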