A Robbins-Monro procedure for estimation in semiparametric regression models
This paper is devoted to the parametric estimation of a shift together with
the nonparametric estimation of a regression function in a semiparametric
regression model. We implement a very efficient and easy-to-handle
Robbins-Monro procedure. On the one hand, we propose a stochastic algorithm
similar to that of Robbins-Monro in order to estimate the shift parameter. A
preliminary evaluation of the regression function is not necessary to estimate
the shift parameter. On the other hand, we make use of a recursive
Nadaraya-Watson estimator for the estimation of the regression function. This
kernel estimator takes into account the previous estimation of the shift
parameter. We establish the almost sure convergence for both Robbins-Monro and
Nadaraya-Watson estimators. The asymptotic normality of our estimates is also
provided. Finally, we illustrate our semiparametric estimation procedure on
simulated and real data.
Comment: Published at http://dx.doi.org/10.1214/12-AOS969 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
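The two building blocks can be sketched on a toy problem. Everything below is an illustrative assumption rather than the paper's exact setting: the shift model, the step sizes, the Gaussian kernel, and the bandwidth sequence are all standard textbook choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Robbins-Monro stochastic approximation (toy: recover a shift) ---
# theta_{n+1} = theta_n + gamma_n (X_{n+1} - theta_n), with the standard
# step sizes gamma_n = 1/n (sum gamma_n = inf, sum gamma_n^2 < inf).
theta, true_shift = 0.0, 2.0
for n in range(1, 20001):
    x = true_shift + rng.normal()
    theta += (x - theta) / n

# --- Recursive Nadaraya-Watson estimator of f at a point x0 ---
# Numerator and denominator are updated online, each new observation
# entering with its own shrinking bandwidth h_n = n^(-1/5).
def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

f = np.cos            # toy regression function, not the paper's model
x0 = 0.5              # evaluation point
num = den = 0.0
for n in range(1, 20001):
    xn = rng.uniform(-np.pi, np.pi)
    yn = f(xn) + 0.1 * rng.normal()
    h = n ** -0.2
    w = gaussian_kernel((xn - x0) / h) / h
    num += w * yn
    den += w
f_hat = num / den

print(round(theta, 2), round(f_hat, 2))
```

With enough observations, theta settles near the true shift and f_hat near f(x0), without ever forming the full kernel sums from scratch.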
A sharp analysis on the asymptotic behavior of the Durbin-Watson statistic for the first-order autoregressive process
The purpose of this paper is to provide a sharp analysis on the asymptotic
behavior of the Durbin-Watson statistic. We focus our attention on the
first-order autoregressive process where the driven noise is also given by a
first-order autoregressive process. We establish the almost sure convergence
and the asymptotic normality of both the least squares estimator of the
unknown parameter of the autoregressive process and the serial
correlation estimator associated with the driven noise. In addition, the almost
sure rates of convergence of our estimates are provided. This allows us to
establish the almost sure convergence and the asymptotic normality for the
Durbin-Watson statistic. Finally, we propose a new bilateral statistical test
for residual autocorrelation.
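The Durbin-Watson statistic itself is straightforward to compute from a residual sequence. The sketch below uses simulated residuals rather than the paper's AR-driven-noise model, and shows how the statistic separates uncorrelated noise from positively correlated noise.

```python
import numpy as np

def durbin_watson(residuals):
    """D = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.

    D stays near 2 for uncorrelated residuals, drops below 2 under
    positive serial correlation (roughly D ~ 2(1 - rho) for AR(1)
    residuals), and rises above 2 under negative serial correlation.
    """
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=5000)        # uncorrelated residuals

rho = 0.8                            # AR(1) residuals, positive correlation
ar = np.zeros(5000)
for t in range(1, 5000):
    ar[t] = rho * ar[t - 1] + rng.normal()

d_white, d_ar = durbin_watson(white), durbin_watson(ar)
print(round(d_white, 2), round(d_ar, 2))
```

A bilateral test such as the one proposed here rejects the null of no serial correlation when D falls too far on either side of 2.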
Asymptotic results for empirical measures of weighted sums of independent random variables
We prove that if a rectangular matrix with uniformly small entries and
approximately orthogonal rows is applied to independent standardized random
variables with uniformly bounded third moments, then the empirical CDF of the
resulting partial sums converges to the normal CDF with probability one. This
implies almost sure convergence of empirical periodograms, almost sure
convergence of spectra of circulant and reverse circulant matrices, and almost
sure convergence of the CDFs generated from independent random variables by
independent random orthogonal matrices.
For special trigonometric matrices, the speed of the almost sure convergence
is described by the normal approximation and by the large deviation principle.
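A minimal numerical illustration of the phenomenon, assuming a cosine (trigonometric) matrix, whose scaled rows happen to be exactly orthonormal, applied to rescaled uniform variables; the matrix size and evaluation grid are arbitrary choices:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n = 2000

# i.i.d. standardized variables (uniform, rescaled to variance 1),
# so third moments are uniformly bounded.
x = rng.uniform(-sqrt(3.0), sqrt(3.0), size=n)

# Trigonometric matrix: rows a_k(j) = sqrt(2/n) cos(2*pi*k*j/n) have
# uniformly small entries and are orthonormal for k = 1, ..., n/2 - 1.
k = np.arange(1, n // 2)[:, None]
j = np.arange(n)[None, :]
A = np.sqrt(2.0 / n) * np.cos(2.0 * np.pi * k * j / n)
s = A @ x                            # weighted partial sums

def phi(t):                          # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Sup distance between the empirical CDF of the sums and the normal
# CDF, evaluated on a coarse grid.
gap = max(abs((s <= t).mean() - phi(t)) for t in np.linspace(-3.0, 3.0, 61))
print(round(gap, 3))
```

The gap shrinks as n grows, in line with the almost sure convergence of the empirical CDF to the normal CDF.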
A Durbin-Watson serial correlation test for ARX processes via excited adaptive tracking
We propose a new statistical test for the residual autocorrelation in ARX
adaptive tracking. The introduction of a persistent excitation in the adaptive
tracking control allows us to build a bilateral statistical test based on the
well-known Durbin-Watson statistic. We establish the almost sure convergence
and the asymptotic normality for the Durbin-Watson statistic leading to a
powerful serial correlation test. Numerical experiments illustrate the good
performance of our statistical test procedure.
Almost sure central limit theorems on the Wiener space
In this paper, we study almost sure central limit theorems for sequences of
functionals of general Gaussian fields. We apply our result to non-linear
functions of stationary Gaussian sequences. We obtain almost sure central limit
theorems for these non-linear functions when they converge in law to a normal
distribution.
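For context, the classical almost sure central limit theorem for an i.i.d. sequence, which results of this type generalize to functionals of Gaussian fields, can be stated as follows:

```latex
% X_1, X_2, \dots i.i.d. with E[X_1] = 0 and E[X_1^2] = 1, and
% S_k = (X_1 + \cdots + X_k)/\sqrt{k}. Then, almost surely, for every x,
\lim_{n \to \infty} \frac{1}{\log n} \sum_{k=1}^{n}
\frac{1}{k}\, \mathbf{1}_{\{S_k \le x\}} \;=\; \Phi(x),
```

where the logarithmic weights 1/k (normalized by log n) are essential: the unweighted averages of the indicators do not converge almost surely.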
Almost Sure Stabilization for Adaptive Controls of Regime-switching LQ Systems with A Hidden Markov Chain
This work is devoted to the almost sure stabilization of adaptive control
systems that involve an unknown Markov chain. The control system displays
continuous dynamics represented by differential equations and discrete events
given by a hidden Markov chain. In contrast to previous work on the stabilization
of adaptively controlled systems with a hidden Markov chain, where average
criteria were considered, this work focuses on the almost sure stabilization or
sample path stabilization of the underlying processes. Under simple conditions,
it is shown that as long as the feedback controls have linear growth in the
continuous component, the resulting process is regular. Moreover, by
appropriate choice of the Lyapunov functions, it is shown that the adaptive
system is stabilizable almost surely. As a by-product, it is also established
that the controlled process is positive recurrent.
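As a rough illustration of the sample-path flavor of such results, the toy simulation below drives a scalar regime-switching diffusion to zero with a feedback of linear growth. All of it is invented for illustration: the open-loop rates, switching intensity, gain, and noise level are arbitrary, the Markov chain is observed rather than hidden, and no adaptive estimation is performed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regime-switching scalar SDE (Euler-Maruyama discretization):
#   dx = a(alpha_t) x dt + u dt + sigma x dW,   with feedback u = -k x,
# where alpha_t is an observed two-state Markov chain. The feedback
# has linear growth in x, echoing the linear-growth condition.
a = {0: 1.0, 1: 0.5}        # unstable open-loop drift rates per regime
q = 2.0                     # switching intensity of the chain
k, sigma, dt = 3.0, 0.2, 1e-3
x, alpha = 1.0, 0
for _ in range(int(20.0 / dt)):            # horizon T = 20
    if rng.random() < q * dt:              # regime switch
        alpha = 1 - alpha
    x += (a[alpha] - k) * x * dt + sigma * x * np.sqrt(dt) * rng.normal()

print(abs(x))               # this sample path has decayed to ~0
```

Since a(alpha) - k is negative in every regime, each individual trajectory decays geometrically, which is the sample-path (almost sure) notion of stability, as opposed to stability of moments on average.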