
    Confidence bands for densities, logarithmic point of view

    Let $f$ be a probability density and $C$ an interval on which $f$ is bounded away from zero. By establishing the limiting distribution of the uniform error of the kernel estimates $f_n$ of $f$, Bickel and Rosenblatt (1973) provide confidence bands $B_n$ for $f$ on $C$ with asymptotic level $1-\alpha\in\,]0,1[$. Each of the confidence intervals whose union gives $B_n$ has an asymptotic level equal to one; pointwise moderate deviations principles allow one to prove that all these intervals share the same logarithmic asymptotic level. Now, as soon as both pointwise and uniform moderate deviations principles for $f_n$ exist, they share the same asymptotics. Taking this observation as a starting point, we present a new approach to the construction of confidence bands for $f$, based on the use of moderate deviations principles. The advantages of this approach are the following: (i) it makes it possible to construct confidence bands that have the same width as (or even a smaller width than) the confidence bands provided by Bickel and Rosenblatt (1973), but a better asymptotic level; (ii) any confidence band constructed in this way shares the same logarithmic asymptotic level as all the confidence intervals that make up this confidence band; (iii) it allows all dimensions to be handled in the same way; (iv) it solves the problem of providing confidence bands for $f$ on compact sets on which $f$ vanishes (or on all of $\mathbb{R}^d$) by introducing a truncating operation.
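
    To fix ideas, here is a minimal Python sketch (not the paper's construction): it computes a Gaussian-kernel density estimate $f_n$ on a grid together with a crude confidence band whose half-width is proportional to $\sqrt{f_n(x)/(nh)}$. The calibration constant c_alpha is a hypothetical stand-in for the Gumbel-quantile (Bickel-Rosenblatt) or moderate-deviations calibration, which is not reproduced here.

        # Minimal sketch: kernel density estimate on a grid plus a crude +/- band
        # of Bickel-Rosenblatt type. The constant `c_alpha` is hypothetical; the
        # moderate-deviations calibration of the paper is not reproduced here.
        import numpy as np

        def kernel_density_band(sample, grid, bandwidth, c_alpha=3.0):
            """Gaussian-kernel estimate f_n on `grid` plus a crude confidence band."""
            n = len(sample)
            # f_n(x) = (1 / (n h)) * sum_i K((x - X_i) / h), Gaussian kernel K
            u = (grid[:, None] - sample[None, :]) / bandwidth
            K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
            f_n = K.sum(axis=1) / (n * bandwidth)
            # Half-width proportional to sqrt(f_n(x) * R(K) / (n h)), R(K) = 1/(2 sqrt(pi))
            half_width = c_alpha * np.sqrt(f_n / (2 * np.sqrt(np.pi)) / (n * bandwidth))
            return f_n, f_n - half_width, f_n + half_width

        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        grid = np.linspace(-2, 2, 81)
        f_hat, lower, upper = kernel_density_band(x, grid, bandwidth=0.3)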

    Convergence rate and averaging of nonlinear two-time-scale stochastic approximation algorithms

    The first aim of this paper is to establish the weak convergence rate of nonlinear two-time-scale stochastic approximation algorithms. Its second aim is to introduce the averaging principle in the context of two-time-scale stochastic approximation algorithms. We first define the notion of asymptotic efficiency in this framework, then introduce the averaged two-time-scale stochastic approximation algorithm, and finally establish its weak convergence rate. We show, in particular, that both components of the averaged two-time-scale stochastic approximation algorithm simultaneously converge at the optimal rate $\sqrt{n}$. Comment: Published at http://dx.doi.org/10.1214/105051606000000448 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
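
    For readers unfamiliar with the setting, the following Python sketch shows a generic (toy) two-time-scale stochastic approximation recursion with averaging of both components; the coupled toy system and the step-size exponents are illustrative assumptions, not the algorithm analysed in the paper.

        # Toy two-time-scale stochastic approximation with Polyak-Ruppert-style
        # averaging of both components. The target system and step sizes are
        # illustrative assumptions only.
        import numpy as np

        rng = np.random.default_rng(1)
        theta, z = 0.0, 0.0            # slow and fast components
        theta_bar, z_bar = 0.0, 0.0    # running averages of the two iterates
        target = 2.0                   # unknown mean the recursion should locate

        for n in range(1, 20001):
            a_n = 1.0 / n**0.9          # slow step size
            b_n = 1.0 / n**0.6          # fast step size (larger, so z tracks theta quickly)
            y = target + rng.normal()   # noisy observation
            z += b_n * (theta - z)            # fast recursion: z_n tracks theta_n
            theta += a_n * (y - theta)        # slow recursion: theta_n tracks E[Y]
            theta_bar += (theta - theta_bar) / n   # averaged iterates, expected to
            z_bar += (z - z_bar) / n               # converge at the sqrt(n) rate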

    A companion for the Kiefer--Wolfowitz--Blum stochastic approximation algorithm

    A stochastic algorithm for the recursive approximation of the location $\theta$ of a maximum of a regression function was introduced by Kiefer and Wolfowitz [Ann. Math. Statist. 23 (1952) 462--466] in the univariate framework, and by Blum [Ann. Math. Statist. 25 (1954) 737--744] in the multivariate case. The aim of this paper is to provide a companion algorithm to the Kiefer--Wolfowitz--Blum algorithm, which allows one to simultaneously and recursively approximate the size $\mu$ of the maximum of the regression function. A precise study of the joint weak convergence rate of both algorithms is given; it turns out that, unlike the location of the maximum, the size of the maximum can be approximated by an algorithm which converges at the parametric rate. Moreover, averaging leads to an asymptotically efficient algorithm for the approximation of the couple $(\theta,\mu)$. Comment: Published at http://dx.doi.org/10.1214/009053606000001451 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
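
    A minimal Python sketch of a Kiefer-Wolfowitz-type recursion for the location of the maximum, together with one possible companion recursion tracking the size of the maximum; the step-size sequences, the toy regression function, and the companion's exact form are illustrative assumptions, not the paper's specification.

        # Kiefer-Wolfowitz-type recursion for the location theta of the maximum of a
        # regression function, plus a companion recursion tracking the size mu of the
        # maximum. Step sizes and the companion's form are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(2)

        def noisy_regression(x):
            """Toy regression function with maximum 1.0 attained at x = 0.5."""
            return 1.0 - (x - 0.5) ** 2 + 0.1 * rng.normal()

        theta, mu = 0.0, 0.0
        for n in range(1, 10001):
            a_n = 1.0 / n           # step size for the location recursion
            c_n = 1.0 / n**0.25     # width of the finite-difference approximation
            gamma_n = 1.0 / n       # step size for the companion (size) recursion
            # Kiefer-Wolfowitz: move theta along a finite-difference gradient estimate
            grad = (noisy_regression(theta + c_n) - noisy_regression(theta - c_n)) / (2 * c_n)
            theta += a_n * grad
            # Companion recursion: track the value of the regression function at theta
            mu += gamma_n * (noisy_regression(theta) - mu)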

    Limit of normalized quadrangulations: The Brownian map

    Consider a random pointed quadrangulation $q_n$ chosen uniformly at random among the pointed quadrangulations with $n$ faces. In this paper we show that, when $n$ goes to $+\infty$, $q_n$ suitably normalized converges weakly, in a certain sense, to a random limit object, which is continuous and compact, and which we name the Brownian map. The same result is shown for a model of rooted quadrangulations and for some models of rooted quadrangulations with random edge lengths. A metric space of rooted (resp. pointed) abstract maps that contains the model of discrete rooted (resp. pointed) quadrangulations and the model of the Brownian map is defined. The weak convergences hold in these metric spaces. Comment: Published at http://dx.doi.org/10.1214/009117906000000557 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Large and moderate deviations principles for recursive kernel estimators of a multivariate density and its partial derivatives

    In this paper we prove large and moderate deviations principles for the recursive kernel estimator of a probability density function and its partial derivatives. Unlike the density estimator, the derivatives estimators exhibit a quadratic behavior not only for the moderate deviations scale but also for the large deviations one. We provide results both for the pointwise and the uniform deviations. Comment: 26 pages
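
    One standard example of a recursive kernel density estimator is the Wolverton-Wagner form $f_n(x) = \frac{1}{n}\sum_{i\le n} h_i^{-d} K\big((x - X_i)/h_i\big)$, which can be updated one observation at a time, as in the Python sketch below; the Gaussian kernel and the bandwidth sequence $h_i = i^{-1/5}$ are illustrative assumptions, and this is not claimed to be exactly the estimator studied in the paper.

        # Sketch of a recursive kernel density estimator of the Wolverton-Wagner form
        # f_n(x) = (1/n) * sum_{i<=n} h_i^{-d} K((x - X_i)/h_i), updated one
        # observation at a time. Kernel and bandwidth sequence are illustrative.
        import numpy as np

        def recursive_update(f_prev, n, x_grid, x_new, d=1):
            """Update f_{n-1} on `x_grid` to f_n after seeing the n-th observation x_new."""
            h_n = n ** (-1.0 / 5.0)                       # bandwidth for the n-th term
            u = (x_grid - x_new) / h_n
            kernel_term = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi) / h_n**d
            # f_n = ((n-1)/n) * f_{n-1} + (1/n) * h_n^{-d} K((x - X_n)/h_n)
            return ((n - 1) * f_prev + kernel_term) / n

        rng = np.random.default_rng(3)
        grid = np.linspace(-3, 3, 121)
        f_n = np.zeros_like(grid)
        for n, x_new in enumerate(rng.normal(size=1000), start=1):
            f_n = recursive_update(f_n, n, grid, x_new)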

    The stochastic approximation method for the estimation of a multivariate probability density

    We apply the stochastic approximation method to construct a large class of recursive kernel estimators of a probability density, including the one introduced by Hall and Patil (1994). We study the properties of these estimators and compare them with Rosenblatt's nonrecursive estimator. It turns out that, for pointwise estimation, it is preferable to use Rosenblatt's nonrecursive kernel estimator rather than any recursive estimator. On the contrary, for estimation by confidence intervals, it is better to use a recursive estimator rather than Rosenblatt's estimator. Comment: 28 pages
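
    The stochastic-approximation construction can be sketched as the recursion $f_n(x) = (1-\gamma_n) f_{n-1}(x) + \gamma_n h_n^{-1} K\big((x - X_n)/h_n\big)$, with different step-size sequences $\gamma_n$ giving different members of the class. The Python sketch below compares one such recursive estimator, with the illustrative choices $\gamma_n = 1/n$ and $h_n = n^{-1/5}$, to Rosenblatt's nonrecursive estimator; these choices are assumptions for illustration and are not tied to the estimator of Hall and Patil (1994).

        # Stochastic-approximation view of recursive kernel density estimation:
        # f_n(x) = (1 - gamma_n) f_{n-1}(x) + gamma_n * h_n^{-1} K((x - X_n)/h_n).
        # Step sizes, bandwidths, and the Gaussian kernel are illustrative assumptions.
        import numpy as np

        def gaussian_kernel(u):
            return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

        def recursive_estimate(sample, grid, gamma=lambda n: 1.0 / n, h=lambda n: n**-0.2):
            """Recursive (stochastic-approximation) kernel density estimate on `grid`."""
            f = np.zeros_like(grid)
            for n, x_n in enumerate(sample, start=1):
                f = (1 - gamma(n)) * f + gamma(n) * gaussian_kernel((grid - x_n) / h(n)) / h(n)
            return f

        def rosenblatt_estimate(sample, grid, h):
            """Rosenblatt's nonrecursive estimator with a single bandwidth h."""
            u = (grid[:, None] - sample[None, :]) / h
            return gaussian_kernel(u).mean(axis=1) / h

        rng = np.random.default_rng(4)
        x = rng.normal(size=2000)
        grid = np.linspace(-3, 3, 121)
        f_rec = recursive_estimate(x, grid)
        f_ros = rosenblatt_estimate(x, grid, h=len(x) ** -0.2)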

    Large and moderate deviations principles for kernel estimators of the multivariate regression

    In this paper, we prove a large deviations principle for the Nadaraya-Watson estimator and for the semi-recursive kernel estimator of the regression function in the multidimensional case. Under suitable conditions, we show that the rate function is a good rate function. We thus generalize the results already obtained in the unidimensional case for the Nadaraya-Watson estimator. Moreover, we give a moderate deviations principle for these two estimators. It turns out that the rate function obtained in the moderate deviations principle for the semi-recursive estimator is larger than the one obtained for the Nadaraya-Watson estimator. Comment: 31 pages
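
    As a point of reference, the Python sketch below contrasts the Nadaraya-Watson estimator, which applies a single bandwidth to every observation, with one common semi-recursive form in which observation $i$ keeps its own bandwidth $h_i$, so that numerator and denominator can be updated online; the Gaussian kernel and the bandwidth sequences are illustrative assumptions and need not match the paper's estimator exactly.

        # Nadaraya-Watson versus a semi-recursive kernel regression estimator.
        # Kernel and bandwidth sequences are illustrative assumptions.
        import numpy as np

        def gaussian_kernel(u):
            return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

        def nadaraya_watson(x_obs, y_obs, grid, h):
            """Nadaraya-Watson: a single bandwidth h applied to every observation."""
            w = gaussian_kernel((grid[:, None] - x_obs[None, :]) / h)
            return (w * y_obs[None, :]).sum(axis=1) / w.sum(axis=1)

        def semi_recursive(x_obs, y_obs, grid):
            """Semi-recursive: observation i is smoothed with its own bandwidth h_i = i^{-1/5}."""
            num = np.zeros_like(grid)
            den = np.zeros_like(grid)
            for i, (x_i, y_i) in enumerate(zip(x_obs, y_obs), start=1):
                h_i = i ** -0.2
                k = gaussian_kernel((grid - x_i) / h_i) / h_i
                num += k * y_i     # both sums admit a one-observation update
                den += k
            return num / den

        rng = np.random.default_rng(5)
        x = rng.uniform(-2, 2, size=1000)
        y = np.sin(x) + 0.2 * rng.normal(size=x.size)
        grid = np.linspace(-1.5, 1.5, 61)
        r_nw = nadaraya_watson(x, y, grid, h=x.size ** -0.2)
        r_sr = semi_recursive(x, y, grid)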
