
    Empirical and Gaussian processes on Besov classes

    We give several conditions for pregaussianity of norm balls of Besov spaces defined over $\mathbb{R}^d$ by exploiting results in Haroske and Triebel (2005). Furthermore, complementing sufficient conditions in Nickl and Pötscher (2005), we give necessary conditions on the parameters of the Besov space to obtain the Donsker property of such balls. For certain parameter combinations Besov balls are shown to be pregaussian but not Donsker.
    Comment: Published at http://dx.doi.org/10.1214/074921706000000842 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)

    On the Bernstein-von Mises phenomenon for nonparametric Bayes procedures

    We continue the investigation of Bernstein-von Mises theorems for nonparametric Bayes procedures from [Ann. Statist. 41 (2013) 1999-2028]. We introduce multiscale spaces on which nonparametric priors and posteriors are naturally defined, and prove Bernstein-von Mises theorems for a variety of priors in the setting of Gaussian nonparametric regression and in the i.i.d. sampling model. From these results we deduce several applications where posterior-based inference coincides with efficient frequentist procedures, including Donsker- and Kolmogorov-Smirnov theorems for the random posterior cumulative distribution functions. We also show that multiscale posterior credible bands for the regression or density function are optimal frequentist confidence bands.
    Comment: Published at http://dx.doi.org/10.1214/14-AOS1246 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions

    The problem of determining a periodic Lipschitz vector field $b=(b_1, \dots, b_d)$ from an observed trajectory of the solution $(X_t: 0 \le t \le T)$ of the multi-dimensional stochastic differential equation \begin{equation*} dX_t = b(X_t)dt + dW_t, \quad t \geq 0, \end{equation*} where $W_t$ is a standard $d$-dimensional Brownian motion, is considered. Convergence rates of a penalised least squares estimator, which equals the maximum a posteriori (MAP) estimate corresponding to a high-dimensional Gaussian product prior, are derived. These results are deduced from corresponding contraction rates for the associated posterior distributions. The rates obtained are optimal up to log-factors in $L^2$-loss in any dimension, and also for supremum norm loss when $d \le 4$. Further, when $d \le 3$, nonparametric Bernstein-von Mises theorems are proved for the posterior distributions of $b$. From this we deduce functional central limit theorems for the implied estimators of the invariant measure $\mu_b$. The limiting Gaussian process distributions have a covariance structure that is asymptotically optimal from an information-theoretic point of view.
    Comment: 55 pages, to appear in the Annals of Statistics
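    As a rough illustration of the kind of estimator analysed above, the sketch below computes a penalised least squares / MAP drift estimate in a one-dimensional analogue (not the paper's multi-dimensional construction): increments of a discretely observed path are regressed on a Fourier basis of 1-periodic functions, with a ridge penalty corresponding to a Gaussian product prior with variances decaying like $\tau/k^2$. The function names and the choices of basis size `K` and prior scale `tau` are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def map_drift(X, dt, K=5, tau=1.0):
        """Penalised least squares / MAP estimate of a 1-periodic drift b from
        discrete observations X_0, X_dt, X_2dt, ...: regress the normalised
        increments on a Fourier basis with a ridge penalty (MAP under a
        Gaussian product prior with variances tau / k^2)."""
        x = X[:-1] % 1.0                    # positions, reduced mod the period
        y = np.diff(X) / dt                 # y_i ~ b(X_i) + noise of variance 1/dt
        ks = np.arange(1, K + 1)

        def basis(t):
            t = np.atleast_1d(np.asarray(t, float))
            ang = 2.0 * np.pi * np.outer(t, ks)
            return np.hstack([np.ones((len(t), 1)), np.cos(ang), np.sin(ang)])

        G = basis(x)
        pen = np.concatenate(([1.0], ks**2, ks**2)) / tau   # prior precisions
        # normal equations of the penalised least squares criterion
        theta = np.linalg.solve(G.T @ G * dt + np.diag(pen), G.T @ y * dt)
        return lambda t: basis(t) @ theta
    ```

    On a simulated path with drift $b(x)=\sin(2\pi x)$ (Euler scheme, small step), the resulting estimate recovers the drift up to the expected statistical error.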

    Rates of contraction for posterior distributions in $L^r$-metrics, $1\le r\le\infty$

    The frequentist behavior of nonparametric Bayes estimates, more specifically, rates of contraction of the posterior distributions to shrinking $L^r$-norm neighborhoods, $1\le r\le\infty$, of the unknown parameter, is studied. A theorem for nonparametric density estimation is proved under general approximation-theoretic assumptions on the prior. The result is applied to a variety of common examples, including Gaussian process, wavelet series, normal mixture and histogram priors. The rates of contraction are minimax-optimal for $1\le r\le 2$, but deteriorate as $r$ increases beyond 2. In the case of Gaussian nonparametric regression a Gaussian prior is devised for which the posterior contracts at the optimal rate in all $L^r$-norms, $1\le r\le\infty$.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS924 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Confidence sets in sparse regression

    The problem of constructing confidence sets in the high-dimensional linear model with $n$ response variables and $p$ parameters, possibly $p\ge n$, is considered. Full honest adaptive inference is possible if the rate of sparse estimation does not exceed $n^{-1/4}$; otherwise sparse adaptive confidence sets exist only over strict subsets of the parameter spaces for which sparse estimators exist. Necessary and sufficient conditions for the existence of confidence sets that adapt to a fixed sparsity level of the parameter vector are given in terms of minimal $\ell^2$-separation conditions on the parameter space. The design conditions cover common coherence assumptions used in models for sparsity, including (possibly correlated) sub-Gaussian designs.
    Comment: Published at http://dx.doi.org/10.1214/13-AOS1170 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Efficient Simulation-Based Minimum Distance Estimation and Indirect Inference

    Given a random sample from a parametric model, we show how indirect inference estimators based on appropriate nonparametric density estimators (i.e., simulation-based minimum distance estimators) can be constructed that, under mild assumptions, are asymptotically normal with variance-covariance matrix equal to the Cramér-Rao bound.
    Comment: Minor revision, some references and remarks added
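    A minimal sketch of the simulation-based minimum distance idea, under simplifying assumptions: a scalar location model, a Gaussian kernel density estimate on both the data and the simulated sample, and an $L^2$ distance on a fixed grid. The helper names are illustrative; the paper's actual construction and its regularity conditions are more delicate than this.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import gaussian_kde

    def smd_estimate(data, simulate, bounds, grid):
        """Simulation-based minimum distance: choose theta so that a kernel
        density estimate of simulate(theta) is closest in L2 to a kernel
        density estimate of the observed data."""
        step = grid[1] - grid[0]
        f_data = gaussian_kde(data)(grid)

        def dist(theta):
            f_sim = gaussian_kde(simulate(theta))(grid)
            return ((f_sim - f_data) ** 2).sum() * step   # discretised L2 distance

        return minimize_scalar(dist, bounds=bounds, method="bounded").x
    ```

    Using common random numbers in `simulate` (drawing the innovations once and reusing them for every candidate theta) keeps the objective smooth in theta, which is what makes the scalar minimisation well behaved.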

    A sharp adaptive confidence ball for self-similar functions

    In the nonparametric Gaussian sequence space model an $\ell^2$-confidence ball $C_n$ is constructed that adapts to the unknown smoothness and Sobolev norm of the infinite-dimensional parameter to be estimated. The confidence ball has exact and honest asymptotic coverage over appropriately defined `self-similar' parameter spaces. It is shown by information-theoretic methods that this `self-similarity' condition is the weakest possible.
    Comment: To appear in Stochastic Processes and their Applications (memorial issue for E. Giné)

    Uniform limit theorems for wavelet density estimators

    Let $p_n(y)=\sum_k\hat{\alpha}_k\phi(y-k)+\sum_{l=0}^{j_n-1}\sum_k\hat{\beta}_{lk}2^{l/2}\psi(2^ly-k)$ be the linear wavelet density estimator, where $\phi$, $\psi$ are a father and a mother wavelet (with compact support), $\hat{\alpha}_k$, $\hat{\beta}_{lk}$ are the empirical wavelet coefficients based on an i.i.d. sample of random variables distributed according to a density $p_0$ on $\mathbb{R}$, and $j_n\in\mathbb{Z}$, $j_n\nearrow\infty$. Several uniform limit theorems are proved: First, the almost sure rate of convergence of $\sup_{y\in\mathbb{R}}|p_n(y)-Ep_n(y)|$ is obtained, and a law of the logarithm for a suitably scaled version of this quantity is established. This implies that $\sup_{y\in\mathbb{R}}|p_n(y)-p_0(y)|$ attains the optimal almost sure rate of convergence for estimating $p_0$, if $j_n$ is suitably chosen. Second, a uniform central limit theorem as well as strong invariance principles for the distribution function of $p_n$, that is, for the stochastic processes $\sqrt{n}(F_n^W(s)-F(s))=\sqrt{n}\int_{-\infty}^s(p_n-p_0)$, $s\in\mathbb{R}$, are proved; and more generally, uniform central limit theorems for the processes $\sqrt{n}\int(p_n-p_0)f$, $f\in\mathcal{F}$, for other Donsker classes $\mathcal{F}$ of interest are considered. As a statistical application, it is shown that essentially the same limit theorems can be obtained for the hard thresholding wavelet estimator introduced by Donoho et al. [Ann. Statist. 24 (1996) 508-539].
    Comment: Published at http://dx.doi.org/10.1214/08-AOP447 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
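    For concreteness, here is a small sketch of the linear estimator $p_n$ with the Haar pair $\phi=1_{[0,1)}$, $\psi=1_{[0,1/2)}-1_{[1/2,1)}$, the simplest compactly supported father/mother wavelets. The uniform limit theorems above require smoother wavelets, so this implementation is only illustrative of the formula.

    ```python
    import numpy as np

    def haar_phi(x):
        """Haar father wavelet: indicator of [0, 1)."""
        x = np.asarray(x, float)
        return ((0.0 <= x) & (x < 1.0)).astype(float)

    def haar_psi(x):
        """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1)."""
        return haar_phi(2.0 * x) - haar_phi(2.0 * x - 1.0)

    def wavelet_density(sample, y, j_n):
        """Linear wavelet density estimator p_n(y) with empirical coefficients
        alpha_k = mean phi(X_i - k), beta_lk = mean 2^{l/2} psi(2^l X_i - k)."""
        sample = np.asarray(sample, float)
        y = np.asarray(y, float)
        p = np.zeros_like(y)
        # coarse level
        for k in range(int(np.floor(sample.min())) - 1,
                       int(np.ceil(sample.max())) + 2):
            alpha_k = haar_phi(sample - k).mean()
            p += alpha_k * haar_phi(y - k)
        # detail levels l = 0, ..., j_n - 1
        for l in range(j_n):
            s = 2.0 ** l
            for k in range(int(np.floor(s * sample.min())) - 1,
                           int(np.ceil(s * sample.max())) + 2):
                beta_lk = (np.sqrt(s) * haar_psi(s * sample - k)).mean()
                p += beta_lk * np.sqrt(s) * haar_psi(s * y - k)
        return p
    ```

    For Haar wavelets the estimator at resolution $j_n$ coincides with a histogram on dyadic bins of width $2^{-j_n}$, which makes it easy to sanity-check: it integrates to one and is close to the true density for a uniform sample.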