Empirical and Gaussian processes on Besov classes
We give several conditions for pregaussianity of norm balls of Besov spaces defined over $\mathbb{R}^d$ by exploiting results in Haroske and Triebel (2005). Furthermore, complementing sufficient conditions in Nickl and P\"{o}tscher (2005), we give necessary conditions on the parameters of the Besov space to obtain the Donsker property of such balls. For certain parameter combinations Besov balls are shown to be pregaussian but not Donsker.
Comment: Published at http://dx.doi.org/10.1214/074921706000000842 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
On the Bernstein-von Mises phenomenon for nonparametric Bayes procedures
We continue the investigation of Bernstein-von Mises theorems for
nonparametric Bayes procedures from [Ann. Statist. 41 (2013) 1999-2028]. We
introduce multiscale spaces on which nonparametric priors and posteriors are
naturally defined, and prove Bernstein-von Mises theorems for a variety of
priors in the setting of Gaussian nonparametric regression and in the i.i.d.
sampling model. From these results we deduce several applications where
posterior-based inference coincides with efficient frequentist procedures,
including Donsker- and Kolmogorov-Smirnov theorems for the random posterior
cumulative distribution functions. We also show that multiscale posterior
credible bands for the regression or density function are optimal frequentist
confidence bands.
Comment: Published at http://dx.doi.org/10.1214/14-AOS1246 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
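As an illustrative sketch of posterior-based inference for a distribution function (not the multiscale construction of the paper), one can draw from the Bayesian-bootstrap posterior over the CDF and form a sup-norm credible band around the empirical CDF; the sample, number of draws and coverage level below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=200))  # i.i.d. sample
B = 1000                           # number of posterior draws

# Bayesian bootstrap: Dirichlet(1,...,1) weights on the observations give
# posterior draws of the CDF evaluated at the sorted sample points.
W = rng.dirichlet(np.ones(len(x)), size=B)   # shape (B, n)
F_draws = np.cumsum(W, axis=1)               # each row: one random CDF at x
F_hat = np.arange(1, len(x) + 1) / len(x)    # empirical CDF at x

# sup-norm deviation of each posterior draw from the empirical CDF
dev = np.max(np.abs(F_draws - F_hat), axis=1)
r = np.quantile(dev, 0.95)                   # 95% credible radius

band_lo = np.clip(F_hat - r, 0.0, 1.0)
band_hi = np.clip(F_hat + r, 0.0, 1.0)
print(f"credible band radius: {r:.3f}")
```

The band radius shrinks at the $1/\sqrt{n}$ rate, in line with the Donsker-type behaviour of the posterior CDF described in the abstract.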
Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions
The problem of determining a periodic Lipschitz drift vector field $b$ from an observed trajectory of the solution of the multi-dimensional stochastic differential equation \begin{equation*} dX_t = b(X_t)dt + dW_t, \quad t \geq 0, \end{equation*} where $W$ is a standard $d$-dimensional Brownian motion, is considered. Convergence rates of a penalised least squares estimator, which equals the maximum a posteriori (MAP) estimate corresponding to a high-dimensional Gaussian product prior, are derived. These results are deduced from corresponding contraction rates for the associated posterior distributions. The rates obtained are optimal up to log-factors in $L^2$-loss in any dimension, and also for supremum norm loss when $d \le 4$. Further, when $d \le 3$, nonparametric Bernstein-von Mises theorems are proved for the posterior distributions of $b$. From this we deduce functional central limit theorems for the implied estimators of the invariant measure $\mu_b$. The limiting Gaussian process distributions have a covariance structure that is asymptotically optimal from an information-theoretic point of view.
Comment: 55 pages, to appear in the Annals of Statistics
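To make the observation model concrete, here is a small simulation sketch: an Euler-Maruyama discretisation of $dX_t = b(X_t)dt + dW_t$ in dimension one with an illustrative periodic drift, followed by a crude binned (regressogram) drift estimate. This is not the paper's penalised least squares estimator; the drift choice and all tuning constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def b(x):
    # illustrative 1-periodic Lipschitz drift (not from the paper)
    return np.sin(2 * np.pi * x)

# Euler-Maruyama discretisation of dX_t = b(X_t) dt + dW_t
T, dt = 2000.0, 0.01
n = int(T / dt)
dW = np.sqrt(dt) * rng.normal(size=n)
X = np.empty(n + 1)
X[0] = 0.0
for i in range(n):
    X[i + 1] = X[i] + b(X[i]) * dt + dW[i]

# Crude binned drift estimate on [0, 1), exploiting periodicity:
# average the normalised increments (X_{i+1} - X_i) / dt within each bin.
bins = 50
pos = X[:-1] % 1.0
incr = np.diff(X) / dt
idx = np.minimum((pos * bins).astype(int), bins - 1)
counts = np.maximum(np.bincount(idx, minlength=bins), 1)
b_hat = np.bincount(idx, weights=incr, minlength=bins) / counts

grid = (np.arange(bins) + 0.5) / bins
err = np.max(np.abs(b_hat - b(grid)))
print(f"sup-norm error of the binned drift estimate: {err:.2f}")
```

The per-bin averages are noisy (the increments have variance of order $1/dt$), which is why a long time horizon is needed even for this toy estimate; the paper's penalised estimator handles this regularisation systematically.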
Rates of contraction for posterior distributions in $L^r$-metrics, $1 \le r \le \infty$
The frequentist behavior of nonparametric Bayes estimates, more specifically, rates of contraction of the posterior distributions to shrinking $L^r$-norm neighborhoods, $1 \le r \le \infty$, of the unknown parameter, are studied. A theorem for nonparametric density estimation is proved under general approximation-theoretic assumptions on the prior. The result is applied to a variety of common examples, including Gaussian process, wavelet series, normal mixture and histogram priors. The rates of contraction are minimax-optimal for $1 \le r \le 2$, but deteriorate as $r$ increases beyond 2. In the case of Gaussian nonparametric regression a Gaussian prior is devised for which the posterior contracts at the optimal rate in all $L^r$-norms, $1 \le r \le \infty$.
Comment: Published at http://dx.doi.org/10.1214/11-AOS924 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Confidence sets in sparse regression
The problem of constructing confidence sets in the high-dimensional linear model with $n$ response variables and $p$ parameters, possibly $p \gg n$, is considered. Full honest adaptive inference is possible if the rate of sparse estimation does not exceed $n^{-1/4}$; otherwise sparse adaptive confidence sets exist only over strict subsets of the parameter spaces for which sparse estimators exist. Necessary and sufficient conditions for the existence of confidence sets that adapt to a fixed sparsity level of the parameter vector are given in terms of minimal $\ell^2$-separation conditions on the parameter space. The design conditions cover common coherence assumptions used in models for sparsity, including (possibly correlated) sub-Gaussian designs.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1170 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Efficient Simulation-Based Minimum Distance Estimation and Indirect Inference
Given a random sample from a parametric model, we show how indirect inference estimators based on appropriate nonparametric density estimators (i.e., simulation-based minimum distance estimators) can be constructed that, under mild assumptions, are asymptotically normal with variance-covariance matrix equal to the Cram\'er-Rao bound.
Comment: Minor revision, some references and remarks added
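A minimal toy sketch of the idea: a simulation-based minimum distance estimator for a Gaussian location model, fitting the parameter by matching a histogram density estimate of the data against one computed from model simulations. This is not the paper's efficient construction; the model, distance, and all tuning constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy parametric model: X ~ N(theta, 1), true theta unknown to the estimator.
theta0 = 1.3
data = rng.normal(theta0, 1.0, size=2000)

edges = np.linspace(-3, 6, 61)
p_data, _ = np.histogram(data, bins=edges, density=True)

# Common random numbers: reuse the same simulation noise for every candidate
# theta so the distance is a smooth function of the parameter.
z = rng.normal(size=20000)

def distance(theta):
    sims = theta + z  # simulate from N(theta, 1)
    p_sim, _ = np.histogram(sims, bins=edges, density=True)
    return np.sum((p_sim - p_data) ** 2)

# Minimise the density distance over a parameter grid.
grid = np.linspace(0.0, 3.0, 301)
theta_hat = grid[np.argmin([distance(t) for t in grid])]
print(f"theta_hat = {theta_hat:.2f}")
```

In this simple location model the estimate lands close to the true value; the paper's point is that, with suitable density estimators, such estimators can be fully efficient (variance attaining the Cram\'er-Rao bound).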
A sharp adaptive confidence ball for self-similar functions
In the nonparametric Gaussian sequence space model an $L^2$-confidence ball is constructed that adapts to the unknown smoothness and Sobolev norm of the infinite-dimensional parameter to be estimated. The confidence ball has exact and honest asymptotic coverage over appropriately defined `self-similar' parameter spaces. It is shown by information-theoretic methods that this `self-similarity' condition is the weakest possible.
Comment: To appear in Stochastic Processes and their Applications (memorial issue for E. Gin\'e).
Uniform limit theorems for wavelet density estimators
Let $p_n(y) = \sum_k \hat{\alpha}_k \phi(y-k) + \sum_{l=0}^{j_n-1} \sum_k \hat{\beta}_{lk} 2^{l/2} \psi(2^l y - k)$ be the linear wavelet density estimator, where $\phi$, $\psi$ are a father and a mother wavelet (with compact support), $\hat{\alpha}_k$, $\hat{\beta}_{lk}$ are the empirical wavelet coefficients based on an i.i.d. sample of random variables distributed according to a density $p_0$ on $\mathbb{R}$, and $j_n \in \mathbb{Z}$, $j_n \nearrow \infty$. Several uniform limit theorems are proved: First, the almost sure rate of convergence of $\sup_{y \in \mathbb{R}} |p_n(y) - Ep_n(y)|$ is obtained, and a law of the logarithm for a suitably scaled version of this quantity is established. This implies that $\sup_{y \in \mathbb{R}} |p_n(y) - p_0(y)|$ attains the optimal almost sure rate of convergence for estimating $p_0$, if $j_n$ is suitably chosen. Second, a uniform central limit theorem as well as strong invariance principles for the distribution function of $p_n$, that is, for the stochastic processes $\sqrt{n}(F_n(s) - F(s))$, $s \in \mathbb{R}$, are proved; and more generally, uniform central limit theorems for the processes $\sqrt{n} \int (p_n - p_0) f$, $f \in \mathcal{F}$, for other Donsker classes $\mathcal{F}$ of interest are considered. As a statistical application, it is shown that essentially the same limit theorems can be obtained for the hard thresholding wavelet estimator introduced by Donoho et al. [Ann. Statist. 24 (1996) 508-539].
Comment: Published at http://dx.doi.org/10.1214/08-AOP447 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org).
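A minimal sketch of a linear wavelet density estimator, using the Haar father wavelet $\phi = 1_{[0,1)}$ at a single resolution level $j$ and no mother-wavelet levels (with Haar this reduces to a dyadic histogram); the sample, level and density are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear wavelet density estimator with the Haar father wavelet:
#   p_n(y) = sum_k alpha_hat_{jk} * 2^{j/2} * phi(2^j y - k),
#   alpha_hat_{jk} = (1/n) * sum_i 2^{j/2} * phi(2^j X_i - k).
n, j = 5000, 4
X = rng.beta(2, 5, size=n)  # a density supported on [0, 1]

# index of the dyadic Haar cell [k/2^j, (k+1)/2^j) containing each X_i
k = np.floor(X * 2**j).astype(int)
alpha_hat = np.bincount(k, minlength=2**j) / n * 2 ** (j / 2)

def p_n(y):
    """Evaluate the estimator at points y in [0, 1)."""
    cells = np.floor(np.asarray(y) * 2**j).astype(int)
    return alpha_hat[cells] * 2 ** (j / 2)

grid = np.linspace(0.01, 0.99, 99)
est = p_n(grid)
print(f"grid average of p_n (approximates its integral): {np.mean(est):.2f}")
```

Choosing the resolution level $j = j_n$ to grow with $n$ trades off the bias and variance of the estimator, which is exactly the tuning that the rate results in the abstract make precise.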
