Some nonasymptotic results on resampling in high dimension, I: Confidence regions, II: Multiple tests
We study generalized bootstrap confidence regions for the mean of a random
vector whose coordinates have an unknown dependency structure. The random
vector is supposed to be either Gaussian or to have a symmetric and bounded
distribution. The dimensionality of the vector can possibly be much larger than
the number of observations and we focus on a nonasymptotic control of the
confidence level, following ideas inspired by recent results in learning
theory. We consider two approaches, the first based on a concentration
principle (valid for a large class of resampling weights) and the second on a
resampled quantile, specifically using Rademacher weights. Several intermediate
results established in the approach based on concentration principles are of
interest in their own right. We also discuss the question of accuracy when
using Monte Carlo approximations of the resampled quantities.
Comment: Published at http://dx.doi.org/10.1214/08-AOS667 and
http://dx.doi.org/10.1214/08-AOS668 in the Annals of Statistics
(http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
(http://www.imstat.org).
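The resampled-quantile approach described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact construction: the data-generating model, the sample sizes, and the choice of the sup-norm statistic are my own assumptions; only the use of Rademacher sign weights on the centered observations follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumption): n Gaussian observations of a p-dimensional
# vector with an unknown coordinate dependency structure, with p >> n allowed.
n, p = 50, 200
Y = rng.standard_normal((n, p)) @ rng.standard_normal((p, p)) * 0.1

Ybar = Y.mean(axis=0)

# Resampled quantile with Rademacher weights: flip the sign of each centered
# observation at random and record the sup-norm of the reweighted mean.
B = 2000
stats = np.empty(B)
for b in range(B):
    eps = rng.choice([-1.0, 1.0], size=n)   # Rademacher weights
    stats[b] = np.abs((eps[:, None] * (Y - Ybar)).mean(axis=0)).max()

alpha = 0.05
q = np.quantile(stats, 1 - alpha)           # resampled (1 - alpha)-quantile

# Sup-norm confidence region for the mean: {mu : max_j |Ybar_j - mu_j| <= q}
print(f"half-width of the sup-norm confidence region: {q:.3f}")
```

Since `B` Monte Carlo draws replace the exact resampled quantile, the accuracy discussion in the abstract concerns precisely the gap between `q` here and its exact counterpart.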
Nonasymptotic bounds on the estimation error of MCMC algorithms
We address the problem of upper bounding the mean square error of MCMC
estimators. Our analysis is nonasymptotic. We first establish a general result
valid for essentially all ergodic Markov chains encountered in Bayesian
computation and a possibly unbounded target function f. The bound is sharp in
the sense that the leading term is exactly sigma_as^2(P, f)/n,
where sigma_as^2(P, f) is the CLT asymptotic variance. Next, we
proceed to specific additional assumptions and give explicit computable bounds
for geometrically and polynomially ergodic Markov chains under quantitative
drift conditions. As a corollary, we provide results on confidence estimation.
Comment: Published at http://dx.doi.org/10.3150/12-BEJ442 in Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm). arXiv admin
note: text overlap with arXiv:0907.491
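The claim that the MSE's leading term is sigma_as^2/n can be checked numerically on a toy chain. The chain below is my own choice, not one from the paper: a stationary Gaussian AR(1) chain (which is geometrically ergodic with a N(0,1) target), with f(x) = x, for which the CLT asymptotic variance is (1 + rho)/(1 - rho).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration (assumption): stationary AR(1) chain with N(0,1) target,
# f(x) = x, whose CLT asymptotic variance is (1 + rho) / (1 - rho).
rho, n, reps = 0.5, 2000, 400
sigma_as_sq = (1 + rho) / (1 - rho)

c = np.sqrt(1 - rho**2)
x = rng.standard_normal(reps)        # start each replicate in stationarity
sums = np.zeros(reps)
for _ in range(n):
    x = rho * x + c * rng.standard_normal(reps)
    sums += x

# True value of E[f] is 0, so the squared ergodic average is the squared error.
mse = np.mean((sums / n) ** 2)

print(f"empirical MSE              : {mse:.5f}")
print(f"leading term sigma_as^2/n  : {sigma_as_sq / n:.5f}")
```

For this chain the two printed numbers agree to within Monte Carlo error, matching the sharpness statement in the abstract.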
Tail bounds for all eigenvalues of a sum of random matrices
This work introduces the minimax Laplace transform method, a modification of
the cumulant-based matrix Laplace transform method developed in "User-friendly
tail bounds for sums of random matrices" (arXiv:1004.4389v6) that yields both
upper and lower bounds on each eigenvalue of a sum of random self-adjoint
matrices. This machinery is used to derive eigenvalue analogues of the
classical Chernoff, Bennett, and Bernstein bounds.
Two examples demonstrate the efficacy of the minimax Laplace transform. The
first concerns the effects of column sparsification on the spectrum of a matrix
with orthonormal rows. Here, the behavior of the singular values can be
described in terms of coherence-like quantities. The second example addresses
the question of relative accuracy in the estimation of eigenvalues of the
covariance matrix of a random process. Standard results on the convergence of
sample covariance matrices provide bounds on the number of samples needed to
obtain relative accuracy in the spectral norm, but these results only guarantee
relative accuracy in the estimate of the maximum eigenvalue. The minimax
Laplace transform argument establishes that, if the lowest eigenvalues decay
sufficiently fast, then on the order of (K^2*r*log(p))/eps^2 samples, where K
is the condition number of an optimal rank-r approximation to C, suffice to
ensure that the dominant r eigenvalues of the covariance matrix of a N(0, C)
random vector are estimated to within a factor of 1 +/- eps with high
probability.
Comment: 20 pages, 1 figure, see also arXiv:1004.4389v
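The covariance scenario in the second example is easy to simulate. The sketch below is an empirical sanity check under assumptions of my own (the particular spectrum, p, r, and n), not a reproduction of the paper's bound: a N(0, C) sample with a fast-decaying tail spectrum, where the dominant r eigenvalues of the sample covariance turn out to be relatively accurate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative check (assumption): spectrum with r dominant eigenvalues and a
# tiny, fast-decaying tail, so relative accuracy of the top-r estimates is
# plausible at moderate sample sizes.
p, r = 100, 5
evals = np.concatenate([np.linspace(10.0, 5.0, r), np.full(p - r, 0.01)])
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
C = (Q * evals) @ Q.T

n = 2000                                  # number of N(0, C) samples
X = rng.multivariate_normal(np.zeros(p), C, size=n)
C_hat = X.T @ X / n                       # sample covariance

top = np.sort(np.linalg.eigvalsh(C))[::-1][:r]
top_hat = np.sort(np.linalg.eigvalsh(C_hat))[::-1][:r]
rel_err = np.abs(top_hat - top) / top
print("relative errors of the dominant eigenvalues:", np.round(rel_err, 3))
```

Each of the r dominant eigenvalues is recovered to within a few percent here; a spectral-norm guarantee alone would only certify this for the largest one.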
Optimal Concentration of Information Content For Log-Concave Densities
An elementary proof is provided of sharp bounds for the varentropy of random
vectors with log-concave densities, as well as for deviations of the
information content from its mean. These bounds significantly improve on the
bounds obtained by Bobkov and Madiman ({\it Ann. Probab.}, 39(4):1528--1543,
2011).
Comment: 15 pages. Changes in v2: Remark 2.5 (due to C. Saroglou) added with
more general sufficient conditions for equality in Theorem 2.3. Also some
minor corrections and added references.
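The varentropy in question is Var[-log f(X)], and the sharp dimensional bound can be sanity-checked by Monte Carlo. The bound Var[-log f(X)] <= d for a log-concave density on R^d, with a product of exponentials attaining it and a Gaussian giving d/2, is my reading of this line of results; treat it as an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo varentropy for two log-concave densities on R^d (assumption:
# the sharp bound is d, attained by a product of exponentials).
d, m = 4, 200_000

# Standard Gaussian: -log f(x) = (d/2) * log(2*pi) + ||x||^2 / 2
x = rng.standard_normal((m, d))
info_gauss = 0.5 * d * np.log(2 * np.pi) + 0.5 * (x**2).sum(axis=1)

# Product of Exp(1) densities: -log f(x) = x_1 + ... + x_d
y = rng.exponential(size=(m, d))
info_exp = y.sum(axis=1)

print(f"Gaussian varentropy    ~ {info_gauss.var():.3f} (exact: d/2 = {d / 2})")
print(f"exponential varentropy ~ {info_exp.var():.3f} (exact: d = {d})")
```

Both empirical varentropies sit at or below the dimension d, with the exponential product saturating it, which is the kind of concentration-of-information-content behavior the abstract's bounds quantify.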