A sharp concentration inequality with applications
We present a new general concentration-of-measure inequality and illustrate its power by applications in random combinatorics. The results find direct applications in some problems of learning theory.
Keywords: concentration of measure, Vapnik-Chervonenkis dimension, logarithmic Sobolev inequalities, longest monotone subsequence, model selection
Non-abelian Littlewood-Offord inequalities
In 1943, Littlewood and Offord proved the first anti-concentration result for
sums of independent random variables. Their result has since then been
strengthened and generalized by generations of researchers, with applications
in several areas of mathematics.
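For reference, the classical (abelian) statement, in Erdős' sharp 1945 form, is the following anti-concentration bound:

```latex
% Littlewood--Offord / Erdős: for reals x_1, ..., x_n with |x_i| >= 1
% and i.i.d. uniform signs eps_i in {-1, +1},
\sup_{x \in \mathbb{R}}
  \mathbb{P}\!\left( \sum_{i=1}^{n} \varepsilon_i x_i \in (x, x+2) \right)
  \;\le\; \binom{n}{\lfloor n/2 \rfloor} 2^{-n}
  \;=\; O\!\left( n^{-1/2} \right).
```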
In this paper, we present the first non-abelian analogue of the Littlewood-Offord result: a sharp anti-concentration inequality for products of independent random variables.
Comment: 14 pages. Second version: the dependence of the upper bound on the matrix size in the main results has been removed.
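As a concrete sanity check on the abelian case (not taken from the paper), the extremal example x_1 = ... = x_n = 1 shows the Erdős bound is attained: the largest atom of a sum of n Rademacher signs is exactly the central binomial probability.

```python
from math import comb

def max_atom_probability(n):
    # Exact distribution of S = sum of n i.i.d. Rademacher signs:
    # P(S = n - 2k) = comb(n, k) / 2**n, so the largest atom is the
    # largest binomial coefficient divided by 2**n.
    return max(comb(n, k) for k in range(n + 1)) / 2**n

def erdos_bound(n):
    # Erdős' sharp Littlewood-Offord bound: for |x_i| >= 1, any open
    # interval of length 2 receives probability at most this value.
    return comb(n, n // 2) / 2**n

# The bound holds for every n, with equality when all x_i = 1:
for n in (10, 11, 50):
    assert max_atom_probability(n) <= erdos_bound(n)
```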
Matrix concentration inequalities with dependent summands and sharp leading-order terms
We establish sharp concentration inequalities for sums of dependent random
matrices. Our results concern two models. First, a model where summands are
generated by a -mixing Markov chain. Second, a model where summands are
expressed as deterministic matrices multiplied by scalar random variables. In
both models, the leading-order term is provided by free probability theory.
This leading-order term is often asymptotically sharp and, in particular, does
not suffer from the logarithmic dimensional dependence which is present in
previous results such as the matrix Khintchine inequality.
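For context, the matrix Khintchine inequality referred to above (stated here schematically, up to absolute constants) carries the dimensional factor that the free-probability leading term avoids:

```latex
% Noncommutative Khintchine: for deterministic self-adjoint d x d
% matrices A_i and i.i.d. standard Gaussians g_i,
\mathbb{E}\,\Bigl\| \sum_i g_i A_i \Bigr\|
  \;\lesssim\; \sqrt{\log d}\;\Bigl\| \sum_i A_i^2 \Bigr\|^{1/2}.
```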
A key challenge in the proof is that techniques based on classical cumulants,
which can be used in a setting with independent summands, fail to produce
efficient estimates in the Markovian model. Our approach is instead based on
Boolean cumulants and a change-of-measure argument.
We discuss applications concerning community detection in Markov chains,
random matrices with heavy-tailed entries, and the analysis of random graphs
with dependent edges.
Comment: 69 pages, 4 figures.
Bernstein-type concentration inequalities for symmetric Markov processes
Using the method of transportation-information inequalities introduced in \cite{GLWY}, we establish Bernstein-type concentration inequalities for the empirical means $\frac{1}{t}\int_0^t g(X_s)\,ds$, where $g$ is an unbounded observable of the symmetric Markov process $(X_t)_{t\ge 0}$. Three approaches are proposed: a functional-inequalities approach; the Lyapunov function method; and an approach through the Lipschitzian norm of the solution to the Poisson equation. Several applications and examples are studied.
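Schematically (with notation assumed here, not quoted from the paper), a Bernstein-type bound for such an empirical mean takes the form:

```latex
% Bernstein-type deviation bound for the additive functional of a
% symmetric Markov process X with invariant measure mu: for r > 0,
\mathbb{P}_\mu\!\left( \frac{1}{t}\int_0^t g(X_s)\,ds - \mu(g) > r \right)
  \;\le\; \exp\!\left( - \frac{t\, r^2}{2\left( \sigma^2 + c\, r \right)} \right),
% where sigma^2 plays the role of an asymptotic variance and c is a
% scale constant; both depend on g and on the process.
```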
Optimal Concentration of Information Content For Log-Concave Densities
An elementary proof is provided of sharp bounds for the varentropy of random
vectors with log-concave densities, as well as for deviations of the
information content from its mean. These bounds significantly improve on the
bounds obtained by Bobkov and Madiman ({\it Ann. Probab.}, 39(4):1528--1543,
2011).
Comment: 15 pages. Changes in v2: Remark 2.5 (due to C. Saroglou) added, with more general sufficient conditions for equality in Theorem 2.3; also some minor corrections and added references.
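The sharp varentropy bound in question (stated here from the general literature, with the equality cases discussed in the paper's Theorem 2.3 and Remark 2.5) is dimension-linear and free of extraneous constants:

```latex
% For a random vector X in R^n with log-concave density f, the
% varentropy -- the variance of the information content -log f(X) --
% satisfies the sharp bound
V(X) \;:=\; \operatorname{Var}\bigl( -\log f(X) \bigr) \;\le\; n.
```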