
    A sharp concentration inequality with applications

    We present a new general concentration-of-measure inequality and illustrate its power by applications in random combinatorics. The results find direct applications in some problems of learning theory.

    Keywords: concentration of measure, Vapnik-Chervonenkis dimension, logarithmic Sobolev inequalities, longest monotone subsequence, model selection

    Non-abelian Littlewood-Offord inequalities

    In 1943, Littlewood and Offord proved the first anti-concentration result for sums of independent random variables. Their result has since been strengthened and generalized by generations of researchers, with applications in several areas of mathematics. In this paper, we present the first non-abelian analogue of the Littlewood-Offord result, a sharp anti-concentration inequality for products of independent random variables.

    Comment: 14 pages. Second version: the dependence of the upper bound on the matrix size in the main results has been removed.
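    For context, in the classical abelian setting the largest atom of a signed sum is exactly computable in the extremal configuration (all coefficients equal, by Erdős's refinement of Littlewood-Offord). A minimal numeric sketch of the resulting order-n^{-1/2} decay, assuming unit coefficients (the function name is illustrative, not from the paper):

    ```python
    from math import comb

    def max_atom_probability(n: int) -> float:
        """Exact value of max_x P(S = x) for S = e_1 + ... + e_n with
        independent Rademacher signs e_i = +/-1 (all coefficients a_i = 1,
        the extremal case in the Littlewood-Offord problem)."""
        return comb(n, n // 2) / 2 ** n

    # The atom probability decays like n^{-1/2}, matching the sharp bound
    # C(n, floor(n/2)) / 2^n ~ sqrt(2 / (pi * n)).
    for n in (10, 100, 1000):
        print(n, max_atom_probability(n))
    ```

    The non-abelian result of the paper concerns products of random matrices rather than sums, so this computation is background only.
    
    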

    Matrix concentration inequalities with dependent summands and sharp leading-order terms

    We establish sharp concentration inequalities for sums of dependent random matrices. Our results concern two models: first, a model where the summands are generated by a $\psi$-mixing Markov chain; second, a model where the summands are deterministic matrices multiplied by scalar random variables. In both models, the leading-order term is provided by free probability theory. This leading-order term is often asymptotically sharp and, in particular, does not suffer from the logarithmic dimensional dependence present in previous results such as the matrix Khintchine inequality. A key challenge in the proof is that techniques based on classical cumulants, which can be used in a setting with independent summands, fail to produce efficient estimates in the Markovian model. Our approach is instead based on Boolean cumulants and a change-of-measure argument. We discuss applications concerning community detection in Markov chains, random matrices with heavy-tailed entries, and the analysis of random graphs with dependent edges.

    Comment: 69 pages, 4 figures.
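    For comparison, the matrix Khintchine inequality mentioned in the abstract (the source of the logarithmic dimensional dependence) takes, in one common form, roughly the shape below, where the notation is assumed here: $A_i$ are deterministic self-adjoint $d \times d$ matrices and $\varepsilon_i$ are independent Rademacher signs.

    ```latex
    \mathbb{E}\,\Bigl\| \sum_i \varepsilon_i A_i \Bigr\|
      \;\lesssim\; \sqrt{\log d}\;\Bigl\| \sum_i A_i^2 \Bigr\|^{1/2}.
    ```

    The $\sqrt{\log d}$ factor is precisely the dimensional dependence that the free-probability leading-order term of the paper avoids.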

    Bernstein-type concentration inequalities for symmetric Markov processes

    Using the method of transportation-information inequalities introduced in \cite{GLWY}, we establish Bernstein-type concentration inequalities for empirical means $\frac{1}{t}\int_0^t g(X_s)\,ds$, where $g$ is an unbounded observable of the symmetric Markov process $(X_t)$. Three approaches are proposed: a functional-inequalities approach; the Lyapunov function method; and an approach through the Lipschitzian norm of the solution to the Poisson equation. Several applications and examples are studied.
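    Schematically, and with the constants left hypothetical, a Bernstein-type bound for such empirical means has the shape

    ```latex
    \mathbb{P}\!\left( \frac{1}{t}\int_0^t g(X_s)\,ds - \mu(g) \ge r \right)
      \;\le\; \exp\!\left( -\,\frac{t\,r^2}{2\bigl(\sigma_g^2 + c\,r\bigr)} \right),
    ```

    where $\mu$ is the invariant measure, $\sigma_g^2$ plays the role of an asymptotic variance, and $c$ is a constant depending on $g$ and the process; the three approaches listed in the abstract can be read as producing different admissible pairs $(\sigma_g^2, c)$.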

    Optimal Concentration of Information Content For Log-Concave Densities

    An elementary proof is provided of sharp bounds for the varentropy of random vectors with log-concave densities, as well as for deviations of the information content from its mean. These bounds significantly improve on those obtained by Bobkov and Madiman ({\it Ann. Probab.}, 39(4):1528--1543, 2011).

    Comment: 15 pages. Changes in v2: Remark 2.5 (due to C. Saroglou) added, with more general sufficient conditions for equality in Theorem 2.3; also some minor corrections and an added reference.
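    For orientation, the varentropy of a random vector $X$ on $\mathbb{R}^n$ with density $f$ is the variance of the information content $-\log f(X)$, and the sharp bounds in this line of work take the dimensional form

    ```latex
    V(X) \;=\; \operatorname{Var}\bigl(-\log f(X)\bigr) \;\le\; n
    \qquad \text{for log-concave } f \text{ on } \mathbb{R}^n,
    ```

    with equality attained, for instance, by a product of one-dimensional exponential distributions, since there $-\log f(X)$ is a sum of $n$ independent $\mathrm{Exp}(1)$ variables.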
