
    Concentration inequalities for random tensors

    We show how to extend several basic concentration inequalities for simple random tensors $X = x_1 \otimes \cdots \otimes x_d$, where all $x_k$ are independent random vectors in $\mathbb{R}^n$ with independent coefficients. The new results have optimal dependence on the dimension $n$ and the degree $d$. As an application, we show that random tensors are well conditioned: $(1-o(1))\,n^d$ independent copies of the simple random tensor $X \in \mathbb{R}^{n^d}$ are far from being linearly dependent with high probability. We prove this fact for any degree $d = o(\sqrt{n/\log n})$ and conjecture that it is true for any $d = O(n)$.
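    The well-conditioning claim can be illustrated numerically. The sketch below (not from the paper; sizes and the Gaussian coefficient distribution are illustrative choices) draws $n^d$ independent simple random tensors, flattens each into a vector in $\mathbb{R}^{n^d}$, and checks that the stacked matrix has a strictly positive smallest singular value, i.e. the copies are linearly independent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 2   # small illustrative sizes; the paper's regime is d = o(sqrt(n / log n))
    N = n**d      # number of independent copies, matching the (1 - o(1)) n^d count

    def simple_random_tensor(rng, n, d):
        """Flattened outer product x_1 ⊗ ... ⊗ x_d of d independent
        random vectors with independent standard normal coefficients."""
        t = rng.standard_normal(n)
        for _ in range(d - 1):
            t = np.multiply.outer(t, rng.standard_normal(n))
        return t.ravel()

    # Stack the N copies as rows of an N x n^d matrix and inspect its spectrum.
    M = np.stack([simple_random_tensor(rng, n, d) for _ in range(N)])
    s = np.linalg.svd(M, compute_uv=False)
    print(f"smallest singular value: {s[-1]:.4f}")  # > 0 means linear independence
    ```

    A positive smallest singular value only certifies linear independence for this one draw; the theorem is the much stronger statement that the copies are quantitatively far from linearly dependent with high probability.
    
    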

    A note on the Hanson-Wright inequality for random vectors with dependencies

    We prove that quadratic forms in isotropic random vectors $X$ in $\mathbb{R}^n$, possessing the convex concentration property with constant $K$, satisfy the Hanson-Wright inequality with constant $CK$, where $C$ is an absolute constant, thus eliminating the logarithmic (in the dimension) factors in a recent estimate by Vu and Wang. We also show that the concentration inequality for all Lipschitz functions implies a uniform version of the Hanson-Wright inequality for suprema of quadratic forms (in the spirit of the inequalities by Borell, Arcones-Giné and Ledoux-Talagrand). Previous results of this type relied on stronger isoperimetric properties of $X$ and in some cases provided an upper bound on the deviations rather than a concentration inequality. In the last part of the paper we show that the uniform version of the Hanson-Wright inequality for Gaussian vectors can be used to recover a recent concentration inequality for empirical estimators of the covariance operator of $B$-valued Gaussian variables due to Koltchinskii and Lounici.
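    For context, the Hanson-Wright inequality in its classical sub-gaussian form (stated here as background, not quoted from the paper, where $K$ is instead the convex concentration constant of $X$) bounds the deviation of a quadratic form $X^\top A X$ around its mean:

    ```latex
    \mathbb{P}\left( \left| X^{\top} A X - \mathbb{E}\, X^{\top} A X \right| > t \right)
    \le 2 \exp\left( -c \min\left( \frac{t^{2}}{K^{4} \|A\|_{\mathrm{HS}}^{2}},\;
    \frac{t}{K^{2} \|A\|} \right) \right),
    ```

    where $\|A\|_{\mathrm{HS}}$ is the Hilbert-Schmidt (Frobenius) norm, $\|A\|$ the operator norm, and $c > 0$ an absolute constant. The abstract's point is that the convex concentration property alone suffices for a bound of this shape, with no extra $\log n$ factors.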