
    Additive/multiplicative free subordination property and limiting eigenvectors of spiked additive deformations of Wigner matrices and spiked sample covariance matrices

    When some eigenvalues of a spiked additive, resp. multiplicative, deformation model of a Hermitian Wigner matrix, resp. a sample covariance matrix, separate from the bulk, we study how the corresponding eigenvectors project onto those of the perturbation. We point out that the inverse of the subordination function relative to the free additive, resp. multiplicative, convolution plays an important part in the asymptotic behavior.
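    As a numerical illustration (not taken from the paper), the following NumPy sketch builds a rank-one additive spike on a Wigner matrix and checks the classical BBP-type predictions for the outlier location and the eigenvector projection onto the spike; the spike strength theta and the direction v are assumptions chosen for the demo.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, theta = 1000, 3.0  # spike strength theta > 1 detaches one eigenvalue from the bulk

    # Real symmetric Wigner matrix normalized so the bulk spectrum fills [-2, 2]
    A = rng.standard_normal((n, n))
    W = (A + A.T) / np.sqrt(2 * n)

    # Rank-one additive perturbation theta * v v^T along a fixed unit vector v
    v = np.zeros(n)
    v[0] = 1.0
    M = W + theta * np.outer(v, v)

    vals, vecs = np.linalg.eigh(M)
    outlier, outlier_vec = vals[-1], vecs[:, -1]

    # BBP-type predictions: outlier near theta + 1/theta,
    # squared projection onto the spike near 1 - 1/theta^2
    print(outlier)            # close to theta + 1/theta = 3.33...
    print(outlier_vec[0]**2)  # close to 1 - 1/theta^2 = 0.88...
    ```

    With n = 1000 the finite-size fluctuations are of order n^(-1/2), so both quantities land within a few percent of the predicted limits.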

    Tail bounds for all eigenvalues of a sum of random matrices

    This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in "User-friendly tail bounds for sums of random matrices" (arXiv:1004.4389v6) that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogues of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, then on the order of (K^2 r log p)/eps^2 samples, where K is the condition number of an optimal rank-r approximation to C, suffice to ensure that the dominant r eigenvalues of the covariance matrix of a N(0, C) random vector are estimated to within a factor of 1 ± eps with high probability.
    Comment: 20 pages, 1 figure, see also arXiv:1004.4389v
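    The covariance estimation setting from the second example can be sketched numerically (the spectrum, dimensions, and sample size below are assumptions for illustration, not values from the paper): when the trailing eigenvalues of C decay fast, the dominant r sample eigenvalues achieve small relative error.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, r = 50, 3
    # Covariance with r dominant eigenvalues and a fast-decaying tail
    spectrum = np.concatenate([[10.0, 8.0, 6.0], 0.01 * np.ones(p - r)])
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    C = Q @ np.diag(spectrum) @ Q.T

    n = 2000  # number of N(0, C) samples
    X = rng.multivariate_normal(np.zeros(p), C, size=n)
    S = X.T @ X / n  # sample covariance matrix

    # Relative error of the r dominant eigenvalue estimates
    top_est = np.sort(np.linalg.eigvalsh(S))[::-1][:r]
    rel_err = np.abs(top_est - spectrum[:r]) / spectrum[:r]
    print(rel_err)  # each dominant eigenvalue recovered to small relative error
    ```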

    Location of the spectrum of Kronecker random matrices

    For a general class of large non-Hermitian random block matrices X we prove that there are no eigenvalues away from a deterministic set with very high probability. This set is obtained from the Dyson equation of the Hermitization of X as the self-consistent approximation of the pseudospectrum. We demonstrate that the analysis of the matrix Dyson equation from [arXiv:1604.08188v4] offers a unified treatment of many structured matrix ensembles.
    Comment: 33 pages, 4 figures. Some assumptions in Sections 3.1 and 3.2 relaxed. Some typos corrected and references updated.
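    A toy instance of this "no eigenvalues away from a deterministic set" phenomenon (not the block-matrix setting of the paper) is the Ginibre-type i.i.d. matrix, where the self-consistent approximation of the spectrum is the unit disk:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    # Non-Hermitian i.i.d. matrix with entry variance 1/n
    X = rng.standard_normal((n, n)) / np.sqrt(n)
    eigs = np.linalg.eigvals(X)

    # The deterministic set here is the closed unit disk; with high
    # probability no eigenvalue lands far outside it
    print(np.abs(eigs).max())  # spectral radius close to 1
    ```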

    The norm of polynomials in large random and deterministic matrices

    Let X_N = (X_1^(N), ..., X_p^(N)) be a family of N-by-N independent, normalized random matrices from the Gaussian Unitary Ensemble. We state sufficient conditions on matrices Y_N = (Y_1^(N), ..., Y_q^(N)), possibly random but independent of X_N, under which the operator norm of P(X_N, Y_N, Y_N^*) converges almost surely for all polynomials P. The limits are described by operator norms of objects from free probability theory. Taking advantage of the choice of the matrices Y_N and of the polynomials P, we obtain, for a large class of matrices, the "no eigenvalues outside a neighborhood of the limiting spectrum" phenomenon. We give examples of diagonal matrices Y_N for which the convergence holds. Convergence of the operator norm is shown to hold for block matrices, even with rectangular Gaussian blocks, a situation that includes non-white Wishart matrices and some matrices encountered in MIMO systems.
    Comment: 41 pages, with an appendix by D. Shlyakhtenko
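    A minimal numerical sketch of this convergence, for a single GUE matrix and the polynomial P(x) = x^2 + x (a choice made for this demo, not from the paper): since the spectrum of X_N converges to [-2, 2], the operator norm of P(X_N) should approach max over t in [-2, 2] of |t^2 + t| = 6.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def gue(n, rng):
        """GUE matrix normalized so the spectrum converges to [-2, 2]."""
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (A + A.conj().T) / (2 * np.sqrt(n))

    # Operator norm of P(X) = X^2 + X; the free-probability limit is
    # max_{t in [-2, 2]} |t^2 + t| = 6, attained at the right edge t = 2
    norms = {}
    for n in (200, 800):
        X = gue(n, rng)
        norms[n] = np.linalg.norm(X @ X + X, 2)
    print(norms)  # approaches 6 as n grows
    ```

    The slight undershoot at finite n reflects the O(n^(-2/3)) edge fluctuations of the largest GUE eigenvalue.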