12 research outputs found

    Wigner random matrices with non-symmetrically distributed entries

    We show that the spectral radius of an N×N random symmetric matrix with i.i.d. bounded, centered, but non-symmetrically distributed entries is bounded from above by 2σ + o(N^{-6/11+ε}), where σ² is the variance of the matrix entries and ε is an arbitrarily small positive number. Our bound improves the earlier results of Z. Füredi and J. Komlós (1981) and the recent bound obtained by Van Vu (2005). Comment: to appear in the Special Issue on Random Matrices of the Journal of Statistical Physics.

    On the lower bound of the spectral norm of symmetric random matrices with independent entries

    We show that the spectral radius of an N×N random symmetric matrix with i.i.d. bounded, centered, but non-symmetrically distributed entries is bounded from below by 2σ − o(N^{-6/11+ε}), where σ² is the variance of the matrix entries and ε is an arbitrarily small positive number. Combined with our previous result from [7], this proves that for any ε > 0 one has ‖A_N‖ = 2σ + o(N^{-6/11+ε}) with probability going to 1 as N → ∞.
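    Taken together, the upper and lower bounds say that the spectral norm of such a matrix (after the usual 1/√N normalization) concentrates at 2σ. A minimal numerical sketch of this concentration; the skewed two-point entry distribution and the matrix size are illustrative choices, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
# Bounded, centered, but non-symmetrically distributed entries:
# -1 with probability 3/4, +3 with probability 1/4 (mean 0, skewed).
vals = rng.choice([-1.0, 3.0], size=(N, N), p=[0.75, 0.25])
sigma = np.sqrt(0.75 * 1.0 + 0.25 * 9.0)   # entry standard deviation, sqrt(3)
A = np.triu(vals)
A = A + A.T - np.diag(np.diag(A))          # symmetrize
A /= np.sqrt(N)                            # normalize so ||A_N|| -> 2*sigma
lam = np.max(np.abs(np.linalg.eigvalsh(A)))
print(lam, 2 * sigma)                      # the two values should be close
```

At N = 2000 the deviation from 2σ is already well inside the o(N^{-6/11+ε}) window the papers establish.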

    Universality results for largest eigenvalues of some sample covariance matrix ensembles

    For sample covariance matrices with i.i.d. entries with sub-Gaussian tails, when both the number of samples and the number of variables become large and their ratio approaches one, it is a well-known result of A. Soshnikov that the limiting distribution of the largest eigenvalue is the same as that of Gaussian samples. In this paper, we extend this result to two cases. The first case is when the ratio approaches an arbitrary finite value. The second case is when the ratio tends to infinity or to zero. Comment: 47 pages, 3 figures. Simulations have been included, and a mistake in the computation of the variance has been corrected (Section 2.5).
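    In the Gaussian reference case the largest eigenvalue sits at the soft edge (1 + √γ)² of the Marchenko–Pastur law, where γ is the variables-to-samples ratio; the universality statement is that sub-Gaussian entries give the same limit. A quick sketch with an illustrative aspect ratio γ = 1/4 (sizes chosen for speed, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 1000, 4000                     # variables, samples; gamma = p/n = 1/4
X = rng.normal(size=(p, n))           # sub-Gaussian (here Gaussian) entries
S = X @ X.T / n                       # sample covariance matrix
lam_max = np.linalg.eigvalsh(S)[-1]   # largest eigenvalue
gamma = p / n
edge = (1 + np.sqrt(gamma)) ** 2      # Marchenko-Pastur soft edge, 2.25 here
print(lam_max, edge)                  # lam_max fluctuates around the edge
```

The fluctuation of lam_max around the edge is of order n^{-2/3}, which is what the Tracy–Widom limiting distribution describes.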

    Poisson convergence for the largest eigenvalues of Heavy Tailed Random Matrices

    We study the statistics of the largest eigenvalues of real symmetric and sample covariance matrices when the entries are heavy-tailed. Extending the result obtained by Soshnikov in \cite{Sos1}, we prove that, in the absence of a fourth moment, the top eigenvalues behave, in the limit, as the largest entries of the matrix. Comment: 22 pages, to appear in Annales de l'Institut Henri Poincaré.
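    This behavior is visible already at moderate size: with a heavy enough tail the matrix is dominated by a few huge entries, and the top eigenvalue essentially equals the largest entry in absolute value. A rough sketch; the signed Pareto distribution with tail index 1 and the matrix size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
# Symmetric heavy-tailed entries: signed Pareto with tail index 1 (no variance).
vals = (rng.pareto(1.0, size=(N, N)) + 1.0) * rng.choice([-1.0, 1.0], size=(N, N))
M = np.triu(vals)
M = M + M.T - np.diag(np.diag(M))     # real symmetric heavy-tailed matrix
lam = np.max(np.abs(np.linalg.eigvalsh(M)))
top_entry = np.max(np.abs(M))
print(lam / top_entry)                # ratio close to 1 in the heavy-tailed regime
```

Contrast this with the light-tailed case above, where the top eigenvalue is a collective quantity (2σ after normalization) rather than a single-entry effect.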

    Robustness of Community Detection to Random Geometric Perturbations

    NeurIPS 2020. We consider the stochastic block model in which the connections between vertices are perturbed by some latent (and unobserved) random geometric graph. The objective is to prove that spectral methods are robust to this type of noise, even when they are agnostic to the presence (or absence) of the random graph. We provide explicit regimes in which the second eigenvector of the adjacency matrix is highly correlated with the true community vector (and therefore weak/exact recovery is possible). This is possible thanks to a detailed analysis of the spectrum of the latent random graph, which is of independent interest.
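    The unperturbed version of that spectral method is easy to sketch: in a two-community stochastic block model with a large enough gap between the within- and between-community connection probabilities, the sign pattern of the second eigenvector of the adjacency matrix recovers the communities. A minimal illustration without the geometric perturbation; all parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
labels = np.repeat([1.0, -1.0], n // 2)          # two balanced communities
p_in, p_out = 0.10, 0.02                         # connection probabilities
P = np.where(np.outer(labels, labels) > 0, p_in, p_out)
A = np.triu((rng.random((n, n)) < P).astype(float), k=1)
A = A + A.T                                      # symmetric adjacency matrix
_, V = np.linalg.eigh(A)
v2 = V[:, -2]                                    # second-largest eigenvector
overlap = abs(np.mean(np.sign(v2) * labels))     # agreement with true labels
print(overlap)                                   # close to 1: communities recovered
```

The paper's contribution is showing that this same pipeline keeps working when a latent geometric graph is added to A as noise.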

    Eigenvectors of some large sample covariance matrices ensembles

    We consider sample covariance matrices constructed from real or complex i.i.d. variates with a finite 12th moment. We assume that the population covariance matrix is positive definite and that its spectral measure almost surely converges to some limiting probability distribution as the number of variables and the number of observations go to infinity together, with their ratio converging to a finite positive limit. We quantify the relationship between sample and population eigenvectors by studying the asymptotics of a broad family of functionals that generalizes the Stieltjes transform of the spectral measure. This is then used to compute the asymptotically optimal bias correction for sample eigenvalues, paving the way for a new generation of improved estimators of the covariance matrix and its inverse.
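    The basic object in that analysis, the Stieltjes transform of the empirical spectral measure, is straightforward to compute; the paper's functionals generalize it by weighting the eigenvalue terms with projections onto population eigenvectors. A sketch for the simplest case of an identity population covariance, where the limit is the Marchenko–Pastur Stieltjes transform (the sizes and the evaluation point z = −1 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 500, 2000                        # variables, observations; gamma = 1/4
X = rng.normal(size=(p, n))             # identity population covariance
eigs = np.linalg.eigvalsh(X @ X.T / n)  # sample covariance spectrum

def stieltjes(z):
    # m(z) = (1/p) * sum_i 1 / (lambda_i - z)
    return np.mean(1.0 / (eigs - z))

# Marchenko-Pastur limit: m solves gamma*z*m^2 + (z + gamma - 1)*m + 1 = 0;
# at real z < 0 the relevant root is the one below.
z, gamma = -1.0, p / n
disc = (z + gamma - 1.0) ** 2 - 4.0 * gamma * z
m_mp = (-(z + gamma - 1.0) - np.sqrt(disc)) / (2.0 * gamma * z)
print(stieltjes(z), m_mp)               # empirical vs. limiting transform
```

With a non-identity population covariance the limiting transform instead solves the Marchenko–Pastur fixed-point equation, which is where the sample/population eigenvector comparison enters.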