
    Invertibility of symmetric random matrices

    We study n by n symmetric random matrices H, possibly discrete, with iid above-diagonal entries. We show that H is singular with probability at most exp(-n^c), and the spectral norm of the inverse of H is O(sqrt{n}). Furthermore, the spectrum of H is delocalized on the optimal scale o(n^{-1/2}). These results improve upon a polynomial singularity bound due to Costello, Tao and Vu, and they generalize, up to constant factors, results of Tao and Vu, and of Erdős, Schlein and Yau. Comment: 53 pages. Minor corrections, changes in presentation. To appear in Random Structures and Algorithms.
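The singularity and norm bounds above are easy to probe numerically. The sketch below (not from the paper; matrix size, seed, and the ±1 entry distribution are illustrative choices) samples a symmetric random sign matrix and inspects its smallest singular value, whose reciprocal is the spectral norm of the inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Symmetric random sign matrix: iid +/-1 entries above the diagonal,
# mirrored below; the diagonal is also iid +/-1 here.
signs = rng.choice([-1.0, 1.0], size=(n, n))
H = np.triu(signs) + np.triu(signs, 1).T

# The theorem predicts H is nonsingular except with probability exp(-n^c),
# with ||H^{-1}|| = O(sqrt(n)), i.e. smallest singular value of order n^{-1/2}.
s_min = np.linalg.svd(H, compute_uv=False).min()
print(s_min > 0)                 # H is invertible
print(1.0 / s_min)               # ~ ||H^{-1}||, expected to be O(sqrt(n))
```

Repeating this over many seeds, a singular sample essentially never appears, consistent with the exponential bound.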

    Invertibility of random matrices: unitary and orthogonal perturbations

    We show that a perturbation of any fixed square matrix D by a random unitary matrix is well invertible with high probability. A similar result holds for perturbations by random orthogonal matrices; the only notable exception is when D is close to orthogonal. As an application, these results completely eliminate a hard-to-check condition from the Single Ring Theorem by Guionnet, Krishnapur and Zeitouni. Comment: 46 pages. A more general result on orthogonal perturbations of complex matrices added; it rectifies an inaccuracy in the application to the Single Ring Theorem for orthogonal matrices.
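A minimal numerical sketch of the unitary case (not from the paper; the dimension, seed, and choice of a singular D are illustrative): draw a Haar-distributed unitary U via QR of a complex Ginibre matrix, add it to a deliberately singular D, and check that the sum is well invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Haar-distributed random unitary: QR of a complex Ginibre matrix,
# with the diagonal of R phase-fixed (a standard construction).
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
Q, R = np.linalg.qr(G)
d = np.diag(R)
U = Q * (d / np.abs(d))          # scales column j of Q by the phase of R[j, j]

# A fixed singular matrix D (rank n/2), so D itself is NOT invertible.
D = np.diag(np.r_[np.ones(n // 2), np.zeros(n - n // 2)])

# The result: D + U is well invertible with high probability,
# i.e. its smallest singular value is not too close to 0.
s_min = np.linalg.svd(D + U, compute_uv=False).min()
print(s_min)                     # strictly positive, bounded away from 0
```

The same experiment with a real orthogonal U illustrates the stated exception: taking D itself (close to) orthogonal can make D + U genuinely singular.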

    Sampling from large matrices: an approach through geometric functional analysis

    We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ||A||_F^2 / ||A||_2^2 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of probability in Banach spaces, in particular the law of large numbers for operator-valued random variables. Comment: Our initial claim about MAX-2CSP problems is corrected. We establish an exponential failure probability for the low-rank approximation algorithm. Proofs are explained in more detail.
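The numerical rank r = ||A||_F^2 / ||A||_2^2 and the O(r log r) sample size are directly computable. A small sketch (not from the paper; the matrix, its approximate rank 10, and uniform row sampling are illustrative assumptions — the paper's sampling scheme and error analysis are more refined):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500

# A 500 x 500 matrix of exact rank 10: its numerical (stable) rank
# r = ||A||_F^2 / ||A||_2^2 never exceeds rank(A) = 10.
A = rng.standard_normal((N, 10)) @ rng.standard_normal((10, N))

fro2 = np.linalg.norm(A, 'fro') ** 2      # sum of squared singular values
spec2 = np.linalg.norm(A, 2) ** 2         # largest squared singular value
r = fro2 / spec2
print(r)                                  # numerical rank, at most 10

# Submatrix size of order r log r, the scale the theorem says suffices
# for approximating A in spectral norm.
k = int(np.ceil(r * np.log(max(r, 2.0))))
rows = rng.choice(N, size=k, replace=False)
A_sub = A[rows, :]                        # a random k x N submatrix
```

Note that r can be far smaller than the ambient dimension even when the exact rank is full, which is what makes it a "stable relaxation" of rank.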

    Geometric approach to error correcting codes and reconstruction of signals

    We develop an approach through geometric functional analysis to error correcting codes and to reconstruction of signals from few linear measurements. An error correcting code encodes an n-letter word x into an m-letter word y in such a way that x can be decoded correctly when any r letters of y are corrupted. We prove that most linear orthogonal transformations Q from R^n into R^m form efficient and robust error correcting codes over the reals. The decoder (which corrects the corrupted components of y) is the metric projection onto the range of Q in the L_1 norm. An equivalent problem arises in signal processing: how to reconstruct a signal that belongs to a small class from few linear measurements? We prove that for most sets of Gaussian measurements, all signals of small support can be exactly reconstructed by L_1 norm minimization. This is a substantial improvement of recent results of Donoho and of Candès and Tao. An equivalent problem in combinatorial geometry is the existence of a polytope with a fixed number of facets and a maximal number of lower-dimensional facets. We prove that most sections of the cube form such polytopes. Comment: 17 pages, 3 figures.
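The L_1 minimization step can be sketched concretely: minimizing ||x||_1 subject to Ax = b is a linear program in the positive and negative parts of x. The following is an illustrative experiment (not from the paper; signal size, sparsity, and the use of scipy's LP solver are assumptions) recovering a 3-sparse signal from 25 Gaussian measurements:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, s = 50, 25, 3

# A signal of small support: s nonzero entries out of n.
x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

# Gaussian measurement matrix and the observed measurements.
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x0

# Basis pursuit: min ||x||_1 s.t. Ax = b, written as an LP over
# (x_plus, x_minus) >= 0 with x = x_plus - x_minus.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_rec = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_rec - x0))   # near 0: exact recovery up to solver tolerance
```

With these proportions (m = n/2 measurements, very small support) recovery succeeds for typical Gaussian A, matching the abstract's claim that L_1 minimization exactly reconstructs all signals of small support.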