
    Tail bounds for all eigenvalues of a sum of random matrices

    This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in "User-friendly tail bounds for sums of random matrices" (arXiv:1004.4389v6) that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogues of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows; here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that, if the lowest eigenvalues decay sufficiently fast, then on the order of (K^2 r log p)/ε^2 samples, where K is the condition number of an optimal rank-r approximation to the covariance matrix C, suffice to ensure that the dominant r eigenvalues of the covariance matrix of a N(0, C) random vector are estimated to within a factor of 1 ± ε with high probability.
    Comment: 20 pages, 1 figure; see also arXiv:1004.4389v6
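    As a quick empirical illustration of the covariance result (not the paper's proof), the following Python sketch draws on the order of (K^2 r log p)/ε^2 samples from N(0, C) and checks the relative error of the dominant r eigenvalues. The covariance C, the rank r, and the absolute constant in the sample count are illustrative choices, not values taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, r, eps = 200, 5, 0.2

    # Covariance with r dominant eigenvalues and a fast-decaying tail (illustrative).
    eigs = np.concatenate([np.linspace(10.0, 5.0, r), 0.01 / np.arange(1, p - r + 1)])
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    C = (Q * eigs) @ Q.T                             # C = Q diag(eigs) Q^T

    K = eigs[0] / eigs[r - 1]                        # condition number of the rank-r part
    n = int(np.ceil(K**2 * r * np.log(p) / eps**2))  # sample count suggested by the bound

    X = rng.multivariate_normal(np.zeros(p), C, size=n)
    C_hat = X.T @ X / n                              # sample covariance (mean known to be 0)

    top_true = np.sort(np.linalg.eigvalsh(C))[::-1][:r]
    top_est = np.sort(np.linalg.eigvalsh(C_hat))[::-1][:r]
    rel_err = np.abs(top_est - top_true) / top_true
    print(f"n = {n}, max relative error on top {r} eigenvalues: {rel_err.max():.3f}")
    ```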

    The Masked Sample Covariance Estimator: An Analysis via Matrix Concentration Inequalities

    Covariance estimation becomes challenging in the regime where the number p of variables outstrips the number n of samples available to construct the estimate. One way to circumvent this problem is to assume that the covariance matrix is nearly sparse and to focus on estimating only the significant entries. To analyze this approach, Levina and Vershynin (2011) introduce a formalism called masked covariance estimation, where each entry of the sample covariance estimator is reweighted to reflect an a priori assessment of its importance. This paper provides a short analysis of the masked sample covariance estimator by means of a matrix concentration inequality. The main result applies to general distributions with at least four moments. Specialized to the case of a Gaussian distribution, the theory offers qualitative improvements over earlier work. For example, the new results show that n = O(B log^2 p) samples suffice to estimate a banded covariance matrix with bandwidth B up to a relative spectral-norm error, in contrast to the sample complexity n = O(B log^5 p) obtained by Levina and Vershynin.
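    A minimal sketch of the masked estimator, assuming a hard 0/1 band mask of bandwidth B: each entry of the sample covariance is reweighted entrywise by the mask, and the spectral-norm error is compared against the raw estimator. The AR(1)-style covariance and the dimensions below are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, n, B = 300, 150, 5            # p > n: more variables than samples

    # AR(1)-style "true" covariance: entries decay geometrically off the diagonal,
    # so the matrix is nearly banded with effective bandwidth around B.
    idx = np.arange(p)
    C = 0.5 ** np.abs(idx[:, None] - idx[None, :])

    X = rng.multivariate_normal(np.zeros(p), C, size=n)
    S = X.T @ X / n                  # raw sample covariance

    M = (np.abs(idx[:, None] - idx[None, :]) <= B).astype(float)  # hard 0/1 band mask
    S_masked = M * S                 # entrywise (Hadamard) reweighting by the mask

    err_raw = np.linalg.norm(S - C, 2)          # spectral-norm errors
    err_masked = np.linalg.norm(S_masked - C, 2)
    print(f"spectral error: raw {err_raw:.3f} vs masked {err_masked:.3f}")
    ```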

    Provable Deterministic Leverage Score Sampling

    We explain theoretically a curious empirical phenomenon: "Approximating a matrix by deterministically selecting a subset of its columns with the corresponding largest leverage scores results in a good low-rank matrix surrogate." To obtain provable guarantees, previous work requires randomized sampling of the columns with probabilities proportional to their leverage scores. In this work, we provide a novel theoretical analysis of deterministic leverage score sampling. We show that such deterministic sampling can be provably as accurate as its randomized counterparts if the leverage scores follow a moderately steep power-law decay. We support this power-law assumption by providing empirical evidence that such decay laws are abundant in real-world data sets. We then demonstrate empirically the performance of deterministic leverage score sampling, which often matches or outperforms the state-of-the-art techniques.
    Comment: 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
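    The following sketch illustrates the deterministic scheme under analysis: compute the rank-k leverage scores from the top right singular vectors, keep the c columns with the largest scores, and compare the residual against the best rank-k approximation. The test matrix is engineered so that its leverage scores decay roughly like a power law, matching the paper's assumption; the dimensions and decay exponent are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    m, n, k, c = 100, 500, 5, 25     # select c columns, compare against rank-k baseline

    # A matrix whose column leverage scores decay roughly like a power law:
    # a rank-k signal with power-law column weights plus small noise.
    weights = np.arange(1, n + 1) ** -1.0
    A = rng.standard_normal((m, k)) @ (rng.standard_normal((k, n)) * weights) \
        + 0.01 * rng.standard_normal((m, n))

    # Rank-k leverage score of column j: squared norm of the j-th column of the
    # top-k block of V^T (equivalently, the j-th row of V_k).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(Vt[:k] ** 2, axis=0)

    cols = np.argsort(scores)[::-1][:c]   # deterministic: take the top-c scores
    C_sel = A[:, cols]

    # Residual of projecting A onto the span of the selected columns,
    # versus the best rank-k error from the truncated SVD.
    resid = A - C_sel @ np.linalg.pinv(C_sel) @ A
    err_cols = np.linalg.norm(resid, "fro")
    err_best = np.linalg.norm(A - (U[:, :k] * s[:k]) @ Vt[:k], "fro")
    print(f"deterministic column selection: {err_cols:.3f} vs best rank-{k}: {err_best:.3f}")
    ```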

    Familiares y amigos [10th grade]

    This unit addresses two enduring understandings: cultures evolve over time, and who we become depends on where we live. Students will demonstrate mastery of knowledge and skills through the creation of an illustrated brochure for a summer study abroad program that compares a Spanish-speaking city or region of their choice to New Orleans. The unit addresses all five categories of the National Standards in Foreign Language Education (Communication, Culture, Connections, Comparisons, and Communities), and features a variety of cooperative and communicative learning strategies.

    Pueblos y ciudades [10th grade]

    This unit addresses two enduring understandings: in order to communicate information effectively, one must be able to manipulate time and tense; and storytelling is a key component of culture and society. Students will demonstrate mastery of knowledge and skills through the composition of a unique legend that tells the story behind hidden treasure, as well as the creation of an accompanying map and directions to its location. This unit addresses all five categories of the National Standards in Foreign Language Education (Communication, Culture, Connections, Comparisons, and Communities), and features a variety of cooperative and communicative learning strategies.

    Error Bounds for Random Matrix Approximation Schemes

    Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been, with few exceptions, on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes: those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
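    A minimal sketch of one member of the sparsification family covered here: each entry of A is kept independently with probability q and rescaled by 1/q, so the entries of X are independent, uniformly bounded when A is, and average to the corresponding entries of A. The test matrix and q are illustrative, and since the ∞→1 norm is hard to compute exactly, it is probed below with random sign vectors as a cheap lower-bound proxy.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    m, n, q = 200, 200, 0.1          # keep each entry with probability q

    A = rng.uniform(-1.0, 1.0, size=(m, n))   # a bounded test matrix (illustrative)
    keep = rng.random((m, n)) < q
    X = np.where(keep, A / q, 0.0)   # entries of X are independent and E[X] = A

    D = A - X
    print(f"kept {keep.mean():.1%} of entries")
    print(f"spectral error ||A - X||_2 = {np.linalg.norm(D, 2):.2f}")

    # The inf->1 norm maximizes ||D s||_1 over sign vectors s; evaluating the
    # objective at random sign vectors gives a cheap lower bound.
    signs = rng.choice([-1.0, 1.0], size=(n, 50))
    print(f"inf->1 norm lower bound (random signs): {np.abs(D @ signs).sum(axis=0).max():.2f}")
    ```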

    En el vecindario

    This unit addresses one enduring understanding: success in any situation depends on a person’s ability to communicate information effectively. Students will demonstrate mastery of knowledge and skills through the creation of job advertisements and resumes, as well as participation in a job fair as employers and potential employees. This unit addresses all five categories of the National Standards in Foreign Language Education (Communication, Culture, Connections, Comparisons, and Communities), and features a variety of cooperative and communicative learning strategies.