
    Low-distortion Subspace Embeddings in Input-sparsity Time and Applications to Robust Linear Regression

    Low-distortion embeddings are critical building blocks for developing random sampling and random projection algorithms for linear algebra problems. We show that, given a matrix $A \in \mathbb{R}^{n \times d}$ with $n \gg d$ and a $p \in [1, 2)$, with a constant probability, we can construct a low-distortion embedding matrix $\Pi \in \mathbb{R}^{O(\mathrm{poly}(d)) \times n}$ that embeds $\mathcal{A}_p$, the $\ell_p$ subspace spanned by $A$'s columns, into $(\mathbb{R}^{O(\mathrm{poly}(d))}, \|\cdot\|_p)$; the distortion of our embeddings is only $O(\mathrm{poly}(d))$, and we can compute $\Pi A$ in $O(\mathrm{nnz}(A))$ time, i.e., input-sparsity time. Our result generalizes the input-sparsity-time $\ell_2$ subspace embedding by Clarkson and Woodruff [STOC'13]; and for completeness, we present a simpler and improved analysis of their construction for $\ell_2$. These input-sparsity-time $\ell_p$ embeddings are optimal, up to constants, in terms of their running time; and the improved running time propagates to applications such as $(1 \pm \epsilon)$-distortion $\ell_p$ subspace embedding and relative-error $\ell_p$ regression. For $\ell_2$, we show that a $(1+\epsilon)$-approximate solution to the $\ell_2$ regression problem specified by the matrix $A$ and a vector $b \in \mathbb{R}^n$ can be computed in $O(\mathrm{nnz}(A) + d^3 \log(d/\epsilon)/\epsilon^2)$ time; and for $\ell_p$, via a subspace-preserving sampling procedure, we show that a $(1 \pm \epsilon)$-distortion embedding of $\mathcal{A}_p$ into $\mathbb{R}^{O(\mathrm{poly}(d))}$ can be computed in $O(\mathrm{nnz}(A) \cdot \log n)$ time, and we also show that a $(1+\epsilon)$-approximate solution to the $\ell_p$ regression problem $\min_{x \in \mathbb{R}^d} \|Ax - b\|_p$ can be computed in $O(\mathrm{nnz}(A) \cdot \log n + \mathrm{poly}(d) \log(1/\epsilon)/\epsilon^2)$ time. Moreover, we can improve the embedding dimension, or equivalently the sample size, to $O(d^{3+p/2} \log(1/\epsilon)/\epsilon^2)$ without increasing the complexity.
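
    As a concrete illustration of the style of construction involved, the following is a minimal NumPy/SciPy sketch of a CountSketch-type sparse $\ell_2$ embedding in the spirit of Clarkson and Woodruff: each column of $\Pi$ has a single random $\pm 1$ entry, so $\Pi A$ can be formed in $O(\mathrm{nnz}(A))$ time, after which the small sketched problem is solved directly. The problem sizes, the sketch dimension m, and the use of lstsq are illustrative assumptions, not the paper's exact parameters or algorithm.

        import numpy as np
        from scipy import sparse

        def countsketch(m, n, rng):
            # CountSketch matrix Pi in R^{m x n}: one +/-1 entry per column,
            # placed in a uniformly random row, so Pi @ A costs O(nnz(A)).
            rows = rng.integers(0, m, size=n)
            signs = rng.choice([-1.0, 1.0], size=n)
            return sparse.csr_matrix((signs, (rows, np.arange(n))), shape=(m, n))

        # Sketch-and-solve l2 regression: min_x ||Ax - b||_2.
        rng = np.random.default_rng(0)
        n, d, m = 100_000, 20, 2_000   # m would be poly(d)/eps^2 in the theory
        A = sparse.random(n, d, density=1e-3, random_state=0, format="csr")
        b = rng.standard_normal(n)
        Pi = countsketch(m, n, rng)
        x = np.linalg.lstsq((Pi @ A).toarray(), Pi @ b, rcond=None)[0]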

    Uniform Sampling for Matrix Approximation

    Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances. We take a fresh look at uniform sampling by examining what information it does preserve. Specifically, we show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, it is enough to compute a better approximation. This observation leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps. In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
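
    To make the contrast concrete, here is a small NumPy sketch comparing uniform row sampling with exact leverage-score sampling for least-squares row selection. Computing the scores via a full QR factorization is exactly the expensive step that iterative schemes like the one above are designed to avoid, so this is purely illustrative; the sizes and the planted high-leverage row are assumptions for the demo.

        import numpy as np

        def leverage_scores(A):
            # Exact leverage scores via a thin QR: score of row i is ||Q[i, :]||^2.
            Q, _ = np.linalg.qr(A)
            return np.sum(Q * Q, axis=1)

        def row_sample(A, b, s, probs, rng):
            # Sample s rows with replacement from the given distribution,
            # rescaling each kept row by 1/sqrt(s * p_i) so the sampled
            # least-squares objective stays unbiased.
            idx = rng.choice(A.shape[0], size=s, p=probs)
            w = 1.0 / np.sqrt(s * probs[idx])
            return A[idx] * w[:, None], b[idx] * w

        rng = np.random.default_rng(1)
        n, d, s = 50_000, 10, 500
        A = rng.standard_normal((n, d))
        A[0] *= 100.0   # one high-leverage row that uniform sampling tends to miss
        b = rng.standard_normal(n)
        tau = leverage_scores(A)
        A_lev, b_lev = row_sample(A, b, s, tau / tau.sum(), rng)       # leverage
        A_uni, b_uni = row_sample(A, b, s, np.full(n, 1.0 / n), rng)   # uniform
        x_lev = np.linalg.lstsq(A_lev, b_lev, rcond=None)[0]
        x_uni = np.linalg.lstsq(A_uni, b_uni, rcond=None)[0]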

    Optimal CUR Matrix Decompositions

    The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$ matrix $C$ with a subset of $c < n$ columns of $A$, together with an $r \times n$ matrix $R$ with a subset of $r < m$ rows of $A$, as well as a $c \times r$ low-rank matrix $U$ such that the matrix $CUR$ approximates the matrix $A$, that is, $\|A - CUR\|_F^2 \le (1+\epsilon) \|A - A_k\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm and $A_k$ is the best $m \times n$ matrix of rank $k$ constructed via the SVD. We present input-sparsity-time and deterministic algorithms for constructing such a CUR decomposition where $c = O(k/\epsilon)$, $r = O(k/\epsilon)$, and $\mathrm{rank}(U) = k$. Up to constant factors, our algorithms are simultaneously optimal in $c$, $r$, and $\mathrm{rank}(U)$.
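
    For orientation, the following is a minimal NumPy sketch of a standard randomized CUR recipe: sample columns and rows with probabilities derived from the rank-$k$ singular subspaces and set $U = C^{+} A R^{+}$. This is a textbook illustration, not the deterministic, input-sparsity-time algorithm of the paper, and the choices $c = r = 2k$ are arbitrary assumptions.

        import numpy as np

        def cur(A, k, c, r, rng):
            # Column/row probabilities from the top-k singular subspaces
            # (leverage-type scores); then U = pinv(C) @ A @ pinv(R),
            # so that C @ U @ R approximates A.
            Uk, _, Vtk = np.linalg.svd(A, full_matrices=False)
            col_p = np.sum(Vtk[:k] ** 2, axis=0)
            col_p = col_p / col_p.sum()
            row_p = np.sum(Uk[:, :k] ** 2, axis=1)
            row_p = row_p / row_p.sum()
            cols = rng.choice(A.shape[1], size=c, replace=False, p=col_p)
            rows = rng.choice(A.shape[0], size=r, replace=False, p=row_p)
            C, R = A[:, cols], A[rows, :]
            U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
            return C, U, R

        rng = np.random.default_rng(2)
        A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 200))
        A += 0.01 * rng.standard_normal(A.shape)   # approximately rank 40
        C, U, R = cur(A, k=40, c=80, r=80, rng=rng)
        rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)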