
    Impossibility of dimension reduction in the nuclear norm

    Let $\mathsf{S}_1$ (the Schatten--von Neumann trace class) denote the Banach space of all compact linear operators $T:\ell_2\to\ell_2$ whose nuclear norm $\|T\|_{\mathsf{S}_1}=\sum_{j=1}^\infty \sigma_j(T)$ is finite, where $\{\sigma_j(T)\}_{j=1}^\infty$ are the singular values of $T$. We prove that for arbitrarily large $n\in\mathbb{N}$ there exists a subset $\mathcal{C}\subseteq\mathsf{S}_1$ with $|\mathcal{C}|=n$ that cannot be embedded with bi-Lipschitz distortion $O(1)$ into any $n^{o(1)}$-dimensional linear subspace of $\mathsf{S}_1$. Moreover, $\mathcal{C}$ is not even an $O(1)$-Lipschitz quotient of any subset of any $n^{o(1)}$-dimensional linear subspace of $\mathsf{S}_1$. Thus, $\mathsf{S}_1$ does not admit a dimension reduction result à la Johnson and Lindenstrauss (1984), which complements the work of Harrow, Montanaro and Short (2011) on the limitations of quantum dimension reduction under the assumption that the embedding into low dimensions is a quantum channel. Such a statement was previously known with $\mathsf{S}_1$ replaced by the Banach space $\ell_1$ of absolutely summable sequences, via the work of Brinkman and Charikar (2003). In fact, the above set $\mathcal{C}$ can be taken to be the same set that Brinkman and Charikar considered, viewed as a collection of diagonal matrices in $\mathsf{S}_1$. The challenge is to demonstrate that $\mathcal{C}$ cannot be faithfully realized in an arbitrary low-dimensional subspace of $\mathsf{S}_1$, whereas Brinkman and Charikar obtained such an assertion only for subspaces of $\mathsf{S}_1$ consisting of diagonal operators (i.e., subspaces of $\ell_1$). We establish this by proving that the Markov 2-convexity constant of any finite-dimensional linear subspace $X$ of $\mathsf{S}_1$ is at most a universal constant multiple of $\sqrt{\log \dim(X)}$.
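    For contrast, the "dimension reduction à la Johnson and Lindenstrauss" that $\mathsf{S}_1$ is shown not to admit is the following standard $\ell_2$ statement (a textbook formulation added here for context, not quoted from the paper):

```latex
% Johnson--Lindenstrauss lemma (1984), the \ell_2 phenomenon at issue:
% for every \epsilon \in (0,1) and every n points x_1,\dots,x_n \in \ell_2,
% there is a linear map \Pi into dimension k = O(\epsilon^{-2}\log n) with
\[
(1-\epsilon)\,\|x_i-x_j\|_2 \;\le\; \|\Pi x_i - \Pi x_j\|_2 \;\le\; (1+\epsilon)\,\|x_i-x_j\|_2
\qquad \text{for all } i,j.
\]
% The theorem above rules out any S_1 analogue: the set \mathcal{C} admits no
% O(1)-distortion embedding into any n^{o(1)}-dimensional subspace of S_1.
```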

    Low-distortion Subspace Embeddings in Input-sparsity Time and Applications to Robust Linear Regression

    Low-distortion embeddings are critical building blocks for developing random sampling and random projection algorithms for linear algebra problems. We show that, given a matrix $A \in \mathbb{R}^{n \times d}$ with $n \gg d$ and a $p \in [1, 2)$, with constant probability we can construct a low-distortion embedding matrix $\Pi \in \mathbb{R}^{O(\mathrm{poly}(d)) \times n}$ that embeds $\mathcal{A}_p$, the $\ell_p$ subspace spanned by the columns of $A$, into $(\mathbb{R}^{O(\mathrm{poly}(d))}, \|\cdot\|_p)$; the distortion of our embeddings is only $O(\mathrm{poly}(d))$, and we can compute $\Pi A$ in $O(\mathrm{nnz}(A))$ time, i.e., input-sparsity time. Our result generalizes the input-sparsity-time $\ell_2$ subspace embedding of Clarkson and Woodruff [STOC'13]; for completeness, we present a simpler and improved analysis of their construction for $\ell_2$. These input-sparsity-time $\ell_p$ embeddings are optimal, up to constants, in terms of their running time, and the improved running time propagates to applications such as $(1\pm\epsilon)$-distortion $\ell_p$ subspace embedding and relative-error $\ell_p$ regression. For $\ell_2$, we show that a $(1+\epsilon)$-approximate solution to the $\ell_2$ regression problem specified by the matrix $A$ and a vector $b \in \mathbb{R}^n$ can be computed in $O(\mathrm{nnz}(A) + d^3 \log(d/\epsilon)/\epsilon^2)$ time. For $\ell_p$, via a subspace-preserving sampling procedure, we show that a $(1\pm\epsilon)$-distortion embedding of $\mathcal{A}_p$ into $\mathbb{R}^{O(\mathrm{poly}(d))}$ can be computed in $O(\mathrm{nnz}(A) \cdot \log n)$ time, and that a $(1+\epsilon)$-approximate solution to the $\ell_p$ regression problem $\min_{x \in \mathbb{R}^d} \|Ax - b\|_p$ can be computed in $O(\mathrm{nnz}(A) \cdot \log n + \mathrm{poly}(d)\,\log(1/\epsilon)/\epsilon^2)$ time. Moreover, we can improve the embedding dimension (equivalently, the sample size) to $O(d^{3+p/2} \log(1/\epsilon)/\epsilon^2)$ without increasing the complexity.
    Comment: 22 pages
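    As a concrete illustration of the input-sparsity-time building block (a minimal sketch under stated assumptions, not the authors' code): the Clarkson--Woodruff $\ell_2$ embedding that this paper generalizes places a single random $\pm 1$ in each column of $\Pi$, so computing $\Pi A$ touches every nonzero of $A$ exactly once. The embedding dimension $m$ and all names below are illustrative.

```python
import numpy as np
from scipy import sparse

def sparse_embedding(n, m, rng):
    """Clarkson-Woodruff-style sketching matrix (illustrative, not the
    paper's code): each column of Pi holds exactly one nonzero, a random
    sign in a uniformly random row, so Pi @ A costs O(nnz(A)) time."""
    rows = rng.integers(0, m, size=n)        # hash each coordinate to a row
    signs = rng.choice([-1.0, 1.0], size=n)  # independent random sign per coordinate
    return sparse.csr_matrix((signs, (rows, np.arange(n))), shape=(m, n))

# Illustrative sketch-and-solve l2 regression; m = O(poly(d)) is an assumption.
rng = np.random.default_rng(0)
n, d = 100_000, 20
m = 50 * d * d
A = sparse.random(n, d, density=0.001, format="csr", random_state=0)
b = rng.standard_normal(n)
Pi = sparse_embedding(n, m, rng)
x_tilde, *_ = np.linalg.lstsq((Pi @ A).toarray(), Pi @ b, rcond=None)  # approx argmin ||Ax - b||_2
```

    For $p \in [1, 2)$ the paper's embeddings use a different, heavier-tailed sparse construction; the snippet above only illustrates the $\ell_2$ case whose analysis the paper simplifies and improves.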