    Relative Error Tensor Low Rank Approximation

    We consider relative error low rank approximation of tensors with respect to the Frobenius norm: given an order-$q$ tensor $A \in \mathbb{R}^{\prod_{i=1}^q n_i}$, output a rank-$k$ tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon)\mathrm{OPT}$, where $\mathrm{OPT} = \inf_{\textrm{rank-}k\ A'} \|A-A'\|_F^2$. Despite the success in obtaining relative error low rank approximations for matrices, no such results were known for tensors. One structural issue is that there may be no rank-$k$ tensor $A_k$ achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the rank of a tensor, which is NP-hard. We bypass these issues via (1) bicriteria and (2) parameterized complexity solutions: (1) We give an algorithm which outputs a rank $k' = O((k/\epsilon)^{q-1})$ tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon)\mathrm{OPT}$ in $\mathrm{nnz}(A) + n \cdot \mathrm{poly}(k/\epsilon)$ time in the real RAM model. Here $\mathrm{nnz}(A)$ is the number of non-zero entries in $A$. (2) We give an algorithm for any $\delta > 0$ which outputs a rank-$k$ tensor $B$ for which $\|A-B\|_F^2 \leq (1+\epsilon)\mathrm{OPT}$ and runs in $(\mathrm{nnz}(A) + n \cdot \mathrm{poly}(k/\epsilon) + \exp(k^2/\epsilon)) \cdot n^{\delta}$ time in the unit cost RAM model. For outputting a rank-$k$ tensor, or even a bicriteria solution with rank $Ck$ for a certain constant $C > 1$, we show a $2^{\Omega(k^{1-o(1)})}$ time lower bound under the Exponential Time Hypothesis. Our results give the first relative error low rank approximations for tensors for a large number of robust error measures for which nothing was known, as well as column, row, and tube subset selection. We also obtain new results for matrices, such as $\mathrm{nnz}(A)$-time CUR decompositions, improving previous $\mathrm{nnz}(A)\log n$-time algorithms, which may be of independent interest.
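
    As a minimal numerical sketch (not the paper's algorithm), the snippet below illustrates the $(1+\epsilon)\mathrm{OPT}$ criterion in the order-2 (matrix) case, where, unlike the general tensor case, the infimum is attained and is computable exactly via the truncated SVD. All names and parameters here are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal sketch, not the paper's algorithm: check the relative-error
    # guarantee ||A - B||_F^2 <= (1 + eps) * OPT in the matrix (order-2) case,
    # where OPT is attained by the truncated SVD (Eckart-Young theorem).

    def frob_sq(X):
        return np.linalg.norm(X, 'fro') ** 2

    def best_rank_k(A, k):
        # Truncated SVD gives the Frobenius-optimal rank-k approximation.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]

    rng = np.random.default_rng(0)
    n, k, eps = 50, 5, 0.1          # illustrative sizes
    A = rng.standard_normal((n, n))

    A_k = best_rank_k(A, k)         # candidate B; here it is exactly optimal
    OPT = frob_sq(A - A_k)          # OPT = inf over rank-k A' of ||A - A'||_F^2

    # The guarantee the paper seeks for tensors, verified trivially here:
    assert frob_sq(A - A_k) <= (1 + eps) * OPT
    print("error:", frob_sq(A - A_k), "OPT:", OPT)
    ```

    For order $q \geq 3$ no analogue of the truncated SVD attains OPT in general, which is exactly the structural obstacle the abstract's bicriteria and parameterized algorithms are designed to bypass.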