
    New Subset Selection Algorithms for Low Rank Approximation: Offline and Online

    Subset selection for the rank-$k$ approximation of an $n\times d$ matrix $A$ offers improvements in the interpretability of matrices, as well as a variety of computational savings. This problem is well understood when the error measure is the Frobenius norm, with various tight algorithms known even in challenging models such as the online model, where an algorithm must select the column subset irrevocably when the columns arrive one by one. In contrast, for other matrix losses, optimal trade-offs between the subset size and approximation quality have not been settled, even in the offline setting. We give a number of results towards closing these gaps. In the offline setting, we achieve nearly optimal bicriteria algorithms in two settings. First, we remove a $\sqrt{k}$ factor from a result of [SWZ19] when the loss function is any entrywise loss with an approximate triangle inequality and at least linear growth. Our result is tight for the $\ell_1$ loss. We give a similar improvement for entrywise $\ell_p$ losses for $p>2$, improving a previous distortion of $k^{1-1/p}$ to $k^{1/2-1/p}$. Our results come from a technique which replaces the use of a well-conditioned basis with a slightly larger spanning set for which any vector can be expressed as a linear combination with small Euclidean norm. We show that this technique also gives the first oblivious $\ell_p$ subspace embeddings for $1<p<2$ with $\tilde{O}(d^{1/p})$ distortion, which is nearly optimal and closes a long line of work. In the online setting, we give the first online subset selection algorithm for $\ell_p$ subspace approximation and entrywise $\ell_p$ low-rank approximation by implementing sensitivity sampling online, which is challenging due to the sequential nature of sensitivity sampling. Our main technique is an online algorithm for detecting when an approximately optimal subspace changes substantially.
    Comment: To appear in STOC 2023; abstract shortened
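    To make the sampling idea concrete, here is a minimal Python sketch of the Frobenius-norm analogue of the subset selection the abstract describes: columns are sampled with probability proportional to their rank-$k$ leverage scores, the $\ell_2$ counterpart of the sensitivities used in the paper. This is an illustration under simplified assumptions, not the paper's $\ell_p$ algorithm or its online variant; the function names and the oversampling factor are hypothetical.

```python
# A minimal sketch (NOT the paper's algorithm) of bicriteria column subset
# selection for rank-k approximation via leverage-score sampling, the
# Frobenius-norm analogue of sensitivity sampling. Names are illustrative.
import numpy as np

def leverage_score_column_sample(A, k, oversample=4, seed=0):
    """Sample a column subset of A proportionally to rank-k leverage scores."""
    rng = np.random.default_rng(seed)
    # Rank-k leverage scores: squared column norms of the top-k right
    # singular vectors measure each column's importance for a rank-k fit.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = (Vt[:k] ** 2).sum(axis=0)            # one score per column
    probs = scores / scores.sum()
    m = min(A.shape[1], oversample * k)           # bicriteria subset size > k
    cols = rng.choice(A.shape[1], size=m, replace=False, p=probs)
    return np.sort(cols)

def subset_error(A, cols):
    """Frobenius error of projecting A onto the span of the chosen columns."""
    S = A[:, cols]
    X, *_ = np.linalg.lstsq(S, A, rcond=None)     # best fit in span(S)
    return np.linalg.norm(A - S @ X)

A = np.random.default_rng(1).normal(size=(200, 50))
cols = leverage_score_column_sample(A, k=5)
print(f"{len(cols)} columns selected, residual {subset_error(A, cols):.3f}")
```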

    β„“p\ell_p-Regression in the Arbitrary Partition Model of Communication

    We consider the randomized communication complexity of the distributed $\ell_p$-regression problem in the coordinator model, for $p\in (0,2]$. In this problem, there is a coordinator and $s$ servers. The $i$-th server receives $A^i\in\{-M, -M+1, \ldots, M\}^{n\times d}$ and $b^i\in\{-M, -M+1, \ldots, M\}^n$, and the coordinator would like to find a $(1+\epsilon)$-approximate solution to $\min_{x\in\mathbb{R}^d} \|(\sum_i A^i)x - (\sum_i b^i)\|_p$. Here $M \leq \mathrm{poly}(nd)$ for convenience. This model, where the data is additively shared across servers, is commonly referred to as the arbitrary partition model. We obtain significantly improved bounds for this problem. For $p = 2$, i.e., least-squares regression, we give the first optimal bound of $\tilde{\Theta}(sd^2 + sd/\epsilon)$ bits. For $p \in (1,2)$, we obtain an $\tilde{O}(sd^2/\epsilon + sd/\mathrm{poly}(\epsilon))$ upper bound. Notably, for $d$ sufficiently large, our leading-order term depends only linearly on $1/\epsilon$ rather than quadratically. We also show communication lower bounds of $\Omega(sd^2 + sd/\epsilon^2)$ for $p\in (0,1]$ and $\Omega(sd^2 + sd/\epsilon)$ for $p\in (1,2]$. Our bounds considerably improve the previous bounds of (Woodruff et al., COLT 2013) and (Vempala et al., SODA 2020).
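    As an illustration of why the arbitrary partition model composes well with linear sketching, the following Python sketch simulates $s$ servers holding additive shares of $(A, b)$ for $p = 2$: each server applies the same linear sketch $S$ to its own share, and the coordinator simply sums the sketches, since $S(\sum_i A^i) = \sum_i S A^i$. This is a simplified sketch-and-solve simulation, not the paper's bit-optimal protocol; the sketch size, the Gaussian sketch, and the real-valued shares are assumptions made for the demo.

```python
# A minimal sketch (NOT the paper's protocol) of l2-regression in the
# arbitrary partition model. Because the data is shared additively, a
# linear sketch commutes with the sharing: each server sends only its
# small sketched share, and the coordinator sums them and solves.
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 5000, 10, 4

# Ground-truth data, split additively across s servers. (The model uses
# bounded integer shares; real-valued shares here are a simplification.)
A = rng.integers(-5, 6, size=(n, d)).astype(float)
b = rng.integers(-5, 6, size=n).astype(float)
A_shares = [rng.normal(size=(n, d)) for _ in range(s - 1)]
A_shares.append(A - sum(A_shares))
b_shares = [rng.normal(size=n) for _ in range(s - 1)]
b_shares.append(b - sum(b_shares))

m = 40 * d                                  # sketch rows; an illustrative,
S = rng.normal(size=(m, n)) / np.sqrt(m)    # oversampled Gaussian sketch

# Each server transmits only its m x d (and m x 1) sketched share;
# the coordinator sums them by linearity and solves the small problem.
SA = sum(S @ Ai for Ai in A_shares)
Sb = sum(S @ bi for bi in b_shares)
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)

x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
print("sketched cost:", np.linalg.norm(A @ x_sketch - b))
print("optimal cost: ", np.linalg.norm(A @ x_opt - b))
```

    The point of the simulation is the communication pattern, not the constants: each server sends $O(md)$ numbers instead of its full $n \times d$ share, and the coordinator never reconstructs $A$. The paper's contribution is pinning down the optimal bit complexity of such protocols.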