51 research outputs found

    New Subset Selection Algorithms for Low Rank Approximation: Offline and Online

    Subset selection for the rank $k$ approximation of an $n \times d$ matrix $A$ offers improvements in the interpretability of matrices, as well as a variety of computational savings. This problem is well understood when the error measure is the Frobenius norm, with various tight algorithms known even in challenging models such as the online model, where an algorithm must select the column subset irrevocably as the columns arrive one by one. In contrast, for other matrix losses, optimal trade-offs between the subset size and approximation quality have not been settled, even in the offline setting. We give a number of results towards closing these gaps. In the offline setting, we achieve nearly optimal bicriteria algorithms in two settings. First, we remove a $\sqrt{k}$ factor from a result of [SWZ19] when the loss function is any entrywise loss with an approximate triangle inequality and at least linear growth. Our result is tight for the $\ell_1$ loss. We give a similar improvement for entrywise $\ell_p$ losses for $p > 2$, improving a previous distortion of $k^{1-1/p}$ to $k^{1/2-1/p}$. Our results come from a technique which replaces the use of a well-conditioned basis with a slightly larger spanning set for which any vector can be expressed as a linear combination with small Euclidean norm. We show that this technique also gives the first oblivious $\ell_p$ subspace embeddings for $1 < p < 2$ with $\tilde O(d^{1/p})$ distortion, which is nearly optimal and closes a long line of work. In the online setting, we give the first online subset selection algorithm for $\ell_p$ subspace approximation and entrywise $\ell_p$ low rank approximation by implementing sensitivity sampling online, which is challenging due to the sequential nature of sensitivity sampling. Our main technique is an online algorithm for detecting when an approximately optimal subspace changes substantially.
    Comment: To appear in STOC 2023; abstract shortened.
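    As a point of reference for the column subset selection setting described above (a standard Frobenius-norm baseline, not the algorithms from this abstract), the sketch below samples columns proportionally to their rank-$k$ leverage scores and measures the residual after projecting $A$ onto the span of the chosen columns. Function names, the test matrix, and the sample size are illustrative assumptions.

```python
import numpy as np

def rank_k_column_leverage_scores(A, k):
    """Leverage scores of the columns of A relative to its top-k right singular subspace."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k, :]                  # k x d
    return np.sum(Vk ** 2, axis=0)  # length-d scores, summing to k

def sample_column_subset(A, k, num_columns, rng=np.random.default_rng(0)):
    """Pick num_columns columns with probability proportional to their rank-k leverage scores."""
    scores = rank_k_column_leverage_scores(A, k)
    probs = scores / scores.sum()
    return np.sort(rng.choice(A.shape[1], size=num_columns, replace=False, p=probs))

def subset_frobenius_error(A, idx):
    """Frobenius error of projecting A onto the span of the selected columns."""
    Q, _ = np.linalg.qr(A[:, idx])
    return np.linalg.norm(A - Q @ (Q.T @ A), ord="fro")

A = np.random.default_rng(1).standard_normal((200, 50))
idx = sample_column_subset(A, k=5, num_columns=15)
print(subset_frobenius_error(A, idx))
```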

    High-Dimensional Geometric Streaming in Polynomial Space

    Many existing algorithms for streaming geometric data analysis have been plagued by exponential dependencies in the space complexity, which are undesirable for processing high-dimensional data sets. In particular, once $d \geq \log n$, there are no known non-trivial streaming algorithms for problems such as maintaining convex hulls and Löwner-John ellipsoids of $n$ points, despite a long line of work in streaming computational geometry since [AHV04]. We simultaneously improve these results to $\mathrm{poly}(d, \log n)$ bits of space by trading off with a $\mathrm{poly}(d, \log n)$ factor distortion. We achieve these results in a unified manner, by designing the first streaming algorithm for maintaining a coreset for $\ell_\infty$ subspace embeddings with $\mathrm{poly}(d, \log n)$ space and $\mathrm{poly}(d, \log n)$ distortion. Our algorithm also gives similar guarantees in the \emph{online coreset} model. Along the way, we sharpen results for online numerical linear algebra by replacing a log condition number dependence with a $\log n$ dependence, answering a question of [BDM+20]. Our techniques provide a novel connection between leverage scores, a fundamental object in numerical linear algebra, and computational geometry. For $\ell_p$ subspace embeddings, we give nearly optimal trade-offs between space and distortion for one-pass streaming algorithms. For instance, we give a deterministic coreset using $O(d^2 \log n)$ space and $O((d \log n)^{1/2-1/p})$ distortion for $p > 2$, whereas previous deterministic algorithms incurred a $\mathrm{poly}(n)$ factor in the space or the distortion [CDW18]. Our techniques have implications in the offline setting, where we give optimal trade-offs between the space complexity and distortion of subspace sketch data structures. To do this, we give an elementary proof of a "change of density" theorem of [LT80] and make it algorithmic.
    Comment: Abstract shortened to meet arXiv limits; v2 fixes statements concerning online condition number.
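    The online coreset guarantees above build on online numerical linear algebra primitives such as online leverage scores. The following is a minimal, illustrative sketch of generic online leverage-score row sampling in the spirit of the prior work the abstract cites, not this paper's $\ell_\infty$ algorithm; the oversampling rule and parameter names are simplifying assumptions.

```python
import numpy as np

def online_leverage_row_sampling(rows, oversample=8.0, rng=np.random.default_rng(0)):
    """Keep each arriving row with probability proportional to its online leverage score.

    The online leverage score of row a_i is a_i^T (A_i^T A_i)^+ a_i, where A_i stacks
    rows 1..i; it always lies in [0, 1]. The fixed oversampling factor stands in for
    the O(log n / eps^2) factor a real analysis would use.
    """
    d = rows.shape[1]
    gram = np.zeros((d, d))
    kept = []
    for a in rows:
        gram += np.outer(a, a)
        tau = float(a @ np.linalg.pinv(gram) @ a)  # online leverage score in [0, 1]
        p = min(1.0, oversample * tau)
        if rng.random() < p:
            kept.append(a / np.sqrt(p))            # reweight so the Gram matrix stays unbiased
    return np.array(kept)

A = np.random.default_rng(1).standard_normal((2000, 10))
coreset = online_leverage_row_sampling(A)
print(coreset.shape)                               # far fewer than 2000 rows
```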

    Sharper Bounds for $\ell_p$ Sensitivity Sampling

    In large-scale machine learning, random sampling is a popular way to approximate datasets by a small representative subset of examples. In particular, sensitivity sampling is an intensely studied technique which provides provable guarantees on the quality of approximation, while reducing the number of examples to the product of the VC dimension $d$ and the total sensitivity $\mathfrak{S}$ in remarkably general settings. However, guarantees going beyond this general bound of $\mathfrak{S} d$ are known in perhaps only one setting, for $\ell_2$ subspace embeddings, despite intense study of sensitivity sampling in prior work. In this work, we show the first bounds for sensitivity sampling for $\ell_p$ subspace embeddings for $p \neq 2$ that improve over the general $\mathfrak{S} d$ bound, achieving a bound of roughly $\mathfrak{S}^{2/p}$ for $1 \leq p < 2$ and $\mathfrak{S}^{2-2/p}$ for $2 < p < \infty$. For $1 \leq p < 2$, we show that this bound is tight, in the sense that there exist matrices for which $\mathfrak{S}^{2/p}$ samples are necessary. Furthermore, our techniques yield further new results in the study of sampling algorithms, showing that the root leverage score sampling algorithm achieves a bound of roughly $d$ for $1 \leq p < 2$, and that a combination of leverage score and sensitivity sampling achieves an improved bound of roughly $d^{2/p} \mathfrak{S}^{2-4/p}$ for $2 < p < \infty$. Our sensitivity sampling results yield the best known sample complexity for a wide class of structured matrices that have small $\ell_p$ sensitivity.
    Comment: To appear in ICML 2023.
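    To make the sampling scheme in this abstract concrete, here is a small NumPy sketch of sensitivity sampling for $\ell_p$ row subset selection. It uses the easy leverage-score upper bound $s_i \leq \tau_i^{p/2}$ on the $\ell_p$ sensitivities (valid for $1 \leq p \leq 2$) rather than this paper's sharper analysis; the sample size and helper names are illustrative assumptions.

```python
import numpy as np

def lp_sensitivity_upper_bounds(A, p):
    """Cheap upper bounds on the l_p sensitivities of the rows of A, valid for 1 <= p <= 2.

    If A = QR with Q having orthonormal columns, then for p <= 2,
        sup_x |a_i^T x|^p / ||A x||_p^p  <=  ||q_i||_2^p,
    i.e. the l_2 leverage score tau_i raised to the power p/2.
    """
    Q, _ = np.linalg.qr(A)
    tau = np.sum(Q ** 2, axis=1)     # l_2 leverage scores
    return tau ** (p / 2)

def sensitivity_sample(A, p, num_samples, rng=np.random.default_rng(0)):
    """Sample and reweight rows so ||SAx||_p^p is an unbiased estimate of ||Ax||_p^p."""
    s = lp_sensitivity_upper_bounds(A, p)
    probs = s / s.sum()
    idx = rng.choice(A.shape[0], size=num_samples, replace=True, p=probs)
    weights = (1.0 / (num_samples * probs[idx])) ** (1.0 / p)
    return A[idx] * weights[:, None]

A = np.random.default_rng(1).standard_normal((1000, 20))
SA = sensitivity_sample(A, p=1.5, num_samples=200)
print(SA.shape)
```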

    On large-scale probabilistic and statistical data analysis

    In this manuscript we develop and apply modern algorithmic data reduction techniques to tackle scalability issues and enable statistical data analysis of massive data sets. Our algorithms follow a general scheme, where a reduction technique is applied to the large-scale data to obtain a small summary of sublinear size, to which a classical algorithm is then applied. The techniques for obtaining these summaries depend on the problem that we want to solve. The size of the summaries is usually parametrized by an approximation parameter, which expresses the trade-off between efficiency and accuracy. In some cases the data can be reduced to a size that has no or only negligible dependency on the initial number of data items. However, for other problems it turns out that sublinear summaries do not exist in the worst case. In such situations, we exploit statistical or geometric relaxations to obtain useful sublinear summaries under certain mildness assumptions. We present, in particular, the data reduction methods called coresets and subspace embeddings, and several algorithmic techniques to construct these via random projections and sampling.
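    As a minimal illustration of the random-projection route to subspace embeddings mentioned above (a generic textbook construction, not a method specific to this manuscript), the sketch below compresses an $n \times d$ matrix with a dense Gaussian map; the sketch size $m$ used here is an illustrative assumption.

```python
import numpy as np

def gaussian_subspace_embedding(A, m, rng=np.random.default_rng(0)):
    """Oblivious l_2 subspace embedding via a dense Gaussian sketch.

    With m on the order of d / eps^2 rows, ||S A x||_2 = (1 +/- eps) ||A x||_2
    holds simultaneously for all x with high probability.
    """
    n, _ = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # entries N(0, 1/m)
    return S @ A

A = np.random.default_rng(1).standard_normal((5000, 30))
SA = gaussian_subspace_embedding(A, m=600)         # summary with 600 rows instead of 5000
x = np.random.default_rng(2).standard_normal(30)
print(np.linalg.norm(A @ x), np.linalg.norm(SA @ x))   # the two norms should be close
```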