Sharper Bounds for $\ell_p$ Sensitivity Sampling

Abstract

In large-scale machine learning, random sampling is a popular way to approximate datasets by a small representative subset of examples. In particular, sensitivity sampling is an intensely studied technique which provides provable guarantees on the quality of approximation, while reducing the number of examples to the product of the VC dimension $d$ and the total sensitivity $\mathfrak S$ in remarkably general settings. However, guarantees going beyond this general bound of $\mathfrak S d$ are known in perhaps only one setting, for $\ell_2$ subspace embeddings, despite intense study of sensitivity sampling in prior work. In this work, we show the first bounds for sensitivity sampling for $\ell_p$ subspace embeddings for $p \neq 2$ that improve over the general $\mathfrak S d$ bound, achieving a bound of roughly $\mathfrak S^{2/p}$ for $1 \leq p < 2$ and $\mathfrak S^{2-2/p}$ for $2 < p < \infty$. For $1 \leq p < 2$, we show that this bound is tight, in the sense that there exist matrices for which $\mathfrak S^{2/p}$ samples are necessary. Furthermore, our techniques yield further new results in the study of sampling algorithms, showing that the root leverage score sampling algorithm achieves a bound of roughly $d$ for $1 \leq p < 2$, and that a combination of leverage score and sensitivity sampling achieves an improved bound of roughly $d^{2/p}\mathfrak S^{2-4/p}$ for $2 < p < \infty$. Our sensitivity sampling results yield the best known sample complexity for a wide class of structured matrices that have small $\ell_p$ sensitivity.

Comment: To appear in ICML 2023
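For intuition, the following is a minimal illustrative sketch, in Python with NumPy, of the general sensitivity sampling framework the abstract refers to; it is not the paper's algorithm, and the function names are hypothetical. Because the exact $\ell_p$ sensitivities $s_i = \max_x |a_i^\top x|^p / \|Ax\|_p^p$ are expensive to compute, the sketch substitutes standard leverage-score upper bounds: root leverage scores $\tau_i^{p/2}$ for $p \leq 2$, and the cruder bound $n^{p/2-1}\tau_i^{p/2}$ for $p > 2$.

```python
import numpy as np

def l2_leverage_scores(A):
    # tau_i = ||e_i^T Q||_2^2, where A = QR is a thin QR factorization.
    Q, _ = np.linalg.qr(A)
    return np.sum(Q * Q, axis=1)

def lp_sensitivity_sample(A, p, m, seed=None):
    """Row-sampling sketch SA with E[||SAx||_p^p] = ||Ax||_p^p.

    Samples rows with probability proportional to leverage-score
    upper bounds on the l_p sensitivities; m controls the expected
    number of sampled rows.
    """
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    tau = l2_leverage_scores(A)
    if p <= 2:
        # Root leverage scores: s_i <= tau_i^{p/2}, since
        # |a_i^T x| <= sqrt(tau_i)||Ax||_2 and ||Ax||_2 <= ||Ax||_p.
        scores = tau ** (p / 2)
    else:
        # Crude bound via ||Ax||_2 <= n^{1/2 - 1/p} ||Ax||_p;
        # Lewis weights would give sharper scores here.
        scores = n ** (p / 2 - 1) * tau ** (p / 2)
    probs = np.minimum(1.0, m * scores / scores.sum())
    keep = rng.random(n) < probs
    # Rescaling kept rows by probs^{-1/p} makes the sampled
    # p-th powers an unbiased estimator of ||Ax||_p^p.
    return A[keep] / probs[keep, None] ** (1.0 / p)
```

For example, `B = lp_sensitivity_sample(A, p=1.5, m=500)` returns a reweighted subsample of roughly 500 rows of `A` whose $\ell_{1.5}$ norms $\|Bx\|_{1.5}^{1.5}$ match $\|Ax\|_{1.5}^{1.5}$ in expectation; the paper's contribution is bounding how large $m$ must be for this to hold uniformly over all $x$.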
