Subset selection for the rank $k$ approximation of an $n \times d$ matrix $A$
offers improvements in the interpretability of matrices, as well as a variety
of computational savings. This problem is well-understood when the error
measure is the Frobenius norm, with various tight algorithms known even in
challenging models such as the online model, where an algorithm must select the
column subset irrevocably when the columns arrive one by one. In contrast, for
other matrix losses, optimal trade-offs between the subset size and
approximation quality have not been settled, even in the offline setting. We
give a number of results towards closing these gaps.
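Concretely, one standard way to formalize the problem (notation ours, not taken from the paper): choose a subset $S$ of $s$ columns of $A$ and measure how well those columns reconstruct $A$,
\[
  \min_{\substack{S \subseteq [d] \\ |S| = s}} \; \min_{X \in \mathbb{R}^{s \times d}} \; \lVert A_S X - A \rVert,
\]
where $A_S$ is the $n \times s$ submatrix of the selected columns and $\lVert \cdot \rVert$ is the loss at hand (Frobenius, entrywise $\ell_p$, and so on); bicriteria algorithms allow $s$ to exceed $k$ in exchange for a better approximation factor.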
In the offline setting, we achieve nearly optimal bicriteria algorithms in
two settings. First, we remove a $\sqrt{k}$ factor from a result of [SWZ19] when
the loss function is any entrywise loss with an approximate triangle inequality
and at least linear growth. Our result is tight for the $\ell_1$ loss. We give
a similar improvement for entrywise $\ell_p$ losses for $p > 2$, improving a
previous distortion of $k^{1 - 1/p}$ to $k^{1/2 - 1/p}$. Our results come from a
technique which replaces the use of a well-conditioned basis with a slightly
larger spanning set for which any vector can be expressed as a linear
combination with small Euclidean norm. We show that this technique also gives
the first oblivious $\ell_p$ subspace embeddings for $1 < p < 2$ with $\tilde O(d^{1/p})$ distortion, which is nearly optimal and closes a long line of work.
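As a rough schematic of the spanning-set property (the set size $s$ and factor $\alpha$ below are placeholders, not the bounds proved in the paper): instead of a well-conditioned basis for a $k$-dimensional subspace, one uses a slightly larger set $S$ of $s \ge k$ vectors spanning the same subspace, such that
\[
  \forall\, y \in \operatorname{span}(S)\ \ \exists\, x \in \mathbb{R}^{s}: \quad y = Sx \quad \text{and} \quad \lVert x \rVert_2 \le \alpha \, \lVert y \rVert_2,
\]
i.e., every vector in the span admits a representation whose coefficient vector has small Euclidean norm; this is the property the distortion bounds above exploit.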
In the online setting, we give the first online subset selection algorithm
for $\ell_p$ subspace approximation and entrywise $\ell_p$ low rank
approximation by implementing sensitivity sampling online, which is challenging
due to the sequential nature of sensitivity sampling. Our main technique is an
online algorithm for detecting when an approximately optimal subspace changes
substantially.

To appear in STOC 2023.
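As a purely illustrative sketch of the online sampling pattern described above (the residual-based scoring rule here is a hypothetical proxy, not the paper's actual sensitivity estimates): each arriving column is scored against the columns kept so far and retained with probability proportional to that score, so columns that substantially change the current subspace are likely to be kept.

import numpy as np

def online_column_subset(columns, oversample=10.0, reg=1e-12, seed=0):
    """Illustrative online subset selection via sensitivity-style sampling.

    Each arriving column is scored by its squared residual against the
    span of the columns kept so far (a hypothetical stand-in for its
    online sensitivity) and kept with probability min(1, oversample * score).
    """
    rng = np.random.default_rng(seed)
    kept, B = [], None  # indices kept so far; matrix of kept columns
    for i, a in enumerate(columns):
        if B is None:
            score = 1.0  # the first column is always maximally "novel"
        else:
            # Residual of a against the span of the kept columns: a large
            # residual signals the near-optimal subspace may have changed.
            x, *_ = np.linalg.lstsq(B, a, rcond=None)
            resid2 = np.linalg.norm(a - B @ x) ** 2
            score = resid2 / (np.linalg.norm(a) ** 2 + reg)
        if rng.random() < min(1.0, oversample * score):
            kept.append(i)
            B = a[:, None] if B is None else np.hstack([B, a[:, None]])
    return kept

The irrevocability of the online model shows up in the single pass: a skipped column is never revisited, and the residual test is a crude stand-in for the paper's routine that detects when an approximately optimal subspace changes substantially.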