New Subset Selection Algorithms for Low Rank Approximation: Offline and Online
Subset selection for low-rank approximation of a matrix improves the
interpretability of the resulting factorization and offers a variety
of computational savings. This problem is well-understood when the error
measure is the Frobenius norm, with various tight algorithms known even in
challenging models such as the online model, where an algorithm must select the
column subset irrevocably when the columns arrive one by one. In contrast, for
other matrix losses, optimal trade-offs between the subset size and
approximation quality have not been settled, even in the offline setting. We
give a number of results towards closing these gaps.
In the offline setting, we achieve nearly optimal bicriteria algorithms in
two settings. First, we remove a factor from a result of [SWZ19] when the
loss function is any entrywise loss that satisfies an approximate triangle
inequality and grows at least linearly; our result is tight for this case.
We give a similar improvement for a further class of entrywise losses,
improving upon the previously best known distortion. Our results come from a
technique which replaces the use of a well-conditioned basis with a slightly
larger spanning set for which any vector can be expressed as a linear
combination with small Euclidean norm. We show that this technique also gives
the first oblivious subspace embeddings for these losses with distortion that is nearly optimal, closing a long line of work.
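As an illustrative aside (my own sketch, not the paper's construction), the key property of such a spanning set can be seen numerically: given a spanning set stored as the columns of a matrix S, the Moore-Penrose pseudoinverse yields, for any vector in the span, the coefficient vector of minimum Euclidean norm. All names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spanning set: 8 vectors spanning a 5-dimensional space,
# stored as columns of S (so S has a nontrivial null space).
S = rng.standard_normal((5, 8))

# A target vector lying in the column span of S.
x = S @ rng.standard_normal(8)

# The pseudoinverse gives the minimum-Euclidean-norm coefficient
# vector c with S @ c = x -- exactly the small-norm representation
# that the spanning-set technique relies on.
c = np.linalg.pinv(S) @ x
assert np.allclose(S @ c, x)

# Among all solutions, c has the smallest Euclidean norm: shifting it
# by any null-space direction of S can only increase the norm.
null_dir = rng.standard_normal(8)
null_dir -= np.linalg.pinv(S) @ (S @ null_dir)  # project onto null(S)
assert np.linalg.norm(c) <= np.linalg.norm(c + null_dir) + 1e-9
```

The min-norm property of `pinv` is standard; the paper's contribution is constructing a slightly larger spanning set for which these coefficient norms are uniformly small.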
In the online setting, we give the first online subset selection algorithm
for subspace approximation and entrywise low rank
approximation by implementing sensitivity sampling online, which is challenging
due to the sequential nature of sensitivity sampling. Our main technique is an
online algorithm for detecting when an approximately optimal subspace changes
substantially.
Comment: To appear in STOC 2023; abstract shortened.
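To convey the online flavor of the problem (a toy sketch of my own, not the paper's sensitivity-sampling algorithm): columns arrive one by one, and a column must be kept irrevocably when it lies far from the span of the columns kept so far, a crude proxy for detecting that the approximately optimal subspace has changed substantially. All thresholds and dimensions are illustrative assumptions.

```python
import numpy as np

def online_column_subset(columns, tol=1e-1):
    """Keep a column irrevocably when its residual against the span of
    previously kept columns is large relative to its own norm."""
    kept = []
    Q = None  # orthonormal basis for the span of kept columns
    for a in columns:
        resid = a if Q is None else a - Q @ (Q.T @ a)
        if np.linalg.norm(resid) > tol * np.linalg.norm(a):
            kept.append(a)
            q = resid / np.linalg.norm(resid)
            Q = q[:, None] if Q is None else np.hstack([Q, q[:, None]])
    return kept, Q

rng = np.random.default_rng(1)
# Columns drawn exactly from a 3-dimensional subspace of R^20.
B = rng.standard_normal((20, 3))
cols = [B @ rng.standard_normal(3) for _ in range(50)]
kept, Q = online_column_subset(cols)
# Once the kept columns span the 3-dimensional subspace, all later
# residuals are (numerically) zero, so at most 3 columns are kept.
assert len(kept) <= 3
```

The hard part the paper addresses, which this sketch ignores, is doing this with provable sampling probabilities (sensitivities) when the data only approximately lies near a subspace.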
High-Dimensional Geometric Streaming in Polynomial Space
Many existing algorithms for streaming geometric data analysis have been
plagued by exponential dependencies in the space complexity, which are
undesirable for processing high-dimensional data sets. In particular, once
the dimension grows, there are no known non-trivial streaming algorithms for problems
such as maintaining convex hulls and Löwner–John ellipsoids of points,
despite a long line of work in streaming computational geometry since [AHV04].
We simultaneously improve these results to a polynomial number of bits of
space by trading off a factor in the distortion. We
achieve these results in a unified manner, by designing the first streaming
algorithm for maintaining a coreset for subspace embeddings in polynomial
space with bounded distortion. Our
algorithm also gives similar guarantees in the \emph{online coreset} model.
Along the way, we sharpen results for online numerical linear algebra by
replacing a log condition number dependence with a milder dependence,
answering a question of [BDM+20]. Our techniques provide a novel connection
between leverage scores, a fundamental object in numerical linear algebra, and
computational geometry.
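For readers unfamiliar with the object at the center of this connection: the leverage score of a row of a matrix A is the squared norm of the corresponding row of an orthonormal basis for A's column space. This is a standard definition (not specific to the paper), sketched below.

```python
import numpy as np

def leverage_scores(A):
    """Leverage scores of the rows of A: squared row norms of an
    orthonormal basis Q for the column space (from A = Q R)."""
    Q, _ = np.linalg.qr(A)  # reduced QR: Q has orthonormal columns
    return np.sum(Q**2, axis=1)

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 4))
tau = leverage_scores(A)

# Standard sanity checks: each score lies in [0, 1], and the scores
# sum to rank(A), which is 4 here (almost surely).
assert np.all(tau >= -1e-12) and np.all(tau <= 1 + 1e-12)
assert np.isclose(tau.sum(), 4.0)
```

Rows with large leverage scores are the ones no small sample can afford to miss, which is why these scores drive row-sampling arguments throughout numerical linear algebra.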
For subspace embeddings, we give nearly optimal trade-offs between
space and distortion for one-pass streaming algorithms. For instance, we give a
deterministic coreset whose space and distortion simultaneously avoid the
extra factor that previous deterministic algorithms incurred in the space or
the distortion [CDW18].
Our techniques have implications in the offline setting, where we give
optimal trade-offs between the space complexity and distortion of subspace
sketch data structures. To do this, we give an elementary proof of a "change of
density" theorem of [LT80] and make it algorithmic.Comment: Abstract shortened to meet arXiv limits; v2 fix statements concerning
online condition numbe