High-Dimensional Geometric Streaming in Polynomial Space
Many existing algorithms for streaming geometric data analysis have been
plagued by exponential dependencies in the space complexity, which are
undesirable for processing high-dimensional data sets. In particular, once
$d \geq \log n$, there are no known non-trivial streaming algorithms for problems
such as maintaining convex hulls and L\"owner-John ellipsoids of $n$ points,
despite a long line of work in streaming computational geometry since [AHV04].
We simultaneously improve these results to $\mathrm{poly}(d, \log n)$ bits of
space by trading off with a $\mathrm{poly}(d, \log n)$ factor distortion. We
achieve these results in a unified manner, by designing the first streaming
algorithm for maintaining a coreset for $\ell_\infty$ subspace embeddings with
$\mathrm{poly}(d, \log n)$ space and distortion. Our
algorithm also gives similar guarantees in the \emph{online coreset} model.
Along the way, we sharpen results for online numerical linear algebra by
replacing a log condition number dependence with a $\log n$ dependence,
answering a question of [BDM+20]. Our techniques provide a novel connection
between leverage scores, a fundamental object in numerical linear algebra, and
computational geometry.
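To make this connection concrete, here is a minimal NumPy sketch of leverage scores together with a toy thresholding rule that keeps a streamed row only when the rows kept so far represent it poorly; the function names and the threshold are ours for illustration, and the paper's actual construction and guarantees are more refined.

    import numpy as np

    def leverage_scores(A):
        # Leverage score of row i: tau_i = a_i^T (A^T A)^+ a_i, computed
        # stably as the squared row norms of U from a thin SVD of A.
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        return (U ** 2).sum(axis=1)

    def toy_streaming_coreset(stream, threshold=0.5):
        # Toy rule: keep an arriving row iff its leverage score relative to
        # the kept rows (plus itself) is large, i.e. the current coreset
        # represents the new row poorly.
        kept = []
        for a in stream:
            tau_new = leverage_scores(np.vstack(kept + [a]))[-1]
            if tau_new >= threshold:
                kept.append(a)
        return np.vstack(kept)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 8))
    print(toy_streaming_coreset(A).shape)   # typically far fewer than 500 rows
    print(leverage_scores(A).sum())         # sums to rank(A) = 8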
For $\ell_p$ subspace embeddings, we give nearly optimal trade-offs between
space and distortion for one-pass streaming algorithms. For instance, we give a
deterministic coreset using $\tilde{O}(d^2)$ space and $\tilde{O}(\sqrt{d})$
distortion for $p = \infty$, whereas previous deterministic algorithms incurred a
$\mathrm{poly}(n)$ factor in the space or the distortion [CDW18].
Our techniques have implications in the offline setting, where we give
optimal trade-offs between the space complexity and distortion of subspace
sketch data structures. To do this, we give an elementary proof of a "change of
density" theorem of [LT80] and make it algorithmic.
Comment: Abstract shortened to meet arXiv limits; v2 fixes statements concerning the online condition number.
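In the notation above (ours, not quoted from the paper), a weighted subset $S$ of the rows of $A \in \mathbb{R}^{n \times d}$, with weights $w_i > 0$, is a coreset for $\ell_p$ subspace embeddings with distortion $\kappa \geq 1$ if
\[
  \|Ax\|_p \;\leq\; \Big( \sum_{i \in S} w_i \, |\langle a_i, x \rangle|^p \Big)^{1/p} \;\leq\; \kappa \, \|Ax\|_p \qquad \text{for all } x \in \mathbb{R}^d,
\]
with a maximum over $i \in S$ in place of the sum when $p = \infty$; a streaming algorithm stores only the pairs $(a_i, w_i)$ for $i \in S$, trading $|S|$ against $\kappa$.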
New Subset Selection Algorithms for Low Rank Approximation: Offline and Online
Subset selection for the rank-$k$ approximation of an $n \times d$ matrix
offers improvements in the interpretability of matrices, as well as a variety
of computational savings. This problem is well-understood when the error
measure is the Frobenius norm, with various tight algorithms known even in
challenging models such as the online model, where an algorithm must select the
column subset irrevocably when the columns arrive one by one. In contrast, for
other matrix losses, optimal trade-offs between the subset size and
approximation quality have not been settled, even in the offline setting. We
give a number of results towards closing these gaps.
In the offline setting, we achieve nearly optimal bicriteria algorithms in
two settings. First, we remove a $\sqrt{k}$ factor from a result of [SWZ19] when
the loss function is any entrywise loss with an approximate triangle inequality
and at least linear growth. Our result is tight for the $\ell_1$ loss. We give
a similar improvement for entrywise $\ell_p$ losses for $p > 2$, improving a
previous distortion of $\tilde{O}(k^{1 - 1/p})$ to $\tilde{O}(k^{1/2 - 1/p})$. Our results come from a
technique which replaces the use of a well-conditioned basis with a slightly
larger spanning set for which any vector can be expressed as a linear
combination with small Euclidean norm. We show that this technique also gives
the first oblivious $\ell_p$ subspace embeddings for $1 < p < 2$ with $\tilde{O}(d^{1/p})$ distortion, which is nearly optimal and closes a long line of work.
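The minimum-norm expressions this spanning-set technique relies on are easy to compute: the pseudoinverse gives the minimum-Euclidean-norm coefficients. A minimal NumPy sketch (the spanning set here is random, purely for illustration; the paper constructs a specific one):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 5, 12
    S = rng.standard_normal((d, m))   # columns of S span R^d since m > d
    v = rng.standard_normal(d)

    # Minimum-Euclidean-norm coefficients c solving S @ c = v, via the
    # pseudoinverse; a good spanning set keeps ||c||_2 small for every v.
    c = np.linalg.pinv(S) @ v
    assert np.allclose(S @ c, v)
    print(np.linalg.norm(c))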
In the online setting, we give the first online subset selection algorithm
for $\ell_p$ subspace approximation and entrywise $\ell_p$ low rank
approximation by implementing sensitivity sampling online, which is challenging
due to the sequential nature of sensitivity sampling. Our main technique is an
online algorithm for detecting when an approximately optimal subspace changes
substantially.
Comment: To appear in STOC 2023; abstract shortened.
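A toy proxy for the change-detection idea, tracking growth of the optimal rank-$k$ cost in the Frobenius norm for simplicity (the paper's detector handles the harder $\ell_p$ costs and monitors the subspace itself):

    import numpy as np

    def rank_k_cost(A, k):
        # Cost of the best rank-k approximation in Frobenius norm:
        # the l2 norm of the tail singular values.
        s = np.linalg.svd(A, compute_uv=False)
        return float(np.sqrt((s[k:] ** 2).sum()))

    def count_epochs(rows, k, factor=2.0):
        # Declare a "substantial change" whenever the optimal rank-k cost
        # has grown by a constant factor since the last checkpoint; the
        # number of such epochs stays small over the whole stream.
        seen, checkpoint, epochs = [], 0.0, 0
        for a in rows:
            seen.append(a)
            cost = rank_k_cost(np.vstack(seen), k)
            if cost > max(factor * checkpoint, 1e-9):
                checkpoint, epochs = cost, epochs + 1
        return epochs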
$\ell_p$-Regression in the Arbitrary Partition Model of Communication
We consider the randomized communication complexity of the distributed
$\ell_p$-regression problem in the coordinator model, for $p \in (0, 2]$. In this
problem, there is a coordinator and $s$ servers. The $i$-th server receives
$A^i \in \{-M, \ldots, M\}^{n \times d}$ and $b^i \in \{-M, \ldots, M\}^n$ and the coordinator would like to find a
$(1 + \epsilon)$-approximate solution to $\min_x \|(\sum_i A^i)\, x - \sum_i b^i\|_p$. Here
$M \leq \mathrm{poly}(nd)$ for convenience. This model, where the data is
additively shared across servers, is commonly referred to as the arbitrary
partition model.
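Because linear sketches commute with additive sharing, each server can sketch its own share and the coordinator can sum the messages as if it had sketched $\sum_i A^i$ directly. A minimal NumPy simulation for $p = 2$ (illustrative only; the communication here is $O(md)$ words per server, not the paper's optimal bit bound):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, s, m = 1000, 10, 4, 200

    # Additively share A and b across s servers (arbitrary partition model).
    A = rng.integers(-10, 11, (n, d)).astype(float)
    b = rng.integers(-10, 11, n).astype(float)
    A_shares = [rng.standard_normal((n, d)) for _ in range(s - 1)]
    A_shares.append(A - sum(A_shares))
    b_shares = [rng.standard_normal(n) for _ in range(s - 1)]
    b_shares.append(b - sum(b_shares))

    # A shared random sketch matrix: server i sends S @ A^i and S @ b^i,
    # and summing the messages yields S @ A and S @ b by linearity.
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    SA = sum(S @ Ai for Ai in A_shares)
    Sb = sum(S @ bi for bi in b_shares)
    x_hat = np.linalg.lstsq(SA, Sb, rcond=None)[0]  # approximate argmin ||Ax - b||_2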
We obtain significantly improved bounds for this problem. For $p = 2$, i.e.,
least squares regression, we give the first optimal bound of $\tilde{\Theta}(sd^2 + sd/\epsilon)$ bits.
For $p \in (1, 2)$, we obtain an improved upper bound. Notably, for $d$ sufficiently large,
our leading order term only depends linearly on $1/\epsilon$ rather than
quadratically. We also show communication lower bounds for $p \in (0, 1]$ and for $p \in (1, 2)$.
Our bounds considerably improve previous bounds due to (Woodruff et al., COLT 2013)
and (Vempala et al., SODA 2020).
Sharper Bounds for $\ell_p$ Sensitivity Sampling
In large scale machine learning, random sampling is a popular way to
approximate datasets by a small representative subset of examples. In
particular, sensitivity sampling is an intensely studied technique which
provides provable guarantees on the quality of approximation, while reducing
the number of examples to the product of the VC dimension $d$ and the total
sensitivity $\mathfrak{S}$ in remarkably general settings. However, guarantees
going beyond this general bound of $\tilde{O}(\mathfrak{S} d)$ are known in perhaps only
one setting, for $\ell_2$ subspace embeddings, despite intense study of
sensitivity sampling in prior work. In this work, we show the first bounds for
sensitivity sampling for $\ell_p$ subspace embeddings for $p \neq 2$ that
improve over the general $\tilde{O}(\mathfrak{S} d)$ bound, achieving a bound of roughly
$\tilde{O}(\mathfrak{S}^{2/p})$ for $1 \leq p < 2$ and $\tilde{O}(\mathfrak{S}^{2 - 2/p})$ for $2 < p < \infty$.
For $1 \leq p < 2$, we show that this bound is tight, in the sense that there
exist matrices for which $\Omega(\mathfrak{S}^{2/p})$ samples are necessary. Furthermore,
our techniques yield further new results in the study of sampling algorithms,
showing that the root leverage score sampling algorithm achieves a bound of
roughly $\tilde{O}(d)$ for $1 \leq p < 2$, and that a combination of leverage score and
sensitivity sampling achieves an improved bound of roughly $\tilde{O}(d^{2/p} \mathfrak{S}^{2 - 4/p})$
for $2 < p < \infty$. Our sensitivity sampling results yield the best
known sample complexity for a wide class of structured matrices that have small
$\ell_p$ sensitivity.
Comment: To appear in ICML 2023.
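For context, a minimal NumPy sketch of generic sensitivity sampling, using the fact that for $p = 2$ the $\ell_2$ sensitivities are exactly the leverage scores; the rescaling by $\sqrt{w_i}$ is specific to $p = 2$, and nothing here reflects the sharper analyses of the paper:

    import numpy as np

    def sensitivity_sample(A, sens, m, rng):
        # Draw m rows i.i.d. with probability proportional to (overestimates
        # of) the sensitivities, rescaling rows so that ||SA x||_2^2 is an
        # unbiased estimator of ||Ax||_2^2.
        p = sens / sens.sum()
        idx = rng.choice(A.shape[0], size=m, p=p)
        w = 1.0 / (m * p[idx])
        return A[idx] * np.sqrt(w)[:, None]

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5000, 10))
    tau = (np.linalg.svd(A, full_matrices=False)[0] ** 2).sum(axis=1)
    SA = sensitivity_sample(A, tau, m=400, rng=rng)
    x = rng.standard_normal(10)
    print(np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))   # close to 1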