6 research outputs found
Load Balancing with Dynamic Set of Balls and Bins
In dynamic load balancing, we wish to distribute balls into bins in an
environment where both balls and bins can be added and removed. We want to
minimize the maximum load of any bin but we also want to minimize the number of
balls and bins affected when adding or removing a ball or a bin. We want a
hashing-style solution where, given the ID of a ball, we can find its bin
efficiently.
We are given a balancing parameter $c = 1+\epsilon$, where $\epsilon \in (0,1)$.
With $n$ and $m$ the current numbers of balls and bins, we want no bin with
load above $C = \lceil c n/m \rceil$, referred to as the capacity of the bins.
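As an illustrative example (values are not from the abstract, and the capacity formula is as reconstructed above): with $\epsilon = 0.5$, $n = 10\,000$ balls and $m = 1\,000$ bins, the capacity becomes
$$C = \lceil 1.5 \cdot 10\,000 / 1\,000 \rceil = 15,$$
so no bin may ever hold more than 15 balls even though the average load is only 10.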
We present a scheme where we can locate a ball checking $O(1/\epsilon)$ bins in
expectation. When inserting or deleting a ball, we expect to move $O(1/\epsilon)$
balls, and when inserting or deleting a bin, we expect to move $O(C/\epsilon)$
balls. Previous bounds were off by a factor $1/\epsilon$.
These bounds are best possible when $C = O(1)$, but for larger $C$, we can do
much better: Let $f = \epsilon C$ if $C \le \log(1/\epsilon)$,
$f = \epsilon\sqrt{C}\cdot\sqrt{\log\bigl(1/(\epsilon\sqrt{C})\bigr)}$ if
$\log(1/\epsilon) \le C \le \frac{1}{2\epsilon^2}$, and $f = 1$ if
$C \ge \frac{1}{2\epsilon^2}$. We show that we expect to move $O(1/f)$ balls when
inserting or deleting a ball, and $O(C/f)$ balls when inserting or deleting a
bin.
For the bounds with larger $C$, we first have to resolve a much simpler
probabilistic problem. Place $n$ balls in $m$ bins of capacity $C$, one ball at
a time. Each ball picks a uniformly random non-full bin. We show that in
expectation and with high probability, the fraction of non-full bins is
$\Theta(f)$. Then the expected number of bins that a new ball would have to
visit to find one that is not full is $\Theta(1/f)$. As it turns out, we obtain
the same complexity in our more complicated scheme where both balls and bins
can be added and removed.
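The simpler probabilistic process described above is easy to simulate. Below is a minimal Python sketch (illustrative only, with parameters chosen freely, not code from the paper): it throws $n \approx mC/(1+\epsilon)$ balls into $m$ bins of capacity $C$, each ball choosing a uniformly random non-full bin, and reports the fraction of bins left non-full, which can then be compared with the asymptotics stated above.

    import random

    def fraction_non_full(m, C, eps, seed=0):
        """Throw n = floor(m*C/(1+eps)) balls into m bins of capacity C,
        each ball landing in a uniformly random non-full bin; return the
        fraction of bins that are still non-full at the end."""
        rng = random.Random(seed)
        n = int(m * C / (1 + eps))
        loads = [0] * m
        non_full = list(range(m))          # indices of bins with load < C
        for _ in range(n):
            i = rng.randrange(len(non_full))
            b = non_full[i]
            loads[b] += 1
            if loads[b] == C:              # bin is now full: drop it from the pool
                non_full[i] = non_full[-1]
                non_full.pop()
        return len(non_full) / m

    # Example run: m = 100_000 bins, capacity C = 16, eps = 0.1.
    print(fraction_non_full(100_000, 16, 0.1))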
Oblivious Sketching of High-Degree Polynomial Kernels
Kernel methods are fundamental tools in machine learning that allow detection
of non-linear dependencies between data without explicitly constructing feature
vectors in high dimensional spaces. A major disadvantage of kernel methods is
their poor scalability: primitives such as kernel PCA or kernel ridge
regression generally take prohibitively large quadratic space and (at least)
quadratic time, as kernel matrices are usually dense. Some methods for speeding
up kernel linear algebra are known, but they all invariably take time
exponential in either the dimension of the input point set (e.g., fast
multipole methods suffer from the curse of dimensionality) or in the degree of
the kernel function.
Oblivious sketching has emerged as a powerful approach to speeding up
numerical linear algebra over the past decade, but our understanding of
oblivious sketching solutions for kernel matrices has remained quite limited,
suffering from the aforementioned exponential dependence on input parameters.
Our main contribution is a general method for applying sketching solutions
developed in numerical linear algebra over the past decade to a tensoring of
data points without forming the tensoring explicitly. This leads to the first
oblivious sketch for the polynomial kernel with a target dimension that is only
polynomially dependent on the degree of the kernel function, as well as the
first oblivious sketch for the Gaussian kernel on bounded datasets that does
not suffer from an exponential dependence on the dimensionality of input data
points.
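The key identity behind "sketching a tensoring" is that for the degree-$q$ polynomial kernel, $\langle x, y\rangle^q = \langle x^{\otimes q}, y^{\otimes q}\rangle$, so any inner-product-preserving sketch of the tensored vectors preserves kernel values without ever materializing the $d^q$-dimensional tensoring. As one concrete, pre-existing instance of this idea, here is a minimal Python sketch of a degree-2 TensorSketch in the style of Pham and Pagh (CountSketch per factor, combined by FFT); it only illustrates the general principle and is not the construction of this paper, and all parameter values are illustrative.

    import numpy as np

    def tensorsketch_deg2(x, m, seed=0):
        """Sketch x (x) x into m buckets without forming the d^2-dimensional
        tensor product: CountSketch x twice independently, then combine the
        two sketches by circular convolution (computed via FFT)."""
        d = len(x)
        rng = np.random.default_rng(seed)
        sketches = []
        for _ in range(2):                          # one CountSketch per tensor factor
            h = rng.integers(0, m, size=d)          # bucket for each coordinate
            s = rng.choice([-1.0, 1.0], size=d)     # random sign for each coordinate
            cs = np.zeros(m)
            np.add.at(cs, h, s * x)
            sketches.append(cs)
        # Circular convolution of the two CountSketches sketches x (x) x.
        return np.fft.irfft(np.fft.rfft(sketches[0]) * np.fft.rfft(sketches[1]), n=m)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(500)
    y = x + 0.1 * rng.standard_normal(500)
    sx = tensorsketch_deg2(x, m=4096, seed=2)
    sy = tensorsketch_deg2(y, m=4096, seed=2)   # same seed: same hash functions for x and y
    print(np.dot(x, y) ** 2, np.dot(sx, sy))    # the two numbers should be close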
No Repetition: Fast Streaming with Highly Concentrated Hashing
To get estimators that work within a certain error bound with high
probability, a common strategy is to design one that works with constant
probability, and then boost the probability using independent repetitions.
Important examples of this approach are small space algorithms for estimating
the number of distinct elements in a stream, or estimating the set similarity
between large sets. Using standard strongly universal hashing to process each
element, we get a sketch based estimator where the probability of a too large
error is, say, 1/4. By performing $r$ independent repetitions and taking the
median of the estimators, the error probability falls exponentially in $r$.
However, running $r$ independent experiments increases the processing time by a
factor $r$.
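For concreteness, here is a minimal Python sketch of the repetition strategy for distinct-element estimation (illustrative only, not from the paper): a k-minimum-values estimator whose guarantee holds only with constant probability, boosted by taking the median of $r$ independent repetitions, each with its own (salted) hash function. In a streaming setting this costs $r$ hash evaluations per element, which is the factor-$r$ overhead referred to above.

    import hashlib, statistics

    def h(item, salt):
        """Hash item to a pseudo-uniform value in [0, 1) under the given salt."""
        digest = hashlib.blake2b(repr(item).encode(),
                                 salt=salt.to_bytes(8, "little"),
                                 digest_size=8).digest()
        return int.from_bytes(digest, "little") / 2.0 ** 64

    def kmv_estimate(stream, k, salt):
        """k-minimum-values estimate of the number of distinct elements:
        keep the k smallest hash values; estimate (k - 1) / (k-th smallest)."""
        smallest = sorted({h(x, salt) for x in stream})[:k]
        if len(smallest) < k:               # fewer than k distinct items: exact count
            return len(smallest)
        return (k - 1) / smallest[-1]

    def median_of_r(stream, k, r):
        """Boost the constant success probability via r independent repetitions."""
        return statistics.median(kmv_estimate(stream, k, salt) for salt in range(r))

    stream = [i % 5_000 for i in range(50_000)]     # 5,000 distinct elements
    print(median_of_r(stream, k=64, r=9))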
Here we make the point that if we have a hash function with strong
concentration bounds, then we get the same high probability bounds without any
need for repetitions. Instead of $r$ independent sketches, we have a single
sketch that is $r$ times bigger, so the total space is the same. However, we
only apply a single hash function, so we save a factor $r$ in time, and the
overall algorithms just get simpler.
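Continuing the previous sketch (same h and stream), the alternative described above looks as follows: a single sketch that is $r$ times bigger, filled with a single hash function, so one hash evaluation per element and the same total space. The high-probability guarantee without repetitions additionally requires a hash family with the strong concentration bounds discussed in the text; the salted hash here is only a stand-in.

    def single_big_sketch(stream, k, r, salt=0):
        """One sketch of size r*k built with a single hash function: same total
        space as r sketches of size k, but only one hash evaluation per element."""
        K = r * k
        smallest = sorted({h(x, salt) for x in stream})[:K]
        if len(smallest) < K:               # fewer than K distinct items: exact count
            return len(smallest)
        return (K - 1) / smallest[-1]

    print(single_big_sketch(stream, k=64, r=9))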
Fast practical hash functions with strong concentration bounds were recently
proposed by Aamand et al. (to appear in STOC 2020). Using their hashing
schemes, the algorithms thus become very fast and practical, suitable for
online processing of high volume data streams.