6 research outputs found

    Load Balancing with Dynamic Set of Balls and Bins

    In dynamic load balancing, we wish to distribute balls into bins in an environment where both balls and bins can be added and removed. We want to minimize the maximum load of any bin, but we also want to minimize the number of balls and bins affected when adding or removing a ball or a bin. We want a hashing-style solution where, given the ID of a ball, we can find its bin efficiently. We are given a balancing parameter $c=1+\epsilon$, where $\epsilon\in(0,1)$. With $n$ and $m$ the current numbers of balls and bins, we want no bin with load above $C=\lceil cn/m\rceil$, referred to as the capacity of the bins. We present a scheme where we can locate a ball by checking $1+O(\log 1/\epsilon)$ bins in expectation. When inserting or deleting a ball, we expect to move $O(1/\epsilon)$ balls, and when inserting or deleting a bin, we expect to move $O(C/\epsilon)$ balls. Previous bounds were off by a factor $1/\epsilon$. These bounds are best possible when $C=O(1)$, but for larger $C$ we can do much better: let $f=\epsilon C$ if $C\leq\log 1/\epsilon$, $f=\epsilon\sqrt{C}\cdot\sqrt{\log(1/(\epsilon\sqrt{C}))}$ if $\log 1/\epsilon\leq C<\tfrac{1}{2\epsilon^2}$, and $f=1$ if $C\geq\tfrac{1}{2\epsilon^2}$. We show that we expect to move $O(1/f)$ balls when inserting or deleting a ball, and $O(C/f)$ balls when inserting or deleting a bin. For the bounds with larger $C$, we first have to resolve a much simpler probabilistic problem: place $n$ balls in $m$ bins of capacity $C$, one ball at a time, where each ball picks a uniformly random non-full bin. We show that in expectation and with high probability, the fraction of non-full bins is $\Theta(f)$. Then the expected number of bins that a new ball would have to visit to find one that is not full is $\Theta(1/f)$. As it turns out, we obtain the same complexity in our more complicated scheme where both balls and bins can be added and removed.
    Comment: Accepted at STOC'2
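    The simpler probabilistic experiment is easy to simulate. Below is a minimal Python sketch; the parameter values are illustrative, and predicted_f mirrors the abstract's three-regime definition of $f$ only up to constant factors.

```python
import math
import random

def predicted_f(epsilon, C):
    """The three-regime definition of f from the abstract (up to constants)."""
    if C <= math.log(1 / epsilon):
        return epsilon * C
    if C < 1 / (2 * epsilon ** 2):
        return epsilon * math.sqrt(C) * math.sqrt(math.log(1 / (epsilon * math.sqrt(C))))
    return 1.0

def nonfull_fraction(m, C, epsilon, rng):
    """Throw n ~ m*C/(1+epsilon) balls into m bins of capacity C, one ball at
    a time, each into a uniformly random non-full bin; return the fraction of
    bins that are still non-full at the end."""
    n = int(m * C / (1 + epsilon))
    loads = [0] * m
    non_full = list(range(m))              # indices of bins with load < C
    for _ in range(n):
        i = rng.randrange(len(non_full))   # uniformly random non-full bin
        b = non_full[i]
        loads[b] += 1
        if loads[b] == C:                  # bin just became full: O(1) swap-remove
            non_full[i] = non_full[-1]
            non_full.pop()
    return len(non_full) / m

if __name__ == "__main__":
    rng = random.Random(1)
    epsilon, m = 0.1, 20_000               # illustrative values
    for C in (2, 10, 100):                 # one value per regime when epsilon = 0.1
        sims = [nonfull_fraction(m, C, epsilon, rng) for _ in range(3)]
        print(f"C={C:3d}  simulated fraction {sum(sims) / len(sims):.4f}  "
              f"f = {predicted_f(epsilon, C):.4f}")
```

    Keeping the non-full bin indices in a list with swap-removal makes each insertion O(1), so the simulation runs in time linear in the number of balls.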

    Oblivious Sketching of High-Degree Polynomial Kernels

    Kernel methods are fundamental tools in machine learning that allow detection of non-linear dependencies between data without explicitly constructing feature vectors in high dimensional spaces. A major disadvantage of kernel methods is their poor scalability: primitives such as kernel PCA or kernel ridge regression generally take prohibitively large quadratic space and (at least) quadratic time, as kernel matrices are usually dense. Some methods for speeding up kernel linear algebra are known, but they all invariably take time exponential in either the dimension of the input point set (e.g., fast multipole methods suffer from the curse of dimensionality) or in the degree of the kernel function. Oblivious sketching has emerged as a powerful approach to speeding up numerical linear algebra over the past decade, but our understanding of oblivious sketching solutions for kernel matrices has remained quite limited, suffering from the aforementioned exponential dependence on input parameters. Our main contribution is a general method for applying sketching solutions developed in numerical linear algebra over the past decade to a tensoring of data points without forming the tensoring explicitly. This leads to the first oblivious sketch for the polynomial kernel with a target dimension that is only polynomially dependent on the degree of the kernel function, as well as the first oblivious sketch for the Gaussian kernel on bounded datasets that does not suffer from an exponential dependence on the dimensionality of input data points.
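    A minimal illustration of the core idea, sketching a tensoring of data points without forming it explicitly, is the earlier TensorSketch construction below (a CountSketch of each factor, combined via FFT). It is not the paper's construction, but it shows how the inner product of two such sketches approximates the degree-$q$ polynomial kernel $\langle x,y\rangle^q$; the dimensions and sketch size are illustrative.

```python
import numpy as np

def countsketch(x, h, s, m):
    """CountSketch of x into m buckets: bucket h[i] accumulates s[i] * x[i]."""
    out = np.zeros(m)
    np.add.at(out, h, s * x)
    return out

def tensorsketch(x, hashes, signs, m):
    """Sketch of the q-fold tensor product of x without ever forming it:
    multiply the FFTs of q independent CountSketches componentwise, then
    apply the inverse FFT (a circular convolution of the q sketches)."""
    prod = np.ones(m, dtype=complex)
    for h, s in zip(hashes, signs):
        prod *= np.fft.fft(countsketch(x, h, s, m))
    return np.real(np.fft.ifft(prod))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, q, m = 500, 3, 8192                      # input dim, kernel degree, sketch size
    hashes = [rng.integers(0, m, size=d) for _ in range(q)]
    signs = [rng.choice([-1.0, 1.0], size=d) for _ in range(q)]

    x = rng.standard_normal(d)
    y = 0.9 * x + 0.4 * rng.standard_normal(d)  # a correlated pair of points
    x /= np.linalg.norm(x)
    y /= np.linalg.norm(y)

    exact = np.dot(x, y) ** q                   # degree-q polynomial kernel
    approx = np.dot(tensorsketch(x, hashes, signs, m),
                    tensorsketch(y, hashes, signs, m))
    print(f"exact kernel {exact:.3f}, sketched estimate {approx:.3f}")
```

    Larger sketch sizes give more accurate estimates; the code is only meant to convey the idea of sketching the tensoring implicitly, not the paper's target-dimension guarantees.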

    No Repetition: Fast Streaming with Highly Concentrated Hashing

    To get estimators that work within a certain error bound with high probability, a common strategy is to design one that works with constant probability, and then boost the probability using independent repetitions. Important examples of this approach are small-space algorithms for estimating the number of distinct elements in a stream, or estimating the set similarity between large sets. Using standard strongly universal hashing to process each element, we get a sketch-based estimator where the probability of an error that is too large is, say, 1/4. By performing $r$ independent repetitions and taking the median of the estimators, the error probability falls exponentially in $r$. However, running $r$ independent experiments increases the processing time by a factor $r$. Here we make the point that if we have a hash function with strong concentration bounds, then we get the same high-probability bounds without any need for repetitions. Instead of $r$ independent sketches, we have a single sketch that is $r$ times bigger, so the total space is the same. However, we only apply a single hash function, so we save a factor $r$ in time, and the overall algorithms just get simpler. Fast practical hash functions with strong concentration bounds were recently proposed by Aamand et al. (to appear in STOC 2020). Using their hashing schemes, the algorithms thus become very fast and practical, suitable for online processing of high-volume data streams.
    Comment: 10 page
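    A toy contrast between the two layouts, using a simple k-minimum-values estimator for distinct elements: the classic approach hashes every element $r$ times and takes a median of $r$ small sketches, while the alternative keeps one sketch that is $r$ times bigger and hashes each element once. The salted blake2b hash below is only a stand-in for the strongly concentrated hash families the paper relies on, and all parameters are illustrative.

```python
import hashlib
import heapq
import random
import statistics

def uniform_hash(item, seed):
    """Map item to a pseudo-uniform value in [0, 1); a stand-in for a
    strongly concentrated hash function."""
    h = hashlib.blake2b(repr(item).encode(), digest_size=8,
                        key=seed.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big") / 2.0 ** 64

def kmv_estimate(stream, k, seed):
    """k-minimum-values sketch: keep the k smallest distinct hash values and
    estimate the number of distinct elements as (k - 1) / (k-th smallest)."""
    heap, members = [], set()   # max-heap (negated) of the k smallest values
    for item in stream:
        v = uniform_hash(item, seed)
        if v in members:
            continue
        if len(heap) < k:
            heapq.heappush(heap, -v)
            members.add(v)
        elif v < -heap[0]:
            members.discard(-heapq.heappushpop(heap, -v))
            members.add(v)
    return float(len(heap)) if len(heap) < k else (k - 1) / -heap[0]

if __name__ == "__main__":
    rng = random.Random(7)
    stream = [rng.randrange(50_000) for _ in range(200_000)]
    exact = len(set(stream))

    r, k = 9, 64
    # Classic boosting: r independent small sketches, r hash evaluations per
    # element, median of the r estimates.
    median_est = statistics.median(kmv_estimate(stream, k, seed) for seed in range(r))
    # "No repetition" layout: one sketch that is r times bigger, a single hash
    # evaluation per element, same total space.
    big_est = kmv_estimate(stream, r * k, seed=99)

    print(f"exact {exact}, median-of-{r} estimate {median_est:.0f}, "
          f"single big sketch {big_est:.0f}")
```

    With strong enough concentration bounds for the single hash function, the second layout gives the same high-probability guarantees while saving the factor $r$ in evaluation time.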