    Singletons for Simpletons: Revisiting Windowed Backoff with Chernoff Bounds

    Backoff algorithms are used in many distributed systems where multiple devices contend for a shared resource. For the classic balls-into-bins problem, the number of singletons - those bins containing exactly one ball - is important to the analysis of several backoff algorithms; however, existing analyses employ advanced probabilistic tools to obtain concentration bounds. Here, we show that standard Chernoff bounds can be used instead, and we illustrate the simplicity of this approach by re-analyzing some well-known backoff algorithms.
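    The singleton count concentrates around its expectation, which for m balls thrown into n bins is m(1 - 1/n)^(m-1) ≈ m·e^(-m/n). A minimal simulation sketch in Python (function and parameter names are ours, for illustration only) comparing the empirical count against this expectation:

        import random
        from collections import Counter

        def singleton_count(m: int, n: int) -> int:
            """Throw m balls into n bins uniformly at random and count the
            bins that receive exactly one ball (the singletons)."""
            loads = Counter(random.randrange(n) for _ in range(m))
            return sum(1 for c in loads.values() if c == 1)

        m = n = 10_000
        trials = 100
        empirical = sum(singleton_count(m, n) for _ in range(trials)) / trials
        expectation = m * (1 - 1 / n) ** (m - 1)  # ~ m * exp(-m/n)
        print(f"empirical mean: {empirical:.1f}  expectation: {expectation:.1f}")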

    Peer review’s irremediable flaws: Scientists’ perspectives on grant evaluation in Germany

    Peer review has developed over time to become the established procedure for assessing and assuring the scientific quality of research. Nevertheless, the procedure has also been variously criticized as conservative, biased, and unfair, among other things. Do scientists regard all these flaws as equally problematic? Do they have the same opinions on which problems are so serious that other selection procedures ought to be considered? The answers to these questions hint at what should be modified in peer review processes as a priority objective. The authors of this paper use survey data to examine how members of the scientific community weight different shortcomings of peer review processes. Which of those processes’ problems do they consider less relevant? Which problems, on the other hand, do they judge to be beyond remedy? Our investigation shows that certain defects of peer review processes are indeed deemed irreparable: (1) legitimate quandaries in fine-tuning the choice between equally eligible research proposals and in selecting daring ideas; and (2) illegitimate problems due to networks. Science-policy measures to improve peer review processes should therefore draw a clearer distinction between field-specific remediable and irremediable flaws than is currently the case.

    Self-stabilizing Balls & Bins in Batches: The Power of Leaky Bins

    A fundamental problem in distributed computing is the distribution of requests to a set of uniform servers without a centralized controller. Classically, such problems are modelled as static balls-into-bins processes, where m balls (tasks) are to be distributed among n bins (servers). In a seminal work, [Azar et al.; JoC'99] proposed the sequential strategy Greedy[d] for n = m. When thrown, a ball queries the load of d random bins and is allocated to a least loaded of these. [Azar et al.; JoC'99] showed that d = 2 yields an exponential improvement compared to d = 1. [Berenbrink et al.; JoC'06] extended this to m ≥ n, showing that the maximal load difference is independent of m for d = 2 (in contrast to d = 1). We propose a new variant of an infinite balls-into-bins process. In each round an expected number of λn new balls arrive and are distributed (in parallel) to the bins, and each non-empty bin deletes one of its balls. This setting models a set of servers processing incoming requests, where clients can query a server's current load but receive no information about parallel requests. We study the Greedy[d] distribution scheme in this setting and show a strong self-stabilizing property: for any arrival rate λ = λ(n) < 1, the system load is time-invariant. Moreover, for any (even super-exponential) round t, the maximum system load is (w.h.p.) O((1/(1−λ)) · log(n/(1−λ))) for d = 1 and O(log(n/(1−λ))) for d = 2. In particular, Greedy[2] has an exponentially smaller system load for high arrival rates.
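    A minimal simulation sketch of this batched process (names and parameter choices are ours, not the paper's); arrivals within a round see only the start-of-round loads, matching the model in which clients learn nothing about parallel requests:

        import random
        import numpy as np

        def leaky_greedy(n: int, d: int, lam: float, rounds: int) -> int:
            """Each round, ~Poisson(lam * n) balls arrive in parallel; each
            ball joins the least loaded of d uniformly random bins, judged
            by the start-of-round loads; then every non-empty bin deletes
            one ball. Returns the maximum load after the final round."""
            rng = np.random.default_rng()
            loads = [0] * n
            for _ in range(rounds):
                snapshot = loads[:]  # parallel arrivals observe stale loads
                for _ in range(rng.poisson(lam * n)):
                    choices = [random.randrange(n) for _ in range(d)]
                    best = min(choices, key=snapshot.__getitem__)
                    loads[best] += 1
                loads = [x - 1 if x > 0 else 0 for x in loads]  # one deletion per non-empty bin
            return max(loads)

        print(leaky_greedy(n=1_000, d=1, lam=0.9, rounds=2_000))
        print(leaky_greedy(n=1_000, d=2, lam=0.9, rounds=2_000))

    Comparing d = 1 against d = 2 at a high arrival rate such as λ = 0.9 makes the exponential gap between the two bounds visible empirically.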

    Tight bounds for parallel randomized load balancing

    Given a distributed system of n balls and n bins, how evenly can we distribute the balls to the bins while minimizing communication? The fastest non-adaptive and symmetric algorithm achieving a constant maximum bin load requires Θ(log log n) rounds, and any such algorithm running for r ∈ O(1) rounds incurs a bin load of Ω((log n / log log n)^(1/r)). In this work, we explore the fundamental limits of the general problem. We present a simple adaptive symmetric algorithm that achieves a bin load of 2 in log* n + O(1) communication rounds using O(n) messages in total. Our main result, however, is a matching lower bound of (1 − o(1)) log* n on the time complexity of symmetric algorithms that guarantee small bin loads. The essential preconditions of the proof are (i) a limit of O(n) on the total number of messages sent by the algorithm and (ii) anonymity of bins, i.e., the port numberings of balls need not be globally consistent. In order to show that our technique indeed yields tight bounds, we provide for each assumption an algorithm violating it, in turn achieving a constant maximum bin load in constant time.
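    For intuition about the round-based communication model, here is an illustrative retry scheme (our own toy protocol, not the paper's algorithm): in each round every still-unplaced ball contacts one uniformly random bin, and bins accept requests only up to a constant capacity, which caps the maximum bin load by construction.

        import random
        from collections import defaultdict

        def threshold_retry(n: int, cap: int = 2, max_rounds: int = 50) -> int:
            """Each round, every unplaced ball contacts one uniformly random
            bin; a bin accepts requests up to its remaining capacity and
            rejects the rest. Returns the number of rounds until all n balls
            are placed (the maximum bin load is at most cap)."""
            unplaced = n
            free = [cap] * n  # remaining capacity per bin
            for r in range(1, max_rounds + 1):
                requests = defaultdict(int)
                for _ in range(unplaced):
                    requests[random.randrange(n)] += 1
                for b, k in requests.items():
                    accepted = min(k, free[b])
                    free[b] -= accepted
                    unplaced -= accepted
                if unplaced == 0:
                    return r
            return max_rounds

        print(threshold_retry(n=100_000))

    Because the number of unplaced balls shrinks quickly from round to round, the total message count stays modest; the paper's actual log* n algorithm and lower bound are far more refined than this sketch.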

    The Power of Filling in Balanced Allocations

    It is well known that if m balls (jobs) are placed sequentially into n bins (servers) according to the One-Choice protocol - choose a single bin in each round and allocate one ball to it - then, for m ≫ n, the gap between the maximum and average load diverges. Many refinements of the One-Choice protocol have been studied that achieve a gap that remains bounded by a function of n, for any m. However, most of these variations, such as Two-Choice, are less sample-efficient than One-Choice, in the sense that for each allocated ball more than one sample is needed (in expectation). We introduce a new class of processes which are primarily characterized by "filling" underloaded bins. A prototypical example is the Packing process: at each round we take only one bin sample; if the load is below the average load, then we place balls until the average load is reached; otherwise, we place only one ball. We prove that for any process in this class the gap between the maximum and average load is O(log n) for any number of balls m. For the Packing process, we also prove a matching lower bound. We further prove that the Packing process is more sample-efficient than One-Choice, that is, it allocates on average more than one ball per sample. Finally, we demonstrate that the upper bound of O(log n) on the gap can be extended to the Caching process (a.k.a. memory protocol) studied by Mitzenmacher, Prabhakar and Shah (2002). Comment: This paper refines and extends the content on filling processes in arXiv:2110.10759. It consists of 31 pages, 6 figures, 2 tables.

    An Improved Drift Theorem for Balanced Allocations

    In the balanced allocations framework, there are m jobs (balls) to be allocated to n servers (bins). The goal is to minimize the gap, the difference between the maximum and the average load. Peres, Talwar and Wieder (RSA 2015) used the hyperbolic cosine potential function to analyze a large family of allocation processes including the (1+β)-process and graphical balanced allocations. The key ingredient was to prove that the potential drops in every step, i.e., a drift inequality. In this work we improve the drift inequality so that (i) it is asymptotically tighter, (ii) it assumes weaker preconditions, (iii) it applies to processes allocating to more than one bin in a single step, and (iv) it applies to processes allocating a varying number of balls depending on the sampled bin. Our applications include the processes of (RSA 2015), but also several new processes, and we believe that our techniques may lead to further results in future work. Comment: This paper refines and extends the content on the drift theorem and applications in arXiv:2203.13902. It consists of 38 pages, 7 figures, 1 table.
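    Schematically, the hyperbolic cosine potential and its drift inequality take the following form, where x_i^t is the load of bin i after t balls and α, ε, c > 0 are process-dependent constants (this is the standard shape from (RSA 2015), not the paper's refined statement):

        \[
          \Gamma_t \;=\; \sum_{i=1}^{n} \cosh\!\bigl(\alpha\, y_i^t\bigr)
                  \;=\; \frac{1}{2} \sum_{i=1}^{n}
                        \bigl( e^{\alpha y_i^t} + e^{-\alpha y_i^t} \bigr),
          \qquad y_i^t \;=\; x_i^t - \frac{t}{n},
        \]
        \[
          \mathbb{E}\bigl[ \Gamma_{t+1} \,\big|\, \Gamma_t \bigr]
            \;\le\; \Bigl( 1 - \frac{\alpha\varepsilon}{n} \Bigr) \Gamma_t \;+\; c.
        \]

    Iterating the inequality keeps E[Γ_t] = O(n), which in turn yields a gap of O(log n / α) with high probability; the paper's contribution is to prove such an inequality under weaker preconditions and for broader allocation rules.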