
    The coalescing-branching random walk on expanders and the dual epidemic process

    Information propagation on graphs is a fundamental topic in distributed computing. One of the simplest models of information propagation is the push protocol, in which at each round each agent independently pushes its current knowledge to a random neighbour. In this paper we study the so-called coalescing-branching random walk (COBRA), in which each vertex pushes the information to $k$ randomly selected neighbours and then stops passing information until it receives the information again. The aim of COBRA is to propagate information fast but with a limited number of transmissions per vertex per step. We study the cover time of the COBRA process, defined as the minimum time until each vertex has received the information at least once. Our main result says that if $G$ is an $n$-vertex $r$-regular graph whose transition matrix has second eigenvalue $\lambda$, then the COBRA cover time of $G$ is $\mathcal{O}(\log n)$ if $1-\lambda$ is greater than a positive constant, and $\mathcal{O}((\log n)/(1-\lambda)^3)$ if $1-\lambda \gg \sqrt{\log(n)/n}$. These bounds are independent of $r$ and hold for $3 \le r \le n-1$. They improve the previous bound of $O(\log^2 n)$ for expander graphs. Our main tool in analysing the COBRA process is a novel duality relation between this process and a discrete epidemic process, which we call a biased infection with persistent source (BIPS). A fixed vertex $v$ is the source of an infection and remains permanently infected. At each step each vertex $u$ other than $v$ selects $k$ neighbours, independently and uniformly, and $u$ is infected in this step if and only if at least one of the selected neighbours was infected in the previous step. We show a duality between COBRA and BIPS: the time to infect the whole graph in the BIPS process is of the same order as the cover time of the COBRA process.
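    The two processes are simple enough to simulate directly. Below is a minimal Python sketch, not taken from the paper: it runs COBRA and BIPS rounds on a graph given as an adjacency list. The function names, the use of sets for the active/infected vertices, and sampling neighbours without replacement in the COBRA step are assumptions made for illustration.

```python
import random

def cobra_step(adj, active, k):
    """One COBRA round: every currently active vertex pushes the token to k
    random neighbours (sampled without replacement, an assumption) and then
    becomes inactive until it is hit again."""
    new_active = set()
    for v in active:
        nbrs = adj[v]
        for u in random.sample(nbrs, min(k, len(nbrs))):
            new_active.add(u)
    return new_active

def bips_step(adj, infected, source, k):
    """One BIPS round: every vertex u != source samples k neighbours uniformly
    and independently (with replacement) and is infected iff at least one
    sample was infected in the previous round; the source stays infected."""
    new_infected = {source}
    for u in adj:
        if u == source:
            continue
        if any(random.choice(adj[u]) in infected for _ in range(k)):
            new_infected.add(u)
    return new_infected

def cobra_cover_time(adj, start, k):
    """Simulate COBRA from `start` until every vertex has been active once."""
    active, covered, t = {start}, {start}, 0
    while len(covered) < len(adj):
        active = cobra_step(adj, active, k)
        covered |= active
        t += 1
    return t
```

    On a random regular expander with, say, $r = 3$ and $k = 2$, the output of `cobra_cover_time` can be compared empirically against the $\mathcal{O}(\log n)$ bound stated above.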

    Collaborative search on the plane without communication

    We generalize the classical cow-path problem [7, 14, 38, 39] into a question that is relevant for collective foraging in animal groups. Specifically, we consider a setting in which $k$ identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both $k$ and $D$, where $D$ is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. Our focus is on what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps agents must be highly dispersed, making communication difficult. Furthermore, if agents do not commence the search in synchrony then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and determine their total number, $k$. It turns out that the knowledge of $k$ by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is $\Omega(D + D^2/k)$, and we show in this paper that this bound can be matched if the agents have knowledge of $k$ up to some constant approximation. We present an almost tight bound for the competitive penalty that must be paid, in the running time, if agents have no information about $k$. Specifically, on the negative side, we show that in such a case, there is no algorithm whose competitiveness is $O(\log k)$. On the other hand, we show that for every constant $\epsilon > 0$, there exists a rather simple uniform search algorithm which is $O(\log^{1+\epsilon} k)$-competitive. In addition, we give a lower bound for the setting in which agents are given some estimation of $k$. As a special case, this lower bound implies that for any constant $\epsilon > 0$, if each agent is given a (one-sided) $k^\epsilon$-approximation to $k$, then the competitiveness is $\Omega(\log k)$. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number $k$; however, to improve further, they must be given a relatively good approximation of $k$. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
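    For intuition about how a non-coordinating searcher can exploit knowledge of $k$, here is a rough Python sketch of one natural doubling strategy on a grid discretization of the plane. It illustrates the model only and is not the paper's algorithm; the square search window, the block scan of roughly $\mathrm{radius}^2/k$ cells, and the step accounting are all assumptions.

```python
import random

def agent_search(treasure, k, max_phase=30):
    """One agent's walk, measured in grid steps. In phase i the agent walks to
    a uniformly random cell within distance 2**i of the nest, scans a block of
    roughly (2**i)**2 / k cells around it, and returns to the nest.
    Returns the number of steps until the treasure cell is scanned."""
    steps = 0
    for i in range(1, max_phase):
        radius = 2 ** i
        # pick a random cell in a square window of side 2*radius (assumption:
        # a square neighbourhood rather than a Euclidean ball, for simplicity)
        cx = random.randint(-radius, radius)
        cy = random.randint(-radius, radius)
        steps += abs(cx) + abs(cy)              # walk out to the chosen cell
        side = max(1, int((radius * radius / k) ** 0.5))
        for dx in range(-side, side + 1):       # scan ~radius^2/k local cells
            for dy in range(-side, side + 1):
                steps += 1
                if (cx + dx, cy + dy) == treasure:
                    return steps
        steps += abs(cx) + abs(cy)              # walk back to the nest
    return None                                 # not found within max_phase phases
```

    Taking the minimum completion time over $k$ independent runs models the group; the intended behaviour of such doubling strategies, when $k$ is known, is a running time on the order of $D + D^2/k$, in line with the lower bound quoted above.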

    Parallel Exhaustive Search without Coordination

    We analyze parallel algorithms in the context of exhaustive search over totally ordered sets. Imagine an infinite list of "boxes", with a "treasure" hidden in one of them, where the boxes' order reflects the importance of finding the treasure in a given box. At each time step, a search protocol executed by a searcher has the ability to peek into one box and see whether the treasure is present or not. By equally dividing the workload between them, $k$ searchers can find the treasure $k$ times faster than one searcher. However, this straightforward strategy is very sensitive to failures (e.g., crashes of processors), and overcoming this issue seems to require a large amount of communication. We therefore address the question of designing parallel search algorithms that maximize their speed-up and maintain high levels of robustness, while minimizing the amount of resources for coordination. Based on the observation that algorithms that avoid communication are inherently robust, we analyze the best running time performance of non-coordinating algorithms. Specifically, we devise non-coordinating algorithms that achieve a speed-up of $9/8$ for two searchers, a speed-up of $4/3$ for three searchers, and in general, a speed-up of $\frac{k}{4}(1+1/k)^2$ for any $k \geq 1$ searchers. Thus, asymptotically, the speed-up is only four times worse compared to the case of full coordination, and our algorithms are surprisingly simple and hence applicable. Moreover, these bounds are tight in a strong sense, as no non-coordinating search algorithm can achieve better speed-ups. Overall, we highlight that, in faulty contexts in which coordination between the searchers is technically difficult to implement, intrusive with respect to privacy, and/or costly in terms of resources, it might well be worth giving up on coordination and simply running our non-coordinating exhaustive search algorithms.
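    The closed-form speed-up can be sanity-checked numerically. The snippet below simply evaluates the formula $\frac{k}{4}(1+1/k)^2$ quoted above for a few values of $k$ and reports the penalty relative to the ideal speed-up of $k$ under full coordination; it is a check of the stated bound, not part of the paper's algorithms.

```python
def noncoord_speedup(k):
    """Speed-up of the non-coordinating algorithms stated in the abstract."""
    return (k / 4) * (1 + 1 / k) ** 2   # equals (k + 1)**2 / (4 * k)

for k in (1, 2, 3, 10, 100):
    s = noncoord_speedup(k)
    # full coordination gives speed-up k, so the penalty factor is k / s
    print(f"k={k:>3}  speed-up={s:.3f}  penalty vs. full coordination={k / s:.3f}")
```

    The printed values reproduce the $9/8$ and $4/3$ figures for two and three searchers, and the penalty factor approaches $4$ as $k$ grows, matching the asymptotic claim.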

    Balanced Allocation on Graphs: A Random Walk Approach

    In this paper we propose algorithms for allocating $n$ sequential balls into $n$ bins that are interconnected as a $d$-regular $n$-vertex graph $G$, where $d \ge 3$ can be any integer. Let $l$ be a given positive integer. In each round $t$, $1 \le t \le n$, ball $t$ picks a node of $G$ uniformly at random and performs a non-backtracking random walk of length $l$ from the chosen node. Then it allocates itself on one of the visited nodes with minimum load (ties are broken uniformly at random). Suppose that $G$ has a sufficiently large girth and $d = \omega(\log n)$. Then we establish an upper bound for the maximum number of balls at any bin after allocating $n$ balls by the algorithm, called the \emph{maximum load}, in terms of $l$, with high probability. We also show that the upper bound is at most an $O(\log\log n)$ factor above the lower bound that is proved for the algorithm. In particular, we show that if we set $l = \lfloor(\log n)^{\frac{1+\epsilon}{2}}\rfloor$ for any constant $\epsilon \in (0, 1)$, and $G$ has girth at least $\omega(l)$, then the maximum load attained by the algorithm is bounded by $O(1/\epsilon)$ with high probability. Finally, we slightly modify the algorithm to obtain similar results for balanced allocation on $d$-regular graphs with $d \in [3, O(\log n)]$ and sufficiently large girth.
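    The allocation rule itself is concrete enough to simulate. The following Python sketch assumes the graph is given as an adjacency list and reads "non-backtracking" in the usual way, as never returning to the vertex the walk just came from; both the representation and that reading are assumptions.

```python
import random

def non_backtracking_walk(adj, start, length):
    """Return the vertices visited by a non-backtracking random walk of the
    given length: each step moves to a uniform neighbour other than the vertex
    the walk just came from (assumed interpretation of "non-backtracking")."""
    path = [start]
    prev, cur = None, start
    for _ in range(length):
        choices = [u for u in adj[cur] if u != prev]
        if not choices:               # degenerate case (degree-1 vertex)
            choices = adj[cur]
        nxt = random.choice(choices)
        path.append(nxt)
        prev, cur = cur, nxt
    return path

def allocate(adj, l):
    """Allocate n balls into the n vertices of the graph: each ball starts at a
    uniformly random vertex, walks l non-backtracking steps, and lands on a
    visited vertex of minimum load (ties broken uniformly at random)."""
    load = {v: 0 for v in adj}
    for _ in range(len(adj)):
        start = random.choice(list(adj))
        visited = non_backtracking_walk(adj, start, l)
        m = min(load[v] for v in visited)
        target = random.choice([v for v in set(visited) if load[v] == m])
        load[target] += 1
    return max(load.values())         # the maximum load studied in the paper
```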

    Exploring an Infinite Space with Finite Memory Scouts

    Consider a small number of scouts exploring the infinite $d$-dimensional grid with the aim of hitting a hidden target point. Each scout is controlled by a probabilistic finite automaton that determines its movement (to a neighboring grid point) based on its current state. The scouts, which operate under a fully synchronous schedule, communicate with each other (in a way that affects their respective states) when they share the same grid point and operate independently otherwise. Our main research question is: how many scouts are required to guarantee that the target admits a finite mean hitting time? Recently, it was shown that $d + 1$ is an upper bound on the answer to this question for any dimension $d \geq 1$, and the main contribution of this paper is a proof that this bound is tight for $d \in \{1, 2\}$.
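    To make the model concrete, here is a minimal Python sketch of automaton-controlled scouts on the two-dimensional grid. It illustrates the model only, not the automata constructed in the paper; the transition-table encoding, the `interact` hook for co-located scouts, and starting all scouts at the origin are illustrative assumptions.

```python
import random

class Scout:
    """A scout driven by a probabilistic finite automaton. `transition` maps a
    state to a list of (probability, new_state, move) triples, where `move` is
    a unit step on the grid such as (1, 0)."""
    def __init__(self, transition, start_state):
        self.transition = transition
        self.state = start_state
        self.pos = (0, 0)                 # all scouts start at the origin (assumption)

    def step(self):
        """Sample one transition: a new state and a unit grid move, drawn with
        the probabilities attached to the current state."""
        options = self.transition[self.state]
        _, new_state, move = random.choices(options, weights=[p for p, _, _ in options])[0]
        self.state = new_state
        self.pos = (self.pos[0] + move[0], self.pos[1] + move[1])

def synchronous_round(scouts, interact):
    """One fully synchronous round: scouts sharing a grid point first interact
    (their states may change), then every scout performs one automaton step."""
    by_pos = {}
    for s in scouts:
        by_pos.setdefault(s.pos, []).append(s)
    for group in by_pos.values():
        if len(group) > 1:
            interact(group)               # model-specific state exchange
    for s in scouts:
        s.step()
```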

    Parallel Search with no Coordination

    We consider a parallel version of a classical Bayesian search problem. $k$ agents are looking for a treasure that is placed in one of the boxes indexed by $\mathbb{N}^+$ according to a known distribution $p$. The aim is to minimize the expected time until the first agent finds it. Searchers run in parallel, where at each time step each searcher can "peek" into a box. A basic family of algorithms which are inherently robust is that of \emph{non-coordinating} algorithms. Such algorithms act independently at each searcher, differing only by their probabilistic choices. We are interested in the price incurred by employing such algorithms when compared with the case of full coordination. We first show that there exists a non-coordinating algorithm that, knowing only the relative likelihood of boxes according to $p$, has expected running time of at most $10 + 4(1+\frac{1}{k})^2 T$, where $T$ is the expected running time of the best fully coordinated algorithm. This result is obtained by applying a refined version of the main algorithm suggested by Fraigniaud, Korman and Rodeh in STOC'16, which was designed for the context of linear parallel search. We then describe an optimal non-coordinating algorithm for the case where the distribution $p$ is known. The running time of this algorithm is difficult to analyse in general, but we calculate it for several examples. In the case where $p$ is uniform over a finite set of boxes, the algorithm just checks boxes uniformly at random among all non-checked boxes and is essentially $2$ times worse than the coordinating algorithm. We also show simple algorithms for Pareto distributions over $M$ boxes. That is, in the case where $p(x) \sim 1/x^b$ for $0 < b < 1$, we suggest the following algorithm: at step $t$ choose uniformly from the boxes unchecked in $\{1, \ldots, \min(M, \lfloor t/\sigma\rfloor)\}$, where $\sigma = b/(b + k - 1)$. It turns out this algorithm is asymptotically optimal, and runs about $2+b$ times worse than in the case of full coordination.
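    The Pareto-case strategy is stated explicitly enough to implement. The Python sketch below follows that description; the surrounding simulation helpers (passing the treasure location explicitly and taking the minimum over $k$ independent searchers) are illustrative additions.

```python
import math
import random

def pareto_search(treasure, M, b, k):
    """One searcher running the strategy from the abstract for p(x) ~ 1/x^b,
    0 < b < 1: at step t it peeks at a uniformly random still-unchecked box
    among {1, ..., min(M, floor(t / sigma))}, where sigma = b / (b + k - 1).
    Returns the number of steps this searcher needs to hit the treasure."""
    sigma = b / (b + k - 1)
    checked = set()
    t = 0
    while True:
        t += 1
        limit = min(M, math.floor(t / sigma))
        candidates = [x for x in range(1, limit + 1) if x not in checked]
        if not candidates:
            continue                      # window not yet large enough for a new box
        box = random.choice(candidates)
        checked.add(box)
        if box == treasure:
            return t

def group_time(treasure, M, b, k):
    """Time until the first of k independent searchers finds the treasure."""
    return min(pareto_search(treasure, M, b, k) for _ in range(k))
```

    Averaging `group_time` over treasure locations drawn from the Pareto distribution gives an empirical handle on the roughly $2+b$ penalty over full coordination stated above.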