
    The coalescing-branching random walk on expanders and the dual epidemic process

    Information propagation on graphs is a fundamental topic in distributed computing. One of the simplest models of information propagation is the push protocol, in which at each round each agent independently pushes its current knowledge to a random neighbour. In this paper we study the so-called coalescing-branching random walk (COBRA), in which each vertex pushes the information to $k$ randomly selected neighbours and then stops passing information until it receives the information again. The aim of COBRA is to propagate information fast but with a limited number of transmissions per vertex per step. We study the cover time of the COBRA process, defined as the minimum time until each vertex has received the information at least once. Our main result says that if $G$ is an $n$-vertex $r$-regular graph whose transition matrix has second eigenvalue $\lambda$, then the COBRA cover time of $G$ is $\mathcal{O}(\log n)$ if $1-\lambda$ is greater than a positive constant, and $\mathcal{O}((\log n)/(1-\lambda)^3)$ if $1-\lambda \gg \sqrt{\log(n)/n}$. These bounds are independent of $r$ and hold for $3 \le r \le n-1$. They improve the previous bound of $O(\log^2 n)$ for expander graphs. Our main tool in analysing the COBRA process is a novel duality relation between this process and a discrete epidemic process, which we call a biased infection with persistent source (BIPS). A fixed vertex $v$ is the source of the infection and remains permanently infected. At each step each vertex $u$ other than $v$ selects $k$ neighbours, independently and uniformly, and $u$ is infected in this step if and only if at least one of the selected neighbours was infected in the previous step. We show a duality between COBRA and BIPS: the time to infect the whole graph in the BIPS process is of the same order as the cover time of the COBRA process.
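
    To make the dynamics concrete, here is a small Monte-Carlo sketch of the COBRA process as described in the abstract: informed vertices push to $k$ neighbours chosen independently and uniformly at random, then stay quiet until they are pushed to again, and the cover time is the first round by which every vertex has been informed. The branching factor $k=2$, the 4-regular circulant example graph, and the function name are illustrative assumptions, not taken from the paper.

```python
import random

def cobra_cover_time(adj, k=2, start=0, max_rounds=10**6):
    """Simulate the coalescing-branching random walk (COBRA): in each round
    every currently informed-and-active vertex pushes to k neighbours chosen
    independently and uniformly at random, then stays quiet until it is
    pushed to again.  Returns the first round by which every vertex has been
    informed at least once (the cover time), or None if max_rounds is hit."""
    n = len(adj)
    covered = {start}
    active = {start}
    for t in range(1, max_rounds + 1):
        next_active = set()
        for u in active:
            for _ in range(k):                      # k independent pushes
                next_active.add(random.choice(adj[u]))
        active = next_active
        covered |= active
        if len(covered) == n:
            return t
    return None

# Example: a 4-regular circulant graph on 64 vertices (i ~ i±1, i±2 mod n).
n = 64
adj = [[(i + d) % n for d in (-2, -1, 1, 2)] for i in range(n)]
print(cobra_cover_time(adj, k=2))
```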

    A macro-level model for investigating the effect of directional bias on network coverage

    Random walks have been proposed as a simple method of efficiently searching, or disseminating information throughout, communication and sensor networks. In nature, animals (such as ants) tend to follow correlated random walks, i.e., random walks that are biased towards their current heading. In this paper, we investigate whether or not complementing random walks with directional bias can decrease the expected discovery and coverage times in networks. To do so, we develop a macro-level model of a directionally biased random walk based on Markov chains. By focussing on regular, connected networks, the model allows us to efficiently calculate expected coverage times for different network sizes and biases. Our analysis shows that directional bias can significantly reduce coverage time, but only when the bias is below a certain value which is dependent on the network size.
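
    The trade-off described above can be illustrated with a micro-level Monte-Carlo simulation (note this is not the paper's macro-level Markov-chain model): a single walker on a torus grid keeps its current heading with probability `bias` and otherwise re-draws a heading uniformly, and we average the number of steps needed to visit every cell. The grid size, bias values, and function name are illustrative assumptions.

```python
import random

def biased_walk_cover_time(size=16, bias=0.6, trials=20):
    """Monte-Carlo sketch (not the paper's macro-level Markov model) of a
    directionally biased random walk on a size x size torus grid.  With
    probability `bias` the walker keeps its current heading; otherwise it
    re-draws one of the four headings uniformly at random.  Returns the
    average number of steps until every cell has been visited."""
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    total = 0
    for _ in range(trials):
        x = y = 0
        heading = random.choice(dirs)
        visited = {(x, y)}
        steps = 0
        while len(visited) < size * size:
            if random.random() >= bias:
                heading = random.choice(dirs)
            x, y = (x + heading[0]) % size, (y + heading[1]) % size
            visited.add((x, y))
            steps += 1
        total += steps
    return total / trials

# Compare an unbiased walk (bias = 0) against a directionally biased one.
print(biased_walk_cover_time(bias=0.0))
print(biased_walk_cover_time(bias=0.6))
```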

    A general lower bound for collaborative tree exploration

    We consider collaborative graph exploration with a set of $k$ agents. All agents start at a common vertex of an initially unknown graph and need to collectively visit all other vertices. We assume agents are deterministic, vertices are distinguishable, moves are simultaneous, and we allow agents to communicate globally. For this setting, we give the first non-trivial lower bounds that bridge the gap between small ($k \leq \sqrt{n}$) and large ($k \geq n$) teams of agents. Remarkably, our bounds tightly connect to existing results in both domains. First, we significantly extend a lower bound of $\Omega(\log k / \log\log k)$ by Dynia et al. on the competitive ratio of a collaborative tree exploration strategy to the range $k \leq n \log^c n$ for any $c \in \mathbb{N}$. Second, we provide a tight lower bound on the number of agents needed for any competitive exploration algorithm. In particular, we show that any collaborative tree exploration algorithm with $k = Dn^{1+o(1)}$ agents has a competitive ratio of $\omega(1)$, while Dereniowski et al. gave an algorithm with $k = Dn^{1+\varepsilon}$ agents and competitive ratio $O(1)$, for any $\varepsilon > 0$ and with $D$ denoting the diameter of the graph. Lastly, we show that, for any exploration algorithm using $k = n$ agents, there exist trees of arbitrarily large height $D$ that require $\Omega(D^2)$ rounds, and we provide a simple algorithm that matches this bound for all trees.
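
    The exploration model described above (agents start at a common vertex, move simultaneously, and communicate globally) can be exercised with a small round-based simulator. The greedy strategy below, in which each agent steps toward the nearest unvisited vertex adjacent to the explored part, is only a placeholder to make the model concrete; it is not an algorithm from the paper, and the example tree is an arbitrary choice.

```python
from collections import deque

def greedy_collaborative_exploration(children, k):
    """Sketch of the collaborative tree exploration model: k agents start at
    the root (vertex 0), moves are simultaneous, and agents share everything
    they discover.  The strategy is a naive greedy one -- each round every
    agent takes one step toward the nearest unvisited vertex adjacent to the
    explored part -- and is meant only to illustrate the model.
    children[v] lists the children of v; returns the number of rounds needed
    to visit every vertex."""
    adj = {v: list(ch) for v, ch in children.items()}
    for v, ch in children.items():
        for c in ch:
            adj.setdefault(c, []).append(v)
    adj.setdefault(0, [])
    n = len(adj)
    visited = {0}
    positions = [0] * k
    rounds = 0
    while len(visited) < n:
        rounds += 1
        for i in range(k):
            # BFS through visited vertices (plus one step into the frontier)
            # to find the first edge of a shortest path to an unvisited vertex.
            start = positions[i]
            prev = {start: None}
            queue = deque([start])
            goal = None
            while queue:
                u = queue.popleft()
                if u not in visited:          # first frontier vertex reached
                    goal = u
                    break
                for w in adj[u]:
                    if w not in prev:
                        prev[w] = u
                        queue.append(w)
            if goal is None:
                continue
            step = goal
            while prev[step] != start:        # walk back to the first move
                step = prev[step]
            positions[i] = step
            visited.add(step)
    return rounds

# Example: 5 agents exploring a path of 30 vertices rooted at one end.
path = {i: [i + 1] for i in range(29)}
print(greedy_collaborative_exploration(path, k=5))
```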

    Ants: Mobile Finite State Machines

    Consider the Ants Nearby Treasure Search (ANTS) problem introduced by Feinerman, Korman, Lotker, and Sereni (PODC 2012), where $n$ mobile agents, initially placed at the origin of an infinite grid, collaboratively search for an adversarially hidden treasure. In this paper, the model of Feinerman et al. is adapted such that the agents are controlled by a (randomized) finite state machine: they possess a constant-size memory and are able to communicate with each other through constant-size messages. Despite the restriction to constant-size memory, we show that their collaborative performance remains the same by presenting a distributed algorithm that matches a lower bound established by Feinerman et al. on the run-time of any ANTS algorithm.
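
    As a concrete, deliberately naive baseline for the ANTS setting (it is not the constant-memory algorithm of the paper and is far from optimal), the sketch below lets every agent perform an independent simple random walk on the grid and reports the number of parallel steps until some agent steps on the treasure. The treasure position and the number of agents are arbitrary example values.

```python
import random

def ants_naive_search(n_agents, treasure, max_steps=10**5):
    """Toy harness for the ANTS setting: n_agents start at the origin of the
    grid and move in parallel rounds until one of them steps on the
    adversarially placed treasure.  The strategy -- every agent performs an
    independent simple random walk -- is only a naive baseline, far from the
    Omega(D + D^2/n) lower bound.  Returns the number of parallel steps used,
    or None if the treasure was not found within max_steps."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    agents = [(0, 0)] * n_agents
    for t in range(1, max_steps + 1):
        for i, (x, y) in enumerate(agents):
            dx, dy = random.choice(moves)
            agents[i] = (x + dx, y + dy)
            if agents[i] == treasure:
                return t
    return None

print(ants_naive_search(n_agents=100, treasure=(3, -2)))
```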

    Bayesian Inference of Online Social Network Statistics via Lightweight Random Walk Crawls

    Online social networks (OSNs) contain an extensive amount of information about the underlying society that is yet to be explored. One of the most feasible techniques to fetch information from an OSN, crawling through Application Programming Interface (API) requests, poses serious concerns over the guarantees of the estimates. In this work, we focus on making reliable statistical inference with limited API crawls. Based on regenerative properties of random walks, we propose an unbiased estimator for the aggregated sum of functions over edges and prove a connection between the variance of the estimator and the spectral gap. In order to facilitate Bayesian inference on the true value of the estimator, we derive the approximate posterior distribution of the estimate. The proposed ideas are then validated with numerical experiments on inference problems in real-world networks.
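
    The regenerative idea mentioned above can be sketched as follows (the paper's exact estimator, its variance analysis, and the Bayesian posterior are not reproduced here): split the random walk into tours that start and end at a fixed seed vertex; by the renewal-reward theorem, deg(seed)/2 times the average reward collected per tour is an unbiased estimate of the sum of a symmetric function f over the edges. The seed choice, the ring example, and the function names are illustrative assumptions.

```python
import random

def tour_estimate_edge_sum(adj, f, seed, num_tours=1000):
    """Regenerative ("tour") random-walk sketch: a tour is an excursion of the
    simple random walk that starts at the seed vertex and ends on the first
    return to it, and its reward is the sum of f over the edges it traverses.
    By the renewal-reward theorem, deg(seed)/2 times the mean tour reward is
    an unbiased estimate of sum over edges {u,v} of f(u, v) (f symmetric).
    adj maps each vertex to the list of its neighbours."""
    total = 0.0
    for _ in range(num_tours):
        u = seed
        v = random.choice(adj[u])
        reward = f(u, v)
        while v != seed:
            u, v = v, random.choice(adj[v])
            reward += f(u, v)
        total += reward
    return (len(adj[seed]) / 2.0) * (total / num_tours)

# Example: estimate the number of edges (f = 1) of a 20-vertex ring;
# the printed value should be close to the true count of 20.
n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(tour_estimate_edge_sum(ring, lambda u, v: 1.0, seed=0))
```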

    Collaborative search on the plane without communication

    We generalize the classical cow-path problem [7, 14, 38, 39] into a question that is relevant for collective foraging in animal groups. Specifically, we consider a setting in which $k$ identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both $k$ and $D$, where $D$ is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. Our focus is on what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps agents must be highly dispersed, making communication difficult. Furthermore, if agents do not commence the search in synchrony then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, $k$. It turns out that the knowledge of $k$ by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is $\Omega(D + D^2/k)$, and we show in this paper that this bound can be matched if the agents have knowledge of $k$ up to some constant approximation. We present an almost tight bound for the competitive penalty that must be paid, in the running time, if agents have no information about $k$. Specifically, on the negative side, we show that in such a case there is no algorithm whose competitiveness is $O(\log k)$. On the other hand, we show that for every constant $\epsilon > 0$, there exists a rather simple uniform search algorithm which is $O(\log^{1+\epsilon} k)$-competitive. In addition, we give a lower bound for the setting in which agents are given some estimation of $k$. As a special case, this lower bound implies that for any constant $\epsilon > 0$, if each agent is given a (one-sided) $k^\epsilon$-approximation to $k$, then the competitiveness is $\Omega(\log k)$. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number $k$; however, to further improve, they must be given a relatively good approximation of $k$. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
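
    The $\Omega(D + D^2/k)$ bound mentioned above can be made tangible with a back-of-the-envelope cost model for one natural strategy available when $k$ is known (a hypothetical illustration, not necessarily the algorithm from the paper): each agent sweeps the arcs of its own angular sector of width $2\pi/k$ at increasing radii, so a treasure at distance $D$ is found after roughly $D$ radial steps plus about $\pi D^2/k$ steps of arc sweeping.

```python
import math

def sector_sweep_time(D, k):
    """Back-of-the-envelope cost of a hypothetical known-k strategy: agent i
    is responsible for an angular sector of width 2*pi/k and sweeps the arcs
    of its sector at radii 1, 2, ..., moving one unit of distance per time
    step.  A treasure at distance D is then found after roughly
      D (radial moves out to radius D) + sum_{r <= D} 2*pi*r/k (arc sweeping),
    which is O(D + D^2/k), matching the Omega(D + D^2/k) lower bound above."""
    arc_work = sum(2 * math.pi * r / k for r in range(1, int(D) + 1))
    return D + arc_work

for D, k in [(100, 1), (100, 10), (100, 100)]:
    print(D, k, round(sector_sweep_time(D, k)))
```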
    • 
