
    Collaborative search on the plane without communication

    We generalize the classical cow-path problem [7, 14, 38, 39] into a question that is relevant for collective foraging in animal groups. Specifically, we consider a setting in which $k$ identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary, and the goal is to find it as fast as possible as a function of both $k$ and $D$, where $D$ is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as that performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. Our focus is on what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps, agents must be highly dispersed, which makes communication difficult. Furthermore, if agents do not commence the search in synchrony, then even initial communication is problematic. This holds, in particular, for the question of whether the agents can communicate and conclude their total number, $k$. It turns out that knowledge of $k$ by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required to find the treasure is $\Omega(D + D^2/k)$, and we show in this paper that this bound can be matched if the agents know $k$ up to some constant approximation. We present an almost tight bound for the competitive penalty that must be paid, in the running time, if agents have no information about $k$. Specifically, on the negative side, we show that in such a case, no algorithm can be $O(\log k)$-competitive. On the other hand, we show that for every constant $\epsilon > 0$, there exists a rather simple uniform search algorithm that is $O(\log^{1+\epsilon} k)$-competitive. In addition, we give a lower bound for the setting in which agents are given some estimate of $k$. As a special case, this lower bound implies that for any constant $\epsilon > 0$, if each agent is given a (one-sided) $k^\epsilon$-approximation to $k$, then the competitiveness is $\Omega(\log k)$. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number $k$; however, to improve further, they must be given a relatively good approximation of $k$. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
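
    The abstract does not spell out the uniform algorithm itself, so the fragment below is only an illustrative sketch of the general flavour of such non-communicating strategies, not the paper's construction; the function name, the doubling phase structure, and the block sizing are our own assumptions. Assuming each agent has an estimate of $k$, it repeatedly walks to a random point within a radius that doubles each phase, exhaustively scans a small block around that point, and returns to the nest.

        import math
        import random

        def forage_sketch(k_estimate, treasure, max_phase=20, seed=None):
            """One agent's hypothetical search: walk to a random point within a doubling
            radius, scan a small block sized by the agent's estimate of k, return home.
            Returns the number of unit steps taken up to and including finding the
            treasure, or None if it is not found within max_phase phases."""
            rng = random.Random(seed)
            steps = 0
            for phase in range(1, max_phase + 1):
                radius = 2 ** phase                              # search radius doubles each phase
                side = max(1, radius // max(1, math.isqrt(k_estimate)))  # half-width of block, ~ radius / sqrt(k)
                cx, cy = rng.randint(-radius, radius), rng.randint(-radius, radius)
                steps += abs(cx) + abs(cy)                       # walk out to the sampled point
                for dx in range(-side, side + 1):                # exhaustive scan of the block
                    for dy in range(-side, side + 1):
                        steps += 1
                        if (cx + dx, cy + dy) == treasure:
                            return steps
                steps += abs(cx) + abs(cy)                       # walk back to the nest
            return None

    Running $k$ such agents independently (e.g., forage_sketch(k_estimate=16, treasure=(40, -25), seed=i) for i in range(16)) and taking the minimum of the returned step counts gives the time at which the first agent reaches the treasure in this toy model.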

    Trade-offs between Selection Complexity and Performance when Searching the Plane without Communication

    We consider the ANTS problem [Feinerman et al.], in which a group of agents collaboratively searches for a target in a two-dimensional plane. Because this problem is inspired by the behavior of biological species, we argue that in addition to studying the {\em time complexity} of solutions it is also important to study the {\em selection complexity}, a measure of how likely a given algorithmic strategy is to arise in nature due to selective pressures. In more detail, we propose a new selection complexity metric $\chi$, defined for an algorithm $\mathcal{A}$ by $\chi(\mathcal{A}) = b + \log \ell$, where $b$ is the number of memory bits used by each agent and $\ell$ bounds the fineness of available probabilities (agents use probabilities of at least $1/2^{\ell}$). In this paper, we study the trade-off between the standard performance metric of speed-up, which measures how the expected time to find the target improves with $n$, and our new selection metric. In particular, consider $n$ agents searching for a treasure located at (unknown) distance $D$ from the origin (where $n$ is sub-exponential in $D$). For this problem, we identify $\log \log D$ as a crucial threshold for our selection complexity metric. We first prove a new upper bound that achieves a near-optimal speed-up of $(D^2/n + D) \cdot 2^{O(\ell)}$ for $\chi(\mathcal{A}) \leq 3 \log \log D + O(1)$. In particular, for $\ell \in O(1)$, the speed-up is asymptotically optimal. By comparison, the existing results for this problem [Feinerman et al.] that achieve similar speed-up require $\chi(\mathcal{A}) = \Omega(\log D)$. We then show that this threshold is tight by describing a lower bound: if $\chi(\mathcal{A}) < \log \log D - \omega(1)$, then with high probability the target is not found within $D^{2-o(1)}$ moves per agent. Hence, there is a sizable gap to the straightforward $\Omega(D^2/n + D)$ lower bound in this setting. Comment: appears in PODC 201
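
    As a small worked illustration of the metric (the base-2 logarithm and the concrete numbers below are our own assumptions; the abstract fixes $\chi$ only up to asymptotics), one can compare $\chi(\mathcal{A}) = b + \log \ell$ against the $\log \log D$ threshold identified above:

        import math

        def selection_complexity(b, ell):
            """chi(A) = b + log(ell): b memory bits per agent, probabilities of at least 1/2**ell."""
            return b + math.log2(ell)

        D = 10 ** 6                              # hypothetical target distance
        threshold = math.log2(math.log2(D))      # log log D, roughly 4.3 for this D
        print(selection_complexity(b=3, ell=4))  # chi = 5.0, just above log log D
        print(3 * threshold)                     # 3 log log D, the regime in which the near-optimal speed-up is achieved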

    Parallel Search with no Coordination

    We consider a parallel version of a classical Bayesian search problem. $k$ agents are looking for a treasure that is placed in one of the boxes indexed by $\mathbb{N}^+$ according to a known distribution $p$. The aim is to minimize the expected time until the first agent finds it. Searchers run in parallel: at each time step, each searcher can "peek" into a box. A basic family of algorithms that are inherently robust is that of \emph{non-coordinating} algorithms. Such algorithms act independently at each searcher, differing only in their probabilistic choices. We are interested in the price incurred by employing such algorithms when compared with the case of full coordination. We first show that there exists a non-coordinating algorithm that, knowing only the relative likelihood of the boxes according to $p$, has an expected running time of at most $10 + 4(1+\frac{1}{k})^2 T$, where $T$ is the expected running time of the best fully coordinated algorithm. This result is obtained by applying a refined version of the main algorithm suggested by Fraigniaud, Korman and Rodeh in STOC'16, which was designed for the context of linear parallel search. We then describe an optimal non-coordinating algorithm for the case where the distribution $p$ is known. The running time of this algorithm is difficult to analyse in general, but we calculate it for several examples. In the case where $p$ is uniform over a finite set of boxes, the algorithm simply checks boxes uniformly at random among all unchecked boxes and is essentially $2$ times worse than the coordinating algorithm. We also give simple algorithms for Pareto distributions over $M$ boxes. That is, in the case where $p(x) \sim 1/x^b$ for $0 < b < 1$, we suggest the following algorithm: at step $t$, choose uniformly from the boxes unchecked in $\{1, \ldots, \min(M, \lfloor t/\sigma \rfloor)\}$, where $\sigma = b/(b + k - 1)$. It turns out this algorithm is asymptotically optimal, and runs about $2 + b$ times worse than the case of full coordination.
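
    The Pareto rule in the last sentence is concrete enough to state in code. The sketch below is a simulation scaffold of our own (the function name, treasure placement, and seeding are assumptions); only the peeking rule itself, choosing uniformly among the unchecked boxes in $\{1, \ldots, \min(M, \lfloor t/\sigma \rfloor)\}$ with $\sigma = b/(b + k - 1)$, comes from the abstract.

        import math
        import random

        def pareto_noncoord_search(treasure_box, M, b, k, seed=None):
            """One non-coordinating searcher: at step t, peek into a box chosen uniformly
            among the still-unchecked boxes in {1, ..., min(M, floor(t/sigma))}, where
            sigma = b / (b + k - 1).  Returns the step at which the treasure is found,
            or None if treasure_box lies outside {1, ..., M}."""
            rng = random.Random(seed)
            sigma = b / (b + k - 1)
            checked = set()
            t = 0
            while len(checked) < M:
                t += 1
                prefix = min(M, math.floor(t / sigma))
                candidates = [x for x in range(1, prefix + 1) if x not in checked]
                box = rng.choice(candidates)
                checked.add(box)
                if box == treasure_box:
                    return t
            return None

        # k = 4 independent searchers; the treasure is found by whoever reaches it first.
        times = [pareto_noncoord_search(treasure_box=37, M=1000, b=0.5, k=4, seed=s) for s in range(4)]
        print(min(times))

    With these (made-up) parameters, $\sigma = 0.5/3.5 \approx 0.14$, so the prefix of eligible boxes grows about seven times faster than it would for a single searcher, spreading the $k$ independent searchers over a wider prefix.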

    Book reports


    Algorithmic Graph Theory

    The main focus of this workshop was on mathematical techniques needed for the development of efficient solutions and algorithms for computationally difficult graph problems. The techniques studied at the workshop included the probabilistic method and randomized algorithms, approximation and optimization, structured families of graphs, and approximation algorithms for large problems. The workshop Algorithmic Graph Theory was attended by 46 participants, many of them young researchers. An overview of recent developments in Algorithmic Graph Theory was given in 15 survey talks, which were supplemented by 10 shorter talks and two special sessions.

    Seventh Biennial Report: June 2003 - March 2005
