
    Scalable Multiagent Coordination with Distributed Online Open Loop Planning

    We propose distributed online open loop planning (DOOLP), a general framework for online multiagent coordination and decision making under uncertainty. DOOLP is based on online heuristic search in the space defined by a generative model of the domain dynamics, which agents exploit to simulate and evaluate the consequences of their potential choices. We also propose distributed online Thompson sampling (DOTS) as an effective instantiation of the DOOLP framework. DOTS models sequences of agent choices by concatenating a number of multiarmed bandits for each agent and uses Thompson sampling to deal with action value uncertainty. The Bayesian approach underlying Thompson sampling makes it possible to effectively model and estimate uncertainty about (a) an agent's own action values and (b) other agents' behavior. This approach yields a principled and statistically sound solution to the exploration-exploitation dilemma when exploring large search spaces with limited resources. We implemented DOTS in a smart factory case study with positive empirical results. We observed effective, robust and scalable planning and coordination capabilities even when searching only a fraction of the potential search space.
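    The Thompson-sampling building block described above fits in a short sketch. The following is a minimal, hypothetical Python illustration: one Beta-Bernoulli bandit per decision depth, concatenated into a choice sequence and updated from simulated rollouts. The class and function names, the binary-reward assumption, and the `evaluate` callback standing in for the generative model are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class ThompsonBandit:
    """One multi-armed bandit with Beta-Bernoulli Thompson sampling."""
    def __init__(self, n_arms):
        self.alpha = np.ones(n_arms)  # Beta posterior: 1 + observed successes
        self.beta = np.ones(n_arms)   # Beta posterior: 1 + observed failures

    def select(self, rng):
        # Sample a plausible value for each arm from its posterior, then act
        # greedily on the samples; exploration comes from posterior spread.
        return int(np.argmax(rng.beta(self.alpha, self.beta)))

    def update(self, arm, reward):
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

def simulate_rollout(bandits, evaluate, rng):
    """Sample one choice per depth from a concatenated sequence of bandits,
    score the sequence with the generative model, and update every bandit
    on the path with the rollout's binary outcome."""
    choices = [b.select(rng) for b in bandits]
    reward = evaluate(choices)  # stand-in for simulating the domain dynamics
    for b, c in zip(bandits, choices):
        b.update(c, reward)
    return choices, reward

# Toy usage: a depth-4 choice sequence with 3 options per step.
rng = np.random.default_rng(0)
bandits = [ThompsonBandit(n_arms=3) for _ in range(4)]
evaluate = lambda cs: int(sum(cs) >= 6)  # hypothetical 0/1 outcome model
for _ in range(1000):
    simulate_rollout(bandits, evaluate, rng)
```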

    Parallel Search with no Coordination

    We consider a parallel version of a classical Bayesian search problem. $k$ agents are looking for a treasure that is placed in one of the boxes indexed by $\mathbb{N}^+$ according to a known distribution $p$. The aim is to minimize the expected time until the first agent finds it. Searchers run in parallel, where at each time step each searcher can "peek" into a box. A basic family of inherently robust algorithms is that of non-coordinating algorithms. Such algorithms act independently at each searcher, differing only in their probabilistic choices. We are interested in the price incurred by employing such algorithms compared with the case of full coordination. We first show that there exists a non-coordinating algorithm that, knowing only the relative likelihood of boxes according to $p$, has expected running time of at most $10 + 4(1+\frac{1}{k})^2 T$, where $T$ is the expected running time of the best fully coordinated algorithm. This result is obtained by applying a refined version of the main algorithm suggested by Fraigniaud, Korman and Rodeh in STOC'16, which was designed for the context of linear parallel search. We then describe an optimal non-coordinating algorithm for the case where the distribution $p$ is known. The running time of this algorithm is difficult to analyse in general, but we calculate it for several examples. In the case where $p$ is uniform over a finite set of boxes, the algorithm simply checks boxes uniformly at random among all unchecked boxes and is essentially 2 times worse than the coordinating algorithm. We also show simple algorithms for Pareto distributions over $M$ boxes. That is, in the case where $p(x) \sim 1/x^b$ for $0 < b < 1$, we suggest the following algorithm: at step $t$ choose uniformly from the boxes unchecked in $\{1, \ldots, \min(M, \lfloor t/\sigma \rfloor)\}$, where $\sigma = b/(b + k - 1)$. It turns out this algorithm is asymptotically optimal, and runs about $2+b$ times worse than the case of full coordination.
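    To make the Pareto-case rule concrete, here is a minimal Python sketch of a single non-coordinating searcher following the selection rule quoted above: at step $t$, pick uniformly among the boxes it has not yet checked in $\{1, \ldots, \min(M, \lfloor t/\sigma \rfloor)\}$. Only that rule comes from the abstract; the function names and the simulation framing are hypothetical.

```python
import random

def pareto_search(M, b, k, treasure, rng=random):
    """Simulate one non-coordinating searcher for p(x) ~ 1/x^b over M boxes.
    Returns the step at which this searcher finds the treasure."""
    assert 1 <= treasure <= M and 0 < b < 1
    sigma = b / (b + k - 1)
    checked = set()
    t = 0
    while True:
        t += 1
        horizon = min(M, int(t / sigma))  # floor(t / sigma), capped at M
        candidates = [x for x in range(1, horizon + 1) if x not in checked]
        if not candidates:
            continue  # horizon not yet wide enough to offer a fresh box
        box = rng.choice(candidates)
        checked.add(box)
        if box == treasure:
            return t

def parallel_search_time(M, b, k, treasure, seed=0):
    """First-find time of k independent searchers peeking once per step:
    the minimum of their individual finishing times."""
    rngs = [random.Random(seed + i) for i in range(k)]
    return min(pareto_search(M, b, k, treasure, rng) for rng in rngs)
```

    Averaging `parallel_search_time` over treasures drawn from $p$ is one way to check the roughly $(2+b)\times$ overhead against a coordinated baseline empirically.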

    Scalable Planning and Learning for Multiagent POMDPs: Extended Version

    Online, sample-based planning algorithms for POMDPs have shown great promise in scaling to problems with large state spaces, but they become intractable for large action and observation spaces. This is particularly problematic in multiagent POMDPs, where the action and observation spaces grow exponentially with the number of agents. To combat this intractability, we propose a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. This approach applies not only in the planning case but also in the Bayesian reinforcement learning setting. Experimental results show that we are able to provide high-quality solutions to large multiagent planning and learning problems.
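    The payoff of factorization is easiest to see in action selection. The sketch below illustrates, under simplifying assumptions, why a factored value function avoids enumerating the exponential joint action space: with disjoint agent subsets the joint maximization decomposes exactly into small local maximizations. The function names, the disjoint-factor restriction, and the toy payoff functions are assumptions for illustration; the paper's method, and overlapping factors in general, would need techniques such as variable elimination.

```python
from itertools import product

def best_joint_action(factors, n_actions):
    """factors: list of (agent_subset, q_fn) pairs with disjoint subsets;
    q_fn maps a tuple of local actions to a scalar value estimate.
    The joint Q is the sum of the local terms, so each factor can be
    maximized independently."""
    joint = {}
    for agents, q_fn in factors:
        # Enumerate only n_actions**len(agents) local actions per factor,
        # never the full n_actions**n_agents joint space.
        local = max(product(range(n_actions), repeat=len(agents)), key=q_fn)
        joint.update(zip(agents, local))
    return [joint[i] for i in sorted(joint)]

# Toy example: two 2-agent factors over 4 agents with 3 actions each means
# 2 * 3**2 = 18 evaluations instead of 3**4 = 81.
example = [((0, 1), lambda a: a[0] + 2 * a[1]),
           ((2, 3), lambda a: -abs(a[0] - a[1]))]
print(best_joint_action(example, n_actions=3))  # -> [2, 2, 0, 0]
```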

    A principled information valuation for communications during multi-agent coordination

    Decentralised coordination in multi-agent systems is typically achieved using communication. However, in many cases communication is expensive to utilise: bandwidth may be limited, communicating may be dangerous, or communication may simply be unavailable at times. In this context, we argue for a rational approach to communication: if it has a cost, the agents should be able to calculate a value of communicating. By doing this, the agents can balance the need to communicate with the cost of doing so. In this research, we present a novel model of rational communication that uses information theory to value communications, and we employ this valuation in a decision-theoretic coordination mechanism. A preliminary empirical evaluation of the benefits of this approach is presented in the context of the RoboCupRescue simulator.
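    One standard information-theoretic valuation is expected information gain: how much a message is expected to reduce the receiver's uncertainty, in bits, which can then be weighed against a communication cost. The Python sketch below is a generic illustration of that idea, not the paper's specific decision-theoretic mechanism; the function names and the cost-in-bits comparison are assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def value_of_message(prior, likelihood):
    """Expected information gain (bits) from receiving a message.

    prior: receiver's belief over the sender's state, shape (S,).
    likelihood: P(message | state), shape (M, S).
    Returns I(state; message) = H(prior) - E_m[H(posterior | m)].
    """
    prior = np.asarray(prior, dtype=float)
    joint = likelihood * prior              # P(m, s), shape (M, S)
    p_m = joint.sum(axis=1)                 # marginal P(m)
    expected_posterior_entropy = 0.0
    for m in range(len(p_m)):
        if p_m[m] > 0:
            posterior = joint[m] / p_m[m]   # P(s | m) by Bayes' rule
            expected_posterior_entropy += p_m[m] * entropy(posterior)
    return entropy(prior) - expected_posterior_entropy

def should_communicate(prior, likelihood, cost_in_bits):
    # Communicate only when the expected gain outweighs the cost,
    # assuming cost is expressed on the same (bits) scale.
    return value_of_message(prior, likelihood) > cost_in_bits
```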

    Optimistic Concurrency Control for Distributed Unsupervised Learning

    Research on distributed machine learning algorithms has focused primarily on one of two extremes: algorithms that obey strict concurrency constraints, and algorithms that obey few or no such constraints. We consider an intermediate alternative in which algorithms optimistically assume that conflicts are unlikely and, if conflicts do arise, a conflict-resolution protocol is invoked. We view this "optimistic concurrency control" paradigm as particularly appropriate for large-scale machine learning algorithms, particularly in the unsupervised setting. We demonstrate our approach in three problem areas: clustering, feature learning and online facility location. We evaluate our methods via large-scale experiments in a cluster computing environment.
    Comment: 25 pages, 5 figures
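    The optimistic pattern itself fits in a short sketch. Below is a hypothetical Python illustration of one epoch of a DP-means-style cluster-creation step under optimistic concurrency control: workers propose new clusters against a stale snapshot of the centers, and a serial validation pass resolves conflicts. The function name, the sharding stand-in for real parallelism, and the distance threshold rule are assumptions for illustration, not the paper's exact algorithms.

```python
import numpy as np

def occ_clustering_epoch(points, centers, lam, n_workers=4):
    """One optimistic epoch of a DP-means-style cluster-creation step.

    points: array of shape (n, d); centers: mutable list of (d,) arrays;
    lam: squared-distance threshold for opening a new cluster.
    """
    snapshot = list(centers)  # stale view shared by all workers this epoch
    proposals = []
    # Sequential stand-in for parallel workers on disjoint data shards.
    for shard in np.array_split(points, n_workers):
        for x in shard:
            if all(np.linalg.norm(x - c) ** 2 > lam for c in snapshot):
                proposals.append(x)  # optimistic: assume nobody else covers x
    # Serial conflict resolution: re-validate each proposal against centers
    # accepted since the snapshot was taken, keeping only non-conflicting ones.
    for x in proposals:
        if all(np.linalg.norm(x - c) ** 2 > lam for c in centers):
            centers.append(x)
    return centers
```

    The point of the design is that validation is serial but cheap relative to the optimistic parallel work, so when conflicts really are rare the throughput approaches that of the unconstrained version without sacrificing correctness.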