
    Universal Loop-Free Super-Stabilization

    We propose a universal scheme to design loop-free and super-stabilizing protocols for constructing spanning trees that optimize any tree metric (not only those isomorphic to a shortest-path tree). Our scheme combines a novel super-stabilizing loop-free BFS with an existing self-stabilizing spanning tree that optimizes a given metric. The composition preserves the best properties of both worlds: super-stabilization, loop-freedom, and optimization of the original metric, without any stabilization-time penalty. As a case study, we apply our composition mechanism to two well-known metric-dependent spanning trees: the maximum-flow tree and the minimum-degree spanning tree.
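
    As a rough illustration of the BFS layer such a composition builds on, the sketch below simulates the classic self-stabilizing BFS rule (every non-root node repeatedly adopts the neighbor advertising the smallest distance) starting from an arbitrarily corrupted state. It is a minimal, centralized simulation under assumed graph and state encodings, not the paper's loop-free super-stabilizing construction.

```python
# Centralized simulation of a self-stabilizing BFS spanning-tree rule.
# The graph encoding, node names, and synchronous round scheduler are
# illustrative assumptions, not the paper's protocol.

def stabilize_bfs(adj, root, state):
    """adj: {node: set(neighbors)}; state: {node: (parent, dist)}, possibly corrupted."""
    changed, rounds = True, 0
    while changed:
        changed = False
        new_state = {}
        for v in adj:
            if v == root:
                new_state[v] = (None, 0)          # the root repeatedly resets itself
            else:
                # adopt the neighbor that currently advertises the smallest distance
                parent = min(adj[v], key=lambda u: state[u][1])
                new_state[v] = (parent, state[parent][1] + 1)
            if new_state[v] != state[v]:
                changed = True
        state = new_state
        rounds += 1                               # includes the final quiescent round
    return state, rounds

# Example: a ring of 4 nodes with an arbitrarily corrupted initial state.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
corrupted = {0: (2, 7), 1: (3, 5), 2: (0, 9), 3: (1, 1)}
tree, rounds = stabilize_bfs(adj, root=0, state=corrupted)
print(tree, "stabilized in", rounds, "rounds")
```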

    Efficient, Superstabilizing Decentralised Optimisation for Dynamic Task Allocation Environments

    Decentralised optimisation is a key issue for multi-agent systems, and while many solution techniques have been developed, few provide support for dynamic environments that change over time, such as disaster management. Given this, in this paper we present Bounded Fast Max Sum (BFMS): a novel, dynamic, superstabilizing algorithm which provides a bounded approximate solution to certain classes of distributed constraint optimisation problems. We achieve this by eliminating dependencies in the constraint functions according to how much impact they have on the overall solution value. In more detail, we propose iGHS, which computes a maximum spanning tree on subsections of the constraint graph in order to reduce communication and computation overheads. Finally, we empirically evaluate BFMS and show that it reduces the communication and computation done by Bounded Max Sum by up to 99%, while obtaining 60-88% of the optimal utility.
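
    The dependency-elimination step can be pictured as keeping a maximum spanning tree of the constraint graph, where edge weights reflect each dependency's impact on the solution value, and dropping the rest. The sketch below is a centralized Kruskal computation over an assumed toy graph; it is not the distributed iGHS construction or the BFMS bound computation described above.

```python
# Maximum spanning tree of an impact-weighted constraint graph (Kruskal).
# The impact values and graph encoding are illustrative assumptions.

def max_spanning_tree(nodes, edges):
    """edges: list of (impact, u, v). Returns (kept, dropped)."""
    parent = {v: v for v in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    kept, dropped = [], []
    for impact, u, v in sorted(edges, reverse=True):   # heaviest dependencies first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            kept.append((impact, u, v))
        else:
            dropped.append((impact, u, v))             # eliminated dependency
    return kept, dropped

# Toy constraint graph: variables a..d, edges weighted by impact on solution value.
edges = [(5.0, "a", "b"), (3.0, "b", "c"), (2.5, "a", "c"), (1.0, "c", "d")]
kept, dropped = max_spanning_tree({"a", "b", "c", "d"}, edges)
print("kept:", kept)
print("total impact of dropped dependencies:", sum(w for w, _, _ in dropped))
```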

    A framework for proving the self-organization of dynamic systems

    This paper aims at providing a rigorous definition of self-organization, one of the most desired properties for dynamic systems (e.g., peer-to-peer systems, sensor networks, cooperative robotics, or ad-hoc networks). We characterize different classes of self-organization through liveness and safety properties that both capture information regarding the system entropy. We illustrate these classes through case studies. The first are two representative P2P overlays (CAN and Pastry); the others are specific implementations of $\Omega$ (the leader oracle) and of one-shot query abstractions for dynamic settings. Our study aims at understanding the limits and respective power of existing self-organized protocols and lays the basis for designing robust algorithms for dynamic systems.

    A Superstabilizing $\log(n)$-Approximation Algorithm for Dynamic Steiner Trees

    In this paper we design and prove correct a fully dynamic distributed algorithm for maintaining an approximate Steiner tree that connects, via a minimum-weight spanning tree, a subset of nodes of a network (referred to as Steiner members or the Steiner group). Steiner trees are good candidates to efficiently implement communication primitives such as publish/subscribe or multicast, essential building blocks for emerging networks (e.g., P2P, sensor, or ad-hoc networks). The cost of the solution returned by our algorithm is at most $\log |S|$ times the cost of an optimal solution, where $S$ is the group of members. Our algorithm improves over existing solutions in several ways. First, it tolerates the dynamism of both the group members and the network. Next, our algorithm is self-stabilizing, that is, it copes with node memory corruption. Last but not least, our algorithm is superstabilizing: while converging to a correct configuration (i.e., a Steiner tree) after a modification of the network, it keeps offering the Steiner tree service during the stabilization time to all members that have not been affected by this modification.
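
    For intuition, the sketch below shows the classical centralized metric-closure heuristic for approximate Steiner trees: connect the members by a minimum spanning tree over their pairwise shortest paths, then expand each tree edge back into a network path. The graph encoding is an assumption, and the paper's algorithm is distributed, self-stabilizing, and superstabilizing, none of which this sketch attempts to capture.

```python
# Metric-closure MST heuristic for approximate Steiner trees (centralized sketch).
import heapq

def dijkstra(adj, src):
    dist, pred = {src: 0.0}, {src: None}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, pred

def approx_steiner(adj, members):
    members = list(members)
    paths = {s: dijkstra(adj, s) for s in members}          # metric closure
    # Prim's MST on the complete graph over the members, weighted by shortest paths.
    in_tree, tree_edges = {members[0]}, []
    while len(in_tree) < len(members):
        w, u, v = min((paths[u][0][v], u, v)
                      for u in in_tree for v in members if v not in in_tree)
        in_tree.add(v)
        tree_edges.append((u, v))
    # Expand each closure edge back into an actual path in the network.
    steiner_edges = set()
    for u, v in tree_edges:
        pred = paths[u][1]
        while pred[v] is not None:
            steiner_edges.add(frozenset((v, pred[v])))
            v = pred[v]
    return sorted(tuple(sorted(e)) for e in steiner_edges)

adj = {1: [(2, 1.0), (3, 4.0)], 2: [(1, 1.0), (3, 1.0), (4, 5.0)],
       3: [(1, 4.0), (2, 1.0), (4, 1.0)], 4: [(2, 5.0), (3, 1.0)]}
print(approx_steiner(adj, members={1, 4}))
```

    In the static, centralized setting this heuristic is known to stay within a factor of 2 of the optimal Steiner tree; the paper targets the much harder dynamic, distributed setting with a $\log |S|$ guarantee.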

    Necessary and sufficient conditions for 1-adaptivity


    Self-Stabilizing TDMA Algorithms for Dynamic Wireless Ad-hoc Networks

    In dynamic wireless ad-hoc networks (DynWANs), autonomous computing devices set up a network for the communication needs of the moment. These networks require the implementation of a medium access control (MAC) layer. We consider MAC protocols for DynWANs that need to be autonomous and robust as well as have high bandwidth utilization, a high predictability degree of bandwidth allocation, and low communication delay in the presence of frequent topological changes to the communication network. Recent studies have shown that existing implementations cannot guarantee the necessary satisfaction of these timing requirements. We propose a self-stabilizing MAC algorithm for DynWANs that guarantees a short convergence period and can thereby facilitate the satisfaction of severe timing requirements such as the above. Besides the contribution on the algorithmic front, we expect that our proposal can enable quicker adoption by practitioners and faster deployment of DynWANs that are subject to changes in the network topology.
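
    The kind of schedule such a MAC layer must converge to can be illustrated by a conflict-free TDMA slot assignment, where every node's slot differs from all nodes within two hops (so its transmissions collide with neither neighbors nor their neighbors). The greedy pass below is a centralized sketch over an assumed topology and frame size; it is not the paper's self-stabilizing algorithm.

```python
# Greedy distance-2 TDMA slot assignment (centralized sketch).

def two_hop_neighbors(adj, v):
    nbrs = set(adj[v])
    for u in adj[v]:
        nbrs |= adj[u]
    nbrs.discard(v)
    return nbrs

def assign_slots(adj, frame_size):
    slot = {}
    for v in sorted(adj):                              # deterministic processing order
        taken = {slot[u] for u in two_hop_neighbors(adj, v) if u in slot}
        free = [s for s in range(frame_size) if s not in taken]
        if not free:
            raise ValueError("frame too small for this topology")
        slot[v] = free[0]                              # smallest conflict-free slot
    return slot

# A small multi-hop topology: 0-1-2-3 chain plus edge 1-3.
adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
print(assign_slots(adj, frame_size=4))
```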

    Superstabilizing, Fault-containing Multiagent Combinatorial Optimization

    Self-stabilization in distributed systems is the ability of a system to respond to transient failures by eventually reaching a legal state and maintaining it afterwards. This makes such systems particularly interesting because they can tolerate faults and are able to cope with dynamic environments. In this paper we propose the first self-stabilizing mechanism for multiagent combinatorial optimization, which stabilizes in a state corresponding to the optimal solution of the optimization problem. Our algorithm is based on dynamic programming and requires a linear number of messages to find the optimal solution in the absence of faults. We show how our algorithm can be made super-stabilizing, in the sense that, while transiting from one stable state to the next, our system preserves the assignments from the previous optimal state (similar to a "last-known-good" state) until the new optimal solution is found, without "random" changes to the variables. We provide equal bounds for the stabilization and the superstabilization time. Furthermore, we describe a general scheme for fault containment and fast response time upon low-impact failures. Multiple, isolated failures are handled effectively. To show the merits of our approach, we report on experiments with practically sized distributed meeting scheduling problems in a multiagent system.
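
    The dynamic-programming core can be illustrated on a tree-structured constraint graph: utilities propagate from the leaves to the root, and optimal assignments propagate back down. The sketch below is a centralized version of that idea with illustrative domains and utilities; the paper's contribution is making such a computation distributed, superstabilizing, and fault-containing.

```python
# Dynamic programming on a tree-structured constraint problem (centralized sketch).

def solve_tree(children, domain, node_util, edge_util, root):
    """children: {node: [child, ...]}; node_util(v, x); edge_util(v, x, c, y)."""
    best_util, best_child_val = {}, {}

    def up(v):                                         # utility phase (leaves -> root)
        for c in children.get(v, []):
            up(c)
        best_util[v], best_child_val[v] = {}, {}
        for x in domain[v]:
            total, choice = node_util(v, x), {}
            for c in children.get(v, []):
                u, y = max((edge_util(v, x, c, y) + best_util[c][y], y)
                           for y in domain[c])
                total += u
                choice[c] = y
            best_util[v][x] = total
            best_child_val[v][x] = choice

    def down(v, x, assignment):                        # value phase (root -> leaves)
        assignment[v] = x
        for c, y in best_child_val[v][x].items():
            down(c, y, assignment)

    up(root)
    x_root = max(domain[root], key=lambda x: best_util[root][x])
    assignment = {}
    down(root, x_root, assignment)
    return assignment, best_util[root][x_root]

# Toy meeting-scheduling flavour: three agents pick a time slot in {0, 1} and
# gain utility 1 for each agreeing parent-child pair.
children = {"a": ["b", "c"], "b": [], "c": []}
domain = {v: [0, 1] for v in children}
print(solve_tree(children, domain,
                 node_util=lambda v, x: 0.0,
                 edge_util=lambda v, x, c, y: 1.0 if x == y else 0.0,
                 root="a"))
```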

    Population stability: regulating size in the presence of an adversary

    We introduce a new coordination problem in distributed computing that we call the population stability problem. A system of agents, each with limited memory and communication as well as the ability to replicate and self-destruct, is subjected to attacks by a worst-case adversary that can, at a bounded rate, (1) delete agents chosen arbitrarily and (2) insert additional agents with arbitrary initial state into the system. The goal is to perpetually maintain a population whose size is within a constant factor of the target size $N$. The problem is inspired by the ability of complex biological systems, composed of a multitude of memory-limited individual cells, to maintain a stable population size in an adverse environment. Such biological mechanisms allow organisms to heal after trauma or to recover from excessive cell proliferation caused by inflammation, disease, or normal development. We present a population stability protocol in a communication model that is a synchronous variant of the population model of Angluin et al. In each round, pairs of agents selected at random meet and exchange messages, where at least a constant fraction of agents is matched in each round. Our protocol uses three-bit messages and $\omega(\log^2 N)$ states per agent. We emphasize that our protocol can handle an adversary that can both insert and delete agents, a setting in which existing approximate counting techniques do not seem to apply. The protocol relies on a novel coloring strategy in which the population size is encoded in the variance of the distribution of colors. Individual agents can locally obtain a weak estimate of the population size by sampling from the distribution and make individual decisions that robustly maintain a stable global population size.
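
    The sketch below simulates only the communication and adversary model described above (random pairwise matchings each round, plus bounded insertions and deletions of agents); the interaction rule is a deliberately inert placeholder, not the paper's variance-based coloring protocol, and all names and parameters are illustrative assumptions.

```python
# Synchronous random-matching population model with a bounded adversary (sketch only).
import random

def simulate(initial_agents, interact, adversary_rate, rounds, rng=random.Random(0)):
    agents = list(initial_agents)                      # each agent is a small state dict
    for _ in range(rounds):
        rng.shuffle(agents)
        half = len(agents) // 2
        next_agents = agents[2 * half:]                # unmatched agent (if any) carries over
        for i in range(half):                          # random pairwise matching
            a, b = agents[2 * i], agents[2 * i + 1]
            next_agents.extend(interact(a, b, rng))    # agents may replicate or self-destruct
        # Bounded adversary: delete and insert up to `adversary_rate` agents per round.
        for _ in range(rng.randint(0, adversary_rate)):
            if next_agents:
                next_agents.pop(rng.randrange(len(next_agents)))
        for _ in range(rng.randint(0, adversary_rate)):
            next_agents.append({"color": rng.randrange(8)})   # arbitrary injected state
        agents = next_agents
    return agents

# Placeholder interaction: exchange nothing and keep both agents. A real protocol
# would use the short messages to decide when to replicate or self-destruct so
# that the population size stays near the target despite the adversary.
def interact(a, b, rng):
    return [a, b]

population = simulate([{"color": i % 8} for i in range(100)], interact,
                      adversary_rate=2, rounds=50)
print("population size after 50 rounds:", len(population))
```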