
    Self-Stabilizing Token Distribution with Constant-Space for Trees

    Self-stabilizing and silent distributed algorithms for token distribution in rooted tree networks are given. Initially, each process of the network holds at most ℓ tokens. Our goal is to distribute the tokens over the whole network so that every process holds exactly k tokens. In the initial configuration, the total number of tokens in the network may differ from nk, where n is the number of processes in the network, so the root process is given the ability to create a new token or to remove a token from the network. We aim to minimize the convergence time, the number of token moves, and the space complexity. A self-stabilizing token distribution algorithm is given that converges within O(nℓ) asynchronous rounds and needs Θ(nhε) redundant (or unnecessary) token moves, where ε = min(k, ℓ − k) and h is the height of the tree network. Two novel ideas to reduce the number of redundant token moves are presented: one reduces the number of redundant token moves to O(nh) without any additional cost, while the other reduces it to O(n) but increases the convergence time to O(nhℓ). All of the given algorithms use only constant memory at each process and in each link register.
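
    The following centralized sketch (not the paper's distributed, self-stabilizing algorithm; the function and variable names are illustrative) computes the quantities such an algorithm must realize: for each tree edge, the net number of tokens that must cross it, and how many tokens the root must create or destroy, so that every process ends up with exactly k tokens.

```python
# Centralized sketch (not the paper's distributed, self-stabilizing algorithm; the
# names are illustrative): for a rooted tree where node v initially holds tokens[v],
# compute for each tree edge the net number of tokens that must cross it, and how
# many tokens the root must create (negative: destroy), so that every node ends up
# holding exactly k tokens.

def token_flows(children, tokens, root, k):
    flows = {}                                  # flows[v]: tokens v's parent must send down into
                                                # v's subtree (negative: v's subtree sends up)
    def demand(v):
        d = k - tokens[v] + sum(demand(c) for c in children.get(v, []))
        if v != root:
            flows[v] = d
        return d

    root_creates = demand(root)                 # tokens the root must create or destroy
    return flows, root_creates

children = {"r": ["a", "b"], "a": ["c"]}        # example rooted tree
tokens = {"r": 0, "a": 5, "b": 0, "c": 1}
print(token_flows(children, tokens, root="r", k=2))   # ({'c': 1, 'a': -2, 'b': 2}, 2)
```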

    Silent Self-stabilizing BFS Tree Algorithms Revised

    In this paper, we revisit two fundamental results of the self-stabilizing literature on silent BFS spanning tree construction: the algorithm of Dolev et al. and the algorithm of Huang and Chen. More precisely, we propose, in the composite atomicity model, three straightforward adaptations inspired by those algorithms. We then present an in-depth study of these three algorithms. Our results concern both correctness (convergence and closure, assuming a distributed unfair daemon) and complexity (analysis of the stabilization time in terms of rounds and steps).
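
    As background, the sketch below is a sequential, round-by-round simulation of the classic self-stabilizing BFS rule that such constructions build on; it is an illustration, not the paper's composite-atomicity adaptations. The root fixes its distance to 0, and every other process repeatedly sets its distance to one plus the minimum distance among its neighbours, adopting a minimizing neighbour as its parent.

```python
# Classic self-stabilizing BFS rule, simulated sequentially (illustration only, not the
# paper's exact algorithms): root fixes dist = 0; every other process repeatedly sets
# dist = 1 + min(dist of neighbours) and adopts a minimizing neighbour as its parent.
# Starting from arbitrary dist values, this converges to a BFS spanning tree.

def bfs_stabilize(adj, root, dist):
    parent = {v: None for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:                                   # one "round": every process acts once
            if v == root:
                new_d, new_p = 0, None
            else:
                new_p = min(adj[v], key=lambda u: dist[u])
                new_d = dist[new_p] + 1
            if (new_d, new_p) != (dist[v], parent[v]):
                dist[v], parent[v], changed = new_d, new_p, True
    return dist, parent

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(bfs_stabilize(adj, root=0, dist={0: 7, 1: 3, 2: 9, 3: 0}))   # arbitrary initial state
```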

    A self-stabilizing distributed maximum flow algorithm

    This thesis presents a self-stabilizing distributed maximum flow algorithm for a network G = (V, E), where V is the set of nodes in the network and E is the set of edges. The algorithm has two phases: a reset phase and a preflow-push phase. Fault tolerance is achieved with a self-stabilizing paradigm that embeds non-masking fault tolerance in repetitions within the algorithm. Two techniques are used in the algorithm: counter flushing is used to synchronize the network, while local checking and local correction are used to compute the maximum flow of the network. The algorithm handles catastrophic faults by weeding out false information in the network. A network can start in any arbitrary global state and will recover to a legal global state in a finite number of steps. Lastly, the network is guaranteed to restore a legal configuration after any catastrophic fault.
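
    For reference, the sketch below shows the classical sequential preflow-push (push-relabel) computation that a preflow-push phase is based on; the thesis's distributed, self-stabilizing version with counter flushing and local checking/correction is not reproduced here, and the data layout is an assumption made for the example.

```python
# Classical sequential preflow-push (push-relabel) maximum flow, shown only to
# illustrate the operations behind a "preflow-push phase"; not the thesis's
# distributed, self-stabilizing version. cap[u][v] is the capacity of edge (u, v).

def max_flow(cap, s, t):
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    res = {u: dict(cap.get(u, {})) for u in nodes}          # residual capacities
    for u in cap:
        for v in cap[u]:
            res[v].setdefault(u, 0)                         # reverse residual edges
    height = {v: 0 for v in nodes}; height[s] = len(nodes)
    excess = {v: 0 for v in nodes}
    for v, c in list(res[s].items()):                       # saturate all edges out of s
        res[s][v] = 0; res[v][s] += c; excess[v] += c; excess[s] -= c
    active = [v for v in nodes if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:
            pushed = False
            for v, c in res[u].items():
                if c > 0 and height[u] == height[v] + 1:    # push along an admissible edge
                    d = min(excess[u], c)
                    res[u][v] -= d; res[v][u] += d
                    excess[u] -= d; excess[v] += d
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:                                   # relabel: lift u above its lowest neighbour
                height[u] = 1 + min(height[v] for v, c in res[u].items() if c > 0)
    return excess[t]

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(cap, "s", "t"))   # 5
```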

    Self-stabilizing routing protocols

    In systems made up of processors and links connecting them, the global state of the system is defined by the local variables of the individual processors. Each global state can be classified as either legal or illegal. A self-stabilizing system is one that forces the system from an illegal state to a legal global state without external interference, using a finite number of steps. This thesis concentrates on applying self-stabilization to routing problems, in particular path identification, connectivity, and the methods involved in destination-based routing. Traditional methods for creating rooted paths to multiple destinations in a computer network involve building a spanning tree and broadcasting information on the tree to be picked up by the individual nodes of the tree. The information for the creation of the tree is all sourced at the root, and the individual nodes update their information from this centralized source. The self-stabilizing model allows the decisions involved in building a tree, and the associated message checking, to occur automatically, locally, and, more importantly, in contrast to traditional networks, asynchronously. Tree and path creation and message passing occur between a node and its immediate neighbors, and the tree or path is built from this communicated data. In addition, the self-stabilizing model eliminates the initialization required by traditional networks: given any arbitrary initial state, the system (a given network) is guaranteed to stabilize to a legal global state, which in the case of a broadcast network is a minimal spanning tree rooted at a source.
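
    A minimal sketch of the destination-based routing described above (illustrative only, not the thesis's protocol): once a spanning tree rooted at the destination has stabilized, every node forwards to its parent, so a route is simply the chain of parent pointers.

```python
# Destination-based routing over a stabilized tree (illustrative sketch, not the
# thesis's self-stabilizing protocol): with a spanning tree rooted at the destination,
# every node forwards to its parent, and a route is just the parent chain.

def route(parent, src, dest):
    path = [src]
    while path[-1] != dest:
        path.append(parent[path[-1]])        # forward one hop toward the destination
    return path

parent = {"d": None, "a": "d", "b": "a", "c": "a"}     # tree rooted at destination "d"
print(route(parent, "b", "d"))                          # ['b', 'a', 'd']
```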

    Optimization in a Self-Stabilizing Service Discovery Framework for Large Scale Systems

    The ability to find and obtain services is a key requirement in the development of large-scale distributed systems. We consider dynamic and unstable environments, namely Peer-to-Peer (P2P) systems. In previous work, we designed a service discovery solution called the Distributed Lexicographic Placement Table (DLPT), based on a hierarchical overlay structure. A self-stabilizing version was given using the Propagation of Information with Feedback (PIF) paradigm. In this paper, we introduce the self-stabilizing COPIF (Collaborative PIF) scheme. An algorithm is provided together with its correctness proof. We use this approach to improve a distributed P2P framework designed for service discovery. Experimental results showing significant efficiency gains are presented.
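
    The sketch below illustrates a single PIF wave on a rooted tree, the building block mentioned above; it is a plain recursive simulation with hypothetical callback names, not the self-stabilizing COPIF scheme itself.

```python
# Plain recursive simulation of one Propagation of Information with Feedback (PIF)
# wave on a rooted tree; callback names are hypothetical and this is not the
# self-stabilizing COPIF scheme. The broadcast phase carries information down the
# tree, and the feedback phase returns acknowledgements (or values) to the root.

def pif_wave(children, root, on_broadcast, on_feedback):
    def visit(v):
        on_broadcast(v)                                     # broadcast phase
        results = [visit(c) for c in children.get(v, [])]
        return on_feedback(v, results)                      # feedback phase

    return visit(root)                                       # returns once the wave has completed

children = {"r": ["a", "b"], "a": ["c", "d"]}
total = pif_wave(children, "r",
                 on_broadcast=lambda v: print("broadcast reaches", v),
                 on_feedback=lambda v, rs: 1 + sum(rs))      # e.g. count processes in the feedback
print("feedback collected at root:", total)                  # 5
```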

    Maximum Matching for Anonymous Trees with Constant Space per Process

    We give a silent self-stabilizing protocol for computing a maximum matching in an anonymous network with a tree topology. The round complexity of our protocol is O(diam), where diam is the diameter of the network, and the step complexity is O(n·diam), where n is the number of processes in the network. The working space complexity is O(1) per process, although the output necessarily takes O(log δ) space per process, where δ is the degree of that process. To implement parent pointers in constant space, regardless of degree, we use the cyclic Abelian group Z_7.
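
    The sketch below shows one way such a constant-space encoding can work; it is an assumption made for illustration and not necessarily the paper's exact scheme: label every process with its depth modulo 7, so that a process's parent is the unique neighbour whose label is one less (mod 7), while its children carry labels one greater (mod 7).

```python
# Illustrative assumption (not necessarily the paper's exact scheme) of how a parent
# "pointer" can fit in constant space via Z_7: label every process with its depth
# modulo 7. A process's parent is then the unique neighbour whose label equals its
# own label minus 1 (mod 7); its children carry label plus 1 (mod 7), and since
# +1 != -1 (mod 7) the two can never be confused, whatever the degree of the process.

def z7_labels(children, root):
    label, stack = {root: 0}, [root]
    while stack:
        u = stack.pop()
        for c in children.get(u, []):
            label[c] = (label[u] + 1) % 7               # child = parent's label + 1 (mod 7)
            stack.append(c)
    return label

def parent_of(v, neighbours, label):
    # recover v's parent using only the constant-size labels
    candidates = [u for u in neighbours[v] if label[u] == (label[v] - 1) % 7]
    return candidates[0] if candidates else None        # the root has no such neighbour

children = {"r": ["a", "b"], "a": ["c"]}
neighbours = {"r": ["a", "b"], "a": ["r", "c"], "b": ["r"], "c": ["a"]}
label = z7_labels(children, "r")
print(label, parent_of("c", neighbours, label))         # parent of "c" is "a"
```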

    Competitive Self-Stabilizing k-Clustering

    A k-cluster of a graph is a connected non-empty subgraph C of radius at most k, i.e., all members of C are within distance k of a particular node of C, called the clusterhead of C. A k-clustering of a graph is a partitioning of the graph into distinct k-clusters. Finding a minimum cardinality k-clustering is known to be NP-hard. In this paper, we propose a silent self-stabilizing asynchronous distributed algorithm for constructing a k-clustering of any connected network with unique IDs. Our algorithm stabilizes in O(n) rounds, using O(log n) space per process, where n is the number of processes. In the general case, our algorithm constructs O(n/k) k-clusters. If the network is a Unit Disk Graph (UDG), then our algorithm is 7.2552k + O(1)-competitive, that is, the number of k-clusters constructed by the algorithm is at most 7.2552k + O(1) times the minimum possible number of k-clusters in any k-clustering of the same network. More generally, if the network is an Approximate Disk Graph (ADG) with approximation ratio λ, then our algorithm is 7.2552λ²k + O(λ)-competitive. Our solution is based on the self-stabilizing construction of a data structure called the MIS tree, a spanning tree of the network whose processes at even levels form a maximal independent set of the network. The MIS tree construction is the time bottleneck of our k-clustering algorithm, as it takes Θ(n) rounds in the worst case, while the remainder of the algorithm takes O(D) rounds, where D is the diameter of the network. We would like to improve that time to O(D), but we show that our distributed MIS tree construction is a P-complete problem.
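
    The sketch below illustrates the k-clustering property itself rather than the paper's distributed algorithm: given candidate clusterheads, a multi-source BFS assigns every node to a nearest head and checks that every node lies within distance k of its head.

```python
# Sketch of the k-clustering property (not the paper's self-stabilizing algorithm):
# given candidate clusterheads, assign every node to a nearest head by multi-source
# BFS and check that every node lies within distance k of its head.

from collections import deque

def check_k_clustering(adj, heads, k):
    dist = {h: 0 for h in heads}
    head_of = {h: h for h in heads}
    q = deque(heads)
    while q:                                             # multi-source BFS from the clusterheads
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                head_of[v] = head_of[u]
                q.append(v)
    ok = len(dist) == len(adj) and all(d <= k for d in dist.values())
    return ok, head_of                                   # ok: every node within distance k of a head

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a path on five nodes
print(check_k_clustering(adj, heads=[1, 4], k=1))        # a valid 1-clustering
```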

    Improving Connectionist Energy Minimization

    Symmetric networks designed for energy minimization, such as Boltzmann machines and Hopfield nets, are frequently investigated for use in optimization, constraint satisfaction, and approximation of NP-hard problems. Nevertheless, finding a global solution (i.e., a global minimum of the energy function) is not guaranteed, and even a local solution may take an exponential number of steps. We propose an improvement to the standard local activation function used for such networks. The improved algorithm guarantees that a global minimum is found in linear time for tree-like subnetworks. The algorithm, called activate, is uniform and does not assume that the network is tree-like. It can identify tree-like subnetworks even in cyclic topologies (arbitrary networks) and avoid local minima along these trees. For acyclic networks, the algorithm is guaranteed to converge to a global minimum from any initial state of the system (self-stabilization) and remains correct under various types of schedulers. On the negative side, we show that in the presence of cycles, no uniform algorithm exists that guarantees optimality even under a sequential asynchronous scheduler. An asynchronous scheduler can activate only one unit at a time, while a synchronous scheduler can activate any number of units in a single time step. In addition, no uniform algorithm exists to optimize even acyclic networks when the scheduler is synchronous. Finally, we show how the algorithm can be improved using the cycle-cutset scheme. The general algorithm, called activate-with-cutset, improves over activate and has performance guarantees related to the size of the network's cycle cutset.
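
    To make the baseline concrete, the sketch below implements the standard asynchronous local activation rule for a Hopfield-style symmetric network together with its energy function; this is the rule the paper improves upon, not the proposed activate algorithm, and the small weight matrix is made up for the example.

```python
# Standard asynchronous local activation rule for a Hopfield-style symmetric network
# (the baseline the paper improves on, not the proposed "activate" algorithm).
# Flipping a unit to the sign of its weighted input never increases the energy
# E = -1/2 * sum_ij w[i][j]*s[i]*s[j] - sum_i b[i]*s[i].

def energy(w, b, s):
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n)) \
           - sum(b[i] * s[i] for i in range(n))

def run_hopfield(w, b, s, max_sweeps=100):
    n = len(s)
    for _ in range(max_sweeps):                  # asynchronous: one unit updated at a time
        changed = False
        for i in range(n):
            field = sum(w[i][j] * s[j] for j in range(n)) + b[i]
            new = 1 if field >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                          # no unit wants to flip: a local minimum of E
            return s
    return s

w = [[0, 1, -1], [1, 0, 1], [-1, 1, 0]]          # symmetric weights, zero diagonal (made up)
b = [0, 0, 0]
s = run_hopfield(w, b, [1, -1, 1])
print(s, energy(w, b, s))                        # reaches a local minimum (here also a global one)
```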