
    Exact average message complexity values for distributed election on bidirectional rings of processors

    Get PDF
    Consider a distributed system of n processors arranged on a ring. All processors are labeled with distinct identity numbers, but are otherwise identical. In this paper, we make use of combinatorial enumeration methods on permutations and derive one and the same exact asymptotic value, (1/2)nH_n + O(n), for the expected number of messages in both the probabilistic and the deterministic bidirectional variants of the Chang-Roberts distributed election algorithm. This confirms the result of Bodlaender and van Leeuwen (1986) that distributed leader finding is indeed strictly more efficient on bidirectional rings of processors than on unidirectional ones.
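
    As a point of reference for the value above, the sketch below simulates the classical unidirectional Chang-Roberts election (each processor forwards an incoming identity only if it is larger than its own, discards smaller ones, and becomes leader when its own identity returns) and counts messages, so that the known unidirectional average of nH_n messages can be checked empirically. This is an illustrative harness, not the authors' analysis; the ring size and number of trials are arbitrary.

```python
import random
from statistics import mean

def chang_roberts_messages(ids):
    """Count messages of one unidirectional Chang-Roberts election.

    Each process launches its own identity clockwise; a process forwards
    an incoming identity only if it is larger than its own and discards
    it otherwise; a process whose own identity returns becomes leader.
    Leader-announcement messages are not counted here.
    """
    n = len(ids)
    messages = 0
    for start in range(n):
        token = ids[start]
        pos = (start + 1) % n      # first hop of this identity
        messages += 1
        while ids[pos] < token:    # smaller processes keep forwarding it
            pos = (pos + 1) % n
            messages += 1
            if pos == start:       # the identity came back: leader found
                break
    return messages

if __name__ == "__main__":
    n = 64
    trials = [chang_roberts_messages(random.sample(range(10 * n), n))
              for _ in range(2000)]
    h_n = sum(1.0 / k for k in range(1, n + 1))
    print(f"empirical average: {mean(trials):.1f}")
    print(f"n * H_n          : {n * h_n:.1f}")
```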

    Average number of messages for distributed leader-finding in rings of processors

    Get PDF
    Consider a distributed system of n processors arranged on a ring. All processors are labeled with distinct identity numbers, but are otherwise identical. In this paper, we make use of combinatorial enumeration methods on permutations and derive one and the same exact asymptotic value, (1/2)nH_n + O(n), for the expected number of messages in both the probabilistic and the deterministic bidirectional variants of the Chang-Roberts distributed election algorithm. This confirms the result of Bodlaender and van Leeuwen (1986) that distributed leader finding is indeed strictly more efficient on bidirectional rings of processors than on unidirectional ones.

    Leader election in synchronous networks

    Get PDF
    The worst-case, best-case, and average number of messages and the running time of leader election algorithms for different distributed systems are analyzed. Among others, the known characterizations of the expected number of messages of the LCR algorithm and of the worst-case number of messages of the Hirschberg-Sinclair algorithm are improved.
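
    For context, the classical characterizations that this kind of analysis refines can be summarized as follows; these are the standard textbook bounds, not the improved constants obtained in the work above.

```latex
% Standard bounds for leader election on a ring of n processors
% (H_n denotes the n-th harmonic number).
\begin{align*}
\text{LCR (unidirectional), worst case:}   &\quad \tfrac{n(n+1)}{2} = \Theta(n^2)\ \text{messages},\\
\text{LCR (unidirectional), average case:} &\quad nH_n + O(n) \approx n\ln n\ \text{messages},
  \qquad H_n = \sum_{k=1}^{n}\tfrac{1}{k},\\
\text{Hirschberg--Sinclair (bidirectional), worst case:} &\quad O(n\log n)\ \text{messages}.
\end{align*}
```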

    Asynchronous neighborhood task synchronization

    Full text link
    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to recover automatically from a faulty state. By Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems. Any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization of the phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve it in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
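
    To make the two requirements quoted in the abstract concrete, the sketch below checks a single global configuration against them: neighboring processes must be executing the same task instance, and neighbors executing steps simultaneously must be executing different steps. The per-process encoding as an (instance, step) pair and all names are hypothetical choices for illustration; they are not taken from the thesis.

```python
from typing import Dict, Iterable, Tuple

# Hypothetical encoding: each process is mapped to (instance, step),
# i.e., which task instance it is executing and which step of that task.
Config = Dict[str, Tuple[int, int]]
Edges = Iterable[Tuple[str, str]]

def nts_requirements_hold(config: Config, edges: Edges) -> bool:
    """Check one configuration against the two NTS requirements.

    1. Every neighborhood is synchronized: neighboring processes
       execute the same instance of the task.
    2. Neighbors may execute steps simultaneously only if the steps
       are different.
    """
    for u, v in edges:
        inst_u, step_u = config[u]
        inst_v, step_v = config[v]
        if inst_u != inst_v:   # different task instances in a neighborhood
            return False
        if step_u == step_v:   # neighbors executing the same step
            return False
    return True

# Example: a three-process path a - b - c, all on task instance 4,
# with every pair of neighbors on different steps.
example = {"a": (4, 0), "b": (4, 1), "c": (4, 2)}
print(nts_requirements_hold(example, [("a", "b"), ("b", "c")]))  # True
```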

    Guessing games and distributed computations in synchronous networks

    Full text link

    Two algorithms for leader election and network size estimation in mobile ad hoc networks

    Get PDF
    We develop two algorithms for important problems in mobile ad hoc networks (MANETs). A MANET is a collection of mobile processors (nodes) which communicate via message passing over wireless links. Each node can communicate directly with other nodes within a specified transmission radius; other communication is accomplished via message relay. Communication links may go up and down in a MANET (as nodes move toward or away from each other); thus, the MANET can consist of multiple connected components, and connected components can split and merge over time. We first present a deterministic leader election algorithm for asynchronous MANETs along with a correctness proof for it. Our work involves substantial modifications of an existing algorithm and its proof, and we adapt the existing algorithm to the asynchronous environment. Our algorithm's running time and message complexity compare favorably with existing algorithms for leader election in MANETs. Second, many algorithms for MANETs require or can benefit from knowledge of the size of the network in terms of the number of processors. As such, we present an algorithm to approximately determine the size of a MANET. While the algorithm's approximations of network size are only rough ones, the algorithm has the important qualities of requiring little communication overhead and being tolerant of link failures.

    Formal Verification of Distributed Systems

    Get PDF
    Fokkink, W.J. [Promotor]

    Performance Evaluation of Self-stabilizing Algorithms by Probabilistic Model Checking

    Get PDF
    A self-stabilizing protocol is one that, starting from any arbitrary initial state, recovers to legitimate states in a finite number of steps and, once it stabilizes to the set of legitimate states, remains there unless it is perturbed by transient faults. The traditional methods for performance evaluation of a self-stabilizing algorithm are usually based on the analysis of worst-case computational complexity. Another method commonly used in evaluating these algorithms is simulation, which assumes the system starts from a particular initial state. Here, it is argued that the traditional methods have shortcomings, do not give enough insight into the behavior of the system, and do not provide a sound basis for comparison. We propose a novel method for the evaluation of self-stabilizing algorithms, based on probabilistic model checking and computation of the expected number of recovery steps. We run experiments on several case studies, and the results indicate that we can gain insight into the faults and their structure in the protocol. Next, we explain the difficulty of designing a self-stabilizing algorithm for a system and show that it is impossible to do so for some classes of protocols. This has led to relaxations of the definition of self-stabilization, one of which is weak-stabilization. A weak-stabilizing protocol only ensures the existence of a recovery path from an arbitrary initial configuration; thus, some paths may contain connected components or cycles. Since a weak-stabilizing algorithm may get stuck in such connected components forever, weak-stabilizing protocols cannot be evaluated by the traditional and existing methods. We calculate the expected number of recovery steps for evaluating weak-stabilization. However, since this does not give enough intuition about the structure of faults, we also apply a graph-theoretic formula for estimating a weak-stabilizing algorithm's performance, based on the number of cycles and their reachability. Based on the observations made through the performance evaluation of these protocols, we suggest algorithms, called state encoding, for improving the performance of the protocols. State encoding works by changing the bit mapping of the states of the system, with the aim of making the states with faster recovery more probable. There are three such algorithms: one based on betweenness centrality, a measure of the centrality of a node within a graph; one based on a feedback arc set, a set of arcs whose removal makes a graph acyclic; and a third based on the length of the shortest recovery path of each state. The other problem investigated here is state space explosion in model checking. Like traditional model checking, probabilistic model checking suffers from state space explosion, i.e., the number of states grows exponentially in the number of components of the distributed system. Abstraction methods, which are described briefly here, are designed to combat this problem. We argue that they are not efficient enough and that there is still no sufficient abstraction method that works for systems with an arbitrary number of processes. We also propose a new approach for the evaluation of an abstraction function. Then, based on the intuition gained, a new abstraction algorithm is proposed that is exclusively designed for the verification of reachability properties. After running experiments on a case study, we compare the results of our algorithm with those obtained by existing methods. The results support our claim that our method is more efficient and precise.
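
    The central metric used above, the expected number of recovery steps, is the expected hitting time of the set of legitimate states in the Markov chain induced by the protocol and the scheduler. As a minimal illustration of how that quantity is obtained, the sketch below solves the standard linear system for hitting times on an explicitly given transition matrix (in practice a probabilistic model checker such as PRISM would compute this on the model); the four-state chain is made up for the example and is not one of the case studies.

```python
import numpy as np

def expected_recovery_steps(P, legitimate):
    """Expected number of steps to reach the set of legitimate states.

    P is the transition matrix of the Markov chain induced by the protocol
    and the scheduler; `legitimate` is the set of legitimate state indices.
    For the transient states T, the hitting times h satisfy
        (I - P[T, T]) h = 1,
    and h = 0 on legitimate states.
    """
    n = P.shape[0]
    transient = [i for i in range(n) if i not in legitimate]
    A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
    h = np.zeros(n)
    h[transient] = np.linalg.solve(A, np.ones(len(transient)))
    return h

# Made-up 4-state chain; state 0 is the (here absorbing) legitimate state.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.3, 0.2, 0.0],
    [0.2, 0.3, 0.3, 0.2],
    [0.0, 0.4, 0.3, 0.3],
])
print(expected_recovery_steps(P, legitimate={0}))
```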

    Notes on Theory of Distributed Systems

    Full text link
    Notes for the Yale course CPSC 465/565 Theory of Distributed Systems