
    Self-stabilizing protocol for anonymous oriented bi-directional rings under unfair distributed schedulers with a leader

    We propose a self-stabilizing protocol for anonymous oriented bi-directional rings of any size under unfair distributed schedulers with a leader. The protocol is randomized and self-stabilizing, meaning that starting from an arbitrary configuration it converges (with probability 1) in finite time to a legitimate configuration (i.e., global system state) without the need for an explicit exception handler or backward recovery. A transient fault may throw the system into an illegitimate configuration, but the system autonomously resumes a legitimate configuration by treating the current illegitimate configuration as an initial configuration. A self-stabilizing system thus tolerates any kind and any finite number of transient faults. The protocol can be used to implement distributed mutual exclusion under an unfair scheduler in any ring topology network. Keywords: self-stabilizing protocol, anonymous oriented bi-directional ring, unfair distributed schedulers, ring topology network, non-uniform and anonymous network, self-stabilization, fault tolerance, legitimate configuration
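    The abstract does not spell out the protocol's rules, so as a purely illustrative sketch of ring self-stabilization under an arbitrary (central) daemon, the following Python snippet simulates Dijkstra's classic K-state token ring with a distinguished leader; it is a stand-in for the general idea of converging from an arbitrary configuration to a single privilege, not the authors' randomized protocol.

        import random

        # Dijkstra's K-state token ring with a leader (process 0): a process is
        # privileged (holds the token) when its guard is true, and the daemon may
        # activate any privileged process; here the daemon choice is random.
        def step(x, K):
            n = len(x)
            privileged = ([0] if x[0] == x[-1] else []) + \
                         [i for i in range(1, n) if x[i] != x[i - 1]]
            i = random.choice(privileged)            # stand-in for an arbitrary daemon
            x[i] = (x[0] + 1) % K if i == 0 else x[i - 1]
            return len(privileged)

        n, K = 5, 7                                  # K >= n suffices for stabilization
        x = [random.randrange(K) for _ in range(n)]  # arbitrary, possibly corrupted, start
        moves = 0
        while step(x, K) > 1:                        # legitimate: exactly one privilege
            moves += 1
        print(f"converged to a single privilege after {moves} moves")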

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing a self-stabilizing system is to be able to recover automatically from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems. Any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization of phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
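    As a minimal sketch of the two requirements quoted above (neighbors may execute steps simultaneously only if the steps differ, and every neighborhood agrees on the task instance), the following hypothetical snapshot check is ours, not the thesis' algorithm; the graph and state representation are assumptions.

        # state maps process id -> (task instance, step currently being executed);
        # edges is the symmetric neighbor relation of the communication graph.
        def nts_consistent(edges, state):
            for u, v in edges:
                inst_u, step_u = state[u]
                inst_v, step_v = state[v]
                if inst_u != inst_v:   # neighborhood not synchronized on the instance
                    return False
                if step_u == step_v:   # neighbors executing the same step simultaneously
                    return False
            return True

        # Example on a three-process chain (illustrative values only):
        print(nts_consistent([(0, 1), (1, 2)], {0: (4, "a"), 1: (4, "b"), 2: (4, "a")}))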

    Separation of Circulating Tokens

    Self-stabilizing distributed control is often modeled by token abstractions. A system with a single token may implement mutual exclusion; a system with multiple tokens may ensure that immediate neighbors do not simultaneously enjoy a privilege. For a cyber-physical system, tokens may represent physical objects whose movement is controlled. The problem studied in this paper is to ensure that a synchronous system with m circulating tokens has at least d distance between tokens. This problem is first considered in a ring where d is given whilst m and the ring size n are unknown. The protocol solving this problem can be uniform, with all processes running the same program, or it can be non-uniform, with some processes acting only as token relays. The protocol for this first problem is simple, and can be expressed with Petri net formalism. A second problem is to maximize d when m is given, and n is unknown. For the second problem, the paper presents a non-uniform protocol with a single corrective process. Comment: 22 pages, 7 figures, epsf and pstricks in LaTeX
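    As an illustration only (this is not the paper's protocol), the sketch below models a synchronous ring in which each token advances clockwise only when the gap to the token ahead exceeds d; that naive rule preserves a separation of at least d once it holds, though unlike the paper's protocols it does not establish it from every initial placement.

        # n is the ring size, pos the (distinct) token positions, d the required distance.
        def sync_round(pos, n, d):
            tokens = sorted(pos)
            gaps = [((tokens[(k + 1) % len(tokens)] - p) % n) or n
                    for k, p in enumerate(tokens)]        # clockwise gap to next token
            return [(p + 1) % n if g > d else p for p, g in zip(tokens, gaps)]

        def separated(pos, n, d):
            tokens = sorted(pos)
            return all((((tokens[(k + 1) % len(tokens)] - p) % n) or n) >= d
                       for k, p in enumerate(tokens))

        pos = [0, 3, 7]                                   # m = 3 tokens, ring of size n = 12
        for _ in range(20):
            pos = sync_round(pos, 12, 3)
        print(pos, separated(pos, 12, 3))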

    A Taxonomy of Daemons in Self-stabilization

    We survey existing scheduling hypotheses made in the literature in self-stabilization, commonly referred to under the notion of daemon. We show that four main characteristics (distribution, fairness, boundedness, and enabledness) are enough to encapsulate the various differences presented in existing work. Our naming scheme makes it easy to compare daemons of particular classes, and to extend existing possibility or impossibility results to new daemons. We further examine existing daemon transformer schemes and provide the exact transformed characteristics of those transformers in our taxonomy. Comment: 26 pages
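    As a small, hypothetical encoding of the survey's four characteristics (the field names mirror the abstract; the example value sets are ours), one might represent a daemon class as a record and instantiate, say, the unfair distributed daemon:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass(frozen=True)
        class Daemon:
            distribution: str           # e.g. "central", "locally central", "distributed"
            fairness: str               # e.g. "unfair", "weakly fair", "strongly fair"
            boundedness: Optional[int]  # k for a k-bounded daemon, None if unbounded
            enabledness: str            # illustrative placeholder for the fourth axis

        # The unfair distributed daemon of the first result above, in this encoding:
        print(Daemon(distribution="distributed", fairness="unfair",
                     boundedness=None, enabledness="unconstrained"))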

    On the Limits and Practice of Automatically Designing Self-Stabilization

    A protocol is said to be self-stabilizing when the distributed system executing it is guaranteed to recover from any fault that does not cause permanent damage. Designing such protocols is hard since they must recover from all possible states; we therefore investigate how feasible it is to synthesize them automatically. We show that synthesizing stabilization on a fixed topology is NP-complete in the number of system states. When a solution is found, we further show that verifying its correctness on a general topology (with any number of processes) is undecidable, even for very simple unidirectional rings. Despite these negative results, we develop an algorithm to synthesize a self-stabilizing protocol given its desired topology, legitimate states, and behavior. By analogy to shadow puppetry, where a puppeteer may design a complex puppet to cast a desired shadow, a protocol may need to be designed in a complex way that does not even resemble its specification. Our shadow/puppet synthesis algorithm addresses this concern and, using a complete backtracking search, has automatically designed 4 new self-stabilizing protocols with minimal process space requirements: 2-state maximal matching on bidirectional rings, 5-state token passing on unidirectional rings, 3-state token passing on bidirectional chains, and 4-state orientation on daisy chains.
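    As a hedged companion to the complexity discussion (this is brute-force verification on a fixed topology, not the paper's shadow/puppet synthesis), the sketch below checks stabilization of a deterministic protocol by exhaustive state exploration: under any daemon, the protocol stabilizes iff no illegitimate configuration is deadlocked and the illegitimate configurations contain no cycle. Dijkstra's K-state ring serves as the known-good test subject.

        from itertools import product

        def enabled(x, K):                       # privileged processes of Dijkstra's ring
            n = len(x)
            return ([0] if x[0] == x[-1] else []) + \
                   [i for i in range(1, n) if x[i] != x[i - 1]]

        def move(x, i, K):                       # apply process i's action
            y = list(x)
            y[i] = (y[0] + 1) % K if i == 0 else y[i - 1]
            return tuple(y)

        def stabilizes(n, K):
            legit = lambda x: len(enabled(x, K)) == 1
            illegit = [x for x in product(range(K), repeat=n) if not legit(x)]
            colour = {}                          # DFS colouring: 0 = on stack, 1 = done
            def on_cycle(x):
                if colour.get(x) == 0:
                    return True
                if colour.get(x) == 1:
                    return False
                colour[x] = 0
                for i in enabled(x, K):
                    y = move(x, i, K)
                    if not legit(y) and on_cycle(y):
                        return True
                colour[x] = 1
                return False
            return (all(enabled(x, K) for x in illegit)        # no illegitimate deadlock
                    and not any(on_cycle(x) for x in illegit))  # no illegitimate cycle

        print(stabilizes(n=3, K=4))              # expected: True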

    Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)

    We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosopher problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. In this paper we consider liveness under arbitrary (including unfair) schedulers, which is often considered a desirable property in the literature of self-stabilising systems. We introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method is incremental and exploits both Angluin-style L*-learning and SAT-solvers. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosopher Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully-automatic method that can prove liveness for randomised protocols. Comment: Full version of the CAV'16 paper
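    As a grounded illustration of one of the benchmarked protocols (the Israeli-Jalfon token protocol as it is commonly presented, not the paper's verification method), the snippet below simulates it: the scheduler picks any token holder, which passes its token to a uniformly random neighbour, and tokens landing on the same node merge; with probability 1 this converges to a single token under any scheduler.

        import random

        def israeli_jalfon(n, tokens, pick=random.choice):
            """n: ring size; tokens: set of positions currently holding a token."""
            steps = 0
            while len(tokens) > 1:
                holder = pick(sorted(tokens))                      # scheduler's choice
                tokens.discard(holder)
                tokens.add((holder + random.choice((-1, 1))) % n)  # coin flip: left/right
                steps += 1
            return steps

        n = 10
        tokens = set(random.sample(range(n), k=4))                 # arbitrary initial placement
        print("single token reached after", israeli_jalfon(n, tokens), "scheduler steps")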

    Automated Analysis and Optimization of Distributed Self-Stabilizing Algorithms

    Self-stabilization [2] is a versatile technique for recovery from erroneous behavior due to transient faults or wrong initialization. A system is self-stabilizing if (1) starting from an arbitrary initial state it can automatically reach a set of legitimate states in a finite number of steps and (2) it remains in legitimate states in the absence of faults. Weak-stabilization [3] and probabilistic-stabilization [4] were later introduced in the literature to deal with resource consumption of self-stabilizing algorithms and with impossibility results. Since a system perturbed by a fault may deviate from correct behavior for a finite amount of time, it is paramount to minimize this time as much as possible, especially in the domains of robotics and networking. This type of fault tolerance is called non-masking because the faulty behavior is not completely masked from the user [1]. Designing correct stabilizing algorithms can be tedious, and designing such algorithms so that they also satisfy average recovery time constraints (e.g., for performance guarantees) adds further complications. Therefore, developing an automatic technique that takes as input the specification of the desired system and synthesizes as output a stabilizing algorithm with minimum (or otherwise bounded) average recovery time is both useful and challenging. In this thesis, our main focus is on designing automated techniques to optimize the average recovery time of stabilizing systems using model checking and synthesis techniques. First, we prove that synthesizing weak-stabilizing distributed programs from scratch and repairing stabilizing algorithms under average recovery time constraints are NP-complete in the state space of the program. To cope with this complexity, we propose a polynomial-time heuristic that, compared to existing stabilizing algorithms, provides lower average recovery time for many of our case studies. Second, we study the problem of fine-tuning probabilistic-stabilizing systems to improve their performance. We take advantage of the two properties of self-stabilizing algorithms to model them as absorbing discrete-time Markov chains. This reduces the computation of average recovery time to finding the weighted sum of elements in the inverse of a matrix. Finally, we study the impact of scheduling policies on the recovery time of stabilizing systems. In particular, we propose a method to augment self-stabilizing programs with k-central and k-bounded schedulers to study different factors, such as the geographical distance between processes and the achievable level of parallelism.
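    The absorbing-chain computation mentioned above has a standard closed form: with Q the transition probabilities among the illegitimate (transient) states, the expected number of steps to absorption from each transient state is t = (I - Q)^{-1} 1, and the average recovery time is the weighted sum of t under the initial distribution. The 2x2 matrix below is a made-up example, not data from the thesis.

        import numpy as np

        Q = np.array([[0.5, 0.3],              # transitions among two illegitimate states
                      [0.2, 0.4]])
        fundamental = np.linalg.inv(np.eye(2) - Q)   # (I - Q)^{-1}
        t = fundamental @ np.ones(2)           # expected steps to a legitimate state
        initial = np.array([0.5, 0.5])         # assumed distribution over faulty states
        print("per-state recovery times:", t, "average:", initial @ t)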

    Bernard Brodie and the bomb: at the birth of the bipolar world

    Bernard Brodie (1910-1978) was a leading 20th century theorist and philosopher of war. A key architect of American nuclear strategy, Brodie was one of the first civilian defense intellectuals to cross over into the military world. This thesis explores Brodie’s evolution as a theorist and his response to the technological innovations that transformed warfare from World War II to the Vietnam War. It situates his theoretical development within the classical theories of Carl von Clausewitz (1780-1831), as Brodie came to be known as “America’s Clausewitz.” While his first influential works focused on naval strategy, his most lasting impact came within the field of nuclear strategic thinking. Brodie helped conceptualize America’s strategy of deterrence, later taking into account America’s loss of its nuclear monopoly, the advent of thermonuclear weapons, and the proliferation of intercontinental ballistic missiles. Brodie’s strategic and philosophical response to the nuclear age led to his life-long effort to reconcile Clausewitz’s theories of war, which were a direct response to the strategic innovations of the Napoleonic era, with the new challenges of the nuclear age. While today’s world is much changed from the bipolar international order of the Cold War period, contemporary efforts to apply Clausewitzian concepts to today’s conflicts suggest that much can be learned from a similar endeavor by the previous generation, as its strategic thinkers struggled to imagine new ways to maintain order in their era of unprecedented nuclear danger.