
    Two snap-stabilizing point-to-point communication protocols in message-switched networks

    A snap-stabilizing protocol, starting from any configuration, always behaves according to its specification. In this paper, we present a snap-stabilizing protocol to solve the message forwarding problem in a message-switched network. In this problem, we must manage the resources of the system to deliver messages to any processor of the network. For this purpose, we use information given by a routing algorithm. In the context of stabilization (in particular, the system starts in an arbitrary configuration), this information can be corrupted. Hence, the existence of a snap-stabilizing protocol for the message forwarding problem implies that we can ask the system to begin forwarding messages even if the routing information is initially corrupted. We propose two snap-stabilizing algorithms (in the state model) for the following specification of the problem: any message can be generated in finite time, and any emitted message is delivered to its destination once and only once in finite time. This implies that our protocols can deliver any emitted message regardless of the state of the routing tables in the initial configuration. These two algorithms are based on the previous work of [MS78]. Each algorithm needs a particular method to be transformed into a snap-stabilizing one, but neither introduces a significant overhead in memory or in time with respect to the algorithms of [MS78].
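
    The specification above can be stated operationally. Below is a minimal sketch, not the paper's protocol, that checks over a finite recorded trace that every generated message is delivered exactly once; the event-trace format is an assumption made only for illustration.

```python
# A minimal sketch, not the paper's protocol: checking the delivery
# requirement of the message forwarding specification over a finite,
# recorded execution trace. The trace format (a list of ("generate", id)
# and ("deliver", id) events) is an assumption made for illustration.

def satisfies_forwarding_spec(trace):
    generated = [m for event, m in trace if event == "generate"]
    delivered = [m for event, m in trace if event == "deliver"]
    # Every emitted message must be delivered to its destination
    # once and only once.
    return all(delivered.count(m) == 1 for m in generated)

print(satisfies_forwarding_spec([("generate", 1), ("deliver", 1)]))                  # True
print(satisfies_forwarding_spec([("generate", 1), ("deliver", 1), ("deliver", 1)]))  # False
```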

    Self-stabilizing algorithms for Connected Vertex Cover and Clique decomposition problems

    In many wireless networks, there is no fixed physical backbone nor centralized network management. The nodes of such a network have to self-organize in order to maintain a virtual backbone used to route messages. Moreover, a priori, any node of the network can be at the origin of a malicious attack. Thus, on the one hand the backbone must be fault-tolerant, and on the other hand it can be useful to monitor all network communications to identify an attack as soon as possible. We are interested in the minimum Connected Vertex Cover problem, a generalization of the classical minimum Vertex Cover problem, which yields a connected backbone. Recently, Delbot et al. [DelbotLP13] proposed a new centralized algorithm with a constant approximation ratio of 2 for this problem. In this paper, we propose a distributed and self-stabilizing version of their algorithm with the same approximation guarantee. To the best of our knowledge, it is the first distributed and fault-tolerant algorithm for this problem. Our approach is based on the construction of a connected minimal clique partition; we therefore also design the first distributed self-stabilizing algorithm for this latter problem, which is of independent interest.
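
    For reference, the object being approximated can be checked directly. The sketch below is a definition checker only, not the algorithm of Delbot et al.: it verifies that a vertex set is a Connected Vertex Cover of a graph given as an adjacency dictionary, a representation assumed here for illustration.

```python
from collections import deque

# Minimal sketch (problem definition only, not the paper's algorithm):
# check whether a vertex set `cover` is a Connected Vertex Cover of a
# graph given as an adjacency dictionary {vertex: list of neighbors}.

def is_connected_vertex_cover(adj, cover):
    cover = set(cover)
    # Vertex cover: every edge has at least one endpoint in the cover.
    if any(u not in cover and v not in cover for u in adj for v in adj[u]):
        return False
    # Connectivity: the subgraph induced by the cover is connected (BFS).
    if not cover:
        return True
    start = next(iter(cover))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in cover and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == cover

# A path a-b-c-d: {b, c} covers all edges and induces a connected subgraph.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(is_connected_vertex_cover(path, {"b", "c"}))  # True
```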

    Disconnected components detection and rooted shortest-path tree maintenance in networks

    Many articles deal with the problem of maintaining a rooted shortest-path tree. However, after some edge deletions, some nodes can be disconnected from the connected component V_r of some distinguished node r. In this case, an additional objective is to ensure the detection of the disconnection by the nodes that no longer belong to V_r. We present a detailed analysis of a silent self-stabilizing algorithm. We prove that it solves this more demanding task in anonymous weighted networks with the following additional strong properties: it runs without any knowledge of the network and under the unfair daemon, that is, without any assumption on the asynchronous model. Moreover, it terminates in less than 2n+D rounds for a network of n nodes and hop-diameter D.
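
    To make the target configuration concrete, the sketch below computes it centrally; it is a reference computation under an assumed graph encoding, not the self-stabilizing algorithm itself. Nodes left at infinite distance after the Bellman-Ford-style relaxations are exactly those that must detect they are outside V_r.

```python
import math

# Minimal sketch (centralized reference, not the self-stabilizing algorithm):
# the legitimate state the silent algorithm converges to, i.e. a shortest-path
# tree rooted at r, with infinite distance marking nodes disconnected from r.

def shortest_path_tree(adj, r):
    """adj: {u: {v: weight}}; returns (dist, parent) rooted at r."""
    dist = {u: math.inf for u in adj}
    parent = {u: None for u in adj}
    dist[r] = 0
    for _ in range(len(adj) - 1):          # Bellman-Ford-style relaxations
        for u in adj:
            for v, w in adj[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    parent[v] = u
    return dist, parent

g = {"r": {"a": 1}, "a": {"r": 1, "b": 2}, "b": {"a": 2}, "x": {"y": 1}, "y": {"x": 1}}
dist, parent = shortest_path_tree(g, "r")
print(dist["b"])              # 3
print(math.isinf(dist["x"]))  # True: x must detect it is not in V_r
```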

    Self-Stabilizing Disconnected Components Detection and Rooted Shortest-Path Tree Maintenance in Polynomial Steps

    We deal with the problem of maintaining a shortest-path tree rooted at some process r in a network that may be disconnected after topological changes. The goal is then to maintain a shortest-path tree rooted at r in its connected component, V_r, and make all processes of other components detect that r is not part of their connected component. We propose, in the composite atomicity model, a silent self-stabilizing algorithm for this problem working in semi-anonymous networks under the distributed unfair daemon (the most general daemon) without requiring any a priori knowledge about global parameters of the network. This is the first algorithm for this problem that is proven to achieve a polynomial stabilization time in steps. Namely, we exhibit a bound in O(W_{max} * n_{maxCC}^3 * n), where W_{max} is the maximum weight of an edge, n_{maxCC} is the maximum number of non-root processes in a connected component, and n is the number of processes. The stabilization time in rounds is at most 3n_{maxCC} + D, where D is the hop-diameter of V_r.

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to automatically recover from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems. Any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases. Solving these applications requires some degree of synchronization of phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We discuss various applications of the NTS problem.
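
    As an illustration of the two requirements, neighbors acting simultaneously must be on the same task instance and must execute different steps, here is a minimal sketch of a checker over a recorded synchronous schedule. The schedule format is an assumption, and this is a simplified reading of the property, not the thesis algorithm.

```python
# Minimal sketch (properties only, not the thesis algorithm): check the NTS
# requirements over a recorded schedule. Each time slot is a map
# process -> (task instance, step) of what that process executes.

def check_nts(adj, schedule):
    for slot in schedule:
        for u, (inst_u, step_u) in slot.items():
            for v in adj[u]:
                if v not in slot:
                    continue
                inst_v, step_v = slot[v]
                # Neighbors acting in the same slot must be on the same
                # instance of the task ...
                if inst_u != inst_v:
                    return False
                # ... and may act simultaneously only on different steps.
                if step_u == step_v:
                    return False
    return True

ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
ok = [{0: (5, "a"), 1: (5, "b")}, {1: (5, "a"), 2: (5, "c")}]
print(check_nts(ring, ok))  # True
```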

    Snap-Stabilizing Committee Coordination

    In the committee coordination problem, a committee consists of a set of professors, and committee meetings are synchronized so that each professor participates in at most one committee meeting at a time. In this paper, we propose two snap-stabilizing distributed algorithms for committee coordination. Snap-stabilization is a versatile property which requires a distributed algorithm to efficiently tolerate transient faults. Indeed, after a finite number of such faults, a snap-stabilizing algorithm immediately operates correctly, without any external intervention. We design snap-stabilizing committee coordination algorithms enriched with some desirable properties related to concurrency, (weak) fairness, and a stronger synchronization mechanism called 2-Phase Discussion. In our setting, all processes are identical and each process has a unique identifier. The existing work in the literature has shown that (1) in general, fairness cannot be achieved in committee coordination, and (2) it becomes feasible if each professor waits for meetings infinitely often. Nevertheless, we show that even under this latter assumption, it is impossible to implement a fair solution that allows maximal concurrency. Hence, we propose two orthogonal snap-stabilizing algorithms, each satisfying 2-Phase Discussion, and either maximal concurrency or fairness. The algorithm that implements fairness requires that every professor waits for meetings infinitely often. Moreover, for this algorithm, we introduce and evaluate a new efficiency criterion called the degree of fair concurrency. This criterion shows that even if it does not satisfy maximal concurrency, our snap-stabilizing fair algorithm still allows a high level of concurrency.
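
    The core safety requirement, each professor sits in at most one meeting at a time, can be stated as a small predicate. The sketch below is a property checker under an assumed data layout, not one of the paper's algorithms.

```python
# Minimal sketch (safety property only, not the paper's algorithms): check
# that, in a given configuration, no professor participates in more than one
# of the currently running committee meetings.

def at_most_one_meeting(committees, running):
    """committees: {name: set of professors}; running: names of active meetings."""
    busy = set()
    for c in running:
        if committees[c] & busy:   # some professor already sits in another meeting
            return False
        busy |= committees[c]
    return True

committees = {"C1": {"alice", "bob"}, "C2": {"bob", "carol"}, "C3": {"dave"}}
print(at_most_one_meeting(committees, {"C1", "C3"}))  # True
print(at_most_one_meeting(committees, {"C1", "C2"}))  # False: bob is in both
```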

    A Self-Stabilizing K-Clustering Algorithm Using an Arbitrary Metric (Revised Version of RR2008-31)

    Mobile ad hoc networks as well as grid platforms are distributed, changing, and error-prone environments. Communication costs within such infrastructures can be improved, or at least bounded, by using k-clustering. A k-clustering of a graph is a partition of the nodes into disjoint sets, called clusters, in which every node is at distance at most k from a designated node in its cluster, called the clusterhead. A self-stabilizing asynchronous distributed algorithm is given for constructing a k-clustering of a connected network of processes with unique IDs and weighted edges. The algorithm is comparison-based, takes O(nk) time, and uses O(log n + log k) space per process, where n is the size of the network. This is the first distributed solution to the k-clustering problem on weighted graphs.
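
    The definition of a k-clustering translates directly into a checker. The sketch below verifies the definition only, not the paper's self-stabilizing construction: under an assumed graph encoding and a hypothetical `head` map from nodes to clusterheads, it checks that every node lies within weighted distance k of its clusterhead.

```python
import heapq

# Minimal sketch (definition only, not the paper's algorithm): check that the
# assignment node -> clusterhead given by `head` is a k-clustering of a
# weighted graph, i.e. every node is within weighted distance k of its head.

def dijkstra(adj, src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def is_k_clustering(adj, head, k):
    # Each clusterhead leads its own cluster, and every node lies within
    # weighted distance k of its clusterhead.
    return (all(head[head[u]] == head[u] for u in adj) and
            all(dijkstra(adj, head[u]).get(u, float("inf")) <= k for u in adj))

g = {"a": {"b": 1}, "b": {"a": 1, "c": 2}, "c": {"b": 2}}
print(is_k_clustering(g, {"a": "b", "b": "b", "c": "b"}, 2))  # True
```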

    A Taxonomy of Daemons in Self-stabilization

    We survey existing scheduling hypotheses made in the literature in self-stabilization, commonly referred to under the notion of daemon. We show that four main characteristics (distribution, fairness, boundedness, and enabledness) are enough to encapsulate the various differences presented in existing work. Our naming scheme makes it easy to compare daemons of particular classes, and to extend existing possibility or impossibility results to new daemons. We further examine existing daemon transformer schemes and provide the exact transformed characteristics of those transformers in our taxonomy.
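
    The four axes lend themselves to a simple record type for comparing daemon assumptions side by side. The sketch below illustrates that idea; the concrete value sets and the two example instances are assumptions, not the paper's exact taxonomy.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch: the four characteristics named in the abstract encoded as a
# record, so daemon assumptions can be compared programmatically. The value
# sets and the example instances are illustrative assumptions only.

@dataclass(frozen=True)
class Daemon:
    distribution: str            # e.g. "central" or "distributed"
    fairness: str                # e.g. "unfair", "weakly fair", "strongly fair"
    boundedness: Optional[int]   # k-bounded, or None for unbounded
    enabledness: Optional[int]   # k-enabled, or None if unconstrained

# Two commonly assumed schedulers, expressed in this scheme.
unfair_distributed = Daemon("distributed", "unfair", None, None)
weakly_fair_central = Daemon("central", "weakly fair", None, None)

print(unfair_distributed != weakly_fair_central)  # True: distinct daemon classes
```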