
    Self-stabilizing algorithms for Connected Vertex Cover and Clique decomposition problems

    In many wireless networks, there is no fixed physical backbone nor centralized network management. The nodes of such a network have to self-organize in order to maintain a virtual backbone used to route messages. Moreover, any node of the network can a priori be at the origin of a malicious attack. Thus, on the one hand the backbone must be fault-tolerant, and on the other hand it can be useful to monitor all network communications to identify an attack as soon as possible. We are interested in the minimum \emph{Connected Vertex Cover} problem, a generalization of the classical minimum Vertex Cover problem, which yields a connected backbone. Recently, Delbot et al.~\cite{DelbotLP13} proposed a new centralized algorithm with a constant approximation ratio of 2 for this problem. In this paper, we propose a distributed and self-stabilizing version of their algorithm with the same approximation guarantee. To the best of our knowledge, it is the first distributed and fault-tolerant algorithm for this problem. Our approach is based on the construction of a connected minimal clique partition; we therefore also design the first distributed self-stabilizing algorithm for that problem, which is of independent interest.
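
    As a concrete point of reference, the sketch below is not the algorithm of Delbot et al., but the classical centralized DFS-based 2-approximation for Connected Vertex Cover (usually attributed to Savage): the internal nodes of a DFS spanning tree cover every edge, induce a connected subgraph, and number at most twice a minimum vertex cover.

        import sys

        def connected_vertex_cover(adj):
            """adj: dict mapping each node to the set of its neighbors
            (connected, undirected graph with at least one edge)."""
            sys.setrecursionlimit(max(10_000, 2 * len(adj) + 100))
            root = next(iter(adj))
            visited = {root}
            internal = set()            # nodes that get at least one DFS-tree child

            def dfs(u):
                for v in adj[u]:
                    if v not in visited:
                        visited.add(v)
                        internal.add(u)  # u is a non-leaf of the DFS tree
                        dfs(v)

            dfs(root)
            # Non-leaf nodes of a DFS tree cover every edge and stay connected;
            # their number is at most twice the size of a minimum vertex cover.
            return internal if internal else {root}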

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of $\Omega(\log n)$ bits per register in $n$-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of $\Omega(\log^2 n)$ bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much gain in terms of memory can be expected by using arbitrary self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for general algorithms, also established at the end of the 90's, is due to Beauquier et al.~[PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of $\Theta(\log \Delta + \log\log n)$ bits per register for self-stabilizing algorithms solving $(\Delta+1)$-coloring or constructing a spanning tree in networks of maximum degree $\Delta$. The lower bound of $\Omega(\log\log n)$ bits per register also holds for leader election.
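
    For readers less familiar with one of the tasks involved, the following minimal sketch illustrates the $(\Delta+1)$-coloring problem to which the space bound above applies; it is a plain centralized greedy procedure, not one of the distributed self-stabilizing algorithms discussed in the paper.

        def greedy_coloring(adj):
            """adj: dict node -> iterable of neighbors. Returns a proper coloring
            using at most (maximum degree + 1) colors."""
            color = {}
            for v in adj:                   # any vertex order gives the Delta+1 bound
                taken = {color[u] for u in adj[v] if u in color}
                c = 0
                while c in taken:           # at most deg(v) colors can be blocked
                    c += 1
                color[v] = c
            return color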

    A Framework for Certified Self-Stabilization

    We propose a general framework to build certified proofs of distributed self-stabilizing algorithms with the proof assistant Coq. We first define in Coq the locally shared memory model with composite atomicity, the most commonly used model in the self-stabilizing area. We then validate our framework by certifying a non-trivial part of an existing silent self-stabilizing algorithm which builds a $k$-hop dominating set of the network. We also certify a quantitative property related to the output of this algorithm: we show that the computed $k$-hop dominating set contains at most $\lfloor \frac{n-1}{k+1} \rfloor + 1$ nodes, where $n$ is the number of nodes in the network. To obtain these results, we also developed a library which contains general tools related to potential functions and cardinality of sets.
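
    The size bound above can be illustrated with a simple centralized sketch (an illustration of the counting argument only, not the certified Coq development): root a BFS tree and keep the root together with the non-root nodes whose depth lies in the rarest residue class modulo $k+1$; by the pigeonhole principle at most $\lfloor \frac{n-1}{k+1} \rfloor + 1$ nodes are selected, and every node still has a selected node within $k$ hops on its path to the root.

        from collections import deque

        def k_hop_dominating_set(adj, k, root=None):
            """adj: dict node -> set of neighbors of a connected graph."""
            if root is None:
                root = next(iter(adj))
            depth = {root: 0}
            queue = deque([root])
            while queue:                          # plain BFS to compute depths
                u = queue.popleft()
                for v in adj[u]:
                    if v not in depth:
                        depth[v] = depth[u] + 1
                        queue.append(v)
            # Among the n-1 non-root nodes, pick the rarest depth class modulo k+1:
            # by the pigeonhole principle it holds at most floor((n-1)/(k+1)) nodes.
            counts = [0] * (k + 1)
            for v, d in depth.items():
                if v != root:
                    counts[d % (k + 1)] += 1
            best = min(range(k + 1), key=lambda i: counts[i])
            # Walking up from any node, within k steps we reach the root or a node
            # of the chosen class, so the returned set is a k-hop dominating set.
            return {root} | {v for v, d in depth.items()
                             if v != root and d % (k + 1) == best}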

    Self-stabilizing cache placements in MANETs

    In ad-hoc networks, mobile nodes communicate with each other using other nodes in the network as routers: each node acts as a router, forwarding data packets for other nodes. There are many dynamic routing protocols to find routes between communicating nodes, but bandwidth and power are limited in MANETs. Although routing is important, the ultimate purpose of a MANET is data access, so techniques beyond routing are needed to save bandwidth and power. If some nodes of a MANET are provided services by an Internet Service Provider, then the other nodes will also want to access these services, and caching them becomes necessary to reduce bandwidth and power consumption. Caching Internet-based services in MANETs is an important technique to reduce bandwidth, energy consumption, and latency: if some nodes store the object data and code and act as cache proxies, then nodes near a cache proxy can get the requested data from the proxy rather than from a faraway server node, saving bandwidth and access latency. In this thesis research, we design a distributed self-stabilizing algorithm to place caches in MANETs. When a node requests a service, it searches for the service, and if the service is located at a node at distance greater than D, the requesting node caches the data. In our algorithm, nodes that cache the same data are at distance greater than D from each other. We also describe an algorithm that builds the shortest paths from the source of a data object to all the nodes caching that data in the network; these paths are used to update the data cached at the nodes. We propose the algorithm for a single service or data item, but it can be run in parallel for all the services available in the MANET. A self-stabilizing system has the ability to automatically recover a normal behavior after transient faults without centralized control. The proposed algorithm does not require any initialization: starting from an arbitrary state, it is guaranteed to satisfy its specification in a finite number of steps. The protocol can handle various types of faults.
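
    A hypothetical centralized sketch of the placement rule described above follows (the helper place_caches and its interface are illustrative, not the thesis algorithm): cache nodes are grown greedily from the data source so that any two caches are more than D hops apart, and by maximality every node ends up within D hops of some cache, so no request travels farther than D.

        from collections import deque

        def place_caches(adj, source, D):
            """adj: dict node -> set of neighbors; source: node holding the data."""
            def within_D(start):
                """Set of nodes at hop distance <= D from start."""
                dist = {start: 0}
                queue = deque([start])
                while queue:
                    u = queue.popleft()
                    if dist[u] == D:
                        continue
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            queue.append(v)
                return set(dist)

            caches = {source}
            covered = within_D(source)
            for v in adj:                   # one greedy pass over all nodes
                if v not in covered:        # v is more than D hops from every cache
                    caches.add(v)
                    covered |= within_D(v)
            return caches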

    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs, and utilizes it for improving the time complexity significantly, while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of Minimum Spanning Tree (MST). That is, we present algorithms that are both time and space efficient for both constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off the time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., $O(\log n)$ bits per node), and whose time complexity is $O(\log^2 n)$ in synchronous networks, or $O(\Delta \log^3 n)$ in asynchronous ones, where $\Delta$ is the maximum degree of nodes. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that $\Omega(\log n)$ time is necessary, even in synchronous networks. Another property is that if $f$ faults occurred, then, within the required detection time above, they are detected by some node in the $O(f \log n)$ locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory optimal self-stabilizing algorithm whose time complexity, namely $O(n)$, is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used $\Omega(\log^2 n)$ memory bits per node was $O(n^2)$, and the time for optimal space algorithms was $O(n|E|)$.) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within $O(\log^2 n)$ time in synchronous networks, or within $O(\Delta \log^3 n)$ time in asynchronous ones, and (2) if $f$ faults occurred, then, within the required detection time above, they are detected within the $O(f \log n)$ locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
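
    To make the notion of a distributed local proof concrete, here is a minimal sketch of a proof-labeling scheme for a plain spanning tree (much simpler than the MST scheme developed in the paper): each node stores a (root id, distance, parent) label and checks it only against its neighbors' labels, and any corrupted configuration is rejected by at least one node.

        def node_accepts(v, adj, label):
            """label: dict node -> (root_id, dist, parent). Local check run at v."""
            root_id, dist, parent = label[v]
            if any(label[u][0] != root_id for u in adj[v]):
                return False                   # all neighbors must name the same root
            if v == root_id:
                return dist == 0 and parent is None
            return (parent in adj[v]           # parent must be an actual neighbor
                    and label[parent][1] == dist - 1)  # distance decreases toward root

        def tree_is_verified(adj, label):
            """Accepted iff every node accepts its own local view."""
            return all(node_accepts(v, adj, label) for v in adj)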

    Self-stabilizing k-clustering in mobile ad hoc networks

    In this thesis, two silent self-stabilizing asynchronous distributed algorithms are given for constructing a k-clustering of a connected network of processes. These are the first self-stabilizing solutions to this problem. One algorithm, FLOOD, takes O(k) time and uses O(k log n) space per process, while the second algorithm, BFS-MIS-CLSTR, takes O(n) time and uses O(log n) space, where n is the size of the network. Processes have unique IDs, and there is no designated leader. BFS-MIS-CLSTR solves three problems: it elects a leader and constructs a BFS tree for the network, constructs a minimal independent set, and finally builds a k-clustering. Finding a minimal k-clustering is known to be NP-hard. If the network is a unit disk graph in the plane, BFS-MIS-CLSTR is within a factor of O(7.2552k) of choosing the minimal number of clusters. A lower bound is also given, showing that any comparison-based algorithm for the k-clustering problem that takes o(diam) rounds has very bad worst-case performance. Keywords: BFS tree construction, k-clustering, leader election, MIS construction, self-stabilization, unit disk graph
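
    For intuition about the output structure, the following hedged centralized sketch builds a valid (though not necessarily small) k-clustering by repeatedly electing an unclustered node as clusterhead and claiming the still-unclustered nodes within k hops of it; it is neither FLOOD nor BFS-MIS-CLSTR, only an illustration of what a k-clustering is.

        from collections import deque

        def k_clustering(adj, k):
            """adj: dict node -> set of neighbors. Returns dict node -> clusterhead,
            with every node at most k hops from its clusterhead."""
            head = {}
            for v in adj:
                if v in head:
                    continue
                head[v] = v                     # v becomes a new clusterhead
                dist = {v: 0}
                queue = deque([v])
                while queue:                    # BFS limited to k hops from v
                    u = queue.popleft()
                    if dist[u] == k:
                        continue
                    for w in adj[u]:
                        if w not in dist:
                            dist[w] = dist[u] + 1
                            queue.append(w)
                            if w not in head:   # claim still-unclustered nodes
                                head[w] = v
            return head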

    Self-Stabilizing Distributed Cooperative Reset

    Self-stabilization is a versatile fault-tolerance approach that characterizes the ability of a system to eventually resume correct behavior after any finite number of transient faults. In this paper, we propose a self-stabilizing reset algorithm working in anonymous networks. This algorithm resets the network in a distributed, non-centralized manner, i.e., it is multi-initiator, as each process detecting an inconsistency may initiate a reset. It is also cooperative in the sense that it coordinates concurrent reset executions in order to gain efficiency. Our approach is general, since our reset algorithm allows building self-stabilizing solutions for various problems and settings. As a matter of fact, we show that it applies to both static and dynamic specifications, as we propose efficient self-stabilizing reset-based algorithms for the (1-minimal) (f, g)-alliance problem (a generalization of the dominating set problem) in identified networks and for the unison problem in anonymous networks. These two instantiations enhance the state of the art: in the former case, our solution is more general than previous ones, while in the latter case, the complexity of our unison algorithm is better than that of previous solutions in the literature.
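
    As a quick reference for the (f, g)-alliance specification mentioned above, the sketch below checks the defining predicate (our reading of the standard definition, not code from the paper): every node outside the alliance must have at least f(v) neighbors inside it, and every node inside must have at least g(v) neighbors inside it.

        def is_fg_alliance(adj, A, f, g):
            """adj: dict node -> set of neighbors; A: candidate set; f, g: node -> int."""
            for v in adj:
                inside = len(adj[v] & A)        # neighbors of v that belong to A
                if v in A:
                    if inside < g(v):
                        return False
                elif inside < f(v):
                    return False
            return True

        # With f(v) = 1 and g(v) = 0, an (f, g)-alliance is exactly a dominating set.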

    Survey of Distributed Decision

    We survey the recent distributed computing literature on checking whether a given distributed system configuration satisfies a given Boolean predicate, i.e., whether the configuration is legal or illegal w.r.t. that predicate. We consider classical distributed computing environments, including mostly synchronous fault-free network computing (the LOCAL and CONGEST models), but also asynchronous crash-prone shared-memory computing (the WAIT-FREE model) and mobile computing (the FSYNC model).
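
    The decision convention used throughout this literature can be summarized in a few lines: each node produces a verdict from its local view only, and the configuration is declared legal if and only if every node accepts. The toy sketch below (not taken from the survey) shows this for the proper-coloring predicate.

        def node_verdict(v, adj, color):
            """Local check at node v: my color differs from all my neighbors' colors."""
            return all(color[v] != color[u] for u in adj[v])

        def configuration_is_legal(adj, color):
            """Legal iff every node accepts its local view."""
            return all(node_verdict(v, adj, color) for v in adj)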

    Self-* distributed query region covering in sensor networks

    Wireless distributed sensor networks are used to monitor a multitude of environments for both civil and military applications. Sensors may be deployed in unreachable or inhospitable areas, so they cannot be replaced easily. However, due to various factors, the sensors' internal memory, or the sensors themselves, can become corrupted; hence there is a need for more robust sensor networks. Sensors are most commonly densely deployed, but keeping all sensors continually active is not energy efficient. Our aim is to select the minimum number of sensors that entirely cover a particular monitored area while remaining strongly connected. This concept is called a Minimum Connected Cover of a query region in a sensor network. In this research, we have designed two fully distributed, robust, self-* solutions to the minimum connected cover of query regions that can cope with both transient faults and sensor crashes. We considered the most general case, in which every sensor has a different sensing and communication radius. We have also designed extended versions of the algorithms that use multi-hop information to obtain better results while keeping small atomicity (i.e., each sensor reads only one of its neighbors' variables at a time, instead of reading all neighbors' variables). We have proven the self-* (self-configuration, self-stabilization, and self-healing) properties of our solutions, both analytically and experimentally. The simulation results show that our solutions provide better coverage performance than pre-existing self-stabilizing algorithms.
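
    The sketch below is a hypothetical centralized check (not one of the thesis algorithms) of the property the self-* solutions maintain: the set of active sensors must cover a sampled query region and induce a connected communication graph, with possibly different sensing and communication radii per sensor; links are assumed bidirectional, i.e., two sensors communicate only if each lies within the other's communication radius.

        import math
        from collections import deque

        def is_connected_cover(active, sensors, query_points):
            """sensors: dict id -> (x, y, sensing_radius, comm_radius);
            active: set of sensor ids; query_points: list of (x, y) sample points."""
            def dist(p, q):
                return math.hypot(p[0] - q[0], p[1] - q[1])

            # 1) Coverage: every query point lies in some active sensor's sensing disk.
            for p in query_points:
                if not any(dist(p, sensors[s][:2]) <= sensors[s][2] for s in active):
                    return False
            # 2) Connectivity of the active sensors under bidirectional links.
            if not active:
                return not query_points
            start = next(iter(active))
            seen = {start}
            queue = deque([start])
            while queue:
                u = queue.popleft()
                ux, uy, _, ucr = sensors[u]
                for v in active:
                    if v not in seen:
                        d = dist((ux, uy), sensors[v][:2])
                        if d <= ucr and d <= sensors[v][3]:
                            seen.add(v)
                            queue.append(v)
            return seen == active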