
    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to recover automatically from a faulty state. By Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered the most unified approach to designing fault-tolerant systems: any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization of the phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinitely many instances of tasks, where a task consists of a set of steps. The problem has several requirements: simultaneous execution of steps by neighbors is allowed only if the steps are different, and every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in fault-free environments, it is more challenging to solve in the presence of various types of faults. We present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any daemon, including unfair ones. We also discuss various applications of the NTS problem.
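
    As an illustration only (not the algorithm proposed in the thesis), the following Python sketch checks the two safety requirements stated above on a hypothetical snapshot of the system, where instance[p] and step[p] are assumed to record the task instance and the step that process p is currently executing.

        def nts_safe(graph, instance, step):
            # graph: maps each process id to the set of its neighbors' ids
            # instance[p], step[p]: hypothetical per-process state (the task
            # instance being executed, and the step being executed in it)
            for p, neighbors in graph.items():
                for q in neighbors:
                    if instance[p] != instance[q]:
                        return False   # every neighborhood runs the same instance
                    if step[p] == step[q]:
                        return False   # neighbors may not execute the same step simultaneously
            return True

        # Example: a three-process path 0-1-2, all on instance 5, alternating steps.
        g = {0: {1}, 1: {0, 2}, 2: {1}}
        print(nts_safe(g, {0: 5, 1: 5, 2: 5}, {0: 0, 1: 1, 2: 0}))   # True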

    Self-* distributed query region covering in sensor networks

    Wireless distributed sensor networks are used to monitor a multitude of environments for both civil and military applications. Sensors may be deployed in unreachable or inhospitable areas and therefore cannot be replaced easily. However, due to various factors, sensors' internal memory, or the sensors themselves, can become corrupted. Hence, there is a need for more robust sensor networks. Sensors are usually densely deployed, but keeping all sensors continually active is not energy efficient. Our aim is to select the minimum number of sensors that entirely cover a particular monitored area while remaining strongly connected. This concept is called a minimum connected cover of a query region in a sensor network. In this research, we have designed two fully distributed, robust, self-* solutions to the minimum connected cover of query regions that can cope with both transient faults and sensor crashes. We consider the most general case, in which every sensor has a different sensing and communication radius. We have also designed extended versions of the algorithms that use multi-hop information to obtain better results while using small atomicity (i.e., each sensor reads only one of its neighbors' variables at a time, instead of reading all neighbors' variables). We have proven the self-* (self-configuration, self-stabilization, and self-healing) properties of our solutions both analytically and experimentally. The simulation results show that our solutions provide better coverage than pre-existing self-stabilizing algorithms.
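
    As a hedged illustration of the legitimate configurations described above (not the distributed self-* algorithms themselves), the following centralized Python sketch checks that a selected set of sensors covers a sampled query region and forms a connected communication graph, assuming symmetric links within the smaller of the two communication radii; sensor positions, radii, and sampled query points are hypothetical inputs.

        import math
        from collections import deque

        def is_connected_cover(selected, pos, rs, rc, query_points):
            # selected: set of chosen sensor ids; pos[i]: (x, y) position of sensor i;
            # rs[i] / rc[i]: sensing and communication radii of sensor i
            if not selected:
                return not query_points
            # every sampled query point must be within the sensing radius of a chosen sensor
            for q in query_points:
                if all(math.dist(pos[i], q) > rs[i] for i in selected):
                    return False
            # the chosen sensors must induce a connected communication graph
            nodes = list(selected)
            seen, todo = {nodes[0]}, deque([nodes[0]])
            while todo:
                i = todo.popleft()
                for j in nodes:
                    if j not in seen and math.dist(pos[i], pos[j]) <= min(rc[i], rc[j]):
                        seen.add(j)
                        todo.append(j)
            return len(seen) == len(nodes)

        # Example: two sensors that jointly cover two query points and can talk to each other.
        pos, rs, rc = [(0.0, 0.0), (3.0, 0.0)], [2.0, 2.0], [4.0, 4.0]
        print(is_connected_cover({0, 1}, pos, rs, rc, [(1.0, 0.0), (3.5, 0.0)]))   # True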

    Self-stabilizing group membership protocol

    In this thesis, we consider the problem of partitioning a network into groups of bounded diameter. Given a network of processes X and a constant D, the group partition problem is the problem of finding a D-partition of X, that is, a partition of X into disjoint connected subgraphs, which we call groups, each of diameter no greater than D. The minimal group partition problem is to find a D-partition {G1, ..., Gm} of X such that no two groups can be combined; that is, for any Gi and Gj, where i ≠ j, either Gi ∪ Gj is disconnected or Gi ∪ Gj has diameter greater than D. In this thesis, a silent self-stabilizing asynchronous distributed algorithm is given for the minimal group partition problem in a network with unique IDs, using the composite model of computation. The algorithm is correct under the unfair daemon. It is known that finding a D-partition of minimum cardinality of a network is NP-complete. In the special case that X is a unit disk graph in the plane, the algorithm presented in this thesis is O(D)-competitive; that is, the number of groups in the partition constructed by the algorithm is O(D) times the number of groups in a minimum D-partition. Our method is to first construct a breadth-first search (BFS) tree for X, then find a maximal independent set (MIS) of X. Using the MIS and the BFS tree, an initial D-partition is constructed, after which groups are merged with adjacent groups until no more mergers are possible. The resulting D-partition is minimal.
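
    To make the minimality condition concrete, here is a small centralized Python sketch (not the distributed algorithm of the thesis) that tests whether two groups can still be merged and whether a D-partition is minimal, assuming group diameter is measured inside the subgraph induced by the group.

        from itertools import combinations
        from collections import deque

        def eccentricity(graph, src, allowed):
            # BFS distance from src inside the subgraph induced by `allowed`;
            # returns None if that subgraph is not connected from src
            dist = {src: 0}
            todo = deque([src])
            while todo:
                u = todo.popleft()
                for v in graph[u]:
                    if v in allowed and v not in dist:
                        dist[v] = dist[u] + 1
                        todo.append(v)
            return max(dist.values()) if len(dist) == len(allowed) else None

        def mergeable(graph, g1, g2, D):
            union = set(g1) | set(g2)
            eccs = [eccentricity(graph, v, union) for v in union]
            return all(e is not None and e <= D for e in eccs)

        def is_minimal_D_partition(graph, groups, D):
            # groups: list of vertex sets; the partition is minimal when no pair of
            # groups has a connected union of diameter at most D
            return not any(mergeable(graph, a, b, D) for a, b in combinations(groups, 2))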

    Self-stabilizing k-clustering in mobile ad hoc networks

    In this thesis, two silent self-stabilizing asynchronous distributed algorithms are given for constructing a k-clustering of a connected network of processes. These are the first self-stabilizing solutions to this problem. One algorithm, FLOOD, takes O(k) time and uses O(k log n) space per process, while the second algorithm, BFS-MIS-CLSTR, takes O(n) time and uses O(log n) space, where n is the size of the network. Processes have unique IDs, and there is no designated leader. BFS-MIS-CLSTR solves three problems: it elects a leader and constructs a BFS tree for the network, constructs a maximal independent set, and finally constructs a k-clustering. Finding a minimal k-clustering is known to be NP-hard. If the network is a unit disk graph in the plane, BFS-MIS-CLSTR is within a factor of O(7.2552k) of choosing the minimal number of clusters. A lower bound is also given, showing that any comparison-based algorithm for the k-clustering problem that takes o(diam) rounds has very bad worst-case performance. Keywords: BFS tree construction, k-clustering, leader election, MIS construction, self-stabilization, unit disk graph.
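
    For illustration, the following centralized Python sketch shows one common notion of k-clustering (every process is within k hops of its clusterhead); it is not the FLOOD or BFS-MIS-CLSTR algorithm, and the thesis may use a slightly different definition. It repeatedly picks the highest-ID unclustered process as a head and assigns all unclustered processes within k hops to it.

        from collections import deque

        def k_ball(graph, src, k):
            # all vertices within k hops of src
            dist = {src: 0}
            todo = deque([src])
            while todo:
                u = todo.popleft()
                if dist[u] == k:
                    continue
                for v in graph[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        todo.append(v)
            return set(dist)

        def greedy_k_clustering(graph, k):
            unclustered = set(graph)
            clusters = {}                        # clusterhead -> set of members
            while unclustered:
                head = max(unclustered)          # highest ID among unclustered processes
                clusters[head] = k_ball(graph, head, k) & unclustered
                unclustered -= clusters[head]
            return clusters

        # Example: a six-process path 1-2-3-4-5-6 with k = 2 yields heads 6 and 3.
        path = {i: {j for j in (i - 1, i + 1) if 1 <= j <= 6} for i in range(1, 7)}
        print(greedy_k_clustering(path, 2))   # {6: {4, 5, 6}, 3: {1, 2, 3}}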

    Distributed self-* minimum connected sensor cover algorithms

    Wireless ad hoc sensor networks are composed of a large number of tiny sensors with embedded microprocessors that have very limited resources and yet must coordinate amongst themselves to form a connected network. Every sensor has a certain sensing radius, Rs, within which it is capable of covering a particular region by detecting or gathering certain data. Every sensor also has a communication radius, Rc, within which it is capable of sending or receiving data. Given a query over a sensor network, the minimum connected sensor cover problem is to select a minimum, or nearly minimum, set of sensors, called a minimum connected sensor cover, such that the selected sensors cover the query region and form a connected network amongst themselves. In this thesis, we present three fully distributed, strictly localized, scalable, self-* solutions to the minimum connected sensor cover problem.
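
    As a naive, centralized greedy sketch of the problem statement above (not one of the three distributed self-* solutions of the thesis), the Python fragment below grows a connected set of sensors, always adding a reachable sensor that covers the most still-uncovered query points; positions, Rs, Rc, and the sampled query points are hypothetical inputs, and links are assumed symmetric within the smaller communication radius.

        import math

        def greedy_connected_sensor_cover(pos, rs, rc, query_points):
            # pos[i]: (x, y) position of sensor i; rs[i] / rc[i]: its sensing and
            # communication radii; query_points: sampled points of the query region
            n = len(pos)
            uncovered = set(range(len(query_points)))

            def newly_covered(i):
                return {j for j in uncovered if math.dist(pos[i], query_points[j]) <= rs[i]}

            seed = max(range(n), key=lambda i: len(newly_covered(i)))
            chosen = {seed}
            uncovered -= newly_covered(seed)
            while uncovered:
                # candidates: unchosen sensors that cover something new and can reach the chosen set
                frontier = [i for i in range(n) if i not in chosen and newly_covered(i) and
                            any(math.dist(pos[i], pos[c]) <= min(rc[i], rc[c]) for c in chosen)]
                if not frontier:
                    return None   # greedy got stuck; a cover may still exist using relay sensors
                best = max(frontier, key=lambda i: len(newly_covered(i)))
                chosen.add(best)
                uncovered -= newly_covered(best)
            return chosen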

    Performance Evaluation of Self-stabilizing Algorithms by Probabilistic Model Checking

    A self-stabilizing protocol is one that, starting from any arbitrary initial state, recovers to legitimate states in a finite number of steps, and once it stabilizes to the set of legitimate states, it remains there unless perturbed by transient faults. The traditional methods for performance evaluation of a self-stabilizing algorithm are usually based on analysis of worst-case computational complexity. Another method commonly used in evaluating these algorithms is simulation, which assumes the system starts from a given initial state. Here, it is argued that the traditional methods have shortcomings and do not give enough insight into the behavior of the system; moreover, they do not provide a decent method of comparison. We propose a novel method for the evaluation of self-stabilizing algorithms. This method is based on probabilistic model checking and computation of the expected number of recovery steps. We run experiments on case studies, and the results indicate that we can gain insight about the faults and their structure in the protocol. Next, we explain the difficulty of designing a self-stabilizing algorithm for a system and show that it is impossible to do so for some classes of protocols. This has resulted in relaxations of the definition of self-stabilization, one of which is weak stabilization. A weak-stabilizing protocol ensures only the existence of a recovery path from an arbitrary initial configuration; thus, some paths may contain connected components or cycles. Since a weak-stabilizing algorithm may get stuck in connected components forever, we cannot evaluate weak-stabilizing protocols by the traditional and existing methods. We calculate the expected number of recovery steps for evaluating weak stabilization. However, since this alone does not give enough intuition about the structure of faults, we apply a graph-theoretic formula for estimating a weak-stabilizing algorithm's performance. This formula is based on the number of cycles and their reachability. Based on the observations made while evaluating the performance of these protocols, we suggest algorithms, called state encoding, for improving the performance of the algorithms. State encoding works by changing the bit mapping of the states of the system; the aim is to make the states with faster recovery more probable to occur. There are three such algorithms: one is based on betweenness centrality, a measure of the centrality of a node within a graph; another is based on the feedback arc set, a set of arcs whose removal makes a graph acyclic; and the third is based on the length of the shortest recovery path of each state. The other problem investigated here is the problem of state space explosion in model checking. Like traditional model checking, probabilistic model checking suffers from state space explosion, i.e., the number of states grows exponentially in the number of components in the distributed system. Abstraction methods, which are described briefly here, are designed to combat this problem. We argue that they are not efficient enough, and that there is still a lack of a sufficient abstraction method that works for systems with an arbitrary number of processes. We also propose a new approach for the evaluation of an abstraction function. Then, based on the intuition gained, a new abstraction algorithm is proposed that is designed exclusively for the verification of reachability properties. After running experiments on a case study, we compare the results of our algorithm with the results obtained by existing methods. The results support our claim that our method is more efficient and precise.
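
    As a minimal sketch of the evaluation measure described above (not the thesis tooling, which uses a probabilistic model checker), the following Python fragment computes the expected number of steps to reach the set of legitimate states in a discrete-time Markov chain by solving the standard hitting-time equations h_i = 1 + sum_j P[i][j] * h_j with h_i = 0 on legitimate states. The small chain below is a made-up example, not one of the thesis case studies.

        import numpy as np

        def expected_recovery_steps(P, legitimate):
            # P: n x n row-stochastic transition matrix; legitimate: set of state indices.
            # Returns a vector of expected recovery steps per state (0 for legitimate
            # states), or None if some states never reach a legitimate state.
            n = len(P)
            transient = [i for i in range(n) if i not in legitimate]
            Q = P[np.ix_(transient, transient)]
            try:
                h_t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
            except np.linalg.LinAlgError:
                return None
            h = np.zeros(n)
            h[transient] = h_t
            return h

        # Example: states 0 and 1 are illegitimate, state 2 is legitimate.
        P = np.array([[0.5, 0.25, 0.25],
                      [0.0, 0.5,  0.5 ],
                      [0.0, 0.0,  1.0 ]])
        print(expected_recovery_steps(P, {2}))   # approximately [3., 2., 0.]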