
    Self-Stabilization in the Distributed Systems of Finite State Machines

    The notion of self-stabilization was first proposed by Dijkstra in his classic 1974 paper. The paper defines a system as self-stabilizing if, starting from any state, possibly an illegitimate one, the system automatically adjusts itself so that it converges to a legitimate state in a finite amount of time, and once in a legitimate state it remains there unless it incurs a subsequent transient fault. Dijkstra restricted his attention to a ring of finite-state machines and gave a self-stabilizing solution for it. In the years immediately following its introduction, very few papers were published in this area. Once his proposal was recognized as a milestone in work on fault tolerance, the notion spread rapidly and many researchers in distributed systems turned their attention to it. The investigation and use of self-stabilization as an approach to fault-tolerant behavior under a model of transient failures for distributed systems is now undergoing a renaissance. A good number of works on self-stabilization in distributed systems have since been proposed, most of them quite recent. This report surveys all previous works available in the literature on self-stabilizing systems.
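
    As a concrete illustration of the ring of finite-state machines Dijkstra studied, the Python sketch below simulates his well-known K-state token-ring protocol under a central daemon. The ring size, the value of K, and the random daemon are illustrative assumptions; this is a minimal reconstruction of the published algorithm, not code from the report.

    import random

    def privileged(states, i, K):
        """Machine i holds a privilege (a 'token') if its rule is enabled."""
        if i == 0:                     # bottom machine
            return states[0] == states[-1]
        return states[i] != states[i - 1]

    def step(states, i, K):
        """Fire machine i's rule (call only when it is privileged)."""
        if i == 0:
            states[0] = (states[0] + 1) % K
        else:
            states[i] = states[i - 1]

    def stabilize(states, K):
        """Run a central daemon until exactly one privilege remains in the ring."""
        steps = 0
        while True:
            holders = [i for i in range(len(states)) if privileged(states, i, K)]
            if len(holders) == 1:      # legitimate state: a single circulating token
                return steps
            step(states, random.choice(holders), K)
            steps += 1

    n, K = 7, 8                        # K >= n suffices for convergence
    states = [random.randrange(K) for _ in range(n)]   # arbitrary (faulty) start
    print("converged after", stabilize(states, K), "steps")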

    Robust network computation

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 91-98). In this thesis, we present various models of distributed computation and algorithms for these models. The underlying theme is to come up with fast algorithms that can tolerate faults in the underlying network. We begin with the classical message-passing model of computation, surveying many known results. We give a new, universally optimal, edge-biconnectivity algorithm for the classical model. We also give a near-optimal sub-linear algorithm for identifying bridges, when all nodes are activated simultaneously. After discussing some ways in which the classical model is unrealistic, we survey known techniques for adapting the classical model to the real world. We describe a new balancing model of computation. The intent is that algorithms in this model should be automatically fault-tolerant. Existing algorithms that can be expressed in this model are discussed, including ones for clustering, maximum flow, and synchronization. We discuss the use of agents in our model, and give new agent-based algorithms for census and biconnectivity. Inspired by the balancing model, we look at two problems in more depth. First, we give matching upper and lower bounds on the time complexity of the census algorithm, and we show how the census algorithm can be used to name nodes uniquely in a faulty network. Second, we consider using discrete harmonic functions as a computational tool. These functions are a natural exemplar of the balancing model. We prove new results concerning the stability and convergence of discrete harmonic functions, and describe a method which we call Eulerization for speeding up convergence. by David Pritchard. M.Eng.
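
    To make the idea of a discrete harmonic function concrete, the sketch below repeatedly replaces each interior node's value with the average of its neighbours' values until the values stop changing. The example graph, boundary values, and tolerance are illustrative assumptions, not the thesis's actual construction or its Eulerization method.

    def harmonic(adj, boundary, tol=1e-9, max_iters=100000):
        """Relax interior values toward the discrete harmonic solution.

        adj      : dict mapping node -> list of neighbouring nodes
        boundary : dict mapping boundary node -> fixed value
        """
        values = {v: boundary.get(v, 0.0) for v in adj}
        for _ in range(max_iters):
            delta = 0.0
            for v, nbrs in adj.items():
                if v in boundary or not nbrs:
                    continue                      # boundary values stay fixed
                new = sum(values[u] for u in nbrs) / len(nbrs)
                delta = max(delta, abs(new - values[v]))
                values[v] = new
            if delta < tol:                       # converged: each interior node
                break                             # equals the mean of its neighbours
        return values

    # Path graph a-b-c-d with fixed values at the ends; the interior converges
    # to a linear interpolation (b ~ 1/3, c ~ 2/3).
    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(harmonic(adj, {"a": 0.0, "d": 1.0}))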

    High Performance Software Reconfiguration in the Context of Distributed Systems and Interconnection Networks.

    We designed algorithms that are useful for developing protocols and supporting tools for fault tolerance, dynamic load balancing, and distributed monitoring in loosely coupled multi-processor systems. Four efficient algorithms are developed to learn network topology and reconfigure distributed application programs in execution, using the available tools for replication and process migration. The first algorithm provides techniques for transparent software reconfiguration based on process migration in the context of quadtree embeddings in hypercubes. Our novel approach provides efficient reconfiguration for some classes of faults that can be identified easily. We provide a theoretical characterization that uses graph matching, quadratic assignment, and a variety of branch-and-bound techniques to recover from general faults at run-time and maintain load balance. The second algorithm provides distributed recognition of articulation points, biconnected components, and bridges. Since the removal of an articulation point disconnects the network, knowledge about it may be used for selective replication. We have obtained the most efficient distributed algorithms, with linear message complexity, for the recognition of these properties. The third algorithm is an optimal, linear-message-complexity distributed solution for recognizing graph planarity, one of the most celebrated problems in graph theory and algorithm design. Efficient shortest-path algorithms have recently been developed for planar graphs, whose efficient recognition itself had been left open. Our algorithm also leads to efficient distributed algorithms for recognizing outerplanar graphs, with applications in Hamiltonian paths, shortest-path routing, and graph coloring. It is shown that efficient routing of information and distribution of the stack needed for planarity testing permit local computations, leading to an efficient distributed algorithm. The fourth algorithm provides software redundancy techniques that add fault tolerance to program structures. We consider the problem of mapping replicated program structures to provide efficient communication between modules in multiple replicas. We have obtained an optimal mapping of 2-replicated binary trees into hypercubes. For replication numbers greater than two, we provide efficient heuristics, supported by simulation results, for both 'N-version programming' and 'recovery block' approaches to software replication.
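
    The articulation points recognized by the second algorithm can be computed sequentially with the classical depth-first-search low-point technique. The Python sketch below shows only that sequential version, as background for the property being recognized; the distributed, linear-message-complexity protocol of the dissertation is not reproduced here.

    def articulation_points(adj):
        """Classical DFS low-point computation (sequential, not distributed).

        adj: dict mapping vertex -> iterable of adjacent vertices.
        Returns the set of articulation points of the graph.
        """
        disc, low, cut = {}, {}, set()
        timer = 0

        def dfs(u, parent):
            nonlocal timer
            disc[u] = low[u] = timer
            timer += 1
            children = 0
            for v in adj[u]:
                if v == parent:
                    continue
                if v in disc:                      # back edge
                    low[u] = min(low[u], disc[v])
                else:                              # tree edge
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    # a non-root u is a cut vertex if no descendant of v
                    # reaches above u via a back edge
                    if parent is not None and low[v] >= disc[u]:
                        cut.add(u)
            if parent is None and children > 1:    # root rule
                cut.add(u)

        for s in adj:
            if s not in disc:
                dfs(s, None)
        return cut

    # Two triangles sharing vertex 2: vertex 2 is the only articulation point.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
    print(articulation_points(adj))    # {2}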

    Finding 3-edge-connected components in parallel

    A parallel algorithm for finding the 3-edge-connected components of an undirected graph on a CRCW PRAM is presented. The time and work complexities of this algorithm are O(log n) and O((m+n) log log n), respectively, where n is the number of vertices and m is the number of edges in the input graph. The algorithm is based on ear decomposition and a reduction of 3-edge-connectivity to 1-vertex-connectivity. This is the first 3-edge-connected components algorithm for a parallel model of computation.
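
    For small graphs, 3-edge-connected components can be checked directly from the definition: two vertices lie in the same component exactly when no removal of at most two edges separates them. The Python sketch below takes this brute-force route purely as an illustration of the property being computed; it does not reflect the paper's PRAM ear-decomposition algorithm.

    from itertools import combinations

    def connected(n, edges, src, dst):
        """Reachability test on the graph over vertices 0..n-1 with the given edges."""
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return dst in seen

    def three_edge_connected_components(n, edges):
        """Group vertices that stay connected under every removal of <= 2 edges."""
        def same(u, v):
            if not connected(n, edges, u, v):
                return False
            for k in (1, 2):
                for cut in combinations(range(len(edges)), k):
                    kept = [e for i, e in enumerate(edges) if i not in cut]
                    if not connected(n, kept, u, v):
                        return False
            return True

        comps, assigned = [], set()
        for u in range(n):
            if u in assigned:
                continue
            comp = {u} | {v for v in range(u + 1, n) if v not in assigned and same(u, v)}
            assigned |= comp
            comps.append(comp)
        return comps

    # K4 is 3-edge-connected, so all four vertices form a single component.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(three_edge_connected_components(4, edges))   # [{0, 1, 2, 3}]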

    A Distributed Algorithm for Finding Separation Pairs in a Computer Network

    One of the main problems in graph theory is graph connectivity, which is often studied for network reliability problems. It can be studied from two aspects, vertex-connectivity and edge-connectivity. Vertex connectivity is the smallest number of vertices whose deletion causes a connected graph to become disconnected. We focus our work on finding the separation pairs of a graph, that is, the pairs of vertices whose deletion disconnects the graph. Finding separation pairs can be used in solving the vertex-connectivity problem and in finding the triconnected components of the graph. The algorithms presented in the past are either non-linear or, if linear, very complicated. This work is based on Tarjan and Hopcroft's paper, which uses depth-first search and finds the separation pairs in linear time. Our goal is to present an algorithm that finds the separation pairs in an asynchronous distributed computer network using distributed depth-first search (DDFS).
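
    Working directly from the definition above, a separation pair is a pair of vertices whose removal disconnects the remaining graph. The Python sketch below enumerates all such pairs by exhaustive checking; it is illustrative only and is neither the linear-time Hopcroft-Tarjan procedure nor the distributed DDFS algorithm proposed in this work.

    from itertools import combinations

    def is_connected(vertices, adj):
        """Check whether the induced subgraph on `vertices` is connected."""
        vertices = set(vertices)
        if len(vertices) <= 1:
            return True
        start = next(iter(vertices))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in vertices and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen == vertices

    def separation_pairs(adj):
        """Return all pairs whose removal disconnects the rest of the graph."""
        nodes = set(adj)
        pairs = []
        for a, b in combinations(nodes, 2):
            rest = nodes - {a, b}
            if len(rest) >= 2 and not is_connected(rest, adj):
                pairs.append((a, b))
        return pairs

    # Two triangles sharing the edge (1, 2): removing {1, 2} separates vertex 0
    # from vertex 3, and no other pair disconnects the graph.
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    print(separation_pairs(adj))   # [(1, 2)]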

    Improved I/O-efficient algorithms for solving graph connectivity, biconnectivity problems.


    Topology-Awareness and Re-optimization Mechanism for Virtual Network Embedding

    Embedding of virtual network (VN) requests on top of a shared physical network poses an intriguing combination of theoretical and practical challenges. Two major problems with the state-of-the-art VN embedding algorithms are their indifference to the underlying substrate topology and their lack of re-optimization mechanisms for already embedded VN requests. We argue that topology-aware embedding together with re-optimization mechanisms can improve the performance of the previous VN embedding algorithms in terms of acceptance ratio and load balancing. The major contributions of this thesis are twofold: (1) we present a mechanism to differentiate among resources based on their importance in the substrate topology, and (2) we propose a set of algorithms for re-optimizing and re-embedding initially rejected VN requests after fixing their bottleneck requirements. Through extensive simulations, we show that not only do our techniques improve the acceptance ratio, but they also provide the added benefit of balancing load better than previous proposals. The metrics we use to validate our techniques are improvement in acceptance ratio, revenue-cost ratio, incurred cost, and distribution of utilization.
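
    As a rough illustration of differentiating substrate resources by importance, the Python sketch below ranks substrate nodes by a simple score, residual CPU multiplied by the residual bandwidth of incident links, which is a generic baseline from the VN embedding literature. The node names, capacities, and the particular score are assumptions for illustration and are not necessarily the topology-aware measure proposed in this thesis.

    def rank_substrate_nodes(cpu_free, link_bw_free):
        """Rank substrate nodes by a simple importance score.

        cpu_free     : dict node -> residual CPU capacity
        link_bw_free : dict (u, v) -> residual bandwidth of the undirected link
        Score: residual CPU times total residual bandwidth of incident links.
        """
        incident_bw = {v: 0.0 for v in cpu_free}
        for (u, v), bw in link_bw_free.items():
            incident_bw[u] += bw
            incident_bw[v] += bw
        scores = {v: cpu_free[v] * incident_bw[v] for v in cpu_free}
        return sorted(scores, key=scores.get, reverse=True)

    # Toy substrate: node 'b' sits on both links, so it ranks highest.
    cpu = {"a": 50, "b": 40, "c": 60}
    bw = {("a", "b"): 100, ("b", "c"): 80}
    print(rank_substrate_nodes(cpu, bw))   # ['b', 'a', 'c']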

    Topology Control Multi-Objective Optimisation in Wireless Sensor Networks: Connectivity-Based Range Assignment and Node Deployment

    The distinguishing characteristic that sets topology control apart from other methods whose motivation is to achieve energy minimisation and increased network capacity is its network-wide perspective. In other words, local choices made at the node level always have the goal of achieving a certain global, network-wide property, while not excluding the consideration of more localised factors. Accordingly, our approach is a centralised computation over the available location-based data, reducing it to a set of non-homogeneous transmitting range assignments that together elicit a certain network-wide property, namely strong connectedness and/or biconnectedness. To this end, we propose a variant of the genetic algorithm (GA) that is multi-morphic by design: depending on model parameters that can be set dynamically by the user, the algorithm acts upon either a single objective function or multiple objective functions. In either case, it leverages the unique ability of GAs to find multiple optimal solutions in a single pass, leaving it to the designer to select the single solution that best meets the requirements. By means of simulation, we evaluate its performance against an optimisation typifying a standard topology control technique from the literature, in terms of the proportion of time the network exhibits the property of strong connectedness. An analysis of the results indicates that this is highly sensitive to the effective maximum transmitting range, the node density, and the mobility scenario under observation. We derive an estimate of the optimal configuration, taking into account the specific conditions of the WSN application domain, and conclude that only the GA optimising for the biconnected components in a network achieves the stated objective of a sustained connected status throughout the duration.
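
    Since the objective functions revolve around whether a given set of range assignments keeps the network strongly connected, the Python sketch below shows one way such a check could look: build the directed communication graph induced by per-node transmitting ranges and test strong connectedness. The node coordinates and ranges are made-up inputs, and the check is a generic illustration rather than the GA's actual fitness function.

    from math import dist

    def comm_graph(positions, ranges):
        """Directed graph: u -> v iff v lies within u's transmitting range."""
        return {u: [v for v in positions
                    if v != u and dist(positions[u], positions[v]) <= ranges[u]]
                for u in positions}

    def strongly_connected(positions, ranges):
        """Every node can reach, and be reached by, every other node."""
        graph = comm_graph(positions, ranges)

        def reachable(adj, start):
            seen, stack = {start}, [start]
            while stack:
                for v in adj[stack.pop()]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            return seen

        start = next(iter(positions))
        reverse = {u: [v for v in graph if u in graph[v]] for u in graph}
        return (reachable(graph, start) == set(positions) and
                reachable(reverse, start) == set(positions))

    # Three nodes on a line; node 'c' has too short a range, so the network is
    # not strongly connected until its range assignment is increased.
    pos = {"a": (0, 0), "b": (5, 0), "c": (10, 0)}
    print(strongly_connected(pos, {"a": 5, "b": 5, "c": 3}))   # False
    print(strongly_connected(pos, {"a": 5, "b": 5, "c": 5}))   # True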