
    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault tolerance issues and relate to various application modes of the hypercube. The solutions discussed in the text fall into probabilistic and deterministic models. The probabilistic model assumes a stochastic graph of the hypercube in which PEs and/or CEs may fail with certain probabilities, while the deterministic model assumes that some system components have already failed and aims to determine the resulting system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented; a proof of its correctness is included in an appendix. Using the shelling ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially with the hypercube dimension. The dissertation therefore aims to obtain lower and upper bounds on the measures more efficiently. The algorithms presented in the text generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. To evaluate the deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed; it combines the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube C_n with faulty PEs. The first method, called LOS1, requires a list of faulty components and utilizes the CMB operator of CAREL to solve the problem. When the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes m available nodes in O(mn) time. The proposed techniques are simple and efficient.
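    As a concrete illustration of the deterministic subcube (SF) measure, the following sketch brute-forces the largest fault-free subcubes of a small hypercube given a hypothetical list of faulty node IDs. It is only a minimal baseline written for this summary, not the LOS1/LOS2 distributed algorithms, whose details are not reproduced here; the function name and the example fault set are invented.

```python
from itertools import product

def largest_fault_free_subcubes(n, faulty):
    """Brute-force search over all subcubes of the n-cube C_n.

    A subcube is encoded as a pattern over {'0', '1', '*'}: position i is
    either fixed to a bit value or free ('*').  Returns the dimension and
    the list of fault-free subcubes of maximum dimension.
    """
    faulty = set(faulty)
    best_dim, best = -1, []
    for pattern in product('01*', repeat=n):
        free = [i for i, c in enumerate(pattern) if c == '*']
        dim = len(free)
        if dim < best_dim:
            continue
        base = sum(1 << i for i, c in enumerate(pattern) if c == '1')
        # Enumerate the 2^dim nodes of this subcube and test them for faults.
        ok = True
        for bits in range(1 << dim):
            v = base
            for j, pos in enumerate(free):
                if bits >> j & 1:
                    v |= 1 << pos
            if v in faulty:
                ok = False
                break
        if ok:
            if dim > best_dim:
                best_dim, best = dim, []
            best.append(''.join(pattern))
    return best_dim, best

# Example: a 4-cube with two hypothetical faulty nodes.
print(largest_fault_free_subcubes(4, faulty={0b0000, 0b1111}))
```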

    Fault-Tolerant Algorithm for Mapping a Complete Binary Tree in an IEH

    Different parallel architectures may require different algorithms, so it is desirable that algorithms developed for one architecture can be easily transformed to, or implemented on, another. This paper proposes a novel algorithm for embedding complete binary trees in a faulty Incrementally Extensible Hypercube (IEH). To obtain a replacement node for each faulty node, 2-expansion is permitted, so that up to (n+1) faults can be tolerated with dilation 3, congestion 1, and load 1. The presented embedding methods are optimized mainly for balancing the processor loads, while minimizing dilation and congestion as far as possible. As a result, parallel algorithms developed for the complete binary tree structure can be mapped onto an IEH. These reconfiguration methods enable extremely high-speed parallel computation.
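    Dilation, congestion, and load are the standard cost measures quoted above. The sketch below computes them for an arbitrary guest-to-host embedding supplied as a node map plus a routing function; it is a generic illustration with invented names and a toy example, not the paper's IEH embedding construction.

```python
from collections import Counter

def embedding_cost(guest_edges, node_map, route):
    """Compute load, dilation, and congestion of an embedding.

    guest_edges: list of (u, v) guest-graph edges.
    node_map:    dict guest node -> host node.
    route:       function (host_u, host_v) -> list of host nodes forming
                 the path along which the guest edge is routed (inclusive).
    """
    # Load: maximum number of guest nodes mapped onto one host node.
    load = max(Counter(node_map.values()).values())

    dilation = 0
    edge_use = Counter()
    for u, v in guest_edges:
        path = route(node_map[u], node_map[v])
        dilation = max(dilation, len(path) - 1)        # path length in host edges
        for a, b in zip(path, path[1:]):
            edge_use[frozenset((a, b))] += 1           # undirected host edge
    congestion = max(edge_use.values()) if edge_use else 0
    return load, dilation, congestion

# Toy example: embed a triangle onto a 3-node host path 0-1-2, routing
# each guest edge along the shortest host path.
guest_edges = [('a', 'b'), ('b', 'c'), ('c', 'a')]
node_map = {'a': 0, 'b': 1, 'c': 2}
route = lambda x, y: list(range(x, y + 1)) if x <= y else list(range(x, y - 1, -1))
print(embedding_cost(guest_edges, node_map, route))    # (1, 2, 2)
```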

    Simulation of Meshes in a Faulty Supercube with Unbounded Expansion

    Reconfiguring meshes in a faulty Supercube is investigated in this paper. The result can readily be used in the optimal embedding of a mesh (or a torus) of processors in a faulty Supercube with unbounded expansion. The embedding algorithms proposed in this paper show that a mesh with any number of nodes can be embedded into a faulty Supercube with load 1, congestion 1, and dilation 3, such that O(n^2 - w^2) faults can be tolerated, where n is the dimension of the Supercube and 2^w is the number of nodes of the mesh. Meshes and hypercubes are widely used interconnection architectures in parallel computing, grid computing, sensor networks, and cloud computing. In addition, the Supercube is superior to the hypercube for embedding meshes and tori under faults. Therefore, parallel or distributed algorithms developed for mesh and torus structures can easily be ported to the Supercube.
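    A classical fault-free baseline for such results is the Gray-code embedding of a 2^a x 2^b mesh into an (a+b)-dimensional hypercube with dilation 1. The sketch below illustrates that standard technique only; it is not the paper's fault-tolerant Supercube algorithm, and the helper names are invented.

```python
def gray(i):
    """i-th value of the binary reflected Gray code."""
    return i ^ (i >> 1)

def mesh_to_hypercube(a, b):
    """Map node (r, c) of a 2^a x 2^b mesh to a node of the (a+b)-cube.

    Concatenating the Gray codes of the row and column indices gives an
    embedding with dilation 1: mesh neighbours differ in exactly one bit.
    """
    return {(r, c): (gray(r) << b) | gray(c)
            for r in range(1 << a) for c in range(1 << b)}

# Sanity check on a 4 x 8 mesh in a 5-cube: every mesh edge maps to a
# hypercube edge (Hamming distance 1).
a, b = 2, 3
f = mesh_to_hypercube(a, b)
for (r, c), x in f.items():
    for (nr, nc) in ((r + 1, c), (r, c + 1)):
        if nr < (1 << a) and nc < (1 << b):
            assert bin(x ^ f[(nr, nc)]).count('1') == 1
print("dilation-1 embedding verified")
```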

    Interconnection Networks Embeddings and Efficient Parallel Computations.

    To obtain greater performance, many processors are allowed to cooperate in solving a single problem. These processors communicate via an interconnection network or a bus. The most essential function of the underlying interconnection network is the efficient exchange of messages between processes on different processors. Parallel machines based on the hypercube topology have gained great respect in parallel computation because of the hypercube's many attractive properties. Many variants of the hypercube have been introduced, mainly to enhance communication. The twisted hypercube is one of the most attractive of these variants: it preserves the important features of the hypercube and reduces its diameter by a factor of two. This dissertation investigates relations and transformations between various interconnection networks and the twisted hypercube and explores its efficiency in parallel computation. The capability of the twisted hypercube to simulate complete binary trees, complete quad trees, and rings is demonstrated and compared with the hypercube. Finally, the fault tolerance of the twisted hypercube is investigated. We present optimal algorithms to simulate rings in a faulty twisted hypercube environment and compare them with the hypercube.
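    The diameter figure being halved can be made concrete: the ordinary n-cube has diameter n, which the sketch below verifies by BFS for a small case. The twisted hypercube's own adjacency rule is not reproduced here, so this is just the baseline computation with invented helper names.

```python
from collections import deque

def hypercube_neighbours(v, n):
    """Neighbours of node v in the ordinary n-cube Q_n (flip one bit)."""
    return [v ^ (1 << i) for i in range(n)]

def diameter(n, neighbours):
    """Diameter of a 2^n-node graph via BFS from every node."""
    worst = 0
    for s in range(1 << n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in neighbours(u, n):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        worst = max(worst, max(dist.values()))
    return worst

# The ordinary n-cube has diameter n; a twisted variant aims for roughly n/2.
print(diameter(4, hypercube_neighbours))   # 4
```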

    Compact routing in fault-tolerant distributed systems

    A compact routing algorithm is a routing algorithm which reduces the space complexity of all-pairs shortest path routing. Compact routing protocols in distributed systems have been studied extensively as an attractive alternative to the traditional method of all-pairs shortest path routing. The use of compact routing protocols has several advantages: compact routing schemes are not only more memory-efficient, but also provide faster routing table lookup, more efficient broadcasting, and better network scalability, while still maintaining optimal or near-optimal routing paths. However, most compact routing protocols are not fault-tolerant. This thesis will first report recent developments in compact routing research. Several new methods for compact routing in fault-tolerant distributed systems will then be presented and analyzed. The most important feature of the algorithms presented in this thesis is that they are self-stabilizing. The self-stabilization paradigm has been shown to be the most unified and all-inclusive approach to the design of fault-tolerant systems. Additionally, these algorithms will address and solve several problems left unsolved by previous work. Relabelable and non-relabelable networks will be considered for both specific and arbitrary topologies.
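    A classical example of the idea behind compact routing is interval routing on a tree: with a DFS numbering, each outgoing port stores a single interval of destinations instead of a full routing table. The sketch below is a minimal, non-fault-tolerant illustration of that textbook scheme with an invented toy tree; it is not one of the self-stabilizing protocols developed in the thesis.

```python
def interval_labels(tree, root=0):
    """Interval routing labels for a tree given as adjacency lists.

    Each node gets a DFS number; for each child c of u, the interval
    [dfs[c], last[c]] covers exactly the destinations routed through c.
    Destinations outside every child interval go to the parent, so each
    node keeps O(degree) intervals rather than a full routing table.
    """
    dfs, last = {root: 0}, {}
    parent = {root: None}
    order = 1
    stack = [(root, iter(tree[root]))]
    while stack:
        u, it = stack[-1]
        advanced = False
        for v in it:
            if v not in dfs:
                parent[v] = u
                dfs[v] = order
                order += 1
                stack.append((v, iter(tree[v])))
                advanced = True
                break
        if not advanced:
            last[u] = order - 1
            stack.pop()
    return dfs, last, parent

def next_hop(u, dest, tree, dfs, last, parent):
    """Route from u towards dest using only the stored intervals."""
    for v in tree[u]:
        if v != parent[u] and dfs[v] <= dfs[dest] <= last[v]:
            return v
    return parent[u]

# Hypothetical 6-node tree rooted at 0.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
dfs, last, parent = interval_labels(tree)
print(next_hop(1, 5, tree, dfs, last, parent))   # node 5 is not below 1, so go to parent 0
```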

    One-to-many node-disjoint paths in (n,k)-star graphs

    We present an algorithm which, given a source node and a set of n−1 target nodes in the (n,k)-star graph S_{n,k}, where all nodes are distinct, builds a collection of n−1 node-disjoint paths, one from each target node to the source. The collection of paths output by the algorithm is such that each path has length at most 6k−7, and the algorithm has time complexity O(k^2 n^2).
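    For readers unfamiliar with the topology: the nodes of S_{n,k} are the k-permutations of {1, ..., n}, and a node is adjacent to the nodes obtained by swapping its first symbol with the symbol in position i (2 ≤ i ≤ k) or by replacing its first symbol with an unused symbol, giving degree n−1. The sketch below builds this adjacency for a small case; it constructs only the graph itself, not the paper's path-building algorithm.

```python
from itertools import permutations

def nk_star(n, k):
    """Adjacency lists of the (n,k)-star graph S_{n,k}.

    Nodes are k-permutations of {1, ..., n}.  Neighbours of p:
      * swap p[0] with p[i] for 1 <= i <= k-1   (i-edges)
      * replace p[0] with any symbol not in p    (1-edges)
    """
    adj = {}
    for p in permutations(range(1, n + 1), k):
        nbrs = []
        for i in range(1, k):                       # i-edges
            q = list(p)
            q[0], q[i] = q[i], q[0]
            nbrs.append(tuple(q))
        for s in set(range(1, n + 1)) - set(p):     # 1-edges
            nbrs.append((s,) + p[1:])
        adj[p] = nbrs
    return adj

# S_{4,2}: 12 nodes, each of degree n-1 = 3.
adj = nk_star(4, 2)
print(len(adj), {len(v) for v in adj.values()})     # 12 {3}
```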

    On one-way cellular automata with a fixed number of cells

    We investigate a restricted one-way cellular automaton (OCA) model in which the number of cells is bounded by a constant k, the so-called kC-OCA. In contrast to the general model, the generative capacity of the restricted model is reduced to the set of regular languages. A kC-OCA can be algorithmically converted to a deterministic finite automaton (DFA), and the blow-up in the number of states is bounded by a polynomial of degree k. We exhibit a family of unary languages which shows that this upper bound is tight in order of magnitude. We then study upper and lower bounds for the trade-off when converting DFAs to kC-OCAs, and show that there are regular languages for which kC-OCAs cannot reduce the number of states compared to DFAs. We then investigate trade-offs between kC-OCAs with different numbers of cells and finally treat the problem of minimizing a given kC-OCA.
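    The polynomial blow-up can be pictured as a product construction whose DFA states are the k-tuples of cell states. The sketch below assumes a simplified sequential-input variant (the rightmost cell reads each symbol and information flows one way to the left); the exact input and acceptance conventions of kC-OCAs may differ, so this is only an illustrative construction with invented parameter names.

```python
from itertools import product

def kcoca_to_dfa(cell_states, sigma, rule, border_rule, k, initial, accepting):
    """Product construction turning a k-cell one-way CA into a DFA.

    Simplified model assumed here: cells c_0 ... c_{k-1}; each interior
    cell sees only its right neighbour (one-way flow) and updates via
    rule(own, right), while the rightmost cell reads the next input symbol
    and updates via border_rule(own, symbol).  A DFA state is the k-tuple
    of cell states, so there are at most len(cell_states) ** k DFA states,
    i.e. a polynomial of degree k.
    """
    states = list(product(cell_states, repeat=k))
    trans = {}
    for q in states:
        for a in sigma:
            nxt = tuple(rule(q[i], q[i + 1]) for i in range(k - 1)) \
                  + (border_rule(q[k - 1], a),)
            trans[(q, a)] = nxt
    accept = {q for q in states if accepting(q)}
    return states, trans, tuple(initial), accept

# Tiny example: two cells over alphabet {'a'}; the right cell toggles on
# every symbol and the left cell copies the right cell's previous state.
states, trans, start, accept = kcoca_to_dfa(
    cell_states=[0, 1], sigma=['a'],
    rule=lambda own, right: right,
    border_rule=lambda own, sym: 1 - own,
    k=2, initial=(0, 0), accepting=lambda q: q[0] == 1)
print(len(states), len(trans))   # 4 DFA states, 4 transitions
```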

    The Parallel Persistent Memory Model

    We consider a parallel computational model that consists of P processors, each with a fast local ephemeral memory of limited size, and sharing a large persistent memory. The model allows each processor to fault with bounded probability, and possibly restart. On faulting, all processor state and local ephemeral memory are lost, but the persistent memory remains. This model is motivated by upcoming non-volatile memories that are as fast as existing random access memory, are accessible at the granularity of cache lines, and have the capability of surviving power outages. It is further motivated by the observation that in large parallel systems, failure of processors and their caches is not unusual. Within the model we develop a framework for designing locality-efficient parallel algorithms that are resilient to failures. There are several challenges, including the need to recover from failures, the desire to do this in an asynchronous setting (i.e., not blocking other processors when one fails), and the need for synchronization primitives that are robust to failures. We describe approaches to solving these challenges based on breaking computations into what we call capsules, which have certain properties, and on developing a work-stealing scheduler that functions properly in the context of failures. The scheduler guarantees a time bound of O(W/P_A + D(P/P_A)⌈log_{1/f} W⌉) in expectation, where W and D are the work and depth of the computation (in the absence of failures), P_A is the average number of processors available during the computation, and f ≤ 1/2 is the probability that a capsule fails. Within the model and using the proposed methods, we develop efficient algorithms for parallel sorting and other primitives. Comment: This paper is the full version of a paper at SPAA 2018 with the same name.
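    The capsule idea can be illustrated with a toy sequential sketch: the computation is split into idempotent capsules that commit their results to (simulated) persistent memory, and a faulted processor simply re-executes from the last committed capsule. This is a schematic illustration only, with an invented fault model and names; it is not the authors' work-stealing scheduler.

```python
import random

persistent = {"next_capsule": 0, "acc": 0}     # survives faults
FAULT_PROB = 0.3

def run_capsule(i, state):
    """One capsule: reads committed state, commits its result at the end.

    Re-executing a capsule after a fault is harmless because nothing is
    written to persistent memory until the final commit (idempotent restart).
    """
    partial = state["acc"] + i * i              # ephemeral work (lost on fault)
    if random.random() < FAULT_PROB:
        raise RuntimeError("processor fault")   # ephemeral memory wiped
    state["acc"] = partial                      # commit to persistent memory
    state["next_capsule"] = i + 1

def run(n_capsules):
    while persistent["next_capsule"] < n_capsules:
        i = persistent["next_capsule"]
        try:
            run_capsule(i, persistent)
        except RuntimeError:
            pass                                # restart from the last commit

run(10)
print(persistent["acc"])    # sum of squares 0..9 = 285, despite faults
```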

    Deterministic Computations on a PRAM with Static Processor and Memory Faults.

    We consider a Parallel Random Access Machine (PRAM) in which some processors and memory cells are faulty. The faults considered are static, i.e., once the machine starts to operate, the operational/faulty status of PRAM components does not change. We develop a deterministic simulation of a fully operational PRAM on a similar faulty machine which has constant fractions of faults among its processors and memory cells. The simulating PRAM has n processors and m memory cells, and simulates a PRAM with n processors and a constant fraction of m memory cells. The simulation is in two phases: it starts with preprocessing, which is followed by the simulation proper, performed in a step-by-step fashion. Preprocessing is performed in time O((m/n + log n) log n). The slowdown of the step-by-step part of the simulation is O(log m).
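    One simplified way to picture the preprocessing is as building an indirection table that maps the logical address space onto operational memory cells, after which every simulated access goes through the table. The sketch below shows only this remapping idea with invented names; it does not reproduce the paper's O((m/n + log n) log n) preprocessing or its O(log m) step-by-step simulation.

```python
def build_remap(m, faulty_cells, logical_size):
    """Map logical cells 0..logical_size-1 to operational physical cells.

    Assumes at most a constant fraction of the m physical cells is faulty,
    so that logical_size operational cells exist.
    """
    operational = [c for c in range(m) if c not in faulty_cells]
    if len(operational) < logical_size:
        raise ValueError("not enough operational cells")
    return operational[:logical_size]

class FaultyMemory:
    """Physical memory in which accesses to faulty cells have no effect."""
    def __init__(self, m, faulty_cells):
        self.data = [0] * m
        self.faulty = set(faulty_cells)
    def write(self, cell, value):
        if cell not in self.faulty:
            self.data[cell] = value
    def read(self, cell):
        return None if cell in self.faulty else self.data[cell]

# Simulate a fault-free memory of 6 logical cells on 8 physical cells,
# two of which are (hypothetically) faulty.
mem = FaultyMemory(8, faulty_cells={2, 5})
remap = build_remap(8, {2, 5}, logical_size=6)
for logical in range(6):
    mem.write(remap[logical], logical * 10)
print([mem.read(remap[i]) for i in range(6)])   # [0, 10, 20, 30, 40, 50]
```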