42 research outputs found

    Subcube embeddability and fault tolerance of augmented hypercubes

    Hypercube networks have received much attention from both the parallel processing and communications communities over the years, since they offer a rich interconnection structure with high bandwidth, logarithmic diameter, and a high degree of fault tolerance. They are also easily partitionable. Fault tolerance in hypercube and hypercube-based networks has received the attention of several researchers in recent years. The primary aim of this study is to address and analyze the reliability issues in hypercube networks. It is well known that the hypercube can be augmented with one dimension to replace any of the existing dimensions should any dimension fail. In this research, it is shown that it is possible to add i dimensions to the standard hypercube Qn to tolerate (i - 1) dimension failures, where 0 < i ≤ n. An augmented hypercube, Qn+(n), with n additional dimensions is introduced and compared with two other hypercube networks with the same amount of redundancy. Reliability analysis for the three hypercube networks is carried out using combinatorial and Markov modeling. The MTTF values are calculated and compared for all three networks. Comparisons between similar-size hypercube networks show that the augmented hypercube is more robust than the standard hypercube. As a related problem, we also look at subcube embeddability. Subcube embeddability of the hypercube can be enhanced by introducing an additional dimension. A set of new dimensions, characterized by the Hamming distance between the pairs of nodes they connect, is introduced using a measure defined as the magnitude of a dimension. An enumeration of subcubes of various sizes is presented for a dimension parameterized by its magnitude. It is shown that the maximum number of subcubes for a Qn can only be attained when the magnitude of the dimension is n - 1 or n. It is further shown that the latter two dimensions optimally increase the number of subcubes among all possible choices.
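
    The abstract does not reproduce the enumeration for the augmented dimensions, but the baseline it builds on is the standard count of subcubes in Qn: a k-subcube is fixed by choosing which k of the n address bits are free and assigning a value to each of the rest. A minimal Python sketch of that count:

    ```python
    from math import comb

    def subcube_count(n: int, k: int) -> int:
        """Number of k-dimensional subcubes in a standard hypercube Q_n.

        Each subcube is determined by choosing which k of the n address
        bits are free (C(n, k) ways) and fixing the remaining n - k bits
        (2^(n-k) ways).
        """
        return comb(n, k) * 2 ** (n - k)

    # Example: Q_4 contains 32 edges (1-subcubes) and 24 squares (2-subcubes).
    print([subcube_count(4, k) for k in range(5)])  # [16, 32, 24, 8, 1]
    ```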

    Processor allocation strategies for modified hypercubes

    Parallel processing has been widely accepted as the future of high-speed computing. Among the various parallel architectures proposed or implemented, the hypercube has shown a lot of promise because of its powerful properties: regular topology, fault tolerance, low diameter, simple routing, and the ability to efficiently emulate other architectures. The major drawback of the hypercube network is that it cannot be expanded in practice, because the number of communication ports for each processor grows as the logarithm of the total number of processors in the system. Therefore, once a hypercube supercomputer of a certain dimensionality has been built, any future expansion can be accomplished only by replacing the VLSI chips. This is an undesirable feature, and much work has been in progress to eliminate this obstacle and thus provide a platform for easier expansion. Modified hypercubes (MHs) have been proposed as the building blocks of hypercube-based systems supporting incremental growth techniques without introducing extra resources for individual hypercubes. However, processor allocation on MHs proves to be a challenge due to a slight deviation of their topology from that of the standard hypercube network. This thesis addresses the issue of processor allocation on MHs and proposes various strategies which are based, partially or entirely, on table look-up approaches. A study of the various task allocation strategies for standard hypercubes is conducted and their suitability for MHs is evaluated. It is shown that the proposed strategies have a perfect subcube recognition ability and superior performance. Existing processor allocation strategies for pure hypercube networks are shown to be ineffective for MHs, in light of their inability to recognize all available subcubes. A comparative analysis involving the buddy strategy and the new strategies is carried out using simulation results.
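
    The thesis's table look-up strategies for MHs are not detailed in the abstract. As a point of reference, the classic buddy strategy used as the comparison baseline can be sketched as follows (class and method names here are illustrative, not taken from the thesis):

    ```python
    class BuddyAllocator:
        """Buddy strategy for subcube allocation in a standard hypercube
        Q_n (a simplified sketch of the comparison baseline, not of the
        thesis's MH-specific strategies)."""

        def __init__(self, n: int):
            self.n = n
            self.free = [True] * (1 << n)  # one flag per node address

        def allocate(self, k: int):
            """Return the base address of a free k-subcube, or None.

            The buddy strategy only recognizes subcubes whose free
            dimensions are the k lowest-order bits, which is why it can
            miss subcubes that stronger strategies find.
            """
            size = 1 << k
            for base in range(0, 1 << self.n, size):
                if all(self.free[base:base + size]):
                    for addr in range(base, base + size):
                        self.free[addr] = False
                    return base
            return None

        def release(self, base: int, k: int):
            for addr in range(base, base + (1 << k)):
                self.free[addr] = True

    alloc = BuddyAllocator(4)   # a Q_4 with 16 processors
    print(alloc.allocate(2))    # 0 -> subcube 00xx
    print(alloc.allocate(2))    # 4 -> subcube 01xx
    ```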

    Adaptive fault-tolerant routing in hypercube multicomputers

    A connected hypercube with faulty links and/or nodes is called an injured hypercube. To enable any non-faulty node to communicate with any other non-faulty node in an injured hypercube, the information on component failures has to be made available to non-faulty nodes so as to route messages around the faulty components. A distributed adaptive fault-tolerant routing scheme is proposed for an injured hypercube in which each node is required to know only the condition of its own links. Despite its simplicity, this scheme is shown to be capable of routing messages successfully in an injured hypercube as long as the number of faulty components is less than n. Moreover, it is proved that this scheme routes messages via shortest paths with a rather high probability, and the expected length of a resulting path is very close to that of a shortest path. Since the assumption that the number of faulty components is less than n in an n-dimensional hypercube might limit the usefulness of the above scheme, a routing scheme based on depth-first search is introduced which works in the presence of an arbitrary number of faulty components. Due to the insufficient information on faulty components, the paths chosen by the above scheme may not always be the shortest. To guarantee that all messages are routed via shortest paths, it is proposed that every node be equipped with more information than that on its own links. The effects of this additional information on routing efficiency are analyzed, and the additional information to be kept at each node for shortest-path routing is determined. Several examples and remarks are also given to illustrate the results.
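
    As an illustration only (the paper's exact scheme is not reproduced in the abstract), a greedy routing step that uses nothing but local link state might look like the sketch below; the hop cap stands in for the bookkeeping a real scheme would use to avoid livelock on detours:

    ```python
    def route(src: int, dst: int, faulty_links: set, n: int, max_hops=None):
        """Greedy adaptive routing sketch for an injured hypercube Q_n.

        faulty_links is a set of frozenset({u, v}) node pairs. At every
        step the message prefers a fault-free link along a dimension in
        which the current and destination addresses still differ; if
        none exists it detours along any fault-free link. This is an
        illustrative simplification, not the paper's scheme.
        """
        cur, hops = src, 0
        max_hops = max_hops or 4 * n
        while cur != dst and hops < max_hops:
            diff = cur ^ dst
            preferred = [d for d in range(n) if diff >> d & 1]
            spare = [d for d in range(n) if not diff >> d & 1]
            for d in preferred + spare:
                nxt = cur ^ (1 << d)
                if frozenset((cur, nxt)) not in faulty_links:
                    cur = nxt
                    break
            else:
                return None  # all incident links faulty
            hops += 1
        return hops if cur == dst else None

    # Example: Q_3 with the link between nodes 0 and 1 down.
    print(route(0b000, 0b111, {frozenset((0, 1))}, n=3))  # 3 (still shortest)
    ```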

    Parallel Architectures for Planetary Exploration Requirements (PAPER)

    The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is essentially research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration, with particular reference to NASA/LaRC's (NASA Langley Research Center) research needs for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep space probes due to its high cost and complexity. The MAX concept appears to be a promising candidate, except that more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.

    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault tolerance issues and relate to various application modes in the hypercube. Solutions discussed in the text fall into probabilistic and deterministic models. The probabilistic measure assumes a stochastic graph of the hypercube where PEs and/or CEs may fail with certain probabilities, while the deterministic model considers that some system components have already failed and aims to determine the system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented. A proof of its correctness is included in an appendix. Using the shelling ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially with the hypercube dimension. This dissertation, then, aims to obtain lower and upper bounds on the measures more efficiently. Algorithms presented in the text generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. In attempting to evaluate deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed. This method combines the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube Cn with faulty PEs. Method 1, called LOS1, requires a list of faulty components and utilizes the CMB operator of CAREL to solve the problem. In case the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes m available nodes in O(mn) time. The proposed techniques are simple and efficient.
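
    None of the dissertation's algorithms appear in the abstract. For orientation only, a brute-force Monte Carlo baseline for the subcube-functionality (SF) measure could look like the following sketch (function names invented for illustration); the dissertation's bounding algorithms run in time polynomial in the cube dimension instead:

    ```python
    import random
    from itertools import combinations

    def subcube_addresses(n: int, k: int):
        """Yield the 2^k node addresses of every k-subcube of Q_n."""
        for dims in combinations(range(n), k):
            fixed_positions = [d for d in range(n) if d not in dims]
            for bits in range(1 << (n - k)):
                fixed = sum(((bits >> i) & 1) << d
                            for i, d in enumerate(fixed_positions))
                yield [fixed | sum(((s >> i) & 1) << d
                                   for i, d in enumerate(dims))
                       for s in range(1 << k)]

    def sf_estimate(n: int, k: int, p: float, trials: int = 10_000) -> float:
        """Estimate the probability that Q_n, with each PE failing
        independently with probability p, still contains a fault-free
        k-subcube (brute force, small n only)."""
        cubes = list(subcube_addresses(n, k))
        hits = 0
        for _ in range(trials):
            up = [random.random() > p for _ in range(1 << n)]
            if any(all(up[a] for a in cube) for cube in cubes):
                hits += 1
        return hits / trials

    print(sf_estimate(4, 2, 0.1))  # Q_4, 2-subcubes, 10% PE failure rate
    ```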

    Some Theoretical Results of Hypercube for Parallel Architecture

    This paper surveys some theoretical results on the hypercube for the design of VLSI architectures. Parallel computers, including the hypercube multiprocessor, will become a leading technology supporting efficient computation for large uncertain systems.

    Optimal Subcube Allocation in a Circuit-Switched Faulty Hypercube

    Fault-tolerance embedding of rings and arrays in star and pancake graphs

    The star and pancake graphs are useful interconnection networks for connecting processors in a parallel and distributed computing environment. The star network has been widely studied and is shown to possess attractive features like sublogarithmic diameter, node and edge symmetry, and high resilience. The star/pancake interconnection graphs Sn/Pn of dimension n have n! nodes connected by (n-1)·n!/2 edges. Due to their large number of nodes and interconnections, they are prone to failure of one or more nodes/edges. In this thesis, we present methods to embed Hamiltonian paths (H-paths) and Hamiltonian cycles (H-cycles) in a star graph Sn and pancake graph Pn in a faulty environment. Such embeddings are important for solving computational problems, formulated for array and ring topologies, on star and pancake graphs. The models considered include single-processor failure, double-processor failure, and multiple-processor failures. All the models are applied to an H-cycle which is formed by visiting all the (n!/4!) S4/P4s in an Sn/Pn in a particular order. Each S4/P4 has an entry node where the cycle/path enters that particular S4/P4 and an exit node where the path leaves it. Distributed algorithms for embedding Hamiltonian cycles in the presence of multiple faults are also presented for both Sn and Pn.
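
    The embedding constructions themselves are not given in the abstract, but the underlying graphs are simple to generate, which is handy for checking small cases. A sketch:

    ```python
    from itertools import permutations

    def star_graph(n: int):
        """Adjacency list of the star graph S_n: vertices are the
        permutations of 1..n, and each vertex is joined to the n-1
        permutations obtained by swapping its first symbol with the
        symbol in position i, for i = 2..n."""
        adj = {}
        for perm in permutations(range(1, n + 1)):
            nbrs = []
            for i in range(1, n):
                q = list(perm)
                q[0], q[i] = q[i], q[0]
                nbrs.append(tuple(q))
            adj[perm] = nbrs
        return adj

    def pancake_graph(n: int):
        """Adjacency list of the pancake graph P_n: neighbours are the
        prefix reversals of length 2..n."""
        adj = {}
        for perm in permutations(range(1, n + 1)):
            adj[perm] = [perm[:i][::-1] + perm[i:] for i in range(2, n + 1)]
        return adj

    # Both graphs have n! vertices and (n-1)*n!/2 edges, as the abstract notes.
    g = star_graph(4)
    print(len(g), sum(len(v) for v in g.values()) // 2)  # 24 36
    ```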