
    Subcube embeddability and fault tolerance of augmented hypercubes

    Hypercube networks have received much attention in both the parallel processing and communications communities over the years because they offer a rich interconnection structure with high bandwidth, logarithmic diameter, and a high degree of fault tolerance, and they are easily partitionable. Fault tolerance in hypercube and hypercube-based networks has drawn the attention of several researchers in recent years, and the primary aim of this study is to address and analyze reliability issues in hypercube networks. It is well known that the hypercube can be augmented with one dimension to replace any existing dimension should it fail. This research shows that it is possible to add i dimensions to the standard hypercube Qn to tolerate (i - 1) dimension failures, where 0 < i ≤ n. An augmented hypercube, Qn+(n), with n additional dimensions is introduced and compared with two other hypercube networks having the same amount of redundancy. Reliability analysis for the three networks is carried out using combinatorial and Markov modeling, and the mean-time-to-failure (MTTF) values are calculated and compared for all three networks. Comparisons between similar-size hypercube networks show that the augmented hypercube is more robust than the standard hypercube.

    As a related problem, we also examine subcube embeddability, which can be enhanced by introducing an additional dimension. A set of new dimensions, characterized by the Hamming distance between the pairs of nodes each connects, is introduced using a measure defined as the magnitude of a dimension. An enumeration of subcubes of various sizes is presented for a dimension parameterized by its magnitude. It is shown that the maximum number of subcubes for a Qn can be attained only when the magnitude of the dimension is n - 1 or n, and that these two choices optimally increase the number of subcubes among all possible ones.
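
    As background for the enumeration results, the subcube count of a standard (unaugmented) Qn has a well-known closed form: a k-dimensional subcube is fixed by choosing which k of the n address bits vary and the values of the remaining n - k bits. A minimal Python sketch of that baseline (the augmented-dimension counts derived in the paper are beyond this formula):

```python
from math import comb

def subcube_count(n: int, k: int) -> int:
    """Number of k-dimensional subcubes in a standard hypercube Qn.

    Choose which k of the n address bits vary (comb(n, k) ways) and
    the fixed values of the remaining n - k bits (2**(n - k) ways).
    """
    return comb(n, k) * 2 ** (n - k)

# Total subcubes of all dimensions in Q4; the sum over k equals 3**n.
print(sum(subcube_count(4, k) for k in range(5)))  # 81
```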

    Processor allocation strategies for modified hypercubes

    Parallel processing has been widely accepted as the future of high-speed computing. Among the various parallel architectures proposed or implemented, the hypercube has shown great promise because of its powerful properties: regular topology, fault tolerance, low diameter, simple routing, and the ability to efficiently emulate other architectures. The major drawback of the hypercube network is that it cannot be expanded in practice, because the number of communication ports for each processor grows as the logarithm of the total number of processors in the system. Therefore, once a hypercube supercomputer of a certain dimensionality has been built, any future expansion can be accomplished only by replacing the VLSI chips. This is an undesirable feature, and much work is in progress to eliminate this obstacle and thus provide a platform for easier expansion. Modified hypercubes (MHs) have been proposed as the building blocks of hypercube-based systems supporting incremental growth without introducing extra resources for individual hypercubes. However, processor allocation on MHs proves to be a challenge because their topology deviates slightly from that of the standard hypercube network. This thesis addresses processor allocation on MHs and proposes several strategies that are based, partially or entirely, on table look-up approaches. A study of the various task allocation strategies for standard hypercubes is conducted and their suitability for MHs evaluated. Existing processor allocation strategies for pure hypercube networks are demonstrated to be ineffective for MHs, in light of their inability to recognize all available subcubes, whereas the proposed strategies have perfect subcube recognition ability and superior performance. A comparative analysis involving the buddy strategy and the new strategies is carried out using simulation results.
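
    The table look-up idea can be pictured on a standard hypercube: precompute every subcube as a (base, mask) pair and scan the table for one whose processors are all free. A minimal sketch under that simplification (function names are hypothetical; the thesis's MH-specific tables are necessarily more involved):

```python
from itertools import combinations

def submasks(mask):
    """Enumerate all subsets of the set bits of mask."""
    sub = mask
    while True:
        yield sub
        if sub == 0:
            return
        sub = (sub - 1) & mask

def build_subcube_table(n: int, k: int):
    """All k-subcubes of Qn as (base, mask): the mask bits vary, base fixes the rest."""
    table = []
    for dims in combinations(range(n), k):
        mask = sum(1 << d for d in dims)
        fixed = [b for b in range(n) if not (mask >> b) & 1]
        for assign in range(2 ** (n - k)):
            base = 0
            for i, b in enumerate(fixed):
                if (assign >> i) & 1:
                    base |= 1 << b
            table.append((base, mask))
    return table

def allocate(table, busy):
    """Return the node set of the first fully free subcube in the table, or None."""
    for base, mask in table:
        nodes = {base | sub for sub in submasks(mask)}
        if not (nodes & busy):
            return nodes
    return None

table = build_subcube_table(4, 2)          # all 24 Q2 subcubes of Q4
print(sorted(allocate(table, {0, 5})))     # [8, 9, 10, 11]
```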

    Isomorphic Strategy for Processor Allocation in k-Ary n-Cube Systems

    Due to its topological generality and flexibility, the k-ary n-cube architecture has been actively researched for various applications. However, the processor allocation problem has not been adequately addressed for the k-ary n-cube architecture, even though it has been studied extensively for hypercubes and meshes. Earlier k-ary n-cube allocation schemes based on conventional slice partitioning suffer from internal fragmentation of processors, while algorithms based on job-based partitioning alleviate the fragmentation problem but require higher time complexity. This paper proposes a new allocation scheme based on isomorphic partitioning, in which the processor space is partitioned into higher-dimensional isomorphic subcubes. The proposed scheme minimizes fragmentation and is general in the sense that any size of request can be supported and the host architecture need not be isomorphic. An extensive simulation study reveals that the proposed scheme significantly outperforms earlier schemes in terms of mean response time for practical-size k-ary n-cube architectures. The simulation results also show that the reduction of external fragmentation is more substantial than that of internal fragmentation with the proposed scheme.
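
    To see why internal fragmentation matters, compare how many processors a request occupies under coarse slice rounding versus smaller cubic blocks. The numbers below are illustrative only, with the rounding rules simplified far beyond the paper's actual schemes:

```python
def internal_fragmentation(request: int, granted: int) -> int:
    """Processors allocated beyond what the job asked for."""
    return granted - request

# Toy example on an 8-ary 3-cube (512 nodes). Slice partitioning hands
# out whole 8x8 slices of 64 nodes; a 70-processor job then gets two slices.
slice_size = 64
granted_slice = -(-70 // slice_size) * slice_size   # 128 nodes granted
# A finer, isomorphic-style partition can hand out small cubic blocks,
# e.g. 2x2x2 sub-blocks of 8 nodes each.
block = 8
granted_iso = -(-70 // block) * block               # 72 nodes granted
print(internal_fragmentation(70, granted_slice))    # 58
print(internal_fragmentation(70, granted_iso))      # 2
```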

    Some Theoretical Results of Hypercube for Parallel Architecture

    This paper surveys theoretical results on the hypercube for the design of VLSI architectures. Parallel computers, including hypercube multiprocessors, are expected to become a leading technology supporting efficient computation for large uncertain systems.

    Processor allocation for partitionable multiprocessor systems

    The processor allocation problem in an n-dimensional hypercube multiprocessor is similar to the conventional memory allocation problem: the main objective is to maximize the utilization of available resources while minimizing the inherent system fragmentation. In this thesis, a new processor allocation strategy is proposed and compared with existing strategies such as the buddy strategy, the Single Gray Code (SGC) strategy, the Multiple Gray Code (MGC) strategy, and the Maximal Set of Subcubes (MSS) strategy. We show that the proposed strategy outperforms the existing ones, with the advantage of being able to allocate unused processors to other jobs/algorithms.
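
    For context, the classical buddy strategy treats the 2^n node addresses like a binary buddy memory allocator: a k-subcube is a 2^k-aligned run of 2^k consecutive addresses, which is why it misses many of the subcubes that Gray-code-based strategies can recognize. A minimal sketch of that baseline (not the strategy proposed in this thesis):

```python
def buddy_allocate(free, k):
    """Buddy strategy on Qn: a k-subcube is 2**k consecutive node
    addresses aligned on a 2**k boundary. `free` is a bitmap list."""
    size = 1 << k
    for start in range(0, len(free), size):
        if all(free[start:start + size]):
            for i in range(start, start + size):
                free[i] = False          # mark the subcube busy
            return list(range(start, start + size))
    return None                          # no aligned free run found

free = [True] * 16                 # Q4, all 16 processors idle
print(buddy_allocate(free, 2))     # [0, 1, 2, 3]: a Q2 subcube
print(buddy_allocate(free, 2))     # [4, 5, 6, 7]
```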

    Processor allocation in k-ary n-cube multiprocessors

    Composed of various topologies, the k-ary n-cube system is desirable for accepting and executing topologically different tasks. We propose a new allocation strategy to utilize the large amount of processor resources in k-ary n-cubes. Our strategy is an extension of the TC strategy on hypercubes and is able to recognize all subcubes with different topologies. Simulation results show that, with full subcube recognition ability and no internal fragmentation, our strategy consistently outperforms other strategies, such as the Free list strategy on k-ary n-cubes and the Sniffing strategy. (International conference, December 18-20, 1997, Taipei; sponsored by National Taiwan University.)
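
    The address structure these strategies work over: a k-ary n-cube node is an n-digit radix-k address, and neighbors differ by ±1 (mod k) in exactly one digit (for k = 2 this reduces to the hypercube). A small illustrative sketch:

```python
def neighbors(node, k, n):
    """Neighbors of `node` (an n-digit radix-k tuple) in a k-ary n-cube:
    they differ by +/-1 (mod k) in exactly one digit."""
    result = set()
    for d in range(n):
        for step in (1, -1):
            nb = list(node)
            nb[d] = (nb[d] + step) % k
            result.add(tuple(nb))
    return result

print(sorted(neighbors((0, 0), 4, 2)))
# [(0, 1), (0, 3), (1, 0), (3, 0)] in a 4-ary 2-cube (a 4x4 torus)
```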

    A General Framework for Complex Time-Driven Simulations on Hypercubes

    We describe a general framework for building and running complex time-driven simulations with several levels of concurrency. The framework has been implemented on the Caltech/JPL Mark IIIfp hypercube using the Centaur communications protocol. Our framework allows the programmer to break the hypercube into one or more subcubes of arbitrary size (task parallelism). Each subcube runs a separate application using data parallelism and synchronous communications internal to the subcube; communications between subcubes are performed with asynchronous messages. Subcubes each define their own parameters and commands, which drive their particular application. These are collected and organized by the Control Processor (CP) so that the entire simulation can be driven from a single command-driven shell. This system allows several programmers to develop disjoint pieces of a large simulation in parallel and then integrate them with little effort. Each programmer is, of course, also able to take advantage of the separate data and I/O processors on each hypercube node to overlap calculation and communication (on-board parallelism), as well as the pipelined floating-point processor on each node (pipelined processor parallelism). As an example of the framework, we show a large space-defense simulation. Functions (sensing, tracking, etc.) each comprise a subcube; functions are collected into defense platforms (satellites); and many platforms comprise the defense architecture. Software in the CP uses simple input to determine the node allocation for each function based on the desired defense architecture and the number of platforms simulated in the hypercube, allowing many different architectures to be simulated. The set of simulated platforms, the results, and the messages between them are shown on color graphics displays. The methods used here can be generalized to other simulations of a similar nature in a straightforward manner.
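
    The subcube carving described above can be pictured as fixing high-order address bits of the n-cube; for simplicity the sketch below produces equal-size subcubes, whereas the framework allows arbitrary sizes (helper name hypothetical, not the Mark IIIfp API):

```python
def split_into_subcubes(n, m):
    """Partition the 2**n nodes of an n-cube into 2**(n - m) disjoint
    m-dimensional subcubes by fixing the high-order n - m address bits."""
    subcubes = []
    for prefix in range(2 ** (n - m)):
        base = prefix << m
        subcubes.append([base | low for low in range(2 ** m)])
    return subcubes

# A 4-cube split into four 2-dimensional subcubes:
for sc in split_into_subcubes(4, 2):
    print(sc)   # [0,1,2,3], [4,5,6,7], [8,9,10,11], [12,13,14,15]
```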

    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault tolerance issues and relate to various application modes in the hypercube. The solutions discussed in the text fall into probabilistic and deterministic models. The probabilistic measure assumes a stochastic graph of the hypercube where PEs and/or CEs may fail with certain probabilities, while the deterministic model considers that some system components have already failed and aims to determine the system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented; a proof of its correctness is included in an appendix. Using the shelling ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially in the hypercube dimension. This dissertation, then, aims to obtain lower and upper bounds on the measures more efficiently. The algorithms presented in the text generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. To evaluate the deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed, combining the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube Cn with faulty PEs. Method 1, called LOS1, requires a list of faulty components and utilizes the CMB operator of CAREL to solve the problem. When the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes the m available nodes in O(mn) time. The proposed techniques are simple and efficient.
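
    The subcube (SF) measure in the probabilistic model can be illustrated with a brute-force Monte Carlo estimate: sample PE failures independently and test whether a fault-free k-subcube survives. This is an illustrative sketch only, not the dissertation's CAREL or LOS algorithms:

```python
import random
from itertools import combinations

def subcube_survives(n, k, faulty):
    """True if Qn with the given set of faulty nodes still contains a
    fault-free k-dimensional subcube (brute-force check of all subcubes)."""
    for dims in combinations(range(n), k):
        mask = sum(1 << d for d in dims)
        for base in range(2 ** n):
            if base & mask:            # visit each (base, mask) subcube once
                continue
            sub, ok = mask, True
            while True:                # walk every node base | sub of the subcube
                if (base | sub) in faulty:
                    ok = False
                    break
                if sub == 0:
                    break
                sub = (sub - 1) & mask
            if ok:
                return True
    return False

def estimate_sf(n, k, p, trials=2000):
    """Monte Carlo estimate of the probability that Qn with independent
    node-failure probability p still embeds a fault-free k-subcube."""
    hits = 0
    for _ in range(trials):
        faulty = {v for v in range(2 ** n) if random.random() < p}
        if subcube_survives(n, k, faulty):
            hits += 1
    return hits / trials

print(estimate_sf(4, 2, 0.1))   # roughly 0.99 for Q4, k = 2, p = 0.1
```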