
    Redundancy management for efficient fault recovery in NASA's distributed computing system

    The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize resource use when embedding computational task graphs in the system architecture and when reconfiguring these tasks after a failure has occurred. The computational structures considered were the path and the complete binary tree, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
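
    A minimal sketch of the classical path-into-hypercube embedding the abstract alludes to: the binary reflected Gray code maps a path of 2^n tasks onto an n-dimensional hypercube with dilation 1, so adjacent tasks land on adjacent processors. The function names and the verification step are illustrative assumptions, not the report's algorithms.

```python
def gray(i: int) -> int:
    """Binary reflected Gray code of i; consecutive values differ in exactly one bit."""
    return i ^ (i >> 1)

def embed_path_in_hypercube(n: int) -> list[int]:
    """Map path vertex (task) i to hypercube node gray(i), for i = 0 .. 2**n - 1."""
    return [gray(i) for i in range(2 ** n)]

def is_hypercube_edge(u: int, v: int) -> bool:
    """Two hypercube nodes are adjacent iff their labels differ in exactly one bit."""
    return bin(u ^ v).count("1") == 1

if __name__ == "__main__":
    n = 4
    mapping = embed_path_in_hypercube(n)
    # Every consecutive pair of tasks ends up on adjacent processors (dilation 1).
    assert all(is_hypercube_edge(a, b) for a, b in zip(mapping, mapping[1:]))
    print(mapping)
```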

    Stochastic Analysis of a Churn-Tolerant Structured Peer-to-Peer Scheme

    We present and analyze a simple and general scheme to build a churn (fault)-tolerant structured Peer-to-Peer (P2P) network. Our scheme shows how to "convert" a static network into a dynamic distributed hash table (DHT)-based P2P network such that all the good properties of the static network are guaranteed with high probability (w.h.p.). Applying our scheme to a cube-connected cycles network, for example, yields an O(log N)-degree connected network in which every search succeeds in O(log N) hops w.h.p., using O(log N) messages, where N is the expected stable network size. Our scheme has a constant storage overhead (the number of nodes responsible for servicing a data item) and an O(log N) overhead (messages and time) per insertion, and essentially no overhead for deletions. All these bounds are essentially optimal. While DHT schemes with similar guarantees are already known in the literature, this work is new in the following aspects: (1) It presents a rigorous mathematical analysis of the scheme under a general stochastic model of churn and shows the above guarantees; (2) The theoretical analysis is complemented by a simulation-based analysis that validates the asymptotic bounds even in moderately sized networks and also studies performance under changing stable network size; (3) The presented scheme seems especially suitable for maintaining dynamic structures under churn efficiently. In particular, we show that a spanning tree of low diameter can be efficiently maintained in constant time and a logarithmic number of messages per insertion or deletion w.h.p. Keywords: P2P Network, DHT Scheme, Churn, Dynamic Spanning Tree, Stochastic Analysis
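
    A minimal illustration, not the paper's construction, of the general idea behind such schemes: peers emulate the nodes of a fixed virtual network (a hypercube on 2^n labels here rather than cube-connected cycles, for brevity); a data item is hashed to a virtual label, and a search greedily fixes one differing bit per hop, giving O(log N) hops. The choice of hash and all function names are assumptions made for illustration.

```python
import hashlib

def item_to_label(key: str, n: int) -> int:
    """Hash a data item to one of the 2**n virtual node labels."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** n)

def greedy_route(src: int, dst: int) -> list[int]:
    """Route by flipping one differing bit per hop; at most log2(N) hops."""
    path, cur = [src], src
    diff = src ^ dst
    while diff:
        low_bit = diff & -diff      # fix the lowest differing bit
        cur ^= low_bit
        diff ^= low_bit
        path.append(cur)
    return path

if __name__ == "__main__":
    n = 10                          # 1024 virtual nodes
    dst = item_to_label("some-file.txt", n)
    print(greedy_route(src=0b1010101010, dst=dst))
```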

    Parallel Architectures for Planetary Exploration Requirements (PAPER)

    The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is research oriented toward technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration, with particular reference to NASA/LaRC's (NASA Langley Research Center) research needs for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased toward fault tolerance and may not be an ideal architecture for planetary and deep space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, except that more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.

    New Fault Tolerant Multicast Routing Techniques to Enhance Distributed-Memory Systems Performance

    Distributed-memory systems are a key to achieving high-performance computing and are among the most favored architectures used in advanced research problems. Mesh-connected multicomputers are one of the most popular architectures implemented in many distributed-memory systems. These systems must support communication operations efficiently to achieve good performance. The wormhole switching technique, in which a packet is divided into small flits, has been widely used in the design of distributed-memory systems. Multicast communication, in which one source node sends the same message to several destination nodes, has also been widely used in distributed-memory systems. Fault tolerance refers to the ability of a system to operate correctly in the presence of faults, and the development of fault-tolerant multicast routing algorithms for 2D mesh networks is an important issue. This dissertation presents new fault-tolerant multicast routing algorithms that enhance distributed-memory system performance in wormhole-routed 2D meshes. The algorithms are described for fault-tolerant routing in 2D mesh networks, but they can also be extended to other topologies. They combine a unicast-based multicast algorithm with tree-based multicast algorithms, and they work effectively for the most commonly encountered faults in mesh networks: f-rings, f-chains, and concave fault regions. It is shown that the proposed routing algorithms are effective even in the presence of many fault regions and large fault regions, and they are proved to be deadlock-free. The problem of overlapping fault regions is also solved. Four essential performance metrics in mesh networks are considered and calculated. The algorithms use limited-global-information-based multicasting, a compromise between the local-information-based and global-information-based approaches. Data mining is used to validate the results and to enlarge the sample. The proposed multicast routing techniques are used to enhance the performance of distributed-memory systems, and simulation results are presented to demonstrate their efficiency.
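
    A minimal sketch, not the dissertation's algorithms: deterministic XY (dimension-ordered) routing in a 2D mesh, used here to realize a naive unicast-based multicast in which the source sends one copy per destination. The fault-region handling (f-rings, f-chains, concave regions) described in the abstract is not reproduced; coordinates and function names are assumptions.

```python
def xy_route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
    """Route first along X, then along Y; deadlock-free in a fault-free 2D mesh."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

def unicast_based_multicast(src, destinations):
    """Send the message to each destination over its own XY path."""
    return {dst: xy_route(src, dst) for dst in destinations}

if __name__ == "__main__":
    routes = unicast_based_multicast((0, 0), [(3, 2), (1, 4), (5, 1)])
    for dst, path in routes.items():
        print(dst, path)
```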

    Architectural Considerations for a Self-Configuring Routing Scheme for Spontaneous Networks

    Decoupling the permanent identifier of a node from the node's topology-dependent address is a promising approach toward completely scalable self-organizing networks. A group of proposals that have adopted such an approach use the same structure to address nodes, perform routing, and implement a location service. In this way, the consistency of the routing protocol relies on the coherent sharing of the addressing space among all nodes in the network. Such proposals use a logical tree-like structure in which routes in this space correspond to routes at the physical level. The advantage of tree-like spaces is that they allow for simple address assignment and management. Nevertheless, they offer low route-selection flexibility, which results in low routing performance and poor resilience to failures. In this paper, we propose to increase the number of paths using incomplete hypercubes. The design of more complex structures, like multi-dimensional Cartesian spaces, improves resilience and routing performance due to the flexibility in route selection. We present a framework for using hypercubes to implement indirect routing. This framework allows the solution to be adapted to the dynamics of the network, providing proactive and reactive routing protocols, which are our major contributions. We show that, contrary to traditional approaches, our proposal supports more dynamic networks and is more robust to node failures.
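
    A minimal sketch, under assumptions not taken from the paper, of why a hypercube address space gives route-selection flexibility: between two labels at Hamming distance d there are d! shortest paths, since any differing bit may be fixed at each hop, so a route can detour around a failed neighbor without leaving a shortest path. The failed-node set and function names below are hypothetical.

```python
def flexible_route(src: int, dst: int, n: int, failed: set[int]) -> list[int] | None:
    """Greedy shortest-path hypercube routing that detours around failed nodes when possible."""
    path, cur = [src], src
    while cur != dst:
        moved = False
        for bit in range(n):
            mask = 1 << bit
            if (cur ^ dst) & mask:          # this dimension still differs
                nxt = cur ^ mask
                if nxt not in failed:
                    cur = nxt
                    path.append(cur)
                    moved = True
                    break
        if not moved:                       # every useful neighbor has failed
            return None
    return path

if __name__ == "__main__":
    # With node 0001 failed, the route 0000 -> 0010 -> 0011 -> 1011 is still shortest.
    print(flexible_route(0b0000, 0b1011, n=4, failed={0b0001}))
```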

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, with candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and the capabilities required for both individual tools and an integrated toolset were identified.
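
    A generic illustration, not a model from the report, of the kind of reliability figure such trade-off studies compute: the probability that at least k of n independent, identical processors survive a mission, given per-processor reliability r. All parameter values below are assumptions.

```python
from math import comb

def k_of_n_reliability(k: int, n: int, r: float) -> float:
    """Reliability of a k-out-of-n system with independent component reliability r."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

if __name__ == "__main__":
    # e.g. a quad-redundant processor that needs 3 of 4 channels working
    print(k_of_n_reliability(k=3, n=4, r=0.99))
```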

    Subcube embeddability and fault tolerance of augmented hypercubes

    Hypercube networks have received much attention from both the parallel processing and communications communities over the years because they offer a rich interconnection structure with high bandwidth, logarithmic diameter, and a high degree of fault tolerance, and they are easily partitionable. Fault tolerance in hypercube and hypercube-based networks has received the attention of several researchers in recent years. The primary idea of this study is to address and analyze the reliability issues in hypercube networks. It is well known that the hypercube can be augmented with one dimension to replace any of the existing dimensions should any dimension fail. In this research, it is shown that it is possible to add i dimensions to the standard hypercube Qn to tolerate (i - 1) dimension failures, where 0 < i ≤ n. An augmented hypercube, Qn+(n), with n additional dimensions is introduced and compared with two other hypercube networks with the same amount of redundancy. Reliability analysis for the three hypercube networks is carried out using combinatorial and Markov modeling. The MTTF values are calculated and compared for all three networks. Comparisons between similar-size hypercube networks show that the augmented hypercube is more robust than the standard hypercube. As a related problem, we also look at subcube embeddability. Subcube embeddability of the hypercube can be enhanced by introducing an additional dimension. A set of new dimensions, characterized by the Hamming distance between the pairs of nodes it connects, is introduced using a measure defined as the magnitude of a dimension. An enumeration of subcubes of various sizes is presented for a dimension parameterized by its magnitude. It is shown that the maximum number of subcubes for a Qn can only be attained when the magnitude of the dimension is n - 1 or n. It is further shown that the latter two dimensions can optimally increase the number of subcubes among all possible choices.
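
    A minimal sketch, assuming an exponential per-dimension failure rate and a pure-death Markov chain with no repair, of the kind of MTTF comparison the abstract describes: a network with m dimensions that keeps operating while at most f dimensions have failed spends an expected 1/((m - j)·lam) in the state with j failures, so the MTTF is the sum of those sojourn times. The rates and sizes are illustrative assumptions, not the paper's results.

```python
def mttf(m: int, f: int, lam: float) -> float:
    """MTTF of an m-dimension network tolerating up to f dimension failures (no repair)."""
    return sum(1.0 / ((m - j) * lam) for j in range(f + 1))

if __name__ == "__main__":
    lam = 1e-4                      # assumed failures per hour per dimension
    n = 8
    print("standard Q_n (no spare dimensions):", mttf(n, 0, lam))
    print("augmented, tolerating one failure :", mttf(n + 1, 1, lam))
```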