7 research outputs found

    Optimal processor assignment for pipeline computations

    The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks, their precedence constraints, and their experimentally determined response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; here, a large number of processors is assigned to a relatively small number of tasks. Efficient assignment algorithms are developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm finds the optimal assignment for the response time optimization problem, and the assignment optimizing throughput under a response time constraint is found in O(np² log p) time. The special cases of linear, independent, and tree task graphs are also considered.
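    As a rough illustration of the linear-pipeline special case, the sketch below gives an O(np²) dynamic program under stated assumptions: time[i][q-1] holds the measured response time of task i on q processors, and a stage sustains throughput tau when its time is at most 1/tau. The function name and input format are illustrative, not taken from the paper.

        # Minimal sketch (assumed interface): dynamic program assigning at
        # most p processors to a linear pipeline of n tasks, minimising total
        # response time subject to a throughput requirement tau.
        def assign_linear(time, p, tau):
            n = len(time)
            INF = float("inf")
            best = [0.0] + [INF] * p  # best[j]: optimum using j processors
            for i in range(n):
                nxt = [INF] * (p + 1)
                for j in range(p + 1):             # processors already spent
                    if best[j] == INF:
                        continue
                    for q in range(1, p - j + 1):  # processors for task i
                        t = time[i][q - 1]
                        if t <= 1.0 / tau:         # stage meets throughput
                            nxt[j + q] = min(nxt[j + q], best[j] + t)
                best = nxt
            return min(best)  # float("inf") if tau is infeasible

        # Example: two stages, three processors, throughput requirement 0.2
        # (each stage time must be <= 5.0): best split gives 1 + 2 processors.
        print(assign_linear([[4.0, 2.5, 1.5], [6.0, 3.2, 2.0]], 3, 0.2))  # 7.2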

    Design and Analysis of Optical Interconnection Networks for Parallel Computation.

    In this doctoral research, we propose several novel protocols and topologies for the interconnection of massively parallel processors. These new technologies achieve considerable improvements in system performance and structural simplicity. Currently, synchronous protocols are used in optical TDM buses. The major disadvantage of a synchronous protocol is the waste of packet slots. To offset this inherent drawback of synchronous TDM, a pipelined asynchronous TDM optical bus is proposed. Simulation results show that the performance of the proposed bus is significantly better than that of known pipelined synchronous TDM optical buses. In practice, the computational power of the plain TDM protocol is limited, and various extensions must be added to the system. In this research, a new pipelined optical TDM bus for implementing a linear-array parallel computer architecture is proposed. The switches on the receiving segment of the bus can be dynamically controlled, which makes the system highly reconfigurable. To build large and scalable systems, we need new network architectures that are suitable for optical interconnections. A new kind of reconfigurable bus, called the segmented bus, is introduced to achieve reduced structural complexity and increased concurrency. We show that parallel architectures based on segmented buses are versatile: they can simulate the parallel communication patterns supported by a wide variety of networks with small slowdown factors. New kinds of interconnection networks, the hypernetworks, have been proposed recently. Compared with point-to-point networks, they allow for increased resource sharing and communication bandwidth utilization, and they are especially suitable for optical interconnects. One way to derive a hypernetwork is to find the dual of a point-to-point network. The hypercube Q_n, where n is the dimension, is a very popular point-to-point network, so it is interesting to construct hypernetworks from the dual Q_n^* of the hypercube Q_n. In this research, the properties of Q_n^* are investigated and a set of fundamental data communication algorithms for Q_n^* is presented. The results indicate that the Q_n^* hypernetwork is a useful and promising interconnection structure for high-performance parallel and distributed computing systems.
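    As a small illustration of deriving a hypernetwork as the dual of a point-to-point network, the sketch below builds Q_n^* from Q_n: every edge of the hypercube becomes a node of the dual, and every hypercube vertex becomes a hyperedge (a bus) joining its n incident edges. The representation is an assumption for illustration, not the dissertation's construction code.

        # Sketch: the dual hypernetwork Q_n^* of the hypercube Q_n. Dual
        # nodes are hypercube edges (u, v) with u < v; each hypercube vertex
        # yields one hyperedge joining its n incident edges.
        def hypercube_dual(n):
            hyperedges = []
            for u in range(2 ** n):
                incident = frozenset(
                    (min(u, u ^ (1 << i)), max(u, u ^ (1 << i)))
                    for i in range(n)  # flip bit i to reach each neighbour
                )
                hyperedges.append(incident)
            return hyperedges

        # Q_3 has 8 vertices and 12 edges, so Q_3^* has 8 hyperedges of
        # size 3 over 12 dual nodes.
        dual = hypercube_dual(3)
        assert len(dual) == 8 and all(len(h) == 3 for h in dual)
        assert len({node for h in dual for node in h}) == 12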

    Classical and quantum sublinear algorithms

    This thesis investigates the capabilities of classical and quantum sublinear algorithms through the lens of complexity theory. The formal classification of problems as “tractable” (by constructing efficient algorithms that solve them) or “intractable” (by proving that no efficient algorithm can) is among the most fruitful lines of work in theoretical computer science; it includes, amongst an abundance of fundamental results and open problems, the notorious P vs. NP question. This particular incarnation of the decision-versus-verification question stems from a choice of computational model: polynomial-time Turing machines. It is far from the only model worthy of investigation, however; indeed, measuring time up to polynomial factors is often too “coarse” for practical applications. We focus on quantum computation, a more complete model of physically realisable computation where quantum mechanical phenomena (such as interference and entanglement) may be used as computational resources; and on sublinear algorithms, a formalisation of ultra-fast computation where merely reading or storing the entire input is impractical, e.g., when processing massive datasets such as social networks or large databases. We begin our investigation by studying structural properties of local algorithms, a large class of sublinear algorithms that includes property testers and is characterised by the inability to even see most of the input. We prove that, in this setting, queries – the main complexity measure – can be replaced with random samples. Applying this transformation yields, among other results, the state-of-the-art query lower bound for relaxed local decoders. Focusing our attention on property testers, we begin to chart the complexity-theoretic landscape arising from the classical vs. quantum and decision vs. verification questions in testing. We show that quantum hardware and communication with a powerful but untrusted prover are “orthogonal” resources, so that one cannot be substituted for the other. This implies all of the possible separations among the analogues of QMA, MA and BQP in the property-testing setting. We conclude with a study of zero-knowledge for (classical) streaming algorithms, which receive one-pass access to the entirety of their input but have only sublinear space. Inspired by cryptographic tools, we construct commitment protocols that are unconditionally secure in the streaming model and can be leveraged to obtain zero-knowledge streaming interactive proofs; in particular, we show that zero-knowledge is achievable in this model.
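    As a toy example of the kind of sublinear, sample-based computation discussed here (a generic textbook-style tester, not taken from the thesis), the sketch below distinguishes an all-zeros string from one that is epsilon-far from all-zeros while reading only O(1/epsilon) uniformly random positions of the input.

        import math
        import random

        # Toy sample-based property tester: accepts every all-zeros string
        # with probability 1, and rejects any string with at least
        # epsilon * n ones with probability >= 2/3.
        def test_all_zeros(x, epsilon):
            n = len(x)
            k = math.ceil(2 / epsilon)  # (1 - eps)^(2/eps) <= e^-2 < 1/3
            for _ in range(k):
                if x[random.randrange(n)] == 1:
                    return False  # a one is a certificate of rejection
            return True  # all-zeros, or a false accept with prob < 1/3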

    Subject Index Volumes 1–200


    Automatic assignment of parallel tasks: application to the real-time simulation of power networks

    Automatic assignment of parallel tasks for real-time simulation -- Modelling and analysis of the task assignment problem -- A heuristic method for real-time task assignment -- Assignment heuristics and performance of the method.