    Mapping Finite Element Graphs on Hypercubes

    In parallel computing, it is important to map a parallel program onto a parallel computer so that the total execution time of the program is minimized. In general, a parallel program and a parallel computer can be represented by a task graph (TG) and a processor graph (PG), respectively. In a TG, nodes represent the tasks of a parallel program and edges denote the data communication needed between tasks; the weights associated with nodes and edges represent computational load and communication cost, respectively. In a PG, nodes and edges denote processors and communication channels, respectively. Under this graph model, the mapping problem becomes a task allocation problem, in which we try to distribute the computational load of a parallel program across the processors of a parallel computer as evenly as possible (the load balance criterion, LBC) while minimizing the communication cost among processors (the minimum communication cost criterion, MCCC). The optimal assignment of tasks to processors that minimizes the total execution time is known to be NP-complete [GaJo79], so the optimal solution is intractable and satisfactory suboptimal solutions are generally sought. In this paper, we discuss how to map finite element graphs (FEGs) onto hypercubes; our schemes are general and are applicable to a wide variety of PGs. The finite element method (FEM) is widely used for the structural modeling of physical systems [LaPi83]. Because the method is compute-intensive and its computations are local, it is very attractive to implement on parallel computers [BeBo87] [Bokh81] [Jord78] [SaEr87]. The number of nodes in a FEG is usually greater than the number of processors in a parallel computer, so it is important to partition a FEG into M modules such that the computational loads of the modules are equal and the communication cost among modules is minimized, where M is the number of processors of the parallel computer.
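    The abstract states the two optimization criteria but not the mapping schemes themselves. As a minimal illustration only (hypothetical function and variable names, assuming a weighted task graph and a given assignment of nodes to M modules), the following sketch evaluates a candidate partition against the LBC and the MCCC.

```python
# Minimal sketch (hypothetical, not the paper's algorithm): evaluate a
# candidate partition of a weighted task graph against the two criteria.
def evaluate_partition(node_weights, edges, assignment, num_modules):
    """node_weights: {node: computational load}
    edges: {(u, v): communication cost}
    assignment: {node: module index in range(num_modules)}"""
    # Load balance criterion (LBC): how even are the module loads?
    loads = [0.0] * num_modules
    for node, w in node_weights.items():
        loads[assignment[node]] += w
    imbalance = max(loads) / (sum(loads) / num_modules)

    # Minimum communication cost criterion (MCCC): total weight of the
    # edges whose endpoints land in different modules.
    cut_cost = sum(c for (u, v), c in edges.items()
                   if assignment[u] != assignment[v])
    return imbalance, cut_cost
```

    A good mapping keeps the imbalance close to 1 while keeping the cut cost small; the two goals generally conflict, which is what makes the allocation problem hard.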

    Mapping unstructured grid problems to the connection machine

    We present a highly parallel graph mapping technique that enables one to solve unstructured grid problems on massively parallel computers. Many implicit and explicit methods for solving discretized partial differential equations require each point in the discretization to exchange data with its neighboring points at every time step or iteration. The time spent communicating can limit the high performance promised by massively parallel computing. To eliminate this bottleneck, we map the graph of the irregular problem to the graph representing the interconnection topology of the computer such that the sum of the distances that the messages travel is minimized. We show that, in comparison to a naive assignment of processors, our heuristic mapping algorithm significantly reduces the communication time on the Connection Machine, CM-2.
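    The objective described above, the total distance traveled by messages, can be sketched as follows for a hypercube machine such as the CM-2, where the distance between two processors is the Hamming distance of their binary addresses. This shows the cost function only, not the paper's heuristic mapping algorithm; the names are hypothetical.

```python
# Sketch of the objective being minimized: total hops traveled by messages
# when grid points are placed on hypercube nodes (binary addresses).
def total_message_distance(grid_edges, placement):
    """grid_edges: iterable of (u, v) pairs that exchange data each iteration
    placement: {grid point: integer hypercube address}"""
    def hops(a, b):
        # On a hypercube, the distance is the Hamming distance of addresses.
        return bin(a ^ b).count("1")
    return sum(hops(placement[u], placement[v]) for u, v in grid_edges)
```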

    Compiling global name-space programs for distributed execution

    Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for analyzing a high-level source program and translating it into a set of independently executing tasks that communicate via messages. If the compiler has enough information, this translation can be carried out at compile time; otherwise run-time code is generated to implement the required data movement. The analysis required in both situations is described, and the performance of the generated code on the Intel iPSC/2 is presented.
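    As a rough, hypothetical illustration of the kind of compile-time analysis described, the sketch below computes, for a block-distributed array and a simple nearest-neighbor statement, which non-local elements a processor must receive before executing its local iterations. The distribution, statement, and names are assumptions made for illustration and are not the paper's method.

```python
# Hypothetical sketch of compile-time analysis for a statement such as
#     forall i: a[i] = b[i-1] + b[i+1]
# on a block-distributed array: determine which non-local elements of b a
# given processor must receive before running its local iterations.
def local_range(rank, n, nprocs):
    """Global indices owned by `rank` under a block distribution."""
    block = (n + nprocs - 1) // nprocs
    lo = rank * block
    return lo, min(lo + block, n)

def receive_set(rank, n, nprocs, offsets=(-1, +1)):
    """Global indices referenced by this processor's iterations but owned
    elsewhere (i.e., the data the generated code must receive)."""
    lo, hi = local_range(rank, n, nprocs)
    needed = set()
    for i in range(lo, hi):
        for d in offsets:
            j = i + d
            if 0 <= j < n and not (lo <= j < hi):
                needed.add(j)
    return sorted(needed)

# Example: receive_set(1, 16, 4) -> [3, 8], the two boundary elements
# owned by the neighboring processors.
```

    When the distribution and loop bounds are known, sets like this can be computed at compile time; otherwise the same computation is emitted as run-time code, which mirrors the two cases the abstract distinguishes.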

    Nonlinear structural response using adaptive dynamic relaxation on a massively-parallel-processing system

    A parallel adaptive dynamic relaxation (ADR) algorithm has been developed for nonlinear structural analysis. The algorithm has minimal memory requirements, is easily parallelizable and scalable to many processors, and is generally very reliable and efficient for highly nonlinear problems. Performance evaluations on single-processor computers have shown that the ADR algorithm is reliable and highly vectorizable, and that it is competitive with direct solution methods for the highly nonlinear problems considered. The present algorithm is implemented on the 512-processor Intel Touchstone DELTA system at Caltech and is designed to minimize the extent and frequency of interprocessor communication. The algorithm has been used to solve for the nonlinear static response of two- and three-dimensional hyperelastic systems involving contact. Impressive relative speedups have been achieved and demonstrate the high scalability of the ADR algorithm. For the class of problems addressed, the ADR algorithm represents a very promising approach for parallel-vector processing.
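    For readers unfamiliar with dynamic relaxation, the following is a minimal serial sketch of the basic iteration: explicit pseudo-time stepping with fictitious mass and damping that drives the out-of-balance force to zero. The paper's adaptive parameter selection and parallel decomposition are not reproduced here; the names and fixed parameters are assumptions.

```python
import numpy as np

# Minimal dynamic relaxation sketch (simplified; not the paper's ADR variant).
def dynamic_relaxation(internal_force, f_ext, u0, mass, damping, h,
                       tol=1e-8, max_iter=10000):
    """Drive the residual f_ext - internal_force(u) to zero by explicit
    pseudo-time stepping with fictitious mass and damping."""
    u = u0.copy()
    v = np.zeros_like(u)
    a1 = (1.0 - 0.5 * damping * h) / (1.0 + 0.5 * damping * h)
    a2 = h / (1.0 + 0.5 * damping * h)
    for _ in range(max_iter):
        r = f_ext - internal_force(u)          # out-of-balance force
        if np.linalg.norm(r) < tol:
            break
        v = a1 * v + a2 * r / mass             # damped velocity update
        u = u + h * v                          # explicit displacement update
    return u
```

    Because each step needs only element-level force evaluations and vector updates, the method parallelizes with nearest-neighbor communication, which is why the abstract emphasizes scalability.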

    Rectilinear partitioning of irregular data parallel computations

    New mapping algorithms are described for domain-oriented data-parallel computations in which the workload is distributed irregularly throughout the domain but exhibits localized communication patterns. The problem considered is partitioning the domain for parallel processing in such a way that the workload on the most heavily loaded processor is minimized, subject to the constraint that the partition be perfectly rectilinear. Rectilinear partitions are useful on architectures that have a fast local mesh network. Discussed here are an improved algorithm for finding the optimal partitioning in one dimension, new algorithms for partitioning in two dimensions, and optimal partitioning in three dimensions. The application of these algorithms to real problems is discussed.
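    As background for the one-dimensional case, the sketch below uses the standard bisection formulation: a greedy feasibility test for a candidate bottleneck load, wrapped in a binary search. It is illustrative only and is not necessarily the improved algorithm the paper describes; the names are hypothetical.

```python
# Sketch (assumed, standard formulation): can a 1-D workload be split into
# `num_parts` contiguous pieces with no piece heavier than `bottleneck`?
def feasible(loads, num_parts, bottleneck):
    parts, current = 1, 0.0
    for w in loads:
        if w > bottleneck:
            return False
        if current + w > bottleneck:
            parts += 1
            current = w
            if parts > num_parts:
                return False
        else:
            current += w
    return True

def optimal_bottleneck(loads, num_parts, eps=1e-6):
    """Binary-search the smallest feasible bottleneck load."""
    lo, hi = max(loads), sum(loads)
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(loads, num_parts, mid) else (mid, hi)
    return hi
```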

    Automating Topology Aware Mapping for Supercomputers

    Petascale machines with hundreds of thousands of cores are being built. These machines have varying interconnect topologies and large network diameters. Computation is cheap, and communication on the network is becoming the bottleneck for scaling of parallel applications. Network contention, specifically, is becoming an increasingly important factor affecting overall performance. The broad goal of this dissertation is performance optimization of parallel applications through reduction of network contention. Most parallel applications have a certain communication topology. Mapping the tasks of a parallel application, based on their communication graph, to the physical processors of a machine can potentially lead to performance improvements. Mapping the communication graph of an application onto the interconnect topology of a machine while trying to localize communication is the research problem under consideration. The farther messages travel on the network, the greater the chance of resource sharing between messages, which can create contention on the networks commonly used today. Evaluative studies in this dissertation show that on IBM Blue Gene and Cray XT machines, message latencies can be severely affected under contention. Realizing this, application developers have started paying attention to the mapping of tasks to physical processors to minimize contention. Placing communicating tasks on nearby physical processors can minimize the distance traveled by messages and reduce the chances of contention. Performance improvements through topology-aware placement for applications such as NAMD and OpenAtom are used to motivate this work. Building on these ideas, the dissertation proposes algorithms and techniques for the automatic mapping of parallel applications, relieving application developers of this burden. The effect of contention on message latencies is studied in depth to guide the design of mapping algorithms. The hop-bytes metric is proposed for the evaluation of mapping algorithms as a better metric than the previously used maximum-dilation metric. The main focus of this dissertation is on developing topology-aware mapping algorithms for parallel applications with regular and irregular communication patterns. The automatic mapping framework is a suite of such algorithms with the capability to choose the best mapping for a problem with a given communication graph. The dissertation also briefly discusses completely distributed mapping techniques, which will be imperative for machines of the future.
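    A minimal sketch of the hop-bytes metric mentioned above: the sum, over all messages, of the message size multiplied by the number of network hops between the source and destination processors. The torus distance function is included as an assumed example of a topology-specific hop count; this is an evaluation helper, not the dissertation's mapping algorithms.

```python
# Sketch of the hop-bytes metric: total bytes weighted by network distance.
def hop_bytes(messages, mapping, hops):
    """messages: iterable of (src_task, dst_task, num_bytes)
    mapping: {task: processor}
    hops(p, q): network distance between processors p and q (topology-specific)"""
    return sum(nbytes * hops(mapping[src], mapping[dst])
               for src, dst, nbytes in messages)

# Example distance function for a 3-D torus of given dimensions (assumed):
# coordinates wrap around, so take the shorter direction in each dimension.
def torus_hops(p, q, dims):
    return sum(min(abs(a - b), d - abs(a - b)) for a, b, d in zip(p, q, dims))
```

    Lower hop-bytes means less aggregate traffic on network links, which is why it correlates with reduced contention better than the maximum-dilation metric.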