Chain-based scheduling: Part I - loop transformations and code generation
Chain-based scheduling [1] is an efficient partitioning and scheduling scheme for nested loops on distributed-memory multicomputers. The idea is to exploit the regular data dependence structure of a nested loop to overlap and pipeline communication and computation. Most partitioning and scheduling algorithms proposed for nested loops on multicomputers [1,2,3] are graph algorithms on the iteration space of the nested loop. These graph algorithms are too expensive (at least O(N), where N is the total number of iterations) to be implemented in parallelizing compilers, and they need large data structures to store the result of the partitioning and scheduling. In this paper, we propose compiler loop transformations and code generation that produce chain-based parallel code for nested loops on multicomputers. The cost of the loop transformations is O(nd), where n is the loop nesting depth and d is the number of data dependences; both n and d are very small in real programs. The loop transformations and code generation for chain-based partitioning and scheduling enable parallelizing compilers to generate parallel code that contains all the partitioning and scheduling information the parallel processors need at run time.
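As a deliberately simplified illustration of the idea, the sketch below executes the uniform recurrence A[i][j] = A[i-1][j] + A[i][j-1] in a chain-based order: each row is a chain assigned to one (simulated) processor, and chains advance in blocks of columns so that successive rows can be pipelined. The block width BK and all helper names are our own; a real chain-based code would replace the marked comments with message sends and receives.

```c
#include <string.h>

#define N  8
#define BK 2  /* pipeline block width -- an illustrative tuning parameter */

/* Reference: sequential execution of the recurrence. */
void seq_sweep(int a[N][N]) {
    for (int i = 1; i < N; i++)
        for (int j = 1; j < N; j++)
            a[i][j] = a[i-1][j] + a[i][j-1];
}

/* Chain-based order: for each pipeline stage (a block of BK columns),
   every chain (row) computes its block in turn.  Row i needs only the
   current block of row i-1 (computed just before it in this stage) and
   its own previous block (computed in the previous stage), so all
   dependences are respected while, on a real machine, communication and
   computation of different chains would overlap. */
void chain_sweep(int a[N][N]) {
    for (int jb = 1; jb < N; jb += BK)       /* pipeline stage */
        for (int i = 1; i < N; i++) {        /* chain = row i  */
            /* recv: current block of row i-1 from the previous processor */
            for (int j = jb; j < jb + BK && j < N; j++)
                a[i][j] = a[i-1][j] + a[i][j-1];
            /* send: current block of row i to the next processor */
        }
}

/* Initialize borders so both sweeps start from identical state. */
void init(int a[N][N]) {
    memset(a, 0, sizeof(int) * N * N);
    for (int k = 0; k < N; k++) { a[0][k] = 1; a[k][0] = 1; }
}
```

The chained order visits exactly the same iterations as the sequential sweep, only reordered so that chains can be pipelined.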
Compilation techniques for multicomputers
This thesis considers problems in process and data partitioning when compiling
programs for distributed-memory parallel computers (or multicomputers). These
partitions may be specified by the user through the use of language constructs,
or automatically determined by the compiler.
Data and process partitioning techniques are developed for two models of
compilation. The first compilation model focusses on the loop nests present in a
serial program. Executing the iterations of these loop nests in parallel accounts for
a significant amount of the parallelism which can be exploited in these programs.
The parallelism is exploited by applying a set of transformations to the loop
nests. The iterations of the transformed loop nests are in a form which can be
readily distributed amongst the processors of a multicomputer. The manner in
which the arrays referenced within these loop nests are partitioned between the
processors is determined by the distribution of the loop iterations. The second
compilation model is based on the data parallel paradigm, in which operations
are applied to many different data items collectively. High Performance Fortran
is used as an example of this paradigm.
Novel collective communication routines are developed as part of this thesis,
and are applied to provide the communication associated with the data partitions
for both compilation models. It is shown that these routines greatly simplify
the communication associated with partitioning data on a multicomputer.
The experimental context for this thesis is the development of a compiler for
the Fujitsu AP1000 multicomputer. A prototype compiler is presented. Experimental
results for a variety of applications are included.
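For the first compilation model, the central mapping is the distribution of loop iterations (and, through it, array elements) to processors. A minimal sketch of the standard block distribution follows; the formulas are the usual block-mapping ones, but the function names are our own choosing.

```c
/* Block distribution of an n-iteration loop over p processors:
   iteration i is owned by processor i / ceil(n/p), and processor pid
   executes the contiguous range [block_lo, block_hi). */
int block_size(int n, int p) { return (n + p - 1) / p; }

int owner(int i, int n, int p) { return i / block_size(n, p); }

int block_lo(int pid, int n, int p) { return pid * block_size(n, p); }

int block_hi(int pid, int n, int p) {
    int h = block_lo(pid, n, p) + block_size(n, p);
    return h < n ? h : n;   /* the last block may be short */
}
```

Arrays referenced in the loop are then partitioned so that each processor holds the elements touched by its own iteration range, following the owner-computes rule.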
Compiler Techniques for Optimizing Communication and Data Distribution for Distributed-Memory Computers
Advanced Research Projects Agency (ARPA); National Aeronautics and Space Administration
N–Dimensional Orthogonal Tile Sizing Problem
AMS subject classification: 68Q22, 90C90
We discuss in this paper the problem of generating highly efficient code when an
(n + 1)-dimensional nested loop program is executed on an n-dimensional torus/grid
of distributed-memory general-purpose machines. We focus on a class of uniform
recurrences with non-negative components of the dependency matrix. Using a
strategy of tiling the iteration space, we show that minimizing the total running time
reduces to solving a non-trivial non-linear integer optimization problem. For the
latter we present a mathematical framework that enables us to derive an O(n log n)
algorithm for finding a good approximate solution. The theoretical evaluations and
the experimental results show that the obtained solution approximates the original
minimum sufficiently well in the context of the considered problem. Such an
algorithm is usable in real time for very large values of n and can be used as an
optimization technique in parallelizing compilers as well as in performance tuning
of parallel codes by hand.
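To make the setting concrete: orthogonal tiling partitions the iteration space into rectangular tiles. The paper's contribution is choosing the tile sizes; the sketch below instead fixes illustrative sizes (TI, TJ are our own values) and only shows the tiled loop structure whose running time is being minimized, checking that the tiled nest covers every iteration exactly once.

```c
#define NI 12
#define NJ 10
#define TI 4   /* tile sizes: illustrative fixed values; the paper's  */
#define TJ 3   /* algorithm would choose these to minimize run time  */

/* Visit every point of an NI x NJ iteration space tile by tile,
   incrementing a per-iteration counter.  The outer pair enumerates
   tiles; the inner pair enumerates points within a tile, clipped at
   the iteration-space boundary. */
void tiled_sweep(int count[NI][NJ]) {
    for (int ii = 0; ii < NI; ii += TI)
        for (int jj = 0; jj < NJ; jj += TJ)
            for (int i = ii; i < ii + TI && i < NI; i++)
                for (int j = jj; j < jj + TJ && j < NJ; j++)
                    count[i][j]++;
}
```

On a torus/grid, each tile becomes a unit of computation assigned to one machine, and the tile sizes trade off communication volume against pipeline fill time, which is exactly the non-linear integer optimization the abstract refers to.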
Beyond shared memory loop parallelism in the polyhedral model
Spring 2013. Includes bibliographical references. With the introduction of multi-core processors, motivated by power and energy concerns, parallel processing has become mainstream. Parallel programming is much more difficult due to its non-deterministic nature and the bugs that arise from non-determinacy. One solution is automatic parallelization, where it is entirely up to the compiler to efficiently parallelize sequential programs. However, automatic parallelization is very difficult, and only a handful of successful techniques are available, even after decades of research. Automatic parallelization for distributed-memory architectures is even more problematic in that it requires explicit handling of data partitioning and communication. Since data must be partitioned among multiple nodes that do not share memory, the original memory allocation of a sequential program cannot be used directly. One of the main contributions of this dissertation is the development of techniques for generating distributed-memory parallel code with parametric tiling. Our approach builds on important contributions to the polyhedral model, a mathematical framework for reasoning about program transformations. We show that many affine control programs can be uniformized using only simple techniques. Being able to assume uniform dependences significantly simplifies distributed-memory code generation and also enables parametric tiling. Our approach is implemented in the AlphaZ system, a system for prototyping analyses, transformations, and code generators in the polyhedral model. The key features of AlphaZ are memory re-allocation and explicit representation of reductions. We evaluate our approach on a collection of polyhedral kernels from the PolyBench suite, and show that it scales as well as PLuTo, a state-of-the-art shared-memory automatic parallelizer based on the polyhedral model.
Automatic parallelization is only one approach to dealing with the non-deterministic nature of parallel programming, and it leaves the difficulty entirely to the compiler. Another approach is to develop novel parallel programming languages. These languages, such as X10, aim to provide a highly productive parallel programming environment by building parallelism into the language design. However, even in these languages, parallel bugs remain an important issue that hinders programmer productivity. Another contribution of this dissertation is to extend array dataflow analysis to handle a subset of X10 programs. We apply the result of the dataflow analysis to statically guarantee determinism. Providing static guarantees can significantly increase programmer productivity by catching questionable implementations at compile time, or even while programming.
Compiling global name-space programs for distributed execution
Distributed-memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described, and the performance of the generated code on the Intel iPSC/2 is presented.
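A minimal sketch of the compile-time case, assuming a BLOCK distribution (the sizes, mapping, and names below are our own illustration): a loop written over global indices is translated so that each task iterates only over the indices it owns, addressing a local array.

```c
#define N 16                 /* global array size (illustrative) */
#define P 4                  /* number of tasks (illustrative)   */
#define B ((N + P - 1) / P)  /* block size per task              */

/* Global name-space loop as the programmer writes it: a[i] = 2*i. */
void global_loop(int a[N]) {
    for (int i = 0; i < N; i++)
        a[i] = 2 * i;
}

/* Compiled version for task `pid`: iterate only over the owned global
   indices [pid*B, min((pid+1)*B, N)) and write the local array at
   local index i - pid*B.  This loop references only owned data, so no
   messages are needed; a reference to a non-owned element would
   instead cause the compiler to generate a send/receive pair. */
void local_loop(int pid, int local[B]) {
    int lo = pid * B;
    int hi = (pid + 1) * B < N ? (pid + 1) * B : N;
    for (int i = lo; i < hi; i++)
        local[i - lo] = 2 * i;
}
```

The translation is purely mechanical here because the subscripts are affine and the distribution is known at compile time; otherwise, as the abstract notes, run-time code would resolve ownership instead.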
Global partitioning of parallel loops and data arrays for caches and distributed memory in multiprocessors
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 49-50). By Rajeev K. Barua.
Compiling Fortran 90D/HPF for distributed memory MIMD computers
This paper describes the design of the Fortran 90D/HPF compiler, a source-to-source parallel compiler for distributed-memory systems being developed at Syracuse University. Fortran 90D/HPF is a data parallel language with special directives to specify data alignment and distribution. A systematic methodology to process the distribution directives of Fortran 90D/HPF is presented. Furthermore, techniques for data and computation partitioning, communication detection and generation, and the run-time support for the compiler are discussed. Finally, initial performance results for the compiler are presented. We believe that the methodology for processing data distributions, the computation partitioning, the communication system design, and the overall compiler design can be used by the implementors of compilers for HPF.
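One communication-detection step can be sketched simply. For a BLOCK-distributed array referenced with constant subscript offsets (e.g. b(i+1), b(i-2)) inside an owner-computes loop, a compiler can conclude that each processor needs a boundary ("ghost") region as wide as the largest absolute offset, filled by a nearest-neighbour shift communication. The helper below is our own illustration of that inference, not the Fortran 90D/HPF compiler's actual code.

```c
/* Given the constant subscript offsets with which a BLOCK-distributed
   array is referenced (b(i+k1), b(i+k2), ...), return the ghost-region
   width each processor must exchange with its neighbours: the maximum
   absolute offset over all m references. */
int ghost_width(const int *offsets, int m) {
    int w = 0;
    for (int t = 0; t < m; t++) {
        int a = offsets[t] < 0 ? -offsets[t] : offsets[t];
        if (a > w) w = a;
    }
    return w;
}
```

Once the width is known, the communication-generation phase can emit one shift exchange per loop instance rather than per-element messages, which is the kind of aggregation the abstract's "communication detection and generation" refers to.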
Compiler optimization to improve data locality for processor multithreading
Over the last decade processor speed has increased dramatically, whereas the speed of the memory subsystem has improved at a modest rate. Due to the increase in cache miss latency (in terms of processor cycles), processors stall on cache misses for a significant portion of their execution time. Multithreaded processors have been proposed in the literature to reduce the processor stall time due to cache misses. Although multithreading improves processor utilization, it may also increase cache miss rates, because in a multithreaded processor multiple threads share the same cache, which effectively reduces the cache size available to each individual thread. Increased processor utilization and the increase in the cache miss rate demand higher memory bandwidth. A novel compiler optimization method is presented in this paper that improves data locality for each thread and enhances data sharing among the threads. The method is based on loop transformation theory and optimizes both spatial and temporal data locality. The created threads exhibit a high level of intra-thread and inter-thread data locality, which effectively reduces both the data cache miss rate and the total execution time of numerically intensive computations running on a multithreaded processor.
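The flavour of such loop transformations can be shown with the simplest one, loop interchange: in a row-major language, interchanging the two loops below turns a stride-N access pattern into unit-stride accesses, improving spatial locality without changing the result. This illustrates the general idea only; the paper's method combines several transformations and additionally partitions the work across threads.

```c
#define N 64

/* Column-major traversal of a row-major C array: consecutive inner
   iterations touch elements N ints apart (poor spatial locality). */
long sum_colmajor(const int a[N][N]) {
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

/* After loop interchange the inner loop walks a row, so consecutive
   iterations touch adjacent memory (unit stride, good spatial
   locality); the sum is unchanged because addition is reordered only. */
long sum_rowmajor(const int a[N][N]) {
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}
```

Interchange is legal here because the loop body carries no dependence between iterations; a compiler must prove that before applying the transformation.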