67 research outputs found
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.
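The copy-reuse mechanism can be sketched in a few lines. The Python below is our own minimal illustration, assuming a one-dimensional block distribution and dictionary-based "communication"; the class and method names are hypothetical, not the paper's runtime interface. Cached copies of off-processor data stay valid until the underlying array is written.

# Minimal sketch of off-processor data copy reuse, assuming a simple
# block distribution; names and interfaces are illustrative, not the
# paper's actual runtime API.

class CopyReuse:
    def __init__(self):
        self.copies = {}        # schedule key -> {global index: value}
        self.valid = set()      # keys whose cached copies are still current

    def schedule(self, name, indirection, lo, hi):
        """Inspector: the off-processor references of one irregular loop."""
        off = tuple(sorted({i for i in indirection if not lo <= i < hi}))
        return (name, off)

    def gather(self, key, fetch):
        """Fetch off-processor values, reusing a prior copy when the
        remote values are known to be unmodified."""
        if key not in self.valid:                 # must communicate
            name, off = key
            self.copies[key] = {i: fetch(i) for i in off}
            self.valid.add(key)
        return self.copies[key]                   # reuse costs nothing

    def invalidate(self, name):
        """Any write to `name` discards cached copies of its elements."""
        self.valid -= {k for k in self.valid if k[0] == name}


# Two loops referencing the same off-processor locations of x:
x = list(range(100))
rt = CopyReuse()
key = rt.schedule("x", [2, 57, 93, 57], lo=0, hi=50)
first = rt.gather(key, fetch=lambda i: x[i])      # communicates
second = rt.gather(key, fetch=lambda i: x[i])     # reuses the stored copy
assert first is second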
The PARSE Programming Paradigm. Part I: Software Development Methodology. Part II: Software Development Support Tools
The programming methodology of PARSE (parallel software environment), a software environment being developed for reconfigurable non-shared-memory parallel computers, is described. This environment will consist of an integrated collection of language interfaces, automatic and semi-automatic debugging and analysis tools, and an operating system, all of which are made more flexible by the use of a knowledge-based implementation for the tools that make up PARSE. The programming paradigm supports the user freely choosing among three basic approaches/abstractions for programming a parallel machine: logic-based descriptive, sequential-control procedural, and parallel-control procedural programming. All of these result in efficient parallel execution. The current work discusses the methodology underlying PARSE, whereas the companion paper, "The PARSE Programming Paradigm, Part II: Software Development Support Tools," details each of the component tools.
Compilation techniques for irregular problems on parallel machines
Massively parallel computers have ushered in the era of teraflop computing. Even though large and powerful machines are being built, they are used by only a fraction of the computing community. The fundamental reason for this situation is that parallel machines are difficult to program. Development of compilers that automatically parallelize programs will greatly increase the use of these machines. A large class of scientific problems can be categorized as irregular computations. In this class of computation, the data access patterns are known only at runtime, creating significant difficulties for a parallelizing compiler to generate efficient parallel code. Some compilers with very limited abilities to parallelize simple irregular computations exist, but the methods used by these compilers fail for any non-trivial application code. This research presents compiler transformation techniques that can be used to effectively parallelize an important class of irregular programs. A central aim of these transformations is to generate code that aggressively prefetches data. Program slicing methods are used as part of the code generation process. In this approach, a program written in a data-parallel language, such as HPF, is transformed so that it can be executed on a distributed memory machine. An efficient compiler runtime support system has been developed that performs data movement and software caching.
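As a concrete, deliberately simplified illustration of this style of transformation, the sketch below splits an irregular loop y[k] += x[ia[k]] into an inspector that classifies accesses and an executor that runs against a prefetch buffer. The two-phase structure follows the inspector/executor pattern common to this line of work; the function names and data layout are our assumptions, not the dissertation's generated code.

# Inspector/executor sketch for the irregular loop  y[k] += x[ia[k]],
# where this processor owns x[lo:hi]. Illustrative only.

def inspector(ia, lo, hi):
    """Run once before the loop: classify each access as local or remote."""
    local, remote = [], []
    for k, i in enumerate(ia):
        (local if lo <= i < hi else remote).append((k, i))
    return local, remote

def prefetch(remote, fetch):
    """Aggressively fetch all needed off-processor values in one step."""
    return {i: fetch(i) for _, i in remote}

def executor(y, x_owned, lo, local, remote, buf):
    """The original loop body, reading local data or the prefetch buffer."""
    for k, i in local:
        y[k] += x_owned[i - lo]
    for k, i in remote:
        y[k] += buf[i]

# Processor owning x[0:4] of a global x of length 8:
x_owned, lo, hi = [10, 11, 12, 13], 0, 4
x_rest = {4: 14, 5: 15, 6: 16, 7: 17}   # stands in for remote memory
ia = [1, 6, 3, 5]
y = [0, 0, 0, 0]
local, remote = inspector(ia, lo, hi)
buf = prefetch(remote, fetch=lambda i: x_rest[i])
executor(y, x_owned, lo, local, remote, buf)
assert y == [11, 16, 13, 15]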
Efficient Machine-Independent Programming of High-Performance Multiprocessors
Parallel computing is regarded by most computer scientists as the most likely approach for significantly improving computing power for scientists and engineers. Advances in programming languages and parallelizing compilers are making parallel computers easier to use by providing a high-level portable programming model that protects software investment. However, experience has shown that simply finding parallelism is not always sufficient for obtaining good performance from today's multiprocessors. The goal of this project is to develop advanced compiler analysis of the data and computation decompositions, thread placement, communication, synchronization, and memory system effects needed to take advantage of performance-critical elements in modern parallel architectures.
Interprocedural Compilation of Irregular Applications for Distributed Memory Machines
Data parallel languages like High Performance Fortran (HPF) are emerging as the architecture-independent mode of programming distributed memory parallel machines. In this paper, we present the interprocedural optimizations required for compiling applications having irregular data access patterns, when coded in such data parallel languages. We have developed an Interprocedural Partial Redundancy Elimination (IPRE) algorithm for optimized placement of the runtime preprocessing routines and collective communication routines inserted for managing communication in such codes. We also present three new interprocedural optimizations: placement of scatter routines, deletion of data structures, and use of coalescing and incremental routines. We then describe how program slicing can be used for further applying IPRE in more complex scenarios. We have done a preliminary implementation of the schemes presented here, using the Fortran D compilation system as the necessary infrastructure. We present experimental results from two codes compiled using our system to demonstrate the efficacy of the presented schemes. (Also cross-referenced as UMIACS-TR-95-43.)
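The coalescing and incremental routines mentioned in this abstract have a simple set-theoretic core. The sketch below is our own illustration, working on bare index sets; real communication schedules also carry processor and buffer-layout information, and the function names are hypothetical.

# Set-level sketch of schedule coalescing and incremental schedules;
# real schedules carry processor and buffer layout information too.

def coalesce(req_a, req_b):
    """One gather serving both loops: fetch the union of their needs."""
    return req_a | req_b

def incremental(req_new, already_fetched):
    """A follow-up gather needs only what earlier gathers did not bring."""
    return req_new - already_fetched

loop1 = {3, 9, 17}          # off-processor indices needed by loop 1
loop2 = {9, 17, 42}         # ... and by loop 2
print(coalesce(loop1, loop2))        # {3, 9, 17, 42}: a single message
print(incremental(loop2, loop1))     # {42}: only the new element moves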
Compiler Techniques for Optimizing Communication and Data Distribution for Distributed-Memory Computers
Advanced Research Projects Agency (ARPA); National Aeronautics and Space Administration
Optimal Compilation of HPF Remappings
Applications with varying array access patterns require dynamically changing array mappings on distributed-memory parallel machines. HPF (High Performance Fortran) provides such remappings, on data that can be replicated, explicitly through the realign and redistribute directives and implicitly at procedure calls and returns. However, such features are left out of the HPF subset and of the currently discussed HPF kernel for efficiency reasons. This paper presents a new compilation technique to handle HPF remappings for message-passing parallel architectures. The first phase is global and removes all useless remappings that appear naturally in procedures. The code generated by the second phase takes advantage of replication to shorten the remapping time. It is proved optimal: a minimal number of messages, containing only the required data, is sent over the network. The technique is fully implemented in HPFC, our prototype HPF compiler. Experiments were performed on a DEC Alpha farm.
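The optimality claim, a minimal number of messages carrying only the required data, rests on the fact that both the old and the new owner of every element are closed-form functions of its index. The following runnable sketch is our own illustration, not the paper's algorithm: it computes the exact per-processor-pair messages for a one-dimensional BLOCK to CYCLIC remapping with no replication.

# Exact message sets for remapping n elements from BLOCK to CYCLIC
# over p processors; illustrative, one-dimensional, no replication.

def block_owner(i, n, p):
    b = -(-n // p)                  # ceil(n / p): the block size
    return i // b

def cyclic_owner(i, p):
    return i % p

def remap_messages(n, p):
    msgs = {}                       # (src, dst) -> indices src must send dst
    for i in range(n):
        s, d = block_owner(i, n, p), cyclic_owner(i, p)
        if s != d:                  # elements already in place move nowhere
            msgs.setdefault((s, d), []).append(i)
    return msgs

# 8 elements on 2 processors: BLOCK gives P0 = {0..3}, CYCLIC gives P0 = evens.
print(remap_messages(8, 2))         # {(0, 1): [1, 3], (1, 0): [4, 6]}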
An integrated runtime and compile-time approach for parallelizing structured and block structured applications
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented; the library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results, from a multiblock Navier-Stokes solver template and a multigrid code, demonstrate the efficacy of our approach: our primitives have low runtime communication overheads, and the compiler-parallelized codes perform within 20 percent of code parallelized by manually inserting calls to the runtime library.
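Low communication overhead in such libraries typically comes from precomputing interface schedules once and reusing them on every exchange. The sketch below is our own illustration of that idea, not the library's interface: it computes the regular section two mesh blocks share, which is the only part that ever needs to be communicated between them.

# Regular-section intersection: the index ranges two mesh blocks share,
# computed once and then reused for every ghost-cell exchange.
# Illustrative only; each range is an inclusive (lo, hi) pair per dimension.

def interface_section(block_a, block_b):
    section = []
    for (alo, ahi), (blo, bhi) in zip(block_a, block_b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo > hi:
            return None              # the blocks do not touch
        section.append((lo, hi))
    return section

# Two 2-D blocks overlapping in a one-cell-wide strip at i = 10:
a = [(0, 10), (0, 20)]
b = [(10, 20), (5, 15)]
print(interface_section(a, b))       # [(10, 10), (5, 15)]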
Interprocedural Partial Redundancy Elimination and its Application to Distributed Memory Compilation
Partial Redundancy Elimination (PRE) is a general scheme for suppressing partial redundancies which encompasses traditional optimizations like loop-invariant code motion and redundant code elimination. In this paper we address the problem of performing this optimization interprocedurally. We use interprocedural partial redundancy elimination for placement of communication and communication preprocessing statements while compiling for distributed memory parallel machines. (Also cross-referenced as UMIACS-TR-95-42.)
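To make the connection to loop-invariant code motion concrete, consider the toy before/after below. This is our own illustration: in the paper the "expression" being placed is a communication preprocessing call and the analysis crosses procedure boundaries, while here the names are hypothetical and everything is local.

# Toy illustration of what (I)PRE does with a communication preprocessing
# call: build_schedule(ia) depends only on ia, so recomputing it on every
# iteration or call is (partially) redundant. Names are hypothetical.

def build_schedule(ia):
    return tuple(sorted(set(ia)))     # stand-in for schedule construction

# Before: the schedule is rebuilt inside every sweep.
def sweep_unoptimized(ia, steps):
    for _ in range(steps):
        sched = build_schedule(ia)    # loop-invariant: ia never changes here
        # ... gather with sched, compute ...

# After PRE: the computation is placed once, at the earliest point where ia
# is known; every later occurrence reuses the result instead of recomputing.
def sweep_optimized(ia, steps):
    sched = build_schedule(ia)        # hoisted outside the loop
    for _ in range(steps):
        pass                          # ... gather with sched, compute ...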