
MPI as a coordination layer for communicating HPF tasks

By I.T. Foster, D.R. Kohr Jr., R. Krishnaiyer and A. Choudhary


Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, the authors propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface for sequential languages can be ambiguous when the interface is invoked from a parallel language; the authors show how these ambiguities can be resolved by describing one possible HPF binding for MPI. They then present the design of a library that implements this binding, discuss the issues that influenced their design decisions, and evaluate the performance of a prototype HPF/MPI library using a communications microbenchmark and an application kernel. Finally, they discuss how additional MPI features might be incorporated into the design framework.
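For context, the communication model described in the abstract, data-parallel tasks exchanging messages explicitly, can be illustrated with the standard MPI Fortran binding. The sketch below is hypothetical and is not the paper's own HPF binding (whose semantics for distributed arrays are precisely what the article defines); it only shows the general shape of two tasks coordinating through MPI send/receive calls, with an HPF directive distributing the array within each task.

```fortran
! Hypothetical sketch (not the paper's actual HPF/MPI binding): two tasks
! exchange an array via standard MPI Fortran calls, while an HPF directive
! block-distributes the array across the processors within each task.
      program hpf_mpi_task
      include 'mpif.h'
      integer ierr, rank, status(MPI_STATUS_SIZE)
      real a(1024)
!HPF$ DISTRIBUTE a(BLOCK)
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank .eq. 0) then
! Task 0 fills the distributed array and ships it to task 1.
         a = 1.0
         call MPI_SEND(a, 1024, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)
      else if (rank .eq. 1) then
! Task 1 receives the array; in the paper's library, the binding must
! resolve how a distributed array maps onto a single logical message.
         call MPI_RECV(a, 1024, MPI_REAL, 0, 0, MPI_COMM_WORLD,
     &                 status, ierr)
      end if
      call MPI_FINALIZE(ierr)
      end
```

The ambiguity the authors address is visible here: when `a` is distributed across many processors, a sequential-language MPI call like `MPI_SEND` has no single obvious meaning, which is why an explicit HPF binding for MPI is needed.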

Topics: Task Scheduling, Programming Languages, Implementation, Mathematics, Computers, Information Science, Management, Law, Miscellaneous, Parallel Processing, Performance, Data Transmission
Publisher: Argonne National Laboratory
Year: 1996
Provided by: UNT Digital Library
