15 research outputs found

    A Library-Based Approach to Task Parallelism in a Data-Parallel Language (Article No. PC971367)

    Get PDF
    Pure data-parallel languages such as High Performance Fortran version 1 (HPF) do not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compile..
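
    As a concrete illustration of the coordination style this abstract describes, here is a minimal sketch in plain C with standard MPI calls. It is not the paper's HPF binding, and all names in it are invented: it merely models two data-parallel tasks as disjoint process groups whose roots exchange an array through simple point-to-point communication functions. Run it with at least two MPI processes (e.g., mpiexec -n 4 ./a.out).

        /* Minimal sketch: two "tasks" as disjoint MPI process groups.
           The producer task fills an array; its root forwards the result
           to the consumer task's root. Illustrative only. */
        #include <mpi.h>
        #include <stdio.h>

        #define N 1024

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int world_rank, world_size;
            MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
            MPI_Comm_size(MPI_COMM_WORLD, &world_size);

            /* Lower half of the ranks = producer task, upper half =
               consumer task (e.g., two stages of a 2-D FFT pipeline). */
            int color = (world_rank < world_size / 2) ? 0 : 1;
            MPI_Comm task;
            MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &task);

            int task_rank;
            MPI_Comm_rank(task, &task_rank);

            double data[N];
            if (color == 0) {
                for (int i = 0; i < N; i++)      /* data-parallel work */
                    data[i] = (double)i;
                if (task_rank == 0)              /* task root sends on */
                    MPI_Send(data, N, MPI_DOUBLE, world_size / 2, 0,
                             MPI_COMM_WORLD);
            } else if (task_rank == 0) {
                MPI_Recv(data, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("consumer task received %d doubles\n", N);
            }

            MPI_Comm_free(&task);
            MPI_Finalize();
            return 0;
        }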

    Study Abroad and Developing Reflective Research Practice Through Blogs: A Preliminary Study from the United Kingdom

    Get PDF
    Blogs are seen as an important strand of social networking and a significant way of disseminating research ideas and sharing knowledge and perceptions with new audiences via digital platforms. The use of blogs within off-campus activities, such as study abroad field visits, has the potential to enhance students’ social media skills and their confidence about becoming active researchers in public by communicating field research experiences and reflections on what they see, learn, hear and do. Via a semi-structured questionnaire administered to UK-based university students participating in a recent Criminology program field visit to Slovenia, we assess the extent to which blogging facilitates student reflective practice on their lived experiences of undertaking research in culturally unfamiliar environments. We show that blogging, combined with the whole experience of international fieldwork, produces a ‘learning gain’ for students, exemplified by a willingness to engage in reflective practice, self-awareness and transferable skills.

    MPI as a Coordination Layer for Communicating HPF Tasks

    Get PDF
    Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, we propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface for sequential languages can be ambiguous when the interface is invoked from a parallel language; we show how these ambiguities can be resolved by describing one possible HPF binding for MPI. We then present the design of a li..
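
    The ambiguity this abstract mentions can be made concrete: when a logical "send" is issued by a task that spans several processes, which process actually transmits, and what data does the message carry? The hedged C sketch below shows one plausible resolution, not necessarily the binding the paper defines: every group member contributes its local block, the pieces are gathered onto the task's root, and exactly one MPI message leaves the group. The function name and interface are invented for illustration.

        /* One way to give a whole process group a single, well-defined
           logical send: gather the distributed array on the task root,
           then let the root alone call MPI_Send. Illustrative only. */
        #include <mpi.h>
        #include <stdlib.h>

        /* `local` holds this process's block of a block-distributed
           array; `dest` is a rank in MPI_COMM_WORLD. */
        void task_send(const double *local, int local_n,
                       MPI_Comm task, int dest, int tag)
        {
            int rank, size;
            MPI_Comm_rank(task, &rank);
            MPI_Comm_size(task, &size);

            double *full = NULL;
            if (rank == 0)
                full = malloc((size_t)size * local_n * sizeof *full);

            /* Collect the distributed pieces on the root... */
            MPI_Gather(local, local_n, MPI_DOUBLE,
                       full, local_n, MPI_DOUBLE, 0, task);

            /* ...then emit exactly one message on the group's behalf. */
            if (rank == 0) {
                MPI_Send(full, size * local_n, MPI_DOUBLE, dest, tag,
                         MPI_COMM_WORLD);
                free(full);
            }
        }

    Gathering through the root serializes the transfer; a production library would presumably move data directly between the owning processes, which is what makes the communication-schedule machinery discussed in the entries below worthwhile.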

    Double Standards: Bringing Task Parallelism to HPF via the Message Passing Interface

    No full text
    High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how a coordination library implementing the Message Passing Interface (MPI) can be used to represent these common parallel program structures. This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, incre..
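
    The schedule-reuse optimization quantified here admits a simple reading: the expensive step in exchanging distributed data is working out which process must send which array slice to which peer, and that answer does not change between iterations as long as the distributions do not. A hedged C sketch of the idea follows; the types and functions are invented for illustration, and the code that builds a schedule from two data distributions is omitted.

        /* Replaying a precomputed communication schedule: the transfer
           list is built once, then reused on every exchange instead of
           being recomputed. Send and receive slices are assumed disjoint. */
        #include <mpi.h>
        #include <stdlib.h>

        typedef struct {          /* one precomputed transfer */
            int peer;             /* world rank to exchange with */
            int offset, count;    /* slice of the local array involved */
        } Transfer;

        typedef struct {          /* a reusable communication schedule */
            Transfer *sends, *recvs;
            int nsend, nrecv;
        } Schedule;

        void schedule_execute(const Schedule *s, double *local)
        {
            int n = s->nsend + s->nrecv, k = 0;
            MPI_Request *req = malloc(n * sizeof *req);

            for (int i = 0; i < s->nrecv; i++)
                MPI_Irecv(local + s->recvs[i].offset, s->recvs[i].count,
                          MPI_DOUBLE, s->recvs[i].peer, 0,
                          MPI_COMM_WORLD, &req[k++]);
            for (int i = 0; i < s->nsend; i++)
                MPI_Isend(local + s->sends[i].offset, s->sends[i].count,
                          MPI_DOUBLE, s->sends[i].peer, 0,
                          MPI_COMM_WORLD, &req[k++]);

            MPI_Waitall(n, req, MPI_STATUSES_IGNORE);
            free(req);
        }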

    A Library-Based Approach to Task Parallelism in a Data-Parallel Language

    Get PDF
    The data-parallel language High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents..

    A Data Transfer Library for Communicating Data-Parallel Tasks

    No full text
    Many computations can be structured as sets of communicating data-parallel tasks. Individual tasks may be coded in HPF, pC++, etc.; periodically, tasks exchange distributed arrays via channel operations, virtual file operations, message passing, etc. The implementation of these operations is complicated by the fact that the processes engaging in the communication may execute on different numbers of processors and may have different distributions for communicated data structures. In addition, they may be connected by different sorts of networks. In this paper, we describe a communicating data-parallel tasks (CDT) library that we are developing for constructing applications of this sort. We outline the techniques used to implement this library, and we describe a range of data transfer strategies and several algorithms based on these strategies. We also present performance results for several algorithms. The CDT library is being used as a compiler target for an HPF compiler augmented with..
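
    The scheduling problem at the heart of such a library can be shown in a few lines: when a block-distributed array moves between tasks running on different numbers of processors, each source block may overlap several destination blocks, and intersecting the two block maps yields the list of transfers to perform. The self-contained C sketch below computes that list for a 1-D block distribution; it illustrates the problem, and is not the CDT library's code.

        /* Intersect the source and destination block maps of a 1-D
           block-distributed array to enumerate the required transfers. */
        #include <stdio.h>

        /* [*lo, *hi): elements owned by process p when len elements are
           block-distributed over nprocs processes. */
        static void block_range(int len, int nprocs, int p, int *lo, int *hi)
        {
            int base = len / nprocs, rem = len % nprocs;
            *lo = p * base + (p < rem ? p : rem);
            *hi = *lo + base + (p < rem ? 1 : 0);
        }

        int main(void)
        {
            int len = 100, msrc = 3, ndst = 4;   /* 3 producers, 4 consumers */

            for (int s = 0; s < msrc; s++) {
                int slo, shi;
                block_range(len, msrc, s, &slo, &shi);
                for (int d = 0; d < ndst; d++) {
                    int dlo, dhi;
                    block_range(len, ndst, d, &dlo, &dhi);
                    int lo = slo > dlo ? slo : dlo;   /* overlap, if any */
                    int hi = shi < dhi ? shi : dhi;
                    if (lo < hi)
                        printf("src %d -> dst %d : elements [%d, %d)\n",
                               s, d, lo, hi);
                }
            }
            return 0;
        }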