2 research outputs found

    Parallel programming environment for OpenMP

    We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers with the time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program-structure visualization, interactive optimization, support for performance modeling, and performance advising for finding and correcting performance problems. The presented evaluation demonstrates that our environment offers significant support for general parallel tuning efforts and that the toolset facilitates many common tasks in OpenMP parallel programming in an efficient manner.
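
    For readers unfamiliar with the directive-based programming model the abstract refers to, the following is a minimal illustrative OpenMP example in C; it is not taken from the paper or from the toolset, and the variable names are hypothetical.

        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        static double a[N], b[N];

        int main(void) {
            double sum = 0.0;

            /* Initialize the input array serially. */
            for (int i = 0; i < N; i++)
                b[i] = (double)i;

            /* The directive asks the compiler and runtime to split the loop
               iterations among threads and to combine the partial sums. */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++) {
                a[i] = 2.0 * b[i];
                sum += a[i];
            }

            printf("sum = %f (max threads = %d)\n", sum, omp_get_max_threads());
            return 0;
        }

    Built with an OpenMP-capable compiler (for example, gcc -fopenmp), the loop runs in parallel; without that flag the pragma is ignored and the same source runs sequentially.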

    Are parallel workstations the right target for parallelizing compilers?

    The growing popularity of multiprocessor workstations among general users calls for a more accessible approach to parallel programming. Providing standard, sequential languages with automatic translation tools would enable a seamless transition from uniprocessors to multiprocessor workstations. In this paper we study the success and limitations of such an approach. To this end, we have retargeted the Polaris parallelizing compiler to a 4-processor Sun SPARCstation 20 and measured the performance of the resulting parallel programs. We present results from six of the Perfect Benchmark programs, along with our analysis of the performance and of some issues raised during the experiments. Our research will help answer questions posed by both users and manufacturers concerning the practicality and desirable characteristics of parallel programming in a workstation environment.
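
    As a schematic illustration of the kind of transformation an automatic parallelizer performs, consider a simple loop before and after parallelization. This sketch is not drawn from the paper, and Polaris itself operates on Fortran source; the C version below only shows the general idea.

        #include <stdio.h>

        /* Sequential loop as the user writes it. */
        void daxpy_serial(int n, double alpha, const double *x, double *y) {
            for (int i = 0; i < n; i++)
                y[i] = y[i] + alpha * x[i];
        }

        /* What an automatic parallelizer conceptually produces: having proven
           the iterations independent, it emits the same loop annotated with a
           parallel directive so that threads split the index range. */
        void daxpy_parallel(int n, double alpha, const double *x, double *y) {
            #pragma omp parallel for
            for (int i = 0; i < n; i++)
                y[i] = y[i] + alpha * x[i];
        }

        int main(void) {
            double x[4] = {1, 2, 3, 4}, y[4] = {0, 0, 0, 0};
            daxpy_parallel(4, 2.0, x, y);
            printf("%f %f %f %f\n", y[0], y[1], y[2], y[3]);
            return 0;
        }

    The question the paper examines is whether parallel code generated automatically in this way actually delivers worthwhile speedups on a small multiprocessor workstation.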