    Parallel programming environment for OpenMP

    Get PDF
    We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers in the time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program-structure visualization, interactive optimization, support for performance modeling, and performance advising for finding and correcting performance problems. The presented evaluation demonstrates that our environment offers significant support in general parallel tuning efforts and that the toolset facilitates many common tasks in OpenMP parallel programming in an efficient manner.
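    The OpenMP directive language mentioned in this abstract expresses parallelism through pragmas layered on sequential code. A minimal sketch of that style (the array size and reduction variable here are illustrative, not taken from the paper; compile with OpenMP support, e.g. gcc -fopenmp, though the code also runs correctly sequentially if the pragma is ignored):

    ```c
    #include <stdio.h>

    int main(void) {
        enum { N = 1000 };
        double a[N];
        double sum = 0.0;

        /* Sequential initialization. */
        for (int i = 0; i < N; i++)
            a[i] = i * 0.5;

        /* The directive distributes loop iterations across threads;
           the reduction clause gives each thread a private partial
           sum that is combined at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.1f\n", sum);  /* 0.5 * (999 * 1000 / 2) = 249750.0 */
        return 0;
    }
    ```

    The same sequential loop remains valid without the pragma, which is the portability property the directive approach trades on.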

    A Comparative Study of National Infrastructures for Digital (Open) Educational Resources in Higher Education

    Get PDF
    This paper reports on the first stage of an international comparative study for the project “Digital educational architectures: Open learning resources in distributed learning infrastructures – EduArc”, funded by the German Federal Ministry of Education and Research. This study reviews the situation of digital (open) educational resources ((O)ER) framed within the digital transformation of ten different Higher Education (HE) systems (Australia, Canada, China, Germany, Japan, South Africa, South Korea, Spain, Turkey and the United States). Following a comparative case study approach, we investigated issues related to the existence of policies, quality assurance mechanisms and measures for the promotion of change in supporting infrastructure development for (O)ER at the national level in HE in the different countries. The results of this mainly documentary research highlight differences and similarities, which are largely due to variations in these countries’ political and organisational structures. The discussion and conclusion point to the importance of understanding each country’s context and culture in order to understand the differences between them, as well as the challenges they face.

    Dynamic Observation of Dendritic Quasicrystal Growth upon Laser-Induced Solid-State Transformation

    Get PDF
    We report the laser-induced solid-state transformation between a periodic “approximant” and a quasicrystal in the Al-Cr system during rapid quenching. Dynamic transmission electron microscopy allows us to capture in situ the dendritic growth of the metastable quasicrystals. The formation of dendrites during solid-state transformation is a rare phenomenon, which we attribute to the structural similarity between the two intermetallics. Through ab initio molecular dynamics simulations, we identify the dominant structural motif to be a 13-atom icosahedral cluster transcending the phases of matter.

    Compiling for the New Generation of High-Performance SMPs

    No full text
    Shared-Memory Parallel computers (SMPs) have become a major force in the market of parallel high-performance computing. Parallelizing compilers have the potential to exploit SMPs efficiently while supporting the familiar sequential programming model. In recent work we have demonstrated that Polaris is one of the most powerful translators approaching this goal. Although shared-memory machines provide one of the easier models for parallel programming, the lack of standardization for expressing parallelism on these machines makes it difficult to write efficient portable code. In this paper we report on a new effort to retarget the Polaris compiler at a range of new SMP machines through a portable directive language, Guide™, in an attempt to provide a solution to this problem. We discuss issues in compiling with this language and the performance obtained on two machines for a number of significant application programs.

    On the Machine-independent Target Language for Parallelizing Compilers

    No full text
    Although shared-memory machines provide one of the easier models for parallel programming, the lack of standardization for expressing parallelism on these machines makes it difficult to write efficient portable code. The Guide™ Programming System is one solution to this problem. In this paper, we discuss a back-end to the Polaris parallelizing compiler that generates Guide™ directives. We then compare the performance of parallel programs expressed in this way to programs automatically parallelized by a machine's native compiler, and to code expressing parallelism with native directives. The resulting performance is presented and the feasibility of this directive set as a portable parallel language is discussed.

    Are parallel workstations the right target for parallelizing compilers

    No full text
    The growing popularity of multiprocessor workstations among general users calls for an easier-to-understand approach to parallel programming. Providing standard, sequential languages with automatic translation tools would enable a seamless transition from uniprocessors to multiprocessor workstations. In this paper we study the successes and limitations of such an approach. To this end, we have retargeted the Polaris parallelizing compiler at a 4-processor Sun SPARCstation 20 and measured the performance of parallel programs. Here, we present the results from six of the Perfect Benchmark programs along with our analysis of the performance and some of the issues brought up during the experiments. Our research will help answer some of the questions that have been posed by both users and manufacturers concerning the practicality and desirable characteristics of parallel programming in a workstation environment.