109 research outputs found

    Auto-scoping for OpenMP Tasks

    Safe Parallelism: Compiler Analysis Techniques for Ada and OpenMP

    There is a growing need to support parallel computation in Ada to cope with the performance requirements of the most advanced functionalities of safety-critical systems. In that regard, the use of parallel programming models is paramount to exploit the benefits of parallelism. Recent works motivate the use of OpenMP, a de facto standard in high-performance computing for programming shared-memory architectures. These works address two important aspects of introducing OpenMP in Ada: the compatibility of the OpenMP syntax with the Ada language, and the interoperability of the OpenMP and Ada runtimes, demonstrating that OpenMP complements and supports the structured parallelism approach of the tasklet model. This paper addresses a third fundamental aspect: functional safety from a compiler perspective. In particular, it focuses on race conditions and considers the fine-grained and unstructured capabilities of OpenMP. To that end, this paper presents a new compiler analysis technique that (1) identifies potential race conditions in parallel Ada programs based on OpenMP tasks, Ada tasks, or both, and (2) provides solutions for the detected races. This work was supported by the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P, and by the FCT (Portuguese Foundation for Science and Technology) within the CISTER Research Unit (CEC/04234).

    A Functional Safety OpenMP* for Critical Real-Time Embedded Systems

    OpenMP* has recently gained attention in the embedded domain by virtue of the augmentations introduced in the latest specification. Yet the language has had minimal impact in the embedded real-time domain, mostly due to the lack of reliability and resiliency mechanisms; as a result, functional safety properties cannot be guaranteed. This paper analyses the latest specification in detail to determine whether, and how, compliant OpenMP implementations can guarantee functional safety. Based on the conclusions drawn from the analysis, the paper describes a set of modifications to the specification, and a set of requirements for compiler and runtime systems, to qualify for safety-critical environments. Through the proposed solution, OpenMP can be used in critical real-time embedded systems without compromising functional safety. This work was funded by the EU project P-SOCRATES (FP7-ICT-2013-10) and the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P.

    Parallel programming issues and what the compiler can do to help

    Twenty-first-century parallel programming models are becoming genuinely complex due to the diversity of architectures they need to target (multi- and many-cores, GPUs, FPGAs, etc.). What if we could use one programming model to rule them all, one programming model to find them, one programming model to bring them all and in the darkness bind them, in the land of MareNostrum where the Applications lie? The OmpSs programming model is an attempt to do so by means of compiler directives. Compilers are essential tools to exploit applications and the architectures they run on. In this sense, compiler analysis and optimization techniques have been widely studied in order to produce better-performing and less resource-consuming code. In this paper we present two uses of several analyses we have implemented in the Mercurium [3] source-to-source compiler: a) helping users with correctness hints regarding the usage of OpenMP and OmpSs tasks; and b) enabling the execution of OpenMP on embedded systems with very little memory, by computing the task dependency graph of the application at compile time. We also present the next steps of our work: a) extending range analysis to handle recursive OpenMP and OmpSs applications, and b) modeling applications using the task priorities feature of OmpSs and the upcoming OpenMP 4.1.

    Compiler Analysis and its application to OmpSs

    Nowadays, productivity is the buzzword in every computer science area, and several metrics have been defined to measure it in any type of system; among the most important are performance, programmability, cost, and power usage. From architects to programmers, improving productivity has become an important aspect of any development. Programming models play an important role in this topic: thanks to the expressiveness of a high-level representation not tied to any particular architecture, and the extra level of abstraction they offer over architecture-specific programming languages, programming models aim to be a cornerstone in the enhancement of productivity. OmpSs is a programming model developed at the Barcelona Supercomputing Center, built on top of the Mercurium compiler and the Nanos++ runtime library, which aims to exploit task-level parallelism and heterogeneous architectures. This model covers many productivity aspects, such as programmability, defining simple directives that can be integrated into sequential codes without restructuring the original sources to obtain parallelism, and performance, using these directives to support multiple architectures and asynchronous parallelism. Nonetheless, the convenient design of a programming model and the use of a powerful architecture are not, by themselves, enough to achieve good productivity. Compilers are crucial in the communication between these two components: they are meant to exploit both the underlying architectures and the programmers' code, and analyses and optimizations are the techniques that procure better transformations. Therefore, we have focused our work on enhancing the productivity of OmpSs by implementing a set of high-level analyses and optimizations in the Mercurium compiler.
    They address two directions: obtaining better performance by improving code generation, and improving the programmability of the programming model by relieving the programmer of some tedious and error-prone tasks. Since Mercurium is a source-to-source compiler, we have applied these analyses on a high-level representation; they are architecture-independent and can therefore be useful for any target device in the back-end transformations.

    Extending OmpSs-2 with flexible task-based array reductions

    Reductions are a well-known computational pattern found in scientific applications that need efficient parallelisation mechanisms. In this thesis we present a flexible scheme for computing reductions over arrays in the context of OmpSs-2, a task-based programming model similar to OpenMP.

    Enabling Ada and OpenMP runtimes interoperability through template-based execution

    The growing trend to support parallel computation to enable the performance gains of recent hardware architectures is increasingly present in more conservative domains, such as safety-critical systems. Applications such as autonomous driving require levels of performance only achievable by fully leveraging the potential parallelism in these architectures. To address this requirement, the Ada language, designed for safety and robustness, is considering support for parallel features in the next revision of the standard (Ada 202X). Recent works have motivated the use of OpenMP, a de facto standard in high-performance computing, to enable parallelism in Ada, showing the compatibility of the two models and proposing static analysis to enhance reliability. This paper summarizes these previous efforts towards the integration of OpenMP into Ada to exploit its benefits in terms of portability, programmability, and performance, while providing the safety benefits of Ada in terms of correctness. The paper extends those works by proposing and evaluating an application transformation that enables the OpenMP and Ada runtimes to operate (under certain restrictions) as if they were integrated. The objective is to allow Ada programmers to naturally experiment with and evaluate the benefits of parallelizing concurrent Ada tasks with OpenMP, while ensuring compliance with both specifications. This work was supported by the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P, by the European Union's Horizon 2020 Research and Innovation Programme under grant agreements No. 611016 and No. 780622, and by the FCT (Portuguese Foundation for Science and Technology) within the CISTER Research Unit (CEC/04234).

    UPIR: Toward the Design of Unified Parallel Intermediate Representation for Parallel Programming Models

    Full text link
    The complexity of heterogeneous computing architectures, as well as the demand for productive and portable parallel application development, have driven the evolution of parallel programming models to become more comprehensive and complex than before. Enhancing conventional compilation technologies and software infrastructure to be parallelism-aware has become one of the main goals of recent compiler development. In this paper, we propose the design of a unified parallel intermediate representation (UPIR) for multiple parallel programming models that enables unified compiler transformations across them. UPIR specifies three commonly used parallelism patterns (SPMD, data, and task parallelism), data attributes, explicit data movement and memory management, and the synchronization operations used in parallel programming. We demonstrate UPIR via a prototype implementation in the ROSE compiler: unifying the IR for both OpenMP and OpenACC, in both C/C++ and Fortran; unifying the transformation that lowers both OpenMP and OpenACC code to the LLVM runtime; and exporting UPIR to an LLVM MLIR dialect.

    OpenMP tasking model for Ada: safety and correctness

    22nd International Conference on Reliable Software Technologies (Ada-Europe 2017), 12–16 June 2017, Vienna, Austria. The safety-critical real-time embedded domain increasingly demands the use of parallel architectures to fulfill performance requirements, and such architectures require parallel programming models to exploit the underlying parallelism. This paper evaluates the applicability of using OpenMP, a widespread parallel programming model, with Ada, a language widely used in the safety-critical domain. Concretely, this paper shows that applying the OpenMP tasking model to exploit fine-grained parallelism within Ada tasks does not impact the safety and correctness of programs, which is vital in the environments where Ada is mostly used. Moreover, we compare the OpenMP tasking model with the proposed Ada extensions that define parallel blocks, parallel loops, and reductions. Overall, we conclude that the OpenMP tasking model can be safely used in such environments, making it a promising approach to exploit fine-grained parallelism in Ada tasks, and we identify the issues that still need further research.