
    Towards an OpenMP Specification for Critical Real-Time Systems

    OpenMP is increasingly being considered as a convenient parallel programming model to cope with the performance requirements of critical real-time systems. Recent works demonstrate that OpenMP makes it possible to derive guarantees on the functional and timing behavior of the system, a fundamental requirement of such systems. These works, however, focus only on the exploitation of fine-grained parallelism and do not take into account the peculiarities of critical real-time systems, which are commonly composed of a set of concurrent functionalities. OpenMP allows exploiting the parallelism exposed both within real-time tasks and among them. This paper analyzes the challenges of combining the concurrency model of real-time tasks with the parallel model of OpenMP. We demonstrate that OpenMP is suitable for developing advanced critical real-time systems by virtue of a few changes to the specification, which enable the scheduling behavior desired in such systems (regarding execution priorities, preemption, migration and allocation strategies). The research leading to these results has received funding from the Spanish Ministry of Science and Innovation, under contract TIN2015-65316-P, and from the European Union's Horizon 2020 Programme under the CLASS Project (www.classproject.eu), grant agreement No 780622.
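
    As a hedged illustration of the kind of scheduling control this line of work targets, the sketch below uses the priority clause that the OpenMP task construct already offers; it is only a scheduling hint, and the functions and priority values are placeholders, while the paper's proposed changes regarding preemption, migration and allocation go beyond this clause.

    /* Minimal sketch: expressing relative task importance with the existing
     * OpenMP `priority` clause (a hint, not a guarantee). Function names and
     * workloads are illustrative only. */
    #include <omp.h>
    #include <stdio.h>

    void control_loop(void) { /* high-rate critical functionality   */ }
    void logging_task(void) { /* low-rate best-effort functionality */ }

    int main(void)
    {
        #pragma omp parallel
        #pragma omp single
        {
            /* Higher value = hint to the runtime to run this task earlier. */
            #pragma omp task priority(10)
            control_loop();

            #pragma omp task priority(1)
            logging_task();

            #pragma omp taskwait
        }
        printf("max task priority honored: %d\n", omp_get_max_task_priority());
        return 0;
    }

    Note that typical runtimes only honor priority values up to the limit reported by omp_get_max_task_priority(), which is configured through the OMP_MAX_TASK_PRIORITY environment variable.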

    A time-predictable parallel programming model for real-time systems

    The recent technological advancements and market trends are causing an interesting phenomenon towards the convergence of the high-performance and the embedded computing domains. Critical real-time embedded systems are increasingly concerned with providing higher performance to implement advanced functionalities in a predictable way. OpenMP, the de facto parallel programming model for shared-memory architectures in the high-performance computing domain, is gaining attention for use in embedded platforms. The reason is that OpenMP is a mature language that makes it possible to efficiently exploit the huge computational capabilities of parallel embedded architectures. Moreover, OpenMP allows expressing parallelism on top of the current technologies used in embedded designs (e.g., C/C++ applications). At a lower level, OpenMP provides a powerful task-centric model that allows defining very sophisticated types of regular and irregular parallelism. While OpenMP provides relevant features for embedded systems, both the programming interface and the execution model are completely agnostic to the timing requirements of real-time systems. This thesis evaluates the use of OpenMP to develop future critical real-time embedded systems. The first contribution analyzes the OpenMP specification from a timing perspective. It proposes new features to be incorporated into the OpenMP standard and a set of guidelines to implement critical real-time systems with OpenMP. The second contribution develops new methods to analyze and predict the timing behavior of parallel applications, so that the notion of parallelism can be safely incorporated into critical real-time systems. Finally, the proposed techniques are evaluated with both synthetic applications and real use cases parallelized with OpenMP. With the above contributions, this thesis pushes the limits of the use of task-based parallel programming models in general, and OpenMP in particular, in critical real-time embedded domains.
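
    For context on the task-centric model the thesis builds on, the sketch below shows OpenMP tasks ordered through depend clauses, the mechanism commonly used to express irregular, fine-grained parallelism; the producer/consumer functions and sizes are illustrative and not taken from the thesis.

    /* Minimal sketch of OpenMP's task-centric model with data dependences. */
    #define N 8

    double produce(int i)    { return i * 2.0; }  /* placeholder producer */
    void   consume(double v) { (void)v;         }  /* placeholder consumer */

    int main(void)
    {
        double block[N];

        #pragma omp parallel
        #pragma omp single
        for (int i = 0; i < N; ++i) {
            /* Producer task: writes block[i]. */
            #pragma omp task depend(out: block[i])
            block[i] = produce(i);

            /* Consumer task: the depend(in) clause orders it after its
             * matching producer, even though tasks are created eagerly. */
            #pragma omp task depend(in: block[i])
            consume(block[i]);
        }
        return 0;
    }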

    Programming MPSoC platforms: Road works ahead

    This paper summarizes a special session on multicore/multi-processor system-on-chip (MPSoC) programming challenges. The current trend towards MPSoC platforms in most computing domains does not only mean a radical change in computer architecture. Even more important from a SW developer's viewpoint, at the same time the classical sequential von Neumann programming model needs to be overcome. Efficient utilization of the MPSoC HW resources demands radically new models and corresponding SW development tools, capable of exploiting the available parallelism and guaranteeing bug-free parallel SW. While several standards are established in the high-performance computing domain (e.g. OpenMP), it is clear that more innovations are required for successful deployment of heterogeneous embedded MPSoCs. On the other hand, at least for the coming years, the freedom for disruptive programming technologies is limited by the huge amount of certified sequential code, which demands a more pragmatic, gradual tool and code replacement strategy.

    A static scheduling approach to enable safety-critical OpenMP applications

    Parallel computation is fundamental to satisfy the performance requirements of advanced safety-critical systems. OpenMP is a good candidate to exploit the performance opportunities of parallel platforms. However, safety-critical systems are often based on static allocation strategies, whereas current OpenMP implementations are based on dynamic schedulers. This paper proposes two OpenMP-compliant static allocation approaches: an optimal but costly approach based on an ILP formulation, and a sub-optimal but tractable approach that computes a worst-case makespan bound close to the optimal one. This work is funded by the EU projects P-SOCRATES (FP7-ICT-2013-10) and HERCULES (H2020/ICT/2015/688860), and the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P.
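
    As a rough illustration of what a worst-case makespan bound looks like, the sketch below evaluates the classical Graham list-scheduling bound for a task set on m cores; it is a textbook bound under assumed WCET values, not the ILP formulation or the specific bound proposed in the paper.

    /* Minimal sketch: Graham's bound for list scheduling a task graph on m
     * cores. Task costs and the critical-path value are illustrative. */
    #include <stdio.h>

    /* total_work: sum of all task WCETs; critical_path: longest chain of WCETs. */
    static double makespan_bound(double total_work, double critical_path, int m)
    {
        return total_work / m + (1.0 - 1.0 / m) * critical_path;
    }

    int main(void)
    {
        double wcet[] = {3.0, 2.0, 4.0, 1.0, 5.0};   /* per-task WCETs (ms)      */
        double total = 0.0;
        for (int i = 0; i < 5; ++i) total += wcet[i];

        double critical_path = 9.0;                  /* assumed longest chain    */
        int cores = 2;

        printf("worst-case makespan bound: %.1f ms\n",
               makespan_bound(total, critical_path, cores));
        return 0;
    }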

    Characterizing traits of coordination

    How can one recognize coordination languages and technologies? As this report shows, the common approach that contrasts coordination with computation is intellectually unsound: depending on the selected understanding of the word "computation", it either captures too many or too few programming languages. Instead, we argue for objective criteria that can be used to evaluate how well programming technologies offer coordination services. Of the various criteria commonly used in this community, we are able to isolate three that are strongly characterizing: black-box componentization, which we had identified previously, but also interface extensibility and customizability of run-time optimization goals. These criteria are well matched by Intel's Concurrent Collections and AstraKahn, and also by OpenCL, POSIX and VMWare ESX.

    Using Cognitive Computing for Learning Parallel Programming: An IBM Watson Solution

    While modern parallel computing systems provide high-performance resources, utilizing them to the fullest extent requires advanced programming expertise. Programming for parallel computing systems is much more difficult than programming for sequential systems. OpenMP is an extension of the C++ programming language that allows parallelism to be expressed using compiler directives. While OpenMP alleviates parallel programming by reducing the lines of code that the programmer needs to write, deciding how and when to use these compiler directives is up to the programmer. Novice programmers may make mistakes that lead to performance degradation or unexpected program behavior. Cognitive computing has shown impressive results in various domains, such as health or marketing. In this paper, we describe the use of the IBM Watson cognitive system for the education of novice parallel programmers. Using the dialogue service of IBM Watson, we have developed a solution that assists the programmer in avoiding common OpenMP mistakes. To evaluate our approach, we conducted a survey with a number of novice parallel programmers at Linnaeus University, and obtained encouraging results with respect to the usefulness of our approach.
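
    A typical mistake such an assistant would flag is accumulating into a shared variable inside a parallel loop; the sketch below contrasts the racy pattern with the reduction-based fix. The example is ours, not taken from the paper.

    /* Minimal sketch of a common novice OpenMP mistake and its fix. */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        double sum = 0.0;

        /* BUGGY variant (kept commented out): every thread updates the shared
         * variable `sum` concurrently, which is a data race.
         *
         *   #pragma omp parallel for
         *   for (int i = 0; i < N; ++i)
         *       sum += 1.0 / (i + 1);
         */

        /* CORRECT: reduction gives each thread a private copy of `sum`,
         * combined once at the end of the loop. */
        #pragma omp parallel for reduction(+: sum)
        for (int i = 0; i < N; ++i)
            sum += 1.0 / (i + 1);

        printf("harmonic sum: %f\n", sum);
        return 0;
    }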

    A Functional Safety OpenMP∗ for Critical Real-Time Embedded Systems

    OpenMP* has recently gained attention in the embedded domain by virtue of the augmentations implemented in the latest specification. Yet the language has had minimal impact in the embedded real-time domain, mostly due to the lack of reliability and resiliency mechanisms. As a result, functional safety properties cannot be guaranteed. This paper analyses the latest specification in detail to determine whether and how compliant OpenMP implementations can guarantee functional safety. Given the conclusions drawn from the analysis, the paper describes a set of modifications to the specification, and a set of requirements for compiler and runtime systems, to qualify for safety-critical environments. Through the proposed solution, OpenMP can be used in critical real-time embedded systems without compromising functional safety. This work was funded by the EU project P-SOCRATES (FP7-ICT-2013-10) and the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P.

    Safe Parallelism: Compiler Analysis Techniques for Ada and OpenMP

    There is a growing need to support parallel computation in Ada to cope with the performance requirements of the most advanced functionalities of safety-critical systems. In that regard, the use of parallel programming models is paramount to exploit the benefits of parallelism. Recent works motivate the use of OpenMP because it is a de facto standard for programming shared-memory architectures in high-performance computing. These works address two important aspects towards the introduction of OpenMP in Ada: the compatibility of the OpenMP syntax with the Ada language, and the interoperability of the OpenMP and Ada runtimes, demonstrating that OpenMP complements and supports the structured parallelism approach of the tasklet model. This paper addresses a third fundamental aspect: functional safety from a compiler perspective. In particular, it focuses on race conditions, considering the fine-grained and unstructured parallelism capabilities of OpenMP. To that end, this paper presents a new compiler analysis technique that: (1) identifies potential race conditions in parallel Ada programs based on OpenMP, Ada tasks, or both, and (2) provides solutions for the detected races. This work was supported by the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P, and by the FCT (Portuguese Foundation for Science and Technology) within the CISTER Research Unit (CEC/04234).
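
    To make the target of such an analysis concrete, the sketch below shows a race between two unstructured OpenMP tasks of the kind a compiler technique like this would report; identifiers are illustrative and the Ada side of the interoperation is not shown.

    /* Minimal sketch of a race between two unordered OpenMP tasks. */
    #include <stdio.h>

    int main(void)
    {
        int sensor = 0;

        #pragma omp parallel
        #pragma omp single
        {
            /* RACE: both tasks access `sensor` with no ordering between them. */
            #pragma omp task shared(sensor)
            sensor = 42;                     /* writer task */

            #pragma omp task shared(sensor)
            printf("read %d\n", sensor);     /* reader task */

            /* One possible fix is to order the tasks with depend clauses:
             *   #pragma omp task depend(out: sensor)  ... writer ...
             *   #pragma omp task depend(in:  sensor)  ... reader ...
             */
            #pragma omp taskwait
        }
        return 0;
    }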