Parallel programming issues and what the compiler can do to help
Twenty-first-century parallel programming models are
becoming really complex due to the diversity of architectures they need
to target (multi- and many-cores, GPUs, FPGAs, etc.). What if we
could use one programming model to rule them all, one programming
model to find them, one programming model to bring them all and in
the darkness bind them, in the land of MareNostrum where the
applications lie? The OmpSs programming model is an attempt to do so,
by means of compiler directives.
Compilers are essential tools for exploiting applications and the
architectures they run on. In this sense, compiler analysis and
optimization techniques have been widely studied, in order to
produce better-performing and less resource-consuming code.
In this paper we present two uses of several analyses we have
implemented in the Mercurium [3] source-to-source compiler: a) the
first is to provide users with correctness hints regarding the usage of
OpenMP and OmpSs tasks; b) the second is to enable the execution
of OpenMP in embedded systems with very little memory,
by computing the Task Dependency Graph of the application
at compile time. We also present the next steps of our work: a)
extending range analysis to analyze recursive OpenMP and OmpSs
applications, and b) modeling applications using the task priorities
feature of OmpSs and the upcoming OpenMP 4.1.
Techniques for reducing and bounding OpenMP dynamic memory
OpenMP offers a tasking model that is very convenient
for developing critical real-time parallel applications by virtue of
its time predictability. However, current implementations make
intensive use of dynamic memory to efficiently manage the
parallel execution. This jeopardizes the qualification process
and limits the use of OpenMP on architectures with a limited
amount of memory. This work introduces an OpenMP framework
that statically allocates the data structures needed to efficiently
manage parallel execution in OpenMP programs. We achieve the
same performance as current implementations, while bounding
and reducing the dynamic memory requirements at run time.
Taskgraph: A Low Contention OpenMP Tasking Framework
OpenMP is the de facto standard for shared-memory systems in High-Performance
Computing (HPC). It includes a task-based model that offers a high level of
abstraction to effectively exploit highly dynamic structured and unstructured
parallelism in an easy and flexible way. Unfortunately, the run-time overheads
introduced to manage tasks are (very) high in the most common OpenMP frameworks
(e.g., GCC, LLVM), which defeats the potential benefits of the tasking model
and makes it suitable for coarse-grained tasks only. This paper presents
taskgraph, a framework that uses a task dependency graph (TDG) to represent a
region of code implemented with OpenMP tasks in order to reduce the run-time
overheads associated with the management of tasks, i.e., contention and
parallel orchestration, including task creation and synchronization. The TDG
avoids the overheads related to the resolution of task dependencies and greatly
reduces those derived from accesses to shared resources. Moreover, the
taskgraph framework introduces into OpenMP the record-and-replay execution model,
which accelerates the taskgraph region from its second execution onwards. Overall,
the multiple optimizations presented in this paper allow exploiting fine-grained
OpenMP tasks to cope with the trend in current applications towards
massive on-node parallelism and fine-grained, dynamic scheduling
paradigms. The framework is implemented on LLVM 15.0. Results show that the
taskgraph implementation outperforms the vanilla OpenMP system in terms of
performance and scalability, for both structured and unstructured parallelism,
and for both coarse- and fine-grained tasks. Furthermore, the proposed
framework considerably reduces the performance gap between the task and
thread models of OpenMP.
Analysis of teaching methodologies in the Catalan classroom: the lecture and the flipped classroom
This work began as an effort to evaluate the different technologies and teaching methodologies
that can be used in the classroom. In this context, we analyzed which of these
technologies and methodologies are actually applied in Catalan schools and what difficulties
teachers encounter in applying them. Finally, we carried out a field study with
one of the methodologies, to learn first-hand how it works inside a class.
For this reason, this work is divided into three studies, ranging from the most
theoretical and abstract analysis to the most practical and specific one. The three studies
that make up the work are the following:
The first part is a study of the literature of the last fifteen years on the
different technologies and teaching methodologies that have been applied in teaching, from
primary school to university courses, including compulsory and post-compulsory secondary education.
From this study we have extracted a library of the technologies and methodologies
that seemed most interesting to us and that have had the greatest impact on
teaching.
The second part is a field study carried out with a sample of 46 teachers from all over
Catalonia, from primary school to university, to find out the real situation in schools
regarding the use of ICT and the application of the methodologies studied in the first part
of the work. This study was based on surveys, using a questionnaire for
data collection, choosing non-probabilistic sampling and performing a cross-sectional analysis.
The third part is a field study on the application of the flipped methodology in a
higher-level vocational training course. For this study, reflective practice
was used during the course in order to improve the methodology. Of the techniques studied,
we chose the flipped methodology because it is among the newest and least used,
and because we believe it has the most potential. To analyze how the methodology works
we used, on the one hand, the different assessment instruments carried out during the course and,
on the other, a study based on student satisfaction surveys. A questionnaire was
also used for data collection and, naturally, the sampling was again
non-probabilistic. This was also a cross-sectional study, carried out at the end of the course
in which the methodology was applied.
OpenMP static TDG runtime implementation and its usage in heterogeneous computing
OpenMP is the standard for shared-memory
parallel programming, and it offers the possibility of parallelizing
sequential programs on accelerators by using the target directive.
However, CUDA Graph, a new and efficient feature, is not
yet supported. In this work, we present an automatic transformation
of the OpenMP TDG into a CUDA Graph, increasing the
programmability of the latter.
A tool to verify properties of Petri nets using linear algebra
The code of the developed tool cannot be uploaded through this application. Interested parties should contact the author.
OpenMP to CUDA graphs: a compiler-based transformation to enhance the programmability of NVIDIA devices
Heterogeneous computing is increasingly used in a diversity of computing systems, ranging from HPC to the real-time embedded domain, to cope with performance requirements. Due to the variety of accelerators (e.g., FPGAs, GPUs), the use of high-level parallel programming models is desirable to exploit their performance capabilities while maintaining an adequate level of productivity. In that regard, OpenMP is a well-known high-level programming model that incorporates powerful task and accelerator models capable of efficiently exploiting structured and unstructured parallelism in heterogeneous computing. This paper presents a novel compiler transformation technique that automatically transforms OpenMP code into CUDA graphs, combining the programmability benefits of a high-level programming model such as OpenMP with the performance benefits of a low-level programming model such as CUDA. Evaluations have been performed on two NVIDIA GPUs from the HPC and embedded domains, i.e., the V100 and the Jetson AGX, respectively. This work has been supported by the EU H2020 project AMPERE under grant agreement no. 871669.
High-level compiler analysis for OpenMP
Nowadays, applications from dissimilar domains, such as high-performance computing and high-integrity systems, require levels of performance that can only be achieved by means of sophisticated heterogeneous architectures. However, the complex nature of such architectures hinders the production of efficient code at acceptable levels of time and cost. Moreover, the need to exploit parallelism adds complications of its own (e.g., deadlocks, race conditions, etc.). In this context, compiler analysis is fundamental for optimizing parallel programs. There is, however, a trade-off between complexity and profit: low-complexity analyses (e.g., reaching definitions) provide information that may be insufficient for many relevant transformations, and complex analyses based on mathematical representations (e.g., the polyhedral model) give accurate results at a high computational cost.
A range of parallel programming models providing different levels of programmability, performance and portability enable the exploitation of current architectures. However, OpenMP has demonstrated many advantages over its competitors: 1) it delivers levels of performance comparable to highly tunable models such as CUDA and MPI, and better robustness than low-level libraries such as Pthreads; 2) the extensions included in the latest specification meet the characteristics of current heterogeneous architectures (i.e., the coupling of a host processor to one or more accelerators, and the capability of expressing fine-grained, highly dynamic task parallelism, both structured and unstructured); 3) OpenMP is widely implemented by several chip (e.g., Kalray MPPA, Intel) and compiler (e.g., GNU, Intel) vendors; and 4) although the model currently lacks resiliency and reliability mechanisms, many works, including this thesis, pursue their introduction into the specification.
This thesis addresses the study of compiler analysis techniques for OpenMP with two main purposes: 1) enhance the programmability and reliability of OpenMP, and 2) prove OpenMP as a suitable model to exploit parallelism in safety-critical domains. Particularly, the thesis focuses on the tasking model because it offers the flexibility to tackle the parallelization of algorithms with load imbalance, recursiveness and uncountable loop based kernels. Additionally, current works have proved the time-predictability of this model, shortening the distance towards its introduction in safety-critical domains.
To enable the analysis of applications using the OpenMP tasking model, the first contribution of this thesis is the extension of a set of classic compiler techniques with support for OpenMP.
As a basis for including reliability mechanisms, the second contribution consists of the development of a series of algorithms to statically detect situations involving OpenMP tasks, which may lead to a loss of performance, non-deterministic results or run-time failures.
A well-known problem of parallel processing related to compilers is the static scheduling of a program represented by a directed graph. Although the literature is extensive in static scheduling techniques, the work related to the generation of the task graph at compile-time is very scant. Compilers are limited by the knowledge they can extract, which depends on the application and the programming model. The third contribution of this thesis is the generation of a predicated task dependency graph for OpenMP that can be interpreted by the runtime in such a way that the cost of solving dependences is reduced to the minimum.
With the previous contributions as a basis for determining the functional safety of OpenMP, the final contribution of this thesis is the adaptation of OpenMP to the safety-critical domain considering two directions: 1) indicating how OpenMP can be safely used in such a domain, and 2) integrating OpenMP into Ada, a language widely used in the safety-critical domain.
Enabling Ada and OpenMP runtimes interoperability through template-based execution
The growing trend to support parallel computation to enable the performance gains of recent hardware architectures is increasingly present in more conservative domains, such as safety-critical systems. Applications such as autonomous driving require levels of performance only achievable by fully leveraging the potential parallelism of these architectures. To address this requirement, the Ada language, designed for safety and robustness, is considering supporting parallel features in the next revision of the standard (Ada 202X). Recent works have motivated the use of OpenMP, a de facto standard in high-performance computing, to enable parallelism in Ada, showing the compatibility of the two models and proposing static analyses to enhance reliability. This paper summarizes these previous efforts towards the integration of OpenMP into Ada to exploit its benefits in terms of portability, programmability and performance, while providing the safety benefits of Ada in terms of correctness. The paper extends those works by proposing and evaluating an application transformation that enables the OpenMP and Ada runtimes to operate (under certain restrictions) as if they were integrated. The objective is to allow Ada programmers to (naturally) experiment with and evaluate the benefits of parallelizing concurrent Ada tasks with OpenMP, while ensuring compliance with both specifications. This work was supported by the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P, by the European Union's Horizon 2020 Research and Innovation Programme under grant agreements nos. 611016 and 780622, and by the FCT (Portuguese Foundation for Science and Technology) within the CISTER Research Unit (CEC/04234).
Framework for the Analysis and Configuration of Real-Time OpenMP Applications
High-performance cyber-physical applications impose several requirements with respect to performance, functional correctness and non-functional aspects. Nowadays, the design of these systems usually follows a model-driven approach, where models generate executable applications, usually in an automated way. As these applications may execute in different parallel environments, their behavior becomes very hard to predict, making the verification of non-functional requirements complicated. In this regard, it is crucial to analyse and understand the impact that the mapping and scheduling of computation have on the real-time response of the applications. In fact, different strategies in these steps of the parallel orchestration may produce significantly different interference, leading to different timing behaviour. Tuning the application parameters and the system configuration proves to be one of the most fitting solutions. The design space, however, can be too cumbersome for a developer to manually test all combinations of application and system configurations. This paper presents a methodology and a toolset to profile, analyse and configure the timing behaviour of high-performance cyber-physical applications and their target platforms. The methodology leverages the possibility of generating a task dependency graph representing the parallel computation to evaluate, through measurements, different mapping configurations and select the one that minimizes response time. This work has been co-funded by the European Commission through the AMPERE project (H2020 grant agreement no. 745601).