    Optimizing iterative data-flow scientific applications using directed cyclic graphs

    Data-flow programming models have become a popular choice for writing parallel applications as an alternative to traditional work-sharing parallelism. They are better suited to writing applications with irregular parallelism that can present load imbalance. However, these programming models suffer from overheads related to task creation, scheduling, and dependency management, which limit performance and scalability when tasks become too small. At the same time, many HPC applications implement iterative methods or multi-step simulations that create the same directed acyclic graph of tasks on each iteration. By giving application programmers a way to express that a specific loop creates the same task pattern on each iteration, we can create a single task directed acyclic graph (DAG) once and transform it into a cyclic graph. This cyclic graph is then reused for successive iterations, minimizing task creation and dependency management overhead. This paper presents the taskiter, a new construct we propose for the OmpSs-2 and OpenMP programming models, allowing the use of directed cyclic task graphs (DCTG) to minimize runtime overheads. Moreover, we present a simple immediate-successor locality-aware heuristic that minimizes task scheduling overhead by bypassing the runtime task scheduler. We evaluate the implementation of the taskiter and the immediate-successor heuristic in 8 iterative benchmarks. Using small task granularities, we obtain a geometric mean speedup of 2.56x over the reference OmpSs-2 implementation, and 3.77x and 5.2x speedups over the LLVM and GCC OpenMP runtimes, respectively. This work was supported in part by the European Union’s Horizon 2020/EuroHPC Research and Innovation Programme (DEEP-SEA) under Grant 955606; in part by the Spanish State Research Agency, Ministry of Science and Innovation, Generalitat de Catalunya, under Project PCI2021121958 and Project 2021-SGR-01007; in part by the Spanish Ministry of Science and Technology under Contract PID2019-107255GB; and in part by Severo Ochoa under Grant CEX2021-001148-S/MCIN/AEI/10.13039/501100011033.
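
    As a hedged illustration of the idea described above, the sketch below shows how an iterative kernel might be annotated so a runtime can capture the task graph once and replay it across time steps. The pragma spelling and the array-section dependency syntax follow common OmpSs-2 conventions but are assumptions here, not the paper's verbatim taskiter definition.

        #include <vector>

        // Each outer iteration creates the same per-block task pattern, so a
        // taskiter-style construct lets the runtime build the DAG once and
        // reuse it as a directed cyclic task graph for every time step.
        void relax(std::vector<double>& a, int blocks, int bs, int timesteps)
        {
            #pragma oss taskiter
            for (int t = 0; t < timesteps; ++t) {
                for (int i = 0; i < blocks; ++i) {
                    #pragma oss task inout(a[i * bs; bs])
                    for (int j = i * bs; j < (i + 1) * bs - 1; ++j)
                        a[j] = 0.5 * (a[j] + a[j + 1]);   // trivial in-block smoothing
                }
            }
            #pragma oss taskwait
        }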

    Characterizing Performance and Cache Impacts of Code Multi-versioning on Multicore Architectures


    Task-parallel Runtime System Optimization Using Static Compiler Analysis


    SOCRATES - A seamless online compiler and system runtime autotuning framework for energy-aware applications

    Configuring program parallelism and selecting optimal compiler options according to the underlying platform architecture is a difficult task. Typically, this task is either assigned to the programmer or handled by a standard one-size-fits-all policy generated by the compiler or runtime system. Runtime selection of the best configuration requires inserting a lot of glue code for profiling and selection, which represents a programming wall for application developers. This paper presents a structured approach, called SOCRATES, based on an aspect-oriented language (LARA) and a runtime autotuner (mARGOt), to mitigate this problem. LARA has been used to hide the glue-code insertion, thus separating the purely functional application description from extra-functional requirements. mARGOt has been used for the automatic selection of the best configuration according to the runtime evolution of the application.
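
    As a rough illustration (not the LARA or mARGOt API), the sketch below shows the kind of profile-and-select glue code that SOCRATES weaves around a kernel automatically, so this extra-functional logic never appears in the application source. Names and structure are assumptions for illustration only.

        #include <chrono>
        #include <functional>
        #include <limits>
        #include <vector>

        // Run one profiling trial per candidate configuration (here: thread
        // counts) and return the configuration with the lowest measured time.
        int select_best_threads(const std::vector<int>& candidates,
                                const std::function<void(int)>& kernel)
        {
            int best = candidates.front();
            double best_ms = std::numeric_limits<double>::max();
            for (int n : candidates) {
                auto t0 = std::chrono::steady_clock::now();
                kernel(n);                       // execute the kernel with n threads
                auto t1 = std::chrono::steady_clock::now();
                double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
                if (ms < best_ms) { best_ms = ms; best = n; }
            }
            return best;                         // configuration used for later runs
        }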

    Managing Overheads in Asynchronous Many-Task Runtime Systems

    Asynchronous Many-Task (AMT) runtime systems are based on the idea of dividing an algorithm into small units of work, known as tasks. The runtime system is then responsible for scheduling and executing these tasks in an efficient manner by taking into account the resources provided to it and the data dependencies between the tasks. One of the primary challenges faced by AMTs is managing such fine-grained parallelism and the overheads associated with creating, scheduling, and executing tasks. This work develops methodologies for assessing and managing overheads associated with fine-grained task execution in HPX, our exemplar Asynchronous Many-Task runtime system. Known optimization techniques, viz. active message coalescing, task inlining, and parallel loop iteration chunking, are applied to HPX. Active message coalescing, where messages bound for the same destination are aggregated into a single message, is presented as a solution to minimize overheads associated with fine-grained communications. Methodologies and metrics for analyzing fine-grained communication overheads are developed; the metrics identified and implemented in this research aid in evaluating network efficiency by giving an intrinsic view of the underlying network overhead that would be difficult to measure using conventional methods. Task inlining, a method that allows runtime systems to manage the overheads introduced by a large number of tasks by merging tasks into one thread of execution, is presented as a technique for minimizing fine-grained task overheads. A runtime policy that dynamically decides whether to inline a task is developed and evaluated on different processor architectures, and a methodology to derive a largely machine-independent constant that allows controlling task granularity is developed. Finally, the machine-independent constant derived in the context of task inlining is applied to the chunking of parallel loop iterations, which confirms its applicability to reducing overheads in the context of finding the optimal chunk size for the combined loop iterations.
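
    The task-inlining idea above can be summarized with a small sketch: if the estimated work of a task falls below a granularity threshold, execute it inline in the current thread rather than paying the cost of spawning it. The threshold constant and the use of std::async as a stand-in for task creation are assumptions for illustration, not HPX's internal policy.

        #include <functional>
        #include <future>

        constexpr double kInlineThresholdUs = 50.0;   // assumed granularity cutoff

        std::future<void> spawn_or_inline(double estimated_us,
                                          std::function<void()> work)
        {
            if (estimated_us < kInlineThresholdUs) {
                work();                            // inline: skip task-creation overhead
                std::promise<void> done;
                done.set_value();
                return done.get_future();          // already-satisfied future
            }
            return std::async(std::launch::async, std::move(work));   // real task
        }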

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply optimizations). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches, and finally the influential papers of the field.
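
    For context, the brute-force baseline that these machine-learning approaches aim to replace is plain iterative compilation: build the program under several candidate flag sets, measure each binary, and keep the fastest. The file names and flag sets in the sketch below are illustrative only.

        #include <array>
        #include <chrono>
        #include <cstdlib>
        #include <iostream>
        #include <string>

        int main()
        {
            const std::array<std::string, 3> flag_sets = {
                "-O2", "-O3", "-O3 -funroll-loops"};
            std::string best_flags;
            double best_s = 1e300;
            for (const auto& flags : flag_sets) {
                std::string cmd = "cc " + flags + " kernel.c -o kernel_bin";
                if (std::system(cmd.c_str()) != 0) continue;    // skip failed builds
                auto t0 = std::chrono::steady_clock::now();
                if (std::system("./kernel_bin") != 0) continue; // skip failed runs
                auto t1 = std::chrono::steady_clock::now();
                double s = std::chrono::duration<double>(t1 - t0).count();
                if (s < best_s) { best_s = s; best_flags = flags; }
            }
            std::cout << "best flags: " << best_flags << " (" << best_s << " s)\n";
        }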

    A framework for efficient execution of applications on GPU and CPU+GPU

    Technological limitations faced by semiconductor manufacturers in the early 2000s halted the rapid increase in performance of sequential computation units. Nowadays, the trend is to increase the number of processor cores per socket and to progressively use GPU cards for highly parallel computations. The complexity of recent architectures makes it difficult to statically predict the performance of a program. We describe a reliable and accurate method for predicting the execution time of parallel loop nests on GPUs, based on three stages: static code generation, offline profiling, and online prediction. In addition, we present two techniques to fully exploit the computing resources available on a system. The first technique consists in jointly using the CPU and GPU to execute a code; to achieve higher performance, it is essential to consider load balance, in particular by predicting execution times. The runtime uses the profiling results, and the scheduler computes execution times and adjusts the load distributed to the processors. The second technique puts the CPU and GPU in competition: instances of the considered code are executed simultaneously on the CPU and GPU, and the winner of the competition notifies the other instance of its completion, causing the latter to terminate.
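
    The first technique (joint CPU+GPU execution) boils down to splitting the iteration space so that both devices are predicted to finish at the same time. The sketch below shows that balance computation from predicted per-iteration times; the function name and interface are illustrative, not the thesis's actual runtime.

        #include <cstddef>
        #include <utility>

        // Split n iterations between CPU and GPU so that predicted completion
        // times match: cpu_iters * cpu_time == gpu_iters * gpu_time.
        std::pair<std::size_t, std::size_t>
        split_iterations(std::size_t n, double cpu_us_per_iter, double gpu_us_per_iter)
        {
            double cpu_share = gpu_us_per_iter / (cpu_us_per_iter + gpu_us_per_iter);
            std::size_t cpu_iters = static_cast<std::size_t>(n * cpu_share);
            return {cpu_iters, n - cpu_iters};   // {CPU iterations, GPU iterations}
        }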

    Parallel Processes in HPX: Designing an Infrastructure for Adaptive Resource Management

    Advances in cutting-edge technologies have enabled better energy efficiency and scaling of computational power for the latest High Performance Computing (HPC) systems. However, the complexity introduced by hybrid architectures, as well as by emerging classes of applications, has led to poor computational scalability under conventional execution models. Alternative means of computation that address these bottlenecks are therefore warranted. More precisely, dynamic adaptive resource management, from both the system's and the application's perspective, is essential for better computational scalability and efficiency. This research presents and expands the notion of Parallel Processes as a placeholder for procedure definitions targeted at one or more synchronous domains, metadata for computation and resource management, and infrastructure for dynamic policy deployment. In addition, the research presents guidelines for a resource management framework in the HPX runtime system. Further, it lists design principles for the scalability of the Active Global Address Space (AGAS), a necessary feature for Parallel Processes. To verify the usefulness of Parallel Processes, a preliminary performance evaluation of different task scheduling policies is carried out using two applications: Unbalanced Tree Search, a reference dynamic graph application implemented in HPX as part of this research, and MiniGhost, a reference stencil-based application using the bulk synchronous parallel model. The results show that different scheduling policies provide better performance for different classes of applications; even within the same application class, one policy fared better in certain instances and worse in others, supporting the hypothesis that a dynamic adaptive resource management infrastructure, able to deploy different policies and task granularities, is needed for scalable distributed computing.

    Optimizing the Performance of Multi-threaded Linear Algebra Libraries Based on Task Granularity

    Linear algebra libraries play a very important role in many HPC applications. As larger datasets are created every day, it becomes crucial for multi-threaded linear algebra libraries to utilize the compute resources properly. Moving toward exascale computing, current programming models are not able to fully take advantage of advances in memory hierarchies, computer architectures, and networks. Asynchronous Many-Task (AMT) runtime systems could be the solution, helping developers manage the available parallelism. In this dissertation we propose an adaptive solution to improve the performance of a linear algebra library based on a set of compile-time and runtime characteristics, including the machine architecture, the expression being evaluated, the number of cores the application runs on, the type of operation, and the size of the matrices, in order to get as close as possible to the highest performance. Our focus is on machine learning applications, where we potentially deal with very large matrices, which could make creating temporaries very expensive. For this purpose we selected the Blaze C++ library, a high-performance template-based math library that gives us the option to access the expression tree at compile time, along with HPX, a C++ standard library for concurrency and parallelism, as our runtime system. HPX, as an AMT runtime system, offers scalability and fine-grained parallelism by creating lightweight threads with fast context switching between threads. Finding the optimum task granularity is a challenge in AMTs: creating too many small tasks results in performance degradation due to the scheduling overhead of the tasks, while creating too few tasks leads to under-utilization of the resources. Our work focuses on finding the optimum task granularity for each specific problem. We tried two different approaches to model the relationship between performance and grain size, in order to find a range of grain sizes that could lead us to the maximum performance. First, we used polynomial functions to model how throughput changes in terms of grain size and number of cores. Although this method was successful in finding the range of grain sizes for maximum throughput, it was not physically grounded. This motivated us to go deeper and develop an analytical model of execution time in terms of grain size for balanced parallel for loops. Based on the analytical model, we propose a method to predict the range of grain sizes for minimum execution time. Moreover, since the parameters of the proposed model depend only on the system architecture, we suggest using a parallel for-loop benchmark to find these parameters on a system and then using them to find the range of grain sizes for minimum execution time for arbitrary balanced parallel for-loop applications run on the same machine. With these models in hand, we changed the current implementation of the HPX backend for Blaze by adding two parameters that represent the unit of work and the number of units included in each task, allowing fine-grained control of the parallelism, which is possible through the HPX runtime system. A complexity estimation function has also been added to Blaze to estimate the number of floating-point operations occurring in each unit of work. The model parameters estimated through the parallel for-loop benchmark can also be plugged into Blaze at compile time, in order to find the optimum range of grain sizes at runtime based on the matrix sizes and the complexity of the operations. In the next step, we used the identified range of grain sizes to extend the previous implementation of splittable tasks as an algorithm to control task granularity. We modified that implementation by scheduling tasks on idle cores directly instead of waiting for them to be stolen, and by integrating the lower bound of the analytical model as the threshold to stop splitting, in order to adapt the threshold to the system architecture and the application being executed.
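
    A hedged sketch of the grain-size control described above: given a per-element complexity estimate (floating-point operations per element, in the spirit of the Blaze estimation function) and a target amount of work per task drawn from the analytical model's predicted optimal range, derive a chunk size for the parallel loop. The target value, names, and interface are assumptions for illustration only.

        #include <algorithm>
        #include <cstddef>

        // Choose a chunk size so each task carries roughly target_flops_per_task
        // of work, clamped to avoid empty chunks or a single oversized one.
        std::size_t chunk_size_for(std::size_t total_elements,
                                   double flops_per_element,
                                   double target_flops_per_task)
        {
            std::size_t chunk = static_cast<std::size_t>(
                target_flops_per_task / flops_per_element);
            return std::clamp<std::size_t>(chunk, 1, total_elements);
        }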