    FastFlow tutorial

    FastFlow is a structured parallel programming framework targeting shared-memory multicores. Its layered design and the optimized implementation of the communication mechanisms underlying the FastFlow streaming networks, which are exposed to the application programmer as algorithmic skeletons, support the development of efficient fine-grain parallel applications. FastFlow is available (open source) at SourceForge (http://sourceforge.net/projects/mc-fastflow/). This work introduces FastFlow programming techniques and points out the different ways to parallelize existing C/C++ code using FastFlow as a software accelerator. In short: this is a kind of tutorial on FastFlow. Comment: 49 pages + cover page

    Autonomic management of multiple non-functional concerns in behavioural skeletons

    We introduce and address the problem of concurrent autonomic management of different non-functional concerns in parallel applications built as hierarchical compositions of behavioural skeletons. We first define the problems arising when multiple concerns are handled by independent managers, then propose a methodology supporting coordinated management, and finally discuss how autonomic management of multiple concerns may be implemented in a typical use case. The paper concludes with an outline of the challenges involved in realizing the proposed methodology on distributed target architectures such as clusters and grids. Since the methodology is based on the behavioural skeleton concept proposed in the CoreGRID GCM, it is anticipated that it will be readily integrated into the current reference implementation of GCM, based on Java ProActive and running on top of major grid middleware systems. Comment: 20 pages + cover page

    Power Aware Scheduling of Tasks on FPGAs in Data Centers

    A variety of computing platforms in data centers, such as Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs) and multicore Central Processing Units (CPUs), are suitable for accelerating data-intensive workloads. FPGA platforms in particular are gaining popularity for high-performance computation due to their high speed, reconfigurable nature and cost effectiveness. Such heterogeneous, highly parallel computational architectures in data centers, combined with high-speed communication technologies like 5G, are becoming increasingly suitable for real-time applications. However, flexibility, cost-effectiveness, high computational capability, and energy efficiency remain challenging issues in FPGA-based data centers. In this context, an energy-efficient scheduling solution is required to maximize the resource profitability of FPGAs. This paper introduces a power-aware scheduling methodology aimed at accommodating periodic hardware tasks within the available FPGAs of a data center at their potentially maximum speed. The proposed methodology guarantees the execution of these tasks using the maximum number of parallel computation units that can be implemented in the FPGAs, with minimum power consumption. The proposed scheduling methodology is implemented in a data center with multiple Xilinx-AMD Alveo U50 FPGAs and the Vitis 2023 tool. The evidence from the implementation shows that the proposed scheduling methodology is efficient compared to existing solutions.

    Management in distributed systems: a semi-formal approach

    Formal tools can be used in a "semi-formal" way to support distributed program analysis and tuning. We show how ORC has been used to reverse-engineer a skeleton-based programming environment and to remove one of the skeleton system's recognized weak points. The semi-formal approach adopted allowed these steps to be performed in a programmer-friendly way.

    A power-aware, self-adaptive macro data flow framework

    The dataflow programming model has been extensively used as an effective solution for implementing efficient parallel programming frameworks. However, the amount of resources allocated to the runtime support is usually fixed once, by the programmer or by the runtime, and kept static during the entire execution. While there are cases where such a static choice is appropriate, other scenarios may require the parallelism degree to change dynamically during the application's execution. In this paper we propose an algorithm for multicore shared-memory platforms that dynamically selects the optimal number of cores to be used, as well as their clock frequency, according to either the workload pressure or explicit user requirements. We implement the algorithm for both structured and unstructured parallel applications and validate our proposal on three real applications, showing that it saves a significant amount of power while neither impairing performance nor requiring additional effort from the application programmer.

    A Reconfiguration Algorithm for Power-Aware Parallel Applications

    In current computing systems, many applications require guarantees that their maximum power consumption will not exceed the available power budget. On the other hand, for some applications it may be possible to decrease performance, while keeping it at an acceptable level, in order to reduce power consumption. To provide such guarantees, a possible solution consists in changing the number of cores assigned to the application, their clock frequency, and the placement of application threads over the cores. However, power consumption and performance follow different trends depending on the application considered and on its input, so finding a configuration of resources satisfying user requirements is, in the general case, a challenging task. In this article we propose Nornir, an algorithm to automatically derive, without relying on historical data about previous executions, performance and power-consumption models of an application in different configurations. By using these models, we are able to select a close-to-optimal configuration for the given user requirement, be it on performance or on power consumption. The configuration of the application is changed on the fly throughout the execution to adapt to workload fluctuations, external interference, and/or the application's phase changes. We validate the algorithm by simulating it over the applications of the PARSEC benchmark suite. Then, we implement our algorithm and analyse its accuracy and overhead for some of these applications in a real execution environment. Finally, we compare the quality of our proposal with that of the optimal algorithm and of some state-of-the-art solutions.