Memory Aware Load Balance Strategy on a Parallel Branch-and-Bound Application
The latest trends in high-performance computing systems show an increasing
demand for using large-scale multicore systems efficiently, so that highly
compute-intensive applications can be executed reasonably well. However,
exploiting the degree of parallelism available at each multicore component can
be limited by poor utilization of the available memory hierarchy. In fact,
the multicore architecture combines distinct features already observed in
shared-memory and distributed environments; one example is that subsets of
cores can share different subsets of memory. In order to achieve high
performance, it is imperative that an application be carefully allocated to
the available cores, based on a scheduling model that considers the main
performance bottlenecks, such as memory contention. In this paper, the
Multicore Cluster Model (MCM) is proposed, which captures the most relevant
performance characteristics of multicore systems, such as the influence of
the memory hierarchy and contention. Better performance was achieved when a
load balance strategy for a Branch-and-Bound application applied to the
Partitioning Sets Problem was based on MCM, showing its efficiency and
applicability to modern systems.
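The core idea above — weighting task placement by the memory contention among cores that share a level of the hierarchy — can be illustrated with a minimal greedy sketch. The function name, the per-core contention factor, and the greedy longest-task-first policy are illustrative assumptions, not the MCM formulation from the paper:

```python
def assign_tasks(task_costs, groups, contention=0.15):
    """Greedily assign each task to the core group with the lowest
    projected completion time, discounting a group's effective speed
    as more of its cores contend for the shared memory hierarchy.

    task_costs: work units per task.
    groups: number of cores per group (cores in a group share memory).
    contention: fractional slowdown per extra sharing core
                (a modeling assumption, not taken from the paper).
    """
    loads = [0.0] * len(groups)
    assignment = []
    for cost in sorted(task_costs, reverse=True):  # largest tasks first
        # Effective speed degrades with the number of contending cores.
        speeds = [c / (1 + contention * (c - 1)) for c in groups]
        finish = [(loads[g] + cost) / speeds[g] for g in range(len(groups))]
        g = min(range(len(groups)), key=finish.__getitem__)
        loads[g] += cost
        assignment.append(g)
    return assignment, loads
```

Note that with a contention-aware speed estimate, a group with fewer cores but less sharing can win tasks away from a nominally faster group — the behavior a memory-aware strategy is meant to capture.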
Hierarchical Scheduling of DAG Structured Computations on Manycore Processors with Dynamic Thread Grouping
Abstract. Many computational solutions can be expressed as directed acyclic graphs (DAGs) with weighted nodes. In parallel computing, scheduling such DAGs onto manycore processors remains a fundamental challenge, since synchronization across dozens of threads while preserving precedence constraints can dramatically degrade performance. In order to improve scheduling performance on manycore processors, we propose a hierarchical scheduling method with dynamic thread grouping, which schedules DAG structured computations at three different levels. At the top level, a supermanager separates threads into groups, each consisting of a manager thread and several worker threads. The supermanager dynamically merges and partitions the groups to adapt the scheduler to the input task dependency graphs. Through group merging and partitioning, the proposed scheduler can dynamically adjust to become a centralized scheduler, a distributed scheduler, or something in between, depending on the input graph. At the group level, managers collaboratively schedule tasks for their workers. At the within-group level, workers perform self-scheduling within their respective groups and execute tasks. We evaluate the proposed scheduler on the Sun UltraSPARC T2 (Niagara 2) platform, which supports up to 64 hardware threads. Across various input task dependency graphs, the proposed scheduler exhibits superior performance when compared with various baseline methods, including typical centralized and distributed schedulers.
Key words: Manycore processor, hierarchical scheduling, thread grouping
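The three-level structure described in the abstract can be sketched as follows. This is a deterministic simulation, not the paper's algorithm: the function name, the round-robin manager policy, and the rule of matching the group count to the width of the ready frontier are all illustrative assumptions standing in for the supermanager's merge/partition heuristic:

```python
from collections import deque

def hierarchical_schedule(deps, max_groups=4):
    """Simulate three-level scheduling of a DAG.

    deps: dict mapping each task to the set of its predecessors.
    Returns (execution order, group that ran each task).
    """
    indeg = {t: len(p) for t, p in deps.items()}
    succ = {t: [] for t in deps}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    # Initially ready: tasks with no unfinished predecessors.
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order, placement = [], {}
    while ready:
        # Supermanager: size the group count to the current frontier,
        # collapsing to one group (centralized) when parallelism is scarce
        # and splitting (distributed) when the frontier is wide.
        n_groups = max(1, min(max_groups, len(ready)))
        frontier = list(ready)
        ready.clear()
        # Managers: deal the frontier round-robin to the groups;
        # workers then "execute" their share and release successors.
        for i, task in enumerate(frontier):
            placement[task] = i % n_groups
            order.append(task)
            for s in succ[task]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
        ready = deque(sorted(ready))  # deterministic ordering for the demo
    return order, placement
```

Because a task enters the frontier only after all of its predecessors have executed, the produced order always respects the DAG's precedence constraints, regardless of how the supermanager resizes the groups.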