A Comparative Analysis of STM Approaches to Reduction Operations in Irregular Applications
As a recently consolidated paradigm for optimistic concurrency on modern multicore architectures, Transactional Memory (TM) can help to exploit parallelism in irregular applications where data dependence information is not available until run-time. This paper presents and discusses how to leverage TM to exploit parallelism in an important class of irregular applications: those that exhibit irregular reduction patterns. To test and compare our techniques with other solutions, we implemented them in a software TM system called ReduxSTM, which acts as a proof of concept. ReduxSTM combines two major ideas: a sequential-equivalent ordering of transaction commits that ensures a correct result, and an extension of the underlying TM privatization mechanism that removes unnecessary overhead from reduction memory updates as well as unnecessary aborts and rollbacks. A comparative study of STM solutions, including ReduxSTM, and of more classical approaches to parallelizing reduction operations is presented in terms of time, memory, and overhead.
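As a rough illustration of the two ideas above, the sketch below privatizes reduction updates into per-thread buffers and merges them in a fixed order during a commit phase. It is a minimal Python sketch of the general pattern only; the names are invented here and do not reflect ReduxSTM's actual API.

    import threading

    def parallel_reduction(items, num_threads=4):
        # One private buffer per thread: each "transaction" accumulates
        # locally instead of writing the shared reduction array, so no
        # conflicts (and hence no aborts or rollbacks) can arise.
        buffers = [{} for _ in range(num_threads)]

        def worker(tid):
            for idx, value in items[tid::num_threads]:
                buffers[tid][idx] = buffers[tid].get(idx, 0) + value

        threads = [threading.Thread(target=worker, args=(t,))
                   for t in range(num_threads)]
        for t in threads: t.start()
        for t in threads: t.join()

        # Commit phase: merge private buffers in a fixed order. Because the
        # reduction operator (+) is associative and commutative, the result
        # matches a sequential execution.
        result = {}
        for buf in buffers:
            for idx, value in buf.items():
                result[idx] = result.get(idx, 0) + value
        return result

    # Usage: 12 updates scattered over 3 reduction slots.
    print(parallel_reduction([(i % 3, 1) for i in range(12)]))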
Factoring out ordered sections to expose thread-level parallelism
With the rise of multi-core processors, researchers are taking a new look at extending the applicability of auto-parallelization techniques. In this paper, we identify a dependence pattern on which auto-parallelization currently fails. This dependence pattern occurs for ordered sections, i.e., code fragments in a loop that must be executed atomically and in original program order. We discuss why these ordered sections prevent current auto-parallelizers from working and we present a technique to deal with them. We experimentally demonstrate the efficacy of the technique, yielding significant overall program speedups.
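To make the pattern concrete, the following minimal Python sketch runs loop iterations on separate threads while a turn counter forces the ordered section to execute atomically and in original program order. It illustrates the dependence pattern itself, not the paper's compiler transformation; all names are illustrative.

    import threading

    turn = 0
    cond = threading.Condition()

    def iteration(i, parallel_work, ordered_work):
        global turn
        parallel_work(i)            # independent part: runs out of order
        with cond:
            while turn != i:        # wait for our turn in program order
                cond.wait()
            ordered_work(i)         # ordered section: atomic and in order
            turn += 1
            cond.notify_all()

    threads = [threading.Thread(target=iteration,
                                args=(i, lambda i: None, lambda i: print(i)))
               for i in range(8)]
    for t in threads: t.start()
    for t in threads: t.join()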
Redesigning OP2 Compiler to Use HPX Runtime Asynchronous Techniques
Maximizing the level of parallelism in an application requires minimizing the overhead of load imbalance and the waiting time caused by memory latency. Compiler optimization is one of the most effective solutions to this problem: the compiler can detect data dependencies in an application and analyze specific sections of code for parallelization potential. However, these compiler techniques are usually applied at compile time, so they rely on static analysis, which is insufficient for achieving maximum parallelism and the desired application scalability. One way to address this challenge is to use runtime methods, deferring a certain amount of code analysis to run-time. In this research, we improve the performance of parallel applications generated by the OP2 compiler by leveraging HPX, a C++ runtime system, to provide runtime optimizations. These optimizations include asynchronous tasking, loop interleaving, dynamic chunk sizing, and data prefetching. The results were evaluated on an Airfoil application, which showed a 40-50% improvement in parallel performance.
Comment: 18th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC 2017).
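Two of the runtime techniques listed above, asynchronous tasking and dynamic chunk sizing, can be sketched with standard Python futures standing in for HPX. The chunking heuristic below is an assumption made for illustration, not OP2's or HPX's actual policy.

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def process_chunk(data, lo, hi):
        return sum(x * x for x in data[lo:hi])

    def parallel_for(data, workers=4, chunks_per_worker=8):
        # Creating more chunks than workers lets the pool rebalance load
        # at runtime instead of fixing the schedule at compile time.
        n = len(data)
        chunk = max(1, n // (workers * chunks_per_worker))
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # Each chunk becomes an asynchronous task; results are
            # consumed in completion order, not submission order.
            futures = [pool.submit(process_chunk, data, lo, min(lo + chunk, n))
                       for lo in range(0, n, chunk)]
            return sum(f.result() for f in as_completed(futures))

    print(parallel_for(list(range(100_000))))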
AutoParallel: A Python module for automatic parallelization and distributed execution of affine loop nests
Recent improvements in programming languages, programming models, and frameworks have focused on abstracting away many programming issues from the user. Among other features, recent programming frameworks include simpler syntax, automatic memory management and garbage collection, easy code reuse through library packages, and easily configurable deployment tools. For instance, Python has risen to the top of the list of programming languages thanks to the simplicity of its syntax, while still achieving good performance despite being an interpreted language. Moreover, the community has helped develop a large number of libraries and modules, tuning them to obtain great performance.

However, there is still room for improvement in shielding users from dealing directly with distributed and parallel computing issues. This paper proposes and evaluates AutoParallel, a Python module that automatically finds an appropriate task-based parallelization of affine loop nests and executes them in parallel on a distributed computing infrastructure. This parallelization can also build data blocks to increase task granularity and thus achieve good execution performance. Moreover, AutoParallel is based on sequential programming and requires only a small annotation in the form of a Python decorator, so that anyone with little programming expertise can scale up an application to hundreds of cores.
Comment: Accepted to the 8th Workshop on Python for High-Performance and Scientific Computing (PyHPC 2018).
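The usage model described above can be sketched as follows: a plain sequential affine loop nest wrapped in a single decorator. The @parallel decorator and its import path follow the examples in the AutoParallel paper, but they are shown here as an assumption and require a PyCOMPSs installation to actually run.

    # Assumed import path; AutoParallel is distributed with PyCOMPSs.
    from pycompss.api.parallel import parallel

    @parallel()  # AutoParallel analyzes the nest and generates tasks/blocks
    def matmul(a, b, c, n):
        # A plain sequential affine loop nest; no further annotations needed.
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    c[i][j] += a[i][k] * b[k][j]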
Enhancing the performance of Decoupled Software Pipeline through Backward Slicing
The rapidly increasing number of cores available in multicore processors does not necessarily lead to a commensurate increase in performance: programs written in conventional languages, such as C, need careful restructuring, preferably automatic, before the benefits can be observed in improved run-times. Even then, much depends upon the intrinsic capacity of the original program for concurrent execution. The subject of this paper is the performance gain from the combined effect of two complementary techniques: the Decoupled Software Pipeline (DSWP) and (backward) slicing. DSWP extracts thread-level parallelism from the body of a loop by breaking it into stages, which are then executed pipeline-style, in effect cutting across the control chain. Slicing, on the other hand, cuts the program along the control chain, teasing out finer threads that depend on different variables (or locations). The main contribution of this paper is to demonstrate that applying DSWP followed by slicing offers notable improvements over DSWP alone, especially when a loop-carried dependence prevents the application of the simpler DOALL optimization. Experimental results show an improvement by a factor of ≈1.6 for DSWP + slicing over DSWP alone, and a factor of ≈2.4 for DSWP + slicing over the original sequential code.
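As a rough illustration of the DSWP half of this combination, the Python sketch below splits a loop body into two stages connected by a queue, keeping the loop-carried dependence in the first stage while the second stage's independent work overlaps with it. Slicing, which would further split a stage along its dependence chains, is not shown; all names are illustrative.

    import threading, queue

    q = queue.Queue()
    SENTINEL = object()

    def stage1(items):
        acc = 0
        for x in items:
            acc += x               # loop-carried dependence stays in one stage
            q.put((x, acc))
        q.put(SENTINEL)            # signal end of the pipeline

    def stage2(results):
        while True:
            item = q.get()
            if item is SENTINEL:
                break
            x, acc = item
            results.append(x * acc)  # independent work, overlapped with stage1

    results = []
    t1 = threading.Thread(target=stage1, args=(range(10),))
    t2 = threading.Thread(target=stage2, args=(results,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(results)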