1,041 research outputs found
Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism
The shift of the microprocessor industry towards multicore architectures has
placed a huge burden on the programmers by requiring explicit parallelization
for performance. Implicit Parallelization is an alternative that could ease the
burden on programmers by parallelizing applications "under the covers" while
maintaining sequential semantics externally. This thesis develops a novel
approach for thinking about parallelism, by casting the problem of
parallelization in terms of instruction criticality. Using this approach,
parallelism in a program region is readily identified when certain conditions
about fetch-criticality are satisfied by the region. The thesis formalizes this
approach by developing a criticality-driven model of task-based
parallelization. The model can accurately predict the parallelism that would be
exposed by potential task choices by capturing a wide set of sources of
parallelism as well as costs to parallelization.
The criticality-driven model enables the development of two key components for
Implicit Parallelization: a task selection policy, and a bottleneck analysis
tool. The task selection policy can partition a single-threaded program into
tasks that will profitably execute concurrently on a multicore architecture in
spite of the costs associated with enforcing data-dependences and with
task-related actions. The bottleneck analysis tool gives feedback to the
programmers about data-dependences that limit parallelism. In particular, there
are several "accidental dependences" that can be easily removed with large
improvements in parallelism. These tools combine into a systematic methodology
for performance tuning in Implicit Parallelization. Finally, armed with the
criticality-driven model, the thesis revisits several architectural design
decisions, and finds several encouraging ways forward to increase the scope of
Implicit Parallelization.
A Survey on Thread-Level Speculation Techniques
Thread-Level Speculation (TLS) is a promising technique that allows the parallel execution of sequential code without relying on a prior compile-time dependence analysis. In this work, we introduce the technique, present a taxonomy of TLS solutions, and summarize and put into perspective the most relevant advances in this field. Funded by MICINN (Spain) and the ERDF program of the European Union: HomProg-HetSys project (TIN2014-58876-P), CAPAP-H5 network (TIN2014-53522-REDT), and COST Action IC1305: Network for Sustainable Ultrascale Computing (NESUS).
Runtime-adaptive generalized task parallelism
Multicore systems are ubiquitous nowadays, and the number of cores per chip is ever increasing. Yet while the computational power of the individual cores, limited by physical constraints, has been stagnating or even declining for years, a solution that effectively utilizes the computational power offered by the additional cores is yet to be found. Existing approaches to automatic parallelization are often highly specialized to exploit the parallelism of specific program patterns, and thus parallelize only a small subset of programs. In addition, frequently used invasive runtime systems prohibit the combination of different approaches, which impedes the practicality of automatic parallelization. In this thesis, we show that specializing to narrowly defined program patterns is not necessary to efficiently parallelize applications from different domains. We develop a generalizing approach to parallelization which, driven by an underlying mathematical optimization problem, is able to make qualified parallelization decisions that take into account the involved runtime overhead. In combination with a specializing, adaptive runtime system, the approach is able to match and even exceed the performance results achieved by specialized approaches.

Part of the work presented in this thesis was performed in the context of the SoftwareCluster project EMERGENT (http://www.software-cluster.org), funded by the German Federal Ministry of Education and Research (BMBF) under grant no. 01IC10S01. Later work has also been supported by the BMBF through funding for the Center for IT-Security, Privacy and Accountability (CISPA) under grant no. 16KIS0344.
Compile-time support for thread-level speculation
One of the main concerns of computer science is the study of the parallel capabilities of both programs and the processors that execute them. Several reasons make the development of techniques for automatically parallelizing code highly desirable. Among them are the immense number of sequential programs already written, the complexity of parallel programming languages, and the expertise required to parallelize a code. However, the automatic parallelization mechanisms currently implemented in commercial compilers are unable to parallelize most of the loops in a code [1], due to the data dependences that exist between them [2]. It is therefore necessary to search for new techniques, such as speculative parallelization [3-5], that profit from the potential parallel capabilities of current multiprocessor hardware and architectures. However, this and other such techniques require the manual intervention of experienced programmers.
Before offering alternative solutions, we evaluated the parallelization capabilities of commercial compilers, exposing the limitations of the automatic parallelization mechanisms they implement. The study reveals that these mechanisms achieve only a 19% speedup on average for the SPEC CPU2006 benchmarks [6], a result significantly inferior to that obtained by speculative parallelization techniques [7]. Speculative parallelization, however, requires extensive manual modification of the code by programmers.
This thesis addresses the problem by defining a new OpenMP clause [8], called "speculative", which allows the programmer to mark the variables that may lead to a dependence violation. In addition, the thesis proposes a compile-time system that, using the information on variable accesses provided by OpenMP clauses, automatically adds all the code needed to manage the speculative execution of a program. This frees programmers from modifying the code by hand, avoiding potential errors and a tedious task. The code generated by our system links against the speculative parallelization runtime library developed by Estebanez, García-Yagüez, Llanos and Gonzalez-Escribano [9,10].
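As a rough illustration of how such an annotation might look (the concrete syntax below is an assumption modelled on standard OpenMP data-sharing clauses; a stock compiler would not accept it without the proposed compile-time support):

    /* Hypothetical use of the proposed "speculative" clause; the
     * syntax here is an assumption, and the thesis defines the
     * actual grammar.  The indirect accesses to "a" cannot be
     * proven independent at compile time, so the compile-time
     * system would replace them with speculative load/store calls
     * into the runtime library [9,10], which detects and handles
     * dependence violations. */
    #pragma omp parallel for speculative(a)
    for (int i = 0; i < n; i++)
        a[col[i]] = a[row[i]] + b[i];   /* indices known only at runtime */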
Performance Optimization Strategies for Transactional Memory Applications
This thesis presents tools for Transactional Memory (TM) applications that cover multiple TM systems (software, hardware, and hybrid TM) and use information from all layers of the TM software stack. To this end, the thesis addresses a number of challenges in extracting static information, information about runtime behavior, and expert-level knowledge, and uses them to develop new methods and strategies for the optimization of TM applications.
Lightweight speculative support for aggressive auto-parallelisation tools
With the recent move to multi-core architectures it has become important to create the means to exploit the performance made available by these architectures. Unfortunately, parallel programming is often a difficult and time-intensive process, even for expert programmers. Auto-parallelisation tools have aimed to fill the performance gap this has created, but the static analysis commonly employed by such tools is unable to provide the required performance improvements due to the lack of information at compile time. More recent aggressive parallelisation tools use profiled execution to discover new parallel opportunities, but these tools are inherently unsafe. They require either manual confirmation that their changes are safe, ruling out fully automatic parallelisation, or they rely upon speculative execution such as software thread-level speculation (SW-TLS) to confirm safe execution at runtime.
SW-TLS schemes are currently very heavyweight and often fail to provide speedups for a program. Performance gains depend upon suitable parallel opportunities, correct selection and configuration, and appropriate execution platforms. Little research has been conducted into the automated implementation of SW-TLS programs.
This thesis presents an automated, machine-learning based technique to select and configure suitable speculation schemes when appropriate. This is performed by extracting metrics from potential parallel opportunities and using them to determine whether a loop is suitable for speculative execution and, if so, which speculation policy should be used.
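A rough sketch of this kind of metric-driven selection is given below; the metric set, thresholds and policy names are illustrative assumptions standing in for the extracted features and the trained model:

    /* Hypothetical loop features of the kind such a technique might
     * extract; the exact metrics and thresholds are assumptions. */
    typedef struct {
        double violation_rate;   /* profiled cross-iteration conflicts */
        double avg_iter_cycles;  /* useful work per iteration */
        int    write_set_size;   /* distinct speculative stores */
    } loop_metrics;

    typedef enum { POLICY_NONE, POLICY_LAZY, POLICY_EAGER } spec_policy;

    /* Stand-in for the learned classifier: a hand-written decision
     * rule rather than a real trained model. */
    spec_policy select_policy(const loop_metrics *m) {
        if (m->avg_iter_cycles < 500.0)  /* too little work to amortise speculation */
            return POLICY_NONE;
        if (m->violation_rate > 0.05)    /* frequent conflicts: check eagerly */
            return POLICY_EAGER;
        return POLICY_LAZY;              /* rare conflicts: validate at commit */
    }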
An extensive evaluation of this technique is presented, verifying that SW-TLS configuration can indeed be automated and provide reliable performance gains. This work has shown that on an 8-core machine, speedups of up to 7.75X, with a geometric mean of 1.64X, can be obtained through automatic configuration, providing on average 74% of the speedup obtainable through manual configuration.
Beyond automated configuration, this thesis explores the idea that many SW-TLS schemes focus too heavily on recovery from detected dependence violations. Doing so often results in worse-than-sequential performance for many real-world applications. This work therefore hypothesises that many highly likely parallel candidates, discovered through aggressive parallelisation techniques, would benefit from a simple dependence check without the ability to roll back. Dependence violations become extremely expensive in this scenario, but should also be incredibly rare. A thorough evaluation of the technique confirms the hypothesis whilst achieving speedups of up to 22.53X, and a geometric mean of 2.16X, on a 32-core machine. In a competitive scheduling scenario, performance loss can be bounded so that execution is no slower than sequential, even when a dependence violation has been detected.
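The general shape of such a check-only scheme can be sketched as follows; the data structures and the violation condition are illustrative assumptions, not the thesis's actual implementation:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Minimal sketch of a dependence check without rollback, under
     * assumed simplifications: one tag per shared element recording
     * the last iteration that wrote it (-1 = none).  A real scheme
     * would hash and version addresses.  On a violation nothing is
     * undone; the speculative run is simply abandoned and, under
     * competitive scheduling, the sequential run's result is used. */
    #define N 1024
    static atomic_int  last_writer[N];   /* initialise all entries to -1 */
    static atomic_bool violated;

    static void spec_store(double *a, int idx, int iter, double v) {
        atomic_store(&last_writer[idx], iter);
        a[idx] = v;
    }

    static double spec_load(const double *a, int idx, int iter) {
        int w = atomic_load(&last_writer[idx]);
        if (w > iter)                       /* value came from a later iteration */
            atomic_store(&violated, true);  /* flag only: no rollback machinery */
        return a[idx];
    }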
As a means to lower costs further, this thesis explores other platforms to aid in the execution of speculative error checking. It introduces the use of a GPU to offload some of the checking costs during execution, confirming that using an auxiliary device is a legitimate means to obtain further speedup. Evaluation demonstrates that doing so can achieve speedups of up to 14.74X, with a geometric mean of 1.99X, on a 12-core hyperthreaded machine. Compared to standard CPU-only techniques this performs slightly slower, with a geometric mean of 0.96X speedup; however, this is likely to improve with upcoming GPU designs.
With the knowledge that GPUs can be used to reduce speculation costs, this thesis also investigates their use to speculatively improve execution times themselves. It presents a novel SW-TLS scheme that targets GPU-based execution for use with aggressive auto-parallelisers. The scheme executes under a competitive scheduling model, ensuring performance is no lower than sequential execution, whilst providing speedups of up to 99X, and on average 3.2X, over sequential. On average this technique outperformed static analysis alone by a factor of 7X, achieved approximately 99% of the speedup obtained from manual parallel implementations, and outperformed the state of the art in GPU SW-TLS by a factor of 1.45.
Automatic skeleton-driven performance optimizations for transactional memory
The recent shift toward multi-core chips has pushed the burden of extracting performance to the programmer. In fact, programmers now have to be able to uncover more coarse-grain parallelism with every new generation of processors, or the performance of their applications will remain roughly the same or even degrade. Unfortunately, parallel programming is still hard and error prone. This has driven the development of many new parallel programming models that aim to make this process efficient.

This thesis first combines the skeleton-based and transactional memory programming models in a new framework, called OpenSkel, in order to improve the performance and programmability of parallel applications. This framework provides a single skeleton that allows the implementation of transactional worklist applications. Skeleton or pattern-based programming allows parallel programs to be expressed as specialized instances of generic communication and computation patterns. This leaves the programmer with only the implementation of the particular operations required to solve the problem at hand. Thus, this programming approach simplifies parallel programming by eliminating some of its major challenges, namely thread communication, scheduling and orchestration. However, the application programmer still has to correctly synchronize threads on data races. This commonly requires the use of locks to guarantee atomic access to shared data. In particular, lock programming is vulnerable to deadlocks and also limits coarse-grain parallelism by blocking threads that could potentially be executed in parallel.

Transactional Memory (TM) thus emerges as an attractive alternative model that simplifies parallel programming by removing the burden of handling data races explicitly.
This model allows programmers to write parallel code as transactions, which the runtime system then guarantees to execute atomically and in isolation, regardless of any data races. TM programming thus frees the application from deadlocks and enables the exploitation of coarse-grain parallelism when transactions do not conflict very often. Nevertheless, thread management and orchestration are left to the application programmer. Fortunately, this can be naturally handled by a skeleton framework.
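To make the transactional style concrete, the fragment below uses the GNU TM language extension (compile with gcc -fgnu-tm) as a generic illustration; it is standard TM usage, not OpenSkel's interface:

    /* A shared update written as a transaction instead of a
     * lock-protected region.  The TM runtime executes the block
     * atomically and in isolation, so there are no locks to
     * acquire and no deadlock risk. */
    static long shared_counter;

    void deposit(long amount) {
        __transaction_atomic {
            shared_counter += amount;
        }
    }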
This fact makes the combination of skeleton-based and transactional programming a natural step to improve programmability, since these models complement each other. In fact, this combination releases the application programmer from dealing with thread management and data races, and also inherits the performance improvements of both models. In addition, a skeleton framework is amenable to skeleton-driven performance optimizations that exploit the application pattern and system information.

This thesis thus also presents a set of pattern-oriented optimizations that are automatically selected and applied to a significant subset of transactional memory applications that share a common pattern called worklist. These optimizations exploit knowledge about the worklist pattern and the TM nature of the applications to avoid transaction conflicts, prefetch data, and reduce contention, among other things. Using a novel autotuning mechanism, OpenSkel dynamically selects the most suitable set of these pattern-oriented performance optimizations for each application and adjusts them accordingly.
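The inversion of control behind such a worklist skeleton can be sketched as follows; the names and signatures are illustrative assumptions rather than OpenSkel's actual interface, and the threading and TM orchestration that the framework would supply are elided:

    #include <stdlib.h>

    /* Hypothetical worklist skeleton in the spirit of OpenSkel. */
    typedef struct node { void *item; struct node *next; } node;
    typedef struct { node *head; } worklist;

    static void wl_push(worklist *wl, void *item) {
        node *n = malloc(sizeof *n);
        n->item = item;
        n->next = wl->head;
        wl->head = n;
    }

    /* The skeleton owns iteration, scheduling and, in the real
     * framework, transactional execution of process_fn; the user
     * supplies only the per-item operation, which may push
     * follow-up work onto the list. */
    static void wl_run(worklist *wl,
                       void (*process_fn)(void *item, worklist *wl)) {
        while (wl->head) {
            node *n = wl->head;
            wl->head = n->next;
            process_fn(n->item, wl);
            free(n);
        }
    }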
Experimental results on a subset of five applications from the STAMP benchmark suite show that the proposed autotuning mechanism can achieve performance within 2%, on average, of a static oracle on a 16-core UMA (Uniform Memory Access) platform, and surpasses it by 7% on average on a 32-core NUMA (Non-Uniform Memory Access) platform.

Finally, this thesis also investigates skeleton-driven system-oriented performance optimizations such as thread mapping and memory page allocation. To this end, the OpenSkel system and the autotuning mechanism are extended to accommodate these optimizations. Experimental results on a subset of five applications from the STAMP benchmark show that the OpenSkel framework, with the extended autotuning mechanism driving both pattern- and system-oriented optimizations, can achieve performance improvements of up to 88%, with an average of 46%, over a baseline version on a 16-core UMA platform, and of up to 162%, with an average of 91%, on a 32-core NUMA platform.
Profile-driven parallelisation of sequential programs
Traditional parallelism detection in compilers is performed by means of static analysis, more specifically data- and control-dependence analysis. The information available at compile time, however, is inherently limited and therefore restricts the parallelisation opportunities. Furthermore, applications written in C, which represent the majority of today's scientific, embedded and system software, utilise many low-level features and an intricate programming style that force the compiler to make even more conservative assumptions. Despite the numerous proposals to handle this uncertainty at compile time using speculative optimisation and parallelisation, the software industry still lacks pragmatic approaches that extract coarse-grain parallelism to exploit the multiple processing units of modern commodity hardware.
This thesis introduces a novel approach for extracting and exploiting multiple forms of coarse-grain parallelism from sequential applications written in C. We utilise profiling information to overcome the limitations of static data- and control-flow analysis, enabling more aggressive parallelisation. Profiling is performed using an instrumentation scheme operating at the Intermediate Representation (IR) level of the compiler. In contrast to existing approaches that depend on low-level binary tools and debugging information, IR profiling provides precise and direct correlation of profiling information back to the IR structures of the compiler. Additionally, our approach is orthogonal to existing automatic parallelisation approaches, so additional fine-grain parallelism may still be exploited.
We demonstrate the applicability and versatility of the proposed methodology using two studies that target different forms of parallelism. First, we focus on the exploitation of loop-level parallelism, which is abundant in many scientific and embedded applications. We evaluate our parallelisation strategy using the NAS and SPEC FP benchmarks on two different multi-core platforms (a shared-memory Intel Xeon SMP and a heterogeneous distributed-memory IBM Cell blade). Empirical evaluation shows that our approach not only yields significant improvements when compared with state-of-the-art parallelising compilers, but comes close to, and sometimes exceeds, the performance of manually parallelised codes. On average, our methodology achieves 96% of the performance of the hand-tuned parallel benchmarks on the Intel Xeon platform, and a significant speedup on the Cell platform. The second study addresses the problem of partially sequential loops, typically found in implementations of multimedia codecs. We develop a more powerful whole-program representation based on the Program Dependence Graph (PDG) that supports profiling, partitioning and code generation for pipeline parallelism. In addition, we demonstrate how this enhances conventional pipeline parallelisation by incorporating support for multi-level loops and pipeline stage replication in a uniform and automatic way. Experimental results using a set of complex multimedia and stream-processing benchmarks confirm the effectiveness of the proposed methodology, which yields speedups of up to 4.7 on an eight-core Intel Xeon machine.
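As a minimal illustration of the pipeline decomposition targeted by this second study, the sketch below splits a loop into a sequential producer stage and a consumer stage communicating through a bounded queue; the stage bodies, the queue size and the end-of-stream marker are assumptions, not details from the thesis:

    #include <pthread.h>
    #include <stdio.h>

    /* Two-stage pipeline: a sequential stage feeding a stage that
     * could be replicated, decoupled by a bounded queue. */
    #define QSIZE 64
    static int queue[QSIZE];
    static int head, tail;
    static pthread_mutex_t mu        = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

    static void put(int v) {
        pthread_mutex_lock(&mu);
        while ((tail + 1) % QSIZE == head)
            pthread_cond_wait(&not_full, &mu);
        queue[tail] = v;
        tail = (tail + 1) % QSIZE;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&mu);
    }

    static int get(void) {
        pthread_mutex_lock(&mu);
        while (head == tail)
            pthread_cond_wait(&not_empty, &mu);
        int v = queue[head];
        head = (head + 1) % QSIZE;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&mu);
        return v;
    }

    /* Stage 1 (sequential): carries the loop-carried dependence. */
    static void *producer(void *arg) {
        (void)arg;
        for (int i = 1; i <= 100; i++)
            put(i * i);             /* stand-in for, e.g., decode */
        put(-1);                    /* assumed end-of-stream marker */
        return NULL;
    }

    /* Stage 2 (parallelisable): independent per-item work. */
    static void *consumer(void *arg) {
        (void)arg;
        for (int v; (v = get()) != -1; )
            printf("%d\n", v);      /* stand-in for, e.g., filter */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }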