Automated detection of structured coarse-grained parallelism in sequential legacy applications
The efficient execution of sequential legacy applications on modern, parallel computer
architectures is one of today’s most pressing problems. Automatic parallelization
has been investigated as a potential solution for several decades but its success
generally remains restricted to small niches of regular, array-based applications.
This thesis investigates two techniques that have the potential to overcome these
limitations.
Beginning at the lowest level of abstraction, the binary executable, it presents
a study of the limits of Dynamic Binary Parallelization (DBP), a recently proposed
technique that takes advantage of an underlying multicore host to transparently
parallelize a sequential binary executable. While still in its infancy, DBP has received
broad interest within the research community. This thesis seeks to gain an
understanding of the factors contributing to the limits of DBP and the costs and
overheads of its implementation. An extensive evaluation using a parameterizable
DBP system targeting a CMP with lightweight architectural TLS support is presented.
The results show that there is room for a significant reduction of up to 54%
in the number of instructions on the critical paths of legacy SPEC CPU2006 benchmarks,
but that it is much harder to translate these savings into actual performance
improvements, with a realistic hardware-supported implementation achieving a
speedup of 1.09 on average.
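The thread-level speculation (TLS) support mentioned above rests on detecting cross-thread dependence violations at runtime. Below is a minimal Python sketch of that check, with invented names; it is an illustration of the general technique, not the evaluated DBP system's implementation.

```python
# Sketch of the conflict check at the heart of thread-level speculation:
# each speculative thread records the addresses it reads and writes, and a
# later (more speculative) epoch must be squashed if it read a location
# that an earlier epoch wrote. All names here are illustrative.

class SpeculativeThread:
    def __init__(self, epoch):
        self.epoch = epoch          # logical order: lower = less speculative
        self.reads = set()
        self.writes = set()

    def load(self, addr, memory):
        self.reads.add(addr)
        return memory.get(addr, 0)

    def store(self, addr, value):
        self.writes.add(addr)
        # Stores would be buffered per-thread until commit (omitted here).

def must_squash(earlier, later):
    """A later epoch must squash if it read something the earlier epoch wrote."""
    return bool(earlier.writes & later.reads)
```

A squash forces the later epoch to re-execute, which is one source of the gap between critical-path savings and realised speedup.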
While automatically parallelizing compilers have traditionally focused on data
parallelism, additional parallelism exists in a plethora of other shapes such as task
farms, divide & conquer, map/reduce and many more. These algorithmic skeletons,
i.e. high-level abstractions for commonly used patterns of parallel computation,
differ substantially from data-parallel loops. Unfortunately, algorithmic skeletons
are largely informal programming abstractions and lack a formal characterization
in terms of established compiler concepts. This thesis develops compiler-friendly
characterizations of popular algorithmic skeletons using a novel notion of
commutativity based on liveness. A hybrid static/dynamic analysis framework for
the context-sensitive detection of skeletons in legacy code that overcomes limitations
of static analysis by complementing it with profiling information is described.
A proof-of-concept implementation of this framework in the LLVM compiler infrastructure
is evaluated against SPEC CPU2006 benchmarks for the detection of a typical skeleton. The results illustrate that skeletons are often context-sensitive in
nature.
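As an illustration of why skeletons such as task farms and map/reduce admit parallel execution while differing from plain data-parallel loops, here is a minimal sketch; the helper names are ours, and the condition on the combine operator reflects the kind of commutativity property the thesis formalises via liveness.

```python
# Two algorithmic skeletons expressed as higher-order functions. The task
# farm is parallelisable because tasks are independent; the reduction is
# parallelisable when `combine` is associative and commutative, i.e. when
# applying results in any order leaves the final live-out state unchanged.

from functools import reduce

def task_farm(worker, tasks):
    """Independent tasks: each result depends only on its own input."""
    return [worker(t) for t in tasks]        # trivially parallel

def map_reduce(mapper, combine, identity, items):
    """Parallelisable when `combine` is associative and commutative."""
    return reduce(combine, (mapper(x) for x in items), identity)
```

For example, `map_reduce(lambda x: x * x, lambda a, b: a + b, 0, range(5))` sums the squares 0..16, and any evaluation order gives the same result.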
Like the two approaches presented in this thesis, many dynamic parallelization
techniques exploit the fact that some statically detected data and control
flow dependences do not manifest themselves in every possible program execution
(may-dependences) but occur only infrequently, e.g. for some corner cases, or not
at all for any legal program input. While the effectiveness of dynamic parallelization
techniques critically depends on the absence of such dependences, not much
is known about their nature. This thesis presents an empirical analysis and characterization
of the variability of both data dependences and control flow across
program runs. The cBench benchmark suite is run with 100 randomly chosen
input data sets to generate whole-program control and data flow graphs (CDFGs)
for each run, which are then compared to obtain a measure of the variance in the
observed control and data flow. The results show that, on average, the cumulative
profile information gathered with at least 55, and up to 100, different input data
sets is needed to achieve full coverage of the data flow observed across all runs.
For control flow, the figure stands at 46 and 100 data sets, respectively. This suggests
that profile-guided parallelization needs to be applied with utmost care, as
misclassification of sequential loops as parallel was observed even when up to 94
input data sets were used.
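The coverage measurement described above can be sketched as follows, using synthetic dependence-edge sets in place of the whole-program CDFGs from the cBench runs; the function name is illustrative.

```python
# Sketch of cumulative profile coverage: each run with a different input
# yields a set of observed dependence edges, and we ask after how many runs
# the cumulative union first covers everything seen across all runs. The
# thesis reports this figure to lie between 55 and 100 for data flow.

def runs_to_full_coverage(per_run_edges):
    """Number of runs whose cumulative union first equals the overall union."""
    total = set().union(*per_run_edges)
    seen = set()
    for i, edges in enumerate(per_run_edges, start=1):
        seen |= edges
        if seen == total:
            return i
    return len(per_run_edges)
```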
Guided Automatic Binary Parallelisation
For decades, the software industry has amassed a vast repository of pre-compiled libraries and executables which are still valuable and actively in use. However, for a significant fraction of these binaries, most of the source code is absent or is written in old languages, making it practically impossible to recompile them for new generations of hardware. As the number of cores in chip multi-processors (CMPs) continues to scale, the performance of this legacy software becomes increasingly sub-optimal. Rewriting new optimised and parallel software would be a time-consuming and expensive task. Without source code, existing automatic performance-enhancing and parallelisation techniques are not applicable to legacy software or to parts of new applications linked with legacy libraries.
In this dissertation, three tools are presented to address the challenge of optimising legacy binaries. The first, GBR (Guided Binary Recompilation), is a tool that recompiles stripped application binaries without the need for the source code or relocation information. GBR performs static binary analysis to determine how recompilation should be undertaken, and produces a domain-specific hint program. This hint program is loaded and interpreted by the GBR dynamic runtime, which is built on top of the open-source dynamic binary translator, DynamoRIO. In this manner, complicated recompilation of the target binary is carried out to achieve optimised execution on a real system. The problem of limited dataflow and type information is addressed through cooperation between the hint program and JIT optimisation. The utility of GBR is demonstrated through software-prefetch and vectorisation optimisations, which achieve performance improvements over the original native execution.
The second tool is called BEEP (Binary Emulator for Estimating Parallelism), an extension to GBR for binary instrumentation.
BEEP is used to identify potential thread-level parallelism through static binary analysis and binary instrumentation.
BEEP performs preliminary static analysis on binaries and encodes all statically-undecided questions into a hint program.
The hint program is interpreted by GBR so that on-demand binary instrumentation code is inserted to answer these questions using runtime information.
BEEP incorporates a few parallel cost models to evaluate identified parallelism under different parallelisation paradigms.
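The hint-program idea behind BEEP can be sketched as follows: static analysis emits "questions" it cannot decide (for instance, whether two accesses may alias), and instrumentation answers them from observed runtime values. The encoding, the class name, and the question identifier below are all invented for illustration; BEEP's actual hint format is not given in the text above.

```python
# Hypothetical sketch of a statically-undecided question answered at
# runtime. Static analysis cannot prove two accesses never alias, so it
# starts optimistic and lets instrumentation observe the actual addresses.

class AliasQuestion:
    def __init__(self, qid):
        self.qid = qid
        self.may_alias = False   # optimistic until observed otherwise

    def observe(self, addr_a, addr_b):
        # Instrumentation callback: record whether the accesses collided.
        if addr_a == addr_b:
            self.may_alias = True

# Example: the instrumented loop reports the address pairs it actually saw.
q = AliasQuestion("loop17:load_vs_store")
for a, b in [(0x100, 0x108), (0x110, 0x110)]:
    q.observe(a, b)
```

After the run, `q.may_alias` is `True`, so a parallelisation relying on these accesses being independent would be rejected or guarded.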
The third tool is named GABP (Guided Automatic Binary Parallelisation), an extension to GBR for parallelisation. GABP focuses on loops from sequential application binaries and automatically extracts thread-level parallelism from them on-the-fly, under the direction of the hint program, for efficient parallel execution. It employs a range of runtime schemes, such as thread-level speculation and synchronisation, to handle runtime data dependences. GABP achieves a geometric mean speedup of 1.91x on binaries from SPEC CPU2006 on a real x86-64 eight-core system compared to native sequential execution. Performance is obtained for SPEC CPU2006 executables compiled from a variety of source languages and by different compilers.
St John's Benefactor Scholarship
ARM Sponsorship
Improving the Performance of a Pointer-Based, Speculative Parallelization Scheme
Speculative parallelization is a technique that attempts to extract parallelism from loops
that cannot be parallelized at compile time. The underlying idea is to execute the code
optimistically while a subsystem checks that sequential semantics are not violated. Much
work has been carried out in this field; however, we are not aware of any that was able to
parallelize applications using pointer arithmetic. In previous work by the author of this
report, a software library capable of supporting this kind of application was developed.
However, that software suffered from a severe limitation: the execution time of the parallel
versions was longer than that of the sequential versions. This Master's Thesis addresses
that limitation by finding and correcting the causes of the inefficiency, and puts the work
in perspective within the worldwide contributions in this field. The experimental results
obtained with real applications allow us to state that these limitations have been overcome,
as we obtain speedups of up to 1.61. With the new version of the library, improvements of
up to 421.4% have been achieved with respect to the execution time produced by the original
version of the speculative library.
Informática. Máster en Investigación en Tecnologías de la Información y las Comunicaciones
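The commit/rollback discipline that such a pointer-supporting speculative library must enforce can be sketched as follows; this is an illustrative model of the general technique, not the library's actual API.

```python
# Sketch of software speculation for pointer-based codes: speculative stores
# go to a private buffer keyed by the raw address the pointer targets, and
# are written back to shared memory only at in-order commit; on a detected
# misspeculation the buffer is simply discarded. Names are illustrative.

class SpecBuffer:
    def __init__(self):
        self.pending = {}            # addr -> speculative value

    def spec_store(self, addr, value):
        self.pending[addr] = value

    def spec_load(self, addr, memory):
        # Reads must see this thread's own speculative stores first.
        return self.pending.get(addr, memory.get(addr, 0))

    def commit(self, memory):
        memory.update(self.pending)  # in-order write-back
        self.pending.clear()

    def rollback(self):
        self.pending.clear()         # discard all speculative state
```

Keeping these per-access operations cheap is precisely the efficiency problem the work above attacks.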
Elixir: synthesis of parallel irregular algorithms
Algorithms in new application areas like machine learning and data analytics usually operate on unstructured sparse graphs. Writing efficient parallel code to implement these algorithms is very challenging for a number of reasons.
First, there may be many algorithms to solve a problem and each algorithm may have many implementations. Second, synchronization, which is necessary for correct parallel execution, introduces potential problems such as data-races and deadlocks. These issues interact in subtle ways, making the best solution dependent both on the parallel platform and on properties of the input graph. Consequently, implementing and selecting the best parallel solution can be a daunting task for non-experts, since we have few performance models for predicting the performance of parallel sparse graph programs on parallel hardware.
This dissertation presents a synthesis methodology and a system, Elixir, that addresses these problems by (i) allowing programmers to specify solutions at a high level of abstraction, and (ii) generating many parallel implementations automatically and using search to find the best one. An Elixir specification consists of a set of operators capturing the main algorithm logic and a schedule specifying how to efficiently apply the operators. Elixir employs sophisticated automated reasoning to merge these two components, and uses techniques based on automated planning to insert synchronization and synthesize efficient parallel code.
Experimental evaluation of our approach demonstrates that the performance of the Elixir-generated code is competitive with, and can even outperform, hand-optimized code written by expert programmers for many interesting graph benchmarks.
Computer Science
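The operator/schedule separation at the heart of an Elixir specification can be loosely illustrated with single-source shortest paths: the operator relaxes one edge, and the worklist policy plays the role of the schedule. The code below is our own sketch, not Elixir's synthesised output.

```python
# Single-source shortest paths on a sparse graph, factored into an
# "operator" (relax one edge) and a pluggable "schedule" (the worklist
# policy). A FIFO deque gives Bellman-Ford-style processing; swapping in a
# priority queue would give Dijkstra. Illustrative names throughout.

from collections import deque

def sssp(graph, source):
    dist = {source: 0}
    work = deque([source])           # the schedule: FIFO worklist
    while work:
        u = work.popleft()
        for v, w in graph.get(u, []):
            if dist[u] + w < dist.get(v, float("inf")):   # the operator
                dist[v] = dist[u] + w
                work.append(v)       # scheduling decision: re-enqueue v
    return dist
```

Varying only the worklist policy while keeping the operator fixed is exactly the kind of implementation-space search Elixir automates.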
A framework for the efficient execution of applications on GPU and CPU+GPU
Technological limitations faced by semiconductor manufacturers in the early 2000s restricted the increase in performance of sequential computation units. Nowadays, the trend is to increase the number of processor cores per socket and to progressively use GPU cards for highly parallel computations. The complexity of recent architectures makes it difficult to statically predict the performance of a program. We describe a reliable and accurate method for predicting the execution time of parallel loop nests on GPUs, based on three stages: static code generation, offline profiling, and online prediction. In addition, we present two techniques to fully exploit the computing resources available on a system. The first technique consists in jointly using the CPU and GPU to execute a code. To achieve higher performance, it is mandatory to consider load balance, in particular by predicting execution times. The runtime uses the profiling results, and the scheduler computes execution times and adjusts the load distributed to the processors. The second technique puts the CPU and GPU in competition: instances of the considered code are executed simultaneously on CPU and GPU. The winner of the competition notifies the other instance of its completion, causing the latter to terminate.
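The CPU/GPU competition scheme can be sketched with ordinary threads standing in for the two code versions; the first to finish signals the other to stop. The structure and names below are illustrative, not the framework's actual runtime.

```python
# Sketch of the "competition" execution scheme: the same work is launched on
# two workers (standing in for the CPU and GPU versions of a code), and the
# first to complete sets a shared flag that tells the other to terminate.

import threading

def race(workers):
    """workers: list of (name, fn); each fn polls the stop event."""
    stop = threading.Event()
    winner = []
    lock = threading.Lock()

    def run(name, fn):
        fn(stop)                      # worker may observe `stop` and exit early
        with lock:
            if not stop.is_set():     # first finisher claims victory...
                winner.append(name)
                stop.set()            # ...and notifies the loser to terminate

    threads = [threading.Thread(target=run, args=(n, f)) for n, f in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return winner[0] if winner else None
```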
Design and evaluation of a Thread-Level Speculation runtime library
In the coming years, it is more than likely that machines with hundreds or even thousands of processors will be commonplace. To take advantage of these machines, and given the difficulty of parallel programming, it would be desirable to have compilation or runtime systems that extract all possible parallelism from existing applications. A multitude of parallelization techniques have accordingly been proposed in recent years. However, most of them focus on simple codes, that is, codes without dependences between their instructions. Speculative parallelization emerges as a solution for complex codes, making it possible to execute any kind of code, with or without dependences. This technique optimistically assumes that the parallel execution of any kind of code does not lead to errors and therefore requires a mechanism that detects any collision. To this end, it relies on a monitor that constantly checks that the execution is not erroneous, ensuring that the results obtained in parallel are equivalent to those of any sequential execution. If the execution turns out to be erroneous, the threads stop and restart their execution to ensure that it follows sequential semantics.
Our contributions in this field include (1) a new, easy-to-use speculative runtime library; (2) new proposals that significantly reduce the number of accesses required by speculative operations, together with advice on reducing the memory used; (3) proposals to improve scheduling methods, centred on the dynamic management of the blocks of iterations used in speculative executions; (4) a hybrid solution that uses transactional memory to implement the critical sections of a speculative parallelization library; and (5) an analysis of speculative techniques on one of the most advanced devices of the moment, the Intel Xeon Phi coprocessor.
As we have been able to verify, speculative parallelization is an active research field. Our results show that this technique achieves performance improvements in a large number of applications. We therefore hope that this work will help bring speculative solutions into commercial compilers and/or shared-memory parallel programming models.
Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
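The dynamic management of iteration blocks mentioned above can be sketched as a shared chunk scheduler; the class and the fixed chunk-size policy are illustrative assumptions, not the library's actual interface.

```python
# Sketch of dynamic iteration-block (chunk) scheduling: speculative threads
# repeatedly claim the next block of loop iterations from a shared counter,
# so faster threads naturally take more work. The chunk size is the tunable
# that dynamic-management proposals adjust; here it is fixed for simplicity.

import threading

class ChunkScheduler:
    def __init__(self, total, chunk):
        self.total, self.chunk = total, chunk
        self.next = 0
        self.lock = threading.Lock()

    def claim(self):
        """Return (start, end) of the next iteration block, or None when done."""
        with self.lock:
            if self.next >= self.total:
                return None
            start = self.next
            self.next = min(self.next + self.chunk, self.total)
            return start, self.next
```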
Profile-directed specialisation of custom floating-point hardware
We present a methodology for generating floating-point arithmetic hardware designs which are, for suitable applications, much reduced in size, while still retaining performance and IEEE-754 compliance. Our system uses three key parts: a profiling tool, a set of customisable floating-point units and a selection of system integration methods.
We use a profiling tool for floating-point behaviour to identify arithmetic operations where fundamental elements of IEEE-754 floating point may be compromised, without generating erroneous results in the common case. In the uncommon case, we use simple detection logic to determine when operands lie outside the range of capabilities of the optimised hardware. Out-of-range operations are handled by a separate, fully capable, floating-point implementation, either on-chip or by returning calculations to a host processor. We present methods of system integration to achieve this error correction. Thus the system suffers no compromise in IEEE-754 compliance, even when the synthesised hardware would generate erroneous results.
In particular, we identify from input operands the shift amounts required for input operand alignment and post-operation normalisation. For operations where these are small, we synthesise hardware with reduced-size barrel shifters. We also propose optimisations to take advantage of other profile-exposed behaviours, including removing the hardware required to swap operands in a floating-point adder or subtractor, and reducing the exponent range to fit observed values.
We present profiling results for a range of applications, including a selection of computational science programs, SPEC FP95 benchmarks and the FFMPEG media processing tool, indicating which would be amenable to our method. Selected applications which demonstrate potential for optimisation are then taken through to a hardware implementation. We show up to a 45% decrease in hardware size for a floating-point datapath, with a correctable error rate of less than 3%, even with non-profiled datasets.
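The detection-and-fallback scheme can be modelled in software as follows; the exponent range and the notion of a "reduced" unit are illustrative stand-ins for the synthesised hardware, not the actual designs.

```python
# Sketch of profile-directed specialisation with error correction: a reduced
# datapath handles operands whose binary exponents fit a profiled range, and
# detection logic diverts anything outside that range to a fully capable
# IEEE-754 implementation. The range [-8, 8] is an invented example.

import math

def reduced_add(a, b, exp_min=-8, exp_max=8):
    """Add via the cheap path if both exponents are in range, else fall back."""
    def in_range(x):
        if x == 0.0:
            return True
        e = math.frexp(x)[1]          # binary exponent of x
        return exp_min <= e <= exp_max
    if in_range(a) and in_range(b):
        return a + b, "reduced"       # cheap, reduced-size datapath
    return a + b, "fallback"          # fully capable unit (or host processor)
```

Because every out-of-range case is detected and diverted, the combined system never compromises IEEE-754 compliance, matching the claim above.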
Analysis and transformation of legacy code
Hardware evolves faster than software. While a hardware system might need replacement
every one to five years, the average lifespan of a software system is a decade,
with some instances living up to several decades. Inevitably, code outlives the platform
it was developed for and may become legacy: development of the software stops,
but maintenance has to continue to keep up with the evolving ecosystem. No new features
are added, but the software is still used to fulfil its original purpose. Even in the
cases where it is still functional (which discourages its replacement), legacy code is
inefficient, costly to maintain, and a risk to security.
This thesis proposes methods to leverage the expertise put in the development of
legacy code and to extend its useful lifespan, rather than to throw it away. A novel
methodology is proposed, for automatically exploiting platform specific optimisations
when retargeting a program to another platform. The key idea is to leverage the optimisation
information embedded in vector processing intrinsic functions. The performance
of the resulting code is shown to be close to that of manually
retargeted programs, but with the human labour removed.
Building on top of that, the question of discovering optimisation information when
there are no hints in the form of intrinsics or annotations is investigated. This thesis
postulates that such information can potentially be extracted from profiling the data
flow during executions of the program. A context-aware data dependence profiling
system is described, detailing previously overlooked aspects in related research. The
system is shown to be essential in surpassing the information that can be inferred statically,
in particular about loop iterators.
Loop iterators are the controlling part of a loop. This thesis describes and evaluates
a system for extracting the loop iterators in a program. It is found to significantly
outperform previously known techniques and further increases the amount of information
about the structure of a program that is available to a compiler. Combining this
system with data dependence profiling improves its results even more. Loop iterator
recognition enables other code modernising techniques, like source code rejuvenation
and commutativity analysis. The former increases the use of idiomatic code and as
a result increases the maintainability of the program. The latter can potentially drive
parallelisation and thus dramatically improve runtime performance.
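The data dependence profiling described above hinges on a shadow map from memory addresses to their last writer, so that every load can be attributed to its producer. A minimal sketch follows, with an invented trace format; context-sensitivity and loop attribution are omitted.

```python
# Core of a data-dependence profiler: a shadow memory records the last
# instruction to write each address, and each read is paired with that
# writer, yielding read-after-write (RAW) dependence edges.

def profile_dependences(trace):
    """trace: list of (instr_id, 'R' or 'W', addr); returns RAW edges."""
    last_writer = {}                  # shadow memory: addr -> instr_id
    raw_edges = set()
    for instr, op, addr in trace:
        if op == 'W':
            last_writer[addr] = instr
        elif addr in last_writer:
            raw_edges.add((last_writer[addr], instr))
    return raw_edges
```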