Coarse-grained reconfigurable array architectures
Coarse-Grained Reconfigurable Array (CGRA) architectures accelerate the same inner loops that benefit from the high ILP support in VLIW architectures. By executing non-loop code on other cores, however, CGRAs can focus on such loops and execute them more efficiently. This chapter discusses the basic principles of CGRAs and the wide range of design options available to a CGRA designer, covering a large number of existing CGRA designs. The impact of different options on flexibility, performance, and power efficiency is discussed, as well as the need for compiler support. The ADRES CGRA design template is studied in more detail as a use case to illustrate the need for design space exploration, for compiler support, and for the manual fine-tuning of source code.
Compiler Discovered Dynamic Scheduling of Irregular Code in High-Level Synthesis
Dynamically scheduled high-level synthesis (HLS) achieves higher throughput
than static HLS for codes with unpredictable memory accesses and control flow.
However, excessive dataflow scheduling results in circuits that use more
resources and have a slower critical path, even when only a part of the circuit
exhibits dynamic behavior. Recent work has shown that marking parts of a
dataflow circuit for static scheduling can save resources and improve
performance (hybrid scheduling), but the dynamic part of the circuit still
bottlenecks the critical path. We propose instead to selectively introduce
dynamic scheduling into static HLS. This paper presents an algorithm for
identifying code regions amenable to dynamic scheduling and shows a methodology
for introducing dynamically scheduled basic blocks, loops, and memory
operations into static HLS. Our algorithm is informed by modulo-scheduling and
can be integrated into any modulo-scheduled HLS tool. On a set of ten
benchmarks, we show that our approach achieves on average up to 3.7×
and 3× speedups against dynamic and hybrid scheduling, respectively, with
an area overhead of 1.3× and a frequency degradation of 0.74× when
compared to static HLS.
Comment: To appear in the 33rd International Conference on Field-Programmable Logic and Applications (2023).
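The modulo-scheduling analysis that informs the algorithm above can be illustrated with a minimal initiation-interval bound computation (a sketch only; the operation latencies and resource counts below are invented for illustration and are not taken from the paper):

```python
from math import ceil

def res_mii(op_counts, unit_counts):
    """Resource-constrained lower bound on the initiation interval (II):
    each resource class must fit its operations into II cycles."""
    return max(ceil(n / unit_counts[kind]) for kind, n in op_counts.items())

def rec_mii(cycles):
    """Recurrence-constrained bound: for each dependence cycle,
    II >= total latency / total dependence distance."""
    return max(ceil(lat / dist) for lat, dist in cycles)

# Hypothetical loop: 4 ALU ops and 2 loads per iteration, 2 ALUs,
# 1 load port; one recurrence with latency 3 carried over 1 iteration.
ops = {"alu": 4, "mem": 2}
units = {"alu": 2, "mem": 1}
mii = max(res_mii(ops, units), rec_mii([(3, 1)]))
print(mii)  # 3: the minimum II the modulo scheduler must try first
```

A modulo scheduler starts at this lower bound and increments the II until a feasible schedule is found; regions where the achievable II collapses are the ones worth considering for dynamic scheduling.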
Clustered VLIW architecture based on queue register files
Institute for Computing Systems Architecture
Instruction-level parallelism (ILP) is a set of hardware and software techniques that allow parallel execution of machine operations. Superscalar architectures rely most heavily upon hardware schemes to identify parallelism among operations. Although successful in terms of performance, the hardware complexity involved might limit the scalability of this model. VLIW architectures use a different approach to exploit ILP. In this case all data dependence analyses and scheduling of operations are performed at compile time, resulting in a simpler hardware organization. This allows the inclusion of a larger number of functional units (FUs) into a single chip. In spite of this relative simplification, the scalability of VLIW architectures can be constrained by the size and number of ports of the register file. VLIW machines often use software pipelining techniques to improve the execution of loop structures, which can increase the register pressure. Furthermore, the access time of a register file can be compromised by the number of ports, causing a negative impact on the machine cycle time. For these reasons, the benefits of having many parallel FUs have motivated the investigation of alternative machine designs.
This thesis presents a scalable VLIW architecture comprising clusters of FUs and private register files. Register files organised as queue structures are used as a mechanism for inter-cluster communication, allowing the enforcement of fixed latency in the process. This scheme presents better possibilities in terms of scalability, as the size of the individual register files is not determined by the total number of FUs, suggesting that the silicon area may grow only linearly with respect to the total number of FUs. However, the effectiveness of such an organization depends on the efficiency of the code partitioning strategy. We have developed an algorithm for a clustered VLIW architecture integrating both software pipelining and code partitioning in a single procedure. Experimental results show it may allow performance levels close to an unclustered machine without communication restrictions. Finally, we have developed silicon area and cycle time models to quantify the scalability of performance and cost for this class of architecture.
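The queue-based inter-cluster communication described above can be sketched as a fixed-latency FIFO between a producing and a consuming cluster (a toy model; the class name and latency value are illustrative, not the thesis's actual design):

```python
from collections import deque

class QueueRegisterFile:
    """Toy model of a queue register file for inter-cluster
    communication: the producing cluster enqueues a value, and the
    consuming cluster may dequeue it a fixed number of cycles later."""
    def __init__(self, latency=2):
        self.latency = latency
        self.fifo = deque()  # entries are (ready_cycle, value)

    def write(self, cycle, value):      # producer cluster side
        self.fifo.append((cycle + self.latency, value))

    def read(self, cycle):              # consumer cluster side
        if self.fifo and self.fifo[0][0] <= cycle:
            return self.fifo.popleft()[1]
        return None  # value not yet available: consumer must stall

q = QueueRegisterFile(latency=2)
q.write(cycle=0, value=42)
print(q.read(cycle=1))  # None: the fixed latency has not yet elapsed
print(q.read(cycle=2))  # 42
```

Because values arrive in order after a known delay, the compile-time scheduler can account for inter-cluster transfers deterministically, which is what makes this organization compatible with software pipelining.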
Datapath and memory co-optimization for FPGA-based computation
With the large resource densities available on modern FPGAs it is often the available
memory bandwidth that limits the parallelism (and therefore performance) that can be
achieved. For this reason the focus of this thesis is the development of an integrated
scheduling and memory optimisation methodology to allow high levels of parallelism to be
exploited in FPGA based designs.
A manual translation from C to hardware is first investigated as a case study,
exposing a number of potential optimisation techniques that have not been exploited in
existing work. An existing outer loop pipelining approach, originally developed for VLIW
processors, is extended and adapted for application to FPGAs. The outer loop pipelining
methodology is first developed to use a fixed memory subsystem design and then extended
to automate the optimisation of the memory subsystem. This approach allocates arrays
to physical memories and selects the set of data reuse structures to implement to match
the available and required memory bandwidths as the pipelining search progresses. The
final extension to this work is to include the partitioning of data from a single array across
multiple physical memories, increasing the number of memory ports through which data
may be accessed. The facility for loop unrolling is also added to increase the potential for
parallelism and exploit the additional bandwidth that partitioning can provide.
We describe our approach based on formal methodologies and present the results
achieved when these methods are applied to a number of benchmarks. These results show
the advantages of both extending pipelining to levels above the innermost loop and the
co-optimisation of the datapath and memory subsystem.
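The array-partitioning step described above can be illustrated with a cyclic partitioning of one array across several physical memories (a sketch with invented sizes; the thesis's actual allocation is driven by the pipelining search, not this fixed scheme):

```python
def cyclic_partition(array, num_banks):
    """Distribute array elements across banks round-robin, so that
    num_banks consecutive elements land in different banks and can be
    read in the same cycle through independent memory ports."""
    banks = [[] for _ in range(num_banks)]
    for i, v in enumerate(array):
        banks[i % num_banks].append(v)
    return banks

data = list(range(8))
banks = cyclic_partition(data, num_banks=4)
# Elements 0..3 now sit in four different banks; an unrolled loop body
# of factor 4 sees roughly 4x the effective memory bandwidth of a
# single-port memory for this access pattern.
print(banks)  # [[0, 4], [1, 5], [2, 6], [3, 7]]
```

This is why the thesis pairs partitioning with loop unrolling: the extra ports are only useful if the datapath issues enough parallel accesses to exploit them.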
Software caching techniques and hardware optimizations for on-chip local memories
Despite the fact that the most viable L1 memories in processors are caches,
on-chip local memories have received considerable attention lately. Local
memories are an interesting design option due to their many benefits: lower
area occupancy, reduced energy consumption, and fast, constant access time.
These benefits are especially interesting for the design of modern multicore processors,
since power and latency are important assets in computer architecture
today. Also, local memories do not generate coherency traffic, which is important
for the scalability of multicore systems.
Unfortunately, local memories have not yet been widely adopted in modern processors,
mainly due to their poor programmability. Systems with on-chip local
memories lack hardware support for transparent data transfers between
local and global memories, and this difficulty of programming is one of the main
impediments to the broad acceptance of such systems. This thesis addresses
software and hardware optimizations regarding the programmability and
usage of on-chip local memories in the context of both single-core and multicore
systems.
The software optimizations concern software caching techniques. A software
cache is a robust approach for providing the user with a transparent view
of the memory architecture, but it can suffer from poor
performance. In this thesis, we start by optimizing a traditional software cache,
proposing a hierarchical, hybrid software-cache architecture. Afterwards, we develop
a few optimizations to speed up our hybrid software cache as much
as possible. As a result of these software optimizations, our hybrid
software cache performs 4 to 10 times faster than a traditional software
cache on a set of NAS parallel benchmarks.
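A traditional software cache of the kind being optimized above can be sketched as a direct-mapped lookup performed in software before every global-memory access (a simplified illustration; the line size, table layout, and fetch routine are invented, not the thesis's implementation):

```python
LINE_SIZE = 128          # bytes per cache line (illustrative)
NUM_LINES = 64

tags  = [None] * NUM_LINES          # tag of the line held in each slot
lines = [None] * NUM_LINES          # cached line data

def sc_load(addr, fetch_line):
    """Software-cache load: hash the address to a slot, compare tags,
    and call the (user-supplied) transfer routine on a miss."""
    line_addr = addr // LINE_SIZE
    slot = line_addr % NUM_LINES
    if tags[slot] != line_addr:      # miss: software-managed refill
        lines[slot] = fetch_line(line_addr)
        tags[slot] = line_addr
    return lines[slot][addr % LINE_SIZE]

# Toy backing store standing in for global memory.
memory = bytes(range(256)) * 16
stats = {"fetches": 0}
def fetch_line(line_addr):
    stats["fetches"] += 1
    base = line_addr * LINE_SIZE
    return memory[base:base + LINE_SIZE]

sc_load(0, fetch_line); sc_load(4, fetch_line)  # second access hits
print(stats["fetches"])  # 1: both loads fall in the same cached line
```

The tag check and address arithmetic executed on every access are exactly the overhead that makes naive software caches slow, and motivates both the hybrid design and the later hardware support for these actions.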
Beyond software caching, we cover other aspects of
architectures with on-chip local memories, such as the quality of the generated
code and its correspondence with the quality of the buffer management in local
memories, in order to improve the performance of these architectures. Once
software-level optimizations reach their limit, we propose optimizations
at the hardware level. Two hardware proposals are presented in this
thesis: one relaxes the alignment constraints imposed in architectures
with on-chip local memories, and the other accelerates the
management of local memories by providing hardware support for the majority
of actions performed in our software cache.
Generation of Application Specific Hardware Extensions for Hybrid Architectures: The Development of PIRANHA - A GCC Plugin for High-Level-Synthesis
Architectures combining a field-programmable gate array (FPGA) and a general-purpose processor on a single chip have become increasingly popular in recent years. On the one hand, such hybrid architectures facilitate the use of application-specific hardware accelerators that improve the performance of the software on the host processor. On the other hand, they oblige system designers to handle the whole process of hardware/software co-design. The complexity of this process is still one of the main reasons that hinder the widespread use of hybrid architectures. Thus, an automated process that aids programmers with hardware/software partitioning and the generation of application-specific accelerators is an important issue. The method presented in this thesis requires neither restrictions on the high-level language used nor special source code annotations; such requirements are usually an entry barrier for programmers without a deeper understanding of the underlying hardware platform.
This thesis introduces a seamless programming flow that allows generating hardware accelerators for unrestricted, legacy C code. The implementation consists of a GCC plugin that automatically identifies application hot-spots and generates hardware accelerators accordingly. Apart from the accelerator implementation in a hardware description language, the compiler plugin generates the host processor interfaces and, if necessary, a prototypical integration with the host operating system. An evaluation with typical embedded applications shows general benefits of the approach, but also reveals limiting factors that hamper possible performance improvements.
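Hot-spot identification of the kind the plugin performs can be sketched as selecting the functions that dominate the runtime profile (a toy illustration; the profile numbers, function names, and coverage threshold are invented and do not reflect PIRANHA's actual heuristics):

```python
def find_hotspots(profile, threshold=0.8):
    """Select, greedily by cycle count, functions that together account
    for at least `threshold` of total execution time; these become the
    candidates for hardware acceleration."""
    total = sum(profile.values())
    hot, covered = [], 0
    for func, cycles in sorted(profile.items(), key=lambda kv: -kv[1]):
        if covered / total >= threshold:
            break
        hot.append(func)
        covered += cycles
    return hot

# Hypothetical cycle counts from a profiling run.
profile = {"fir_filter": 7200, "parse_input": 900,
           "checksum": 1500, "log_msg": 400}
print(find_hotspots(profile))  # ['fir_filter', 'checksum']
```

The point of automating this step is that the partitioning decision no longer requires the programmer to understand the hardware platform, which is exactly the entry barrier the thesis targets.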
An automated OpenCL FPGA compilation framework targeting a configurable, VLIW chip multiprocessor
Modern system-on-chips augment their baseline CPU with coprocessors and accelerators to increase overall computational capacity and power efficiency, and thus have evolved into heterogeneous systems. Several languages have been developed to enable this paradigm shift, including CUDA and OpenCL. This thesis discusses a unified compilation environment to enable heterogeneous system design through the use of OpenCL and a customised VLIW chip multiprocessor (CMP) architecture, known as the LE1. An LLVM compilation framework was researched and a prototype developed to enable the execution of OpenCL applications on the LE1 CPU. The framework fully automates the compilation flow and supports work-item coalescing to better utilise the CPU cores and alleviate the effects of thread divergence. This thesis discusses in detail both the software stack and the target hardware architecture, and evaluates the scalability of the proposed framework on a highly precise cycle-accurate simulator. This is achieved through the execution of 12 benchmarks across 240 different machine configurations, as well as further results utilising an incomplete development branch of the compiler. It is shown that the problems generally scale well with the LE1 architecture up to eight cores, at which point the memory system becomes a serious bottleneck. Results demonstrate superlinear performance on certain benchmarks (9× for the bitonic sort benchmark with 8 dual-issue cores), with further improvements from compiler optimisations (14× for bitonic with the same configuration).
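Work-item coalescing, the transformation the framework relies on, can be sketched as wrapping a per-work-item kernel body in a loop over the work-group, so one CPU core executes many work-items sequentially (a schematic example; the kernel and sizes are invented, not the LE1 toolchain's generated code):

```python
def kernel_body(gid, a, b, out):
    """Per-work-item OpenCL-style kernel: out[gid] = a[gid] + b[gid]."""
    out[gid] = a[gid] + b[gid]

def coalesced_kernel(group_start, group_size, a, b, out):
    """Coalesced form: one CPU core iterates over all work-items of a
    work-group instead of running a thread per work-item, avoiding
    per-item thread overhead and divergence penalties."""
    for local_id in range(group_size):
        kernel_body(group_start + local_id, a, b, out)

a = list(range(8)); b = [10] * 8; out = [0] * 8
# Two work-groups of four work-items, mapped to (say) two cores.
coalesced_kernel(0, 4, a, b, out)
coalesced_kernel(4, 4, a, b, out)
print(out)  # [10, 11, 12, 13, 14, 15, 16, 17]
```

Only whole work-groups are then distributed across the CMP's cores, which is what lets the framework scale the same kernel over many machine configurations.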
Survey of the Itanium architecture from a programmer's perspective
The Itanium family of processors represents Intel's foray into the world of Explicitly Parallel Instruction Computing and 64-bit system design. This survey contains an introduction to the Itanium architecture and instruction set, as well as some of the available implementations. Taking a programmer's perspective, we have attempted to distill the relevant information from a variety of sources, including the Intel Itanium architecture documentation.
Optimal Global Instruction Scheduling for the Itanium® Processor Architecture
On the Itanium 2 processor, effective global instruction scheduling is crucial to high performance. At the same time, it poses a challenge to the compiler: this code generation subtask involves strongly interdependent decisions and complex trade-offs that are difficult to cope with for heuristics. We tackle this NP-complete problem with integer linear programming (ILP), a search-based method that yields provably optimal results. This promises faster code as well as insights into the potential of the architecture. Our ILP model comprises global code motion with compensation copies, predication, and Itanium-specific features like control/data speculation. In integer linear programming, well-structured models are the key to acceptable solution times. The feasible solutions of an ILP are represented by integer points inside a polytope. If all vertices of this polytope are integral, then the ILP can be solved in polynomial time. We define two subproblems of global scheduling in which some constraint classes are omitted and show that the corresponding two subpolytopes of our ILP model are integral and polynomial sized. This substantiates the high efficiency of the model, which is also confirmed by the reasonable solution times. The ILP formulation is extended by further transformations like cyclic code motion, which moves instructions upwards out of a loop, circularly in the opposite direction of the loop backedges. Since the architecture requires instructions to be encoded in fixed-sized bundles of three, a bundler is developed that computes bundle sequences of minimal size by means of precomputed results and dynamic programming. Experiments have been conducted with a postpass tool that implements the ILP scheduler. It parses assembly procedures generated by Intel's Itanium compiler and reschedules them as a whole. Using this tool, we optimize a selection of hot functions from the SPECint 2000 benchmark.
The results show a significant speedup over the original code.
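The kind of question the ILP scheduler answers can be illustrated with a tiny exhaustive search for an optimal schedule under dependence and issue-width constraints (a toy model; the real model additionally handles global code motion, predication, speculation, and bundling, and solves the problem without enumeration):

```python
from itertools import product

# Toy dependence DAG: edges (producer, consumer, latency); 2-wide issue.
deps = [("a", "c", 2), ("b", "c", 1), ("c", "d", 1)]
ops = ["a", "b", "c", "d"]
WIDTH = 2

def valid(schedule):
    """schedule maps op -> issue cycle; check latencies and issue width."""
    for p, c, lat in deps:
        if schedule[c] < schedule[p] + lat:
            return False
    for t in set(schedule.values()):
        if sum(1 for v in schedule.values() if v == t) > WIDTH:
            return False
    return True

# Enumerate all schedules up to a small horizon and keep the shortest.
# An ILP solver searches the same feasible region (the polytope of the
# abstract) but finds the provably optimal point directly.
best = min(
    max(s.values())
    for s in (dict(zip(ops, times))
              for times in product(range(6), repeat=len(ops)))
    if valid(s))
print(best)  # 3: a and b at cycle 0, c at cycle 2, d at cycle 3
```

The exponential blow-up of this enumeration on real procedures is exactly why the abstract's integrality results for the subpolytopes matter: they explain why the ILP remains solvable in reasonable time.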