27,405 research outputs found
Design of a WCET-Aware C Compiler
This paper presents techniques to tightly integrate worst-case execution time (WCET) information into a compiler framework. Such a tight integration into the compilation process is strongly desired, but so far only ad-hoc approaches have been reported. Previous publications mainly used self-written WCET estimators with very limited functionality and precision during compilation; a tight integration of a high-quality, industry-relevant WCET analyzer into a compiler had not yet been achieved. This work is the first to present techniques capable of such a tight coupling between a compiler and the WCET analyzer aiT. This is done by automatically translating the assembly-like contents of the compiler's low-level intermediate representation (LLIR) into aiT's exchange format CRL2. Additionally, the results produced by the WCET analyzer are automatically collected and re-imported into the compiler infrastructure. The work described in this paper is smoothly integrated into a C compiler environment for the Infineon TriCore processor. It opens up new possibilities for the design of WCET-aware optimizations in the future.
The concepts for extending the compiler infrastructure are kept very general, so they are not limited to WCET information. Rather, our structures can also be used for multi-objective optimization of, e.g., best-case execution time (BCET) or energy dissipation.
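The export/analyze/re-import cycle the abstract describes can be sketched as follows. This is a toy model of the idea, not the paper's implementation: all names (`BasicBlock`, `export_blocks`, `run_analyzer`) and the 4-cycles-per-instruction bound are invented for illustration, standing in for the LLIR, the CRL2 exchange format, and the external aiT analyzer.

```python
# Hypothetical sketch of a compiler <-> WCET-analyzer feedback loop:
# export the low-level IR, run an external analyzer, and re-import
# per-block WCET data into the compiler's own data structures.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BasicBlock:
    name: str
    instructions: list
    wcet_cycles: Optional[int] = None   # filled in after analysis

def export_blocks(blocks):
    """Serialize blocks into a simple exchange format (stand-in for CRL2)."""
    return [{"name": b.name, "n_instr": len(b.instructions)} for b in blocks]

def run_analyzer(exported):
    """Stand-in for an external WCET analyzer such as aiT: here we just
    apply a toy per-instruction bound of 4 cycles."""
    return {e["name"]: 4 * e["n_instr"] for e in exported}

def import_results(blocks, results):
    """Re-import analyzer results into the compiler IR."""
    for b in blocks:
        b.wcet_cycles = results[b.name]

blocks = [BasicBlock("entry", ["mov", "add"]),
          BasicBlock("loop", ["ld", "mul", "st"])]
import_results(blocks, run_analyzer(export_blocks(blocks)))
print([(b.name, b.wcet_cycles) for b in blocks])
# -> [('entry', 8), ('loop', 12)]
```

Once per-block WCET values live inside the IR, later optimization passes can consult them directly, which is what makes WCET-aware optimization possible.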
TDO-CIM: Transparent Detection and Offloading for Computation In-memory
Computation in-memory is a promising non-von Neumann approach that aims to
eliminate data transfers to and from the memory subsystem. Although many
architectures have been proposed, compiler support for them still lags
behind. In this paper, we close this gap by
proposing an end-to-end compilation flow for in-memory computing based on the
LLVM compiler infrastructure. Starting from sequential code, our approach
automatically detects, optimizes, and offloads kernels suitable for in-memory
acceleration. We demonstrate our compiler tool-flow on the PolyBench/C
benchmark suite and evaluate the benefits of our proposed in-memory
architecture, simulated in gem5, by comparing it with a state-of-the-art von
Neumann architecture.

Comment: Full version of the DATE 2020 publication.
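The detect-and-offload step the abstract outlines can be illustrated with a minimal sketch. This is not the paper's LLVM implementation: the kernel descriptors, the offloadability test, and the cost model (a fixed speedup plus a transfer cost) are all invented assumptions used only to show the shape of the decision.

```python
# Hypothetical sketch of transparent kernel detection and offloading:
# scan kernels for a pattern an in-memory unit could accelerate, and
# offload only when a simple cost model predicts a net win.

def is_offloadable(kernel):
    # e.g. streaming element-wise loops with no data-dependent control flow
    return kernel["pattern"] == "elementwise" and not kernel["branches"]

def should_offload(kernel, cim_speedup=4.0, transfer_cost=1000):
    cpu_time = kernel["trip_count"] * kernel["cost_per_iter"]
    cim_time = cpu_time / cim_speedup + transfer_cost
    return is_offloadable(kernel) and cim_time < cpu_time

kernels = [
    {"name": "saxpy", "pattern": "elementwise", "branches": False,
     "trip_count": 10_000, "cost_per_iter": 2},
    {"name": "search", "pattern": "reduction", "branches": True,
     "trip_count": 10_000, "cost_per_iter": 2},
]
offloaded = [k["name"] for k in kernels if should_offload(k)]
print(offloaded)  # -> ['saxpy']
```

The key point the abstract makes is that this decision happens automatically, starting from sequential code, rather than requiring the programmer to mark offload regions by hand.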
UPIR: Toward the Design of Unified Parallel Intermediate Representation for Parallel Programming Models
The complexity of heterogeneous computing architectures, as well as the
demand for productive and portable parallel application development, have
driven the evolution of parallel programming models to become more
comprehensive and complex than before. Enhancing the conventional compilation
technologies and software infrastructure to be parallelism-aware has become one
of the main goals of recent compiler development. In this paper, we propose the
design of a unified parallel intermediate representation (UPIR) for multiple
parallel programming models and for enabling unified compiler transformation
for the models. UPIR specifies three commonly used parallelism patterns (SPMD,
data and task parallelism), data attributes and explicit data movement and
memory management, and synchronization operations used in parallel programming.
We demonstrate UPIR via a prototype implementation in the ROSE compiler for
unifying IR for both OpenMP and OpenACC and in both C/C++ and Fortran, for
unifying the transformation that lowers both OpenMP and OpenACC code to LLVM
runtime, and for exporting UPIR to an LLVM MLIR dialect.

Comment: Typos corrected. Format updated.
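The unification idea can be sketched in a few lines: syntactically different directives from different programming models map onto one common node that records the parallelism pattern and data attributes. The node layout and the directive table below are invented for illustration and are not UPIR's actual design.

```python
# Illustrative sketch of a unified parallel IR node: OpenMP-like and
# OpenACC-like directives lower to the same representation, so one
# transformation pipeline can handle both models.

from dataclasses import dataclass

@dataclass(frozen=True)
class ParallelOp:
    pattern: str          # "spmd", "data", or "task" parallelism
    shared: tuple = ()    # data attributes carried on the node
    private: tuple = ()

DIRECTIVE_MAP = {
    "omp parallel":      ParallelOp(pattern="spmd"),
    "omp parallel for":  ParallelOp(pattern="data"),
    "omp task":          ParallelOp(pattern="task"),
    "acc parallel":      ParallelOp(pattern="spmd"),
    "acc parallel loop": ParallelOp(pattern="data"),
}

def unify(directive):
    """Lower a source-level directive to the unified node."""
    return DIRECTIVE_MAP[directive]

# The OpenMP and OpenACC loop constructs collapse to the same IR node,
# so a single lowering pass can target one runtime interface for both.
print(unify("omp parallel for") == unify("acc parallel loop"))  # -> True
```

This is the benefit the abstract claims: once both models meet in one IR, the transformation that lowers parallel regions to the runtime is written once instead of per model.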
An LLVM Instrumentation Plug-in for Score-P
Reducing application runtime, scaling parallel applications to higher numbers
of processes/threads, and porting applications to new hardware architectures
are tasks necessary in the software development process. Therefore, developers
have to investigate and understand application runtime behavior. Tools such as
monitoring infrastructures that capture performance-relevant data during
application execution assist in this task. The measured data forms the basis
for identifying bottlenecks and optimizing the code. Monitoring infrastructures
need mechanisms to record application activities in order to conduct
measurements. Automatic instrumentation of the source code is the preferred
method in most application scenarios. We introduce a plug-in for the LLVM
infrastructure that enables automatic source code instrumentation at
compile-time. In contrast to available instrumentation mechanisms in
LLVM/Clang, our plug-in can selectively include/exclude individual application
functions. This enables developers to fine-tune the measurement to the required
level of detail while avoiding large runtime overheads due to excessive
instrumentation.

Comment: 8 pages.
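The selective include/exclude behavior the abstract highlights can be modeled with a small filter function. The pattern syntax below (shell-style globs via `fnmatch`) is an assumption chosen for illustration, not Score-P's actual rule format.

```python
# Sketch of selective compile-time instrumentation: decide per function
# whether to insert enter/exit probes, using include/exclude pattern
# lists. Exclusion wins, which keeps measurement overhead down.

from fnmatch import fnmatch

def should_instrument(func_name, include=("*",), exclude=()):
    """Instrument a function iff it matches an include pattern and
    no exclude pattern."""
    if any(fnmatch(func_name, p) for p in exclude):
        return False
    return any(fnmatch(func_name, p) for p in include)

# Instrument solver code but skip small helpers whose probes would
# dominate the measurement.
names = ["solve_step", "solve_inner_kernel", "log_debug", "tiny_helper"]
kept = [n for n in names
        if should_instrument(n, include=("solve_*",), exclude=("*_helper",))]
print(kept)  # -> ['solve_step', 'solve_inner_kernel']
```

Applying such a filter at compile time, rather than instrumenting everything and filtering at runtime, is what avoids the large overheads the abstract mentions.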