Mainstream parallel array programming on Cell
We present the E♯ compiler and runtime library for the ‘F’ subset of
the Fortran 95 programming language. ‘F’ provides first-class support for arrays,
allowing E♯ to implicitly evaluate array expressions in parallel using the SPU coprocessors
of the Cell Broadband Engine. We present performance results from
four benchmarks that all demonstrate absolute speedups over equivalent ‘C’ or
Fortran versions running on the PPU host processor. A significant benefit of this
straightforward approach is that a serial implementation of any code is always
available, providing code longevity and a familiar development paradigm.
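As an illustrative analogy only (NumPy, not the compiler described above), the appeal of first-class whole-array expressions is that an expression over arrays is defined element-wise with no explicit loop, leaving a compiler or runtime free to evaluate the elements in parallel:

```python
import numpy as np

# Whole-array expression: no explicit loop order is specified,
# so the elements may be evaluated in any order or in parallel,
# which is the property a parallelising compiler exploits.
a = np.arange(4, dtype=np.float64)   # [0. 1. 2. 3.]
b = np.full(4, 2.0)

c = a * b + 1.0                      # element-wise over the whole arrays
print(c)                             # [1. 3. 5. 7.]
```

The equivalent serial loop is always recoverable from such an expression, which is what gives the "serial implementation is always available" guarantee mentioned in the abstract.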
An LLVM Instrumentation Plug-in for Score-P
Reducing application runtime, scaling parallel applications to higher numbers
of processes/threads, and porting applications to new hardware architectures
are tasks necessary in the software development process. Therefore, developers
have to investigate and understand application runtime behavior. Tools such as
monitoring infrastructures that capture performance relevant data during
application execution assist in this task. The measured data forms the basis
for identifying bottlenecks and optimizing the code. Monitoring infrastructures
need mechanisms to record application activities in order to conduct
measurements. Automatic instrumentation of the source code is the preferred
method in most application scenarios. We introduce a plug-in for the LLVM
infrastructure that enables automatic source code instrumentation at
compile-time. In contrast to available instrumentation mechanisms in
LLVM/Clang, our plug-in can selectively include/exclude individual application
functions. This enables developers to fine-tune the measurement to the required
level of detail while avoiding large runtime overheads due to excessive
instrumentation.
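The selective include/exclude idea can be sketched as follows (a Python illustration of the filtering concept only; the pattern lists and function names are hypothetical, and the actual Score-P plug-in operates at compile-time, not via decorators):

```python
import fnmatch
import functools

# Hypothetical include/exclude rules of the kind a measurement
# tool might apply when deciding which functions to instrument.
INCLUDE = ["solver_*", "main"]
EXCLUDE = ["solver_debug_*"]

def should_instrument(name):
    """Instrument a function only if it matches an include
    pattern and no exclude pattern."""
    if any(fnmatch.fnmatch(name, p) for p in EXCLUDE):
        return False
    return any(fnmatch.fnmatch(name, p) for p in INCLUDE)

def instrument(func):
    """Wrap func with enter/exit events, or leave it untouched
    (and overhead-free) if the filter rejects it."""
    if not should_instrument(func.__name__):
        return func
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"enter {func.__name__}")
        try:
            return func(*args, **kwargs)
        finally:
            print(f"exit  {func.__name__}")
    return wrapper

@instrument
def solver_step(x):       # matches "solver_*": traced
    return x + 1

@instrument
def solver_debug_dump(x): # matches the exclude list: not traced
    return x
```

Excluded functions carry no wrapper at all, which mirrors the abstract's point: filtering at instrumentation time avoids the runtime overhead of recording events that would later be discarded.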
64-bit architectures and compute clusters for high performance simulations
Simulation of large complex systems remains one of the most demanding
of high performance computer systems both in terms of raw compute performance
and efficient memory management. Recent availability of 64-bit
architectures has opened up the possibilities of commodity computers accessing
more than the 4 Gigabyte memory limit previously enforced by 32-bit
addressing. We report on some performance measurements we have made on
two 64-bit architectures and their consequences for some high performance
simulations. We discuss performance of our codes for simulations of artificial
life models; computational physics models of point particles on lattices; and
with interacting clusters of particles. We have summarised pertinent features
of these codes into benchmark kernels which we discuss in the context of well-known
benchmark kernels of the 32-bit era. We report on how these
findings were useful in the context of designing 64-bit compute clusters for
high-performance simulations.
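The 4 Gigabyte limit quoted above follows directly from the address width: an n-bit pointer can distinguish 2^n byte addresses. A quick arithmetic check:

```python
# Maximum byte-addressable memory for an n-bit address space.
def addressable_bytes(bits):
    return 2 ** bits

GIB = 1024 ** 3
print(addressable_bytes(32) // GIB)  # 4 -> the 32-bit 4 GiB ceiling
print(addressable_bytes(64) // GIB)  # 2**34 GiB of addressable space
```

In practice 64-bit machines expose far less physical memory than the full 2^64 bytes, but the addressing headroom is what lets commodity nodes hold simulation state beyond the old 4 GiB ceiling.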
Coarse-grained reconfigurable array architectures
Coarse-Grained Reconfigurable Array (CGRA) architectures accelerate the same inner loops that benefit from the high ILP support in VLIW architectures. By executing non-loop code on other cores, however, CGRAs can focus on such loops to execute them more efficiently. This chapter discusses the basic principles of CGRAs, and the wide range of design options available to a CGRA designer, covering a large number of existing CGRA designs. The impact of different options on flexibility, performance, and power-efficiency is discussed, as well as the need for compiler support. The ADRES CGRA design template is studied in more detail as a use case to illustrate the need for design space exploration, for compiler support and for the manual fine-tuning of source code.