A compiler approach to scalable concurrent program design
The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to
separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined
abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The
transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same
transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support.
The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This
toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory
multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial
applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
pocl: A Performance-Portable OpenCL Implementation
OpenCL is a standard for parallel programming of heterogeneous systems. The
benefits of a common programming standard are clear; multiple vendors can
provide support for application descriptions written according to the standard,
thus reducing the program porting effort. While the standard brings the obvious
benefits of platform portability, the performance portability aspects are
largely left to the programmer. The situation is made worse by multiple
proprietary vendor implementations with different characteristics and, thus,
different required optimization strategies.
In this paper, we propose an OpenCL implementation that is both portable and
performance portable. At its core is a kernel compiler that can be used to
exploit the data parallelism of OpenCL programs on multiple platforms with
different parallel hardware styles. The kernel compiler is modularized to
perform target-independent parallel region formation separately from the
target-specific parallel mapping of the regions to enable support for various
styles of fine-grained parallel resources such as subword SIMD extensions, SIMD
datapaths, and static multi-issue. Unlike previous similar techniques that work
at the source level, the parallel region formation retains the data-parallelism
information in the LLVM IR using its metadata infrastructure. This information
can then be exploited by later generic compiler passes for efficient
parallelization.
The proposed open-source implementation of OpenCL is also platform portable,
enabling OpenCL on a wide range of architectures, both already commercialized
and still under research. The paper describes how the
portability of the implementation is achieved. Our results show that most of
the benchmarked applications, when compiled using pocl, were faster than or
close to as fast as the best proprietary OpenCL implementation for the platform at hand.
Comment: This article was published in 2015; it is now openly accessible via arXiv.
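
To make the programming model concrete, the sketch below is a minimal, generic OpenCL C kernel of the kind such a kernel compiler operates on: each work-item computes one output element, and the compiler is free to map the work-items of a work-group onto SIMD lanes, a loop, or static multi-issue slots. The kernel and its argument names are illustrative only and are not taken from the paper.

```c
/* Generic data-parallel OpenCL C kernel: one work-item per element.
 * A work-group-level kernel compiler can fuse these work-items into
 * vector lanes or a sequential loop without changing the source. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *c,
                         const unsigned int n)
{
    size_t i = get_global_id(0);  /* global work-item index */
    if (i < n)                    /* guard against padded global sizes */
        c[i] = a[i] + b[i];
}
```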
On the Implementation of GNU Prolog
GNU Prolog is a general-purpose implementation of the Prolog language, which
distinguishes itself from most other systems by being, above all else, a
native-code compiler that produces standalone executables that do not rely on
any byte-code emulator or meta-interpreter. Other aspects that stand out
include the explicit organization of the Prolog system as a multipass compiler,
where intermediate representations are materialized, in the Unix compiler
tradition. GNU Prolog also includes an extensible and high-performance finite
domain constraint solver, integrated with the Prolog language but implemented
using independent lower-level mechanisms. This article discusses the main
issues involved in designing and implementing GNU Prolog: requirements, system
organization, performance and portability issues, as well as its position with
respect to other Prolog system implementations and the ISO standardization
initiative.
Comment: 30 pages, 3 figures. To appear in Theory and Practice of Logic
Programming (TPLP). Keywords: Prolog, logic programming system, GNU, ISO,
WAM, native code compilation, Finite Domain constraint
MP-STREAM: A Memory Performance Benchmark for Design Space Exploration on Heterogeneous HPC Devices
Sustained memory throughput is a key determinant
of performance in HPC devices. Having an accurate estimate of
this parameter is essential for manual or automated design space
exploration for any HPC device. While there are benchmarks for
measuring the sustained memory bandwidth for CPUs and GPUs,
such a benchmark for FPGAs has been missing. We present
MP-STREAM, an OpenCL-based synthetic micro-benchmark for
measuring sustained memory bandwidth, optimized for FPGAs,
but which can be used on multiple platforms. Our main contribution
is the introduction of various generic as well as device-specific
parameters that can be tuned to measure their effect on memory
bandwidth. We present results of running our benchmark on a
CPU, a GPU and two FPGA targets, and discuss our observations.
The experiments underline the utility of our benchmark for
optimizing HPC applications for FPGAs, and provide valuable
optimization hints for FPGA programmers.
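
As a rough illustration of the kind of kernel such a benchmark runs (the actual MP-STREAM kernels and their tunable parameters are defined by the authors; the following is only a generic STREAM-style sketch in OpenCL C), a "triad" pattern moves three array elements per iteration, so sustained bandwidth can be estimated as 3 * sizeof(element) * n divided by the measured execution time.

```c
/* STREAM-style "triad" in OpenCL C: a[i] = b[i] + scalar * c[i].
 * Each work-item touches 3 elements (2 reads + 1 write), so
 * bandwidth ~= 3 * sizeof(double) * n / elapsed_seconds. */
__kernel void triad(__global double *a,
                    __global const double *b,
                    __global const double *c,
                    const double scalar)
{
    size_t i = get_global_id(0);
    a[i] = b[i] + scalar * c[i];
}
```

On an FPGA target, knobs such as vector width, unroll factor, or the number of memory ports used by the kernel are the sort of device-specific parameters one might expose and sweep; the specific parameters offered by MP-STREAM may differ.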
PlinyCompute: A Platform for High-Performance, Distributed, Data-Intensive Tool Development
This paper describes PlinyCompute, a system for development of
high-performance, data-intensive, distributed computing tools and libraries. In
the large, PlinyCompute presents the programmer with a very high-level,
declarative interface, relying on automatic, relational-database style
optimization to figure out how to stage distributed computations. However, in
the small, PlinyCompute presents the capable systems programmer with a
persistent object data model and API (the "PC object model") and associated
memory management system that has been designed from the ground up for
high-performance, distributed, data-intensive computing. This contrasts with most
other Big Data systems, which are constructed on top of the Java Virtual
Machine (JVM), and hence must at least partially cede performance-critical
concerns such as memory management (including layout and de/allocation) and
virtual method/function dispatch to the JVM. This hybrid approach (declarative
in the large, trusting the programmer's ability to utilize the PC object model
efficiently in the small) results in a system that is ideal for the
development of reusable, data-intensive tools and libraries. Through extensive
benchmarking, we show that implementing complex object manipulation and
non-trivial, library-style computations on top of PlinyCompute can result in a
speedup of 2x to more than 50x compared to equivalent implementations on Spark.
Comment: 48 pages, including references and Appendix.
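
The abstract does not show the PC object model API itself; as a purely generic C sketch of the underlying idea (records laid out contiguously in explicitly managed memory rather than as garbage-collected object graphs), one might write something like the following. All names here are hypothetical illustrations, not PlinyCompute identifiers.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical fixed-layout record: fields sit contiguously in memory,
 * so a batch of records can be scanned or shipped without per-object
 * allocation or pointer chasing (unlike a graph of JVM objects). */
typedef struct {
    long   key;
    double value;
} record_t;

/* Trivial bump allocator standing in for an explicitly managed arena. */
typedef struct {
    char  *base;
    size_t used;
    size_t capacity;
} arena_t;

static void *arena_alloc(arena_t *a, size_t size)
{
    if (a->used + size > a->capacity)
        return NULL;              /* caller decides how to grow or spill */
    void *p = a->base + a->used;
    a->used += size;
    return p;
}

int main(void)
{
    arena_t arena = { malloc(1 << 20), 0, 1 << 20 };
    if (!arena.base)
        return 1;

    /* Allocate and fill a batch of records back to back in the arena. */
    record_t *batch = arena_alloc(&arena, 1000 * sizeof(record_t));
    if (!batch)
        return 1;
    for (int i = 0; i < 1000; i++)
        batch[i] = (record_t){ .key = i, .value = 0.5 * i };

    printf("last record: key=%ld value=%f\n",
           batch[999].key, batch[999].value);
    free(arena.base);
    return 0;
}
```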