OpenCL Actors - Adding Data Parallelism to Actor-based Programming with CAF
The actor model of computation has been designed for a seamless support of
concurrency and distribution. However, it remains unspecific about data
parallel program flows, while the processing power of modern many-core
hardware such as graphics processing units (GPUs) and coprocessors increases
the relevance of data parallelism for general-purpose computation.
In this work, we introduce OpenCL-enabled actors to the C++ Actor Framework
(CAF). This offers a high-level interface for accessing any OpenCL device
without leaving the actor paradigm. The new type of actor is integrated into
the runtime environment of CAF and gives rise to transparent message passing in
distributed systems on heterogeneous hardware. Following the actor logic in
CAF, OpenCL kernels can be composed while encapsulated in C++ actors, hence
operate in a multi-stage fashion on data resident at the GPU. Developers are
thus enabled to build complex data parallel programs from primitives without
leaving the actor paradigm, nor sacrificing performance. Our evaluations on
commodity GPUs, an Nvidia TESLA, and an Intel PHI reveal the expected linear
scaling behavior when offloading larger workloads. For sub-second duties, the
efficiency of offloading was found to largely differ between devices. Moreover,
our findings indicate a negligible overhead over programming with the native
OpenCL API.
Comment: 28 pages
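The abstract's central idea, encapsulating data parallel kernels in actors so that multi-stage pipelines are composed by message passing, can be sketched in a toy form. This is a hedged illustration only: the `StageActor` class, `send` method, and kernel functions below are invented for this sketch and do not reflect the actual CAF or OpenCL APIs.

```python
import queue
import threading

class StageActor:
    """Toy actor that applies one 'kernel' (a pure function over a buffer)
    to each incoming message, then forwards the result to a successor.
    Stand-in for the paper's OpenCL-enabled actors; names are invented."""
    def __init__(self, kernel, successor=None):
        self.kernel = kernel
        self.successor = successor
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, buffer):
        self.mailbox.put(buffer)

    def _run(self):
        while True:
            buf = self.mailbox.get()
            if buf is None:                  # poison pill shuts the stage down
                if self.successor:
                    self.successor.send(None)
                break
            out = self.kernel(buf)           # data stays inside the pipeline
            if self.successor:
                self.successor.send(out)

# Compose two "kernels" into a multi-stage pipeline:
# square each element, then sum-reduce the result.
results = queue.Queue()
reducer = StageActor(lambda buf: results.put(sum(buf)))
squarer = StageActor(lambda buf: [x * x for x in buf], successor=reducer)

squarer.send([1, 2, 3, 4])
squarer.send(None)                           # shut down the pipeline
total = results.get()
print(total)                                 # 30
```

In the paper's setting each stage would run an OpenCL kernel and the buffers would remain resident on the GPU between stages; here plain Python lists and threads play that role.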
Survey on Combinatorial Register Allocation and Instruction Scheduling
Register allocation (mapping variables to processor registers or memory) and
instruction scheduling (reordering instructions to increase instruction-level
parallelism) are essential tasks for generating efficient assembly code in a
compiler. In the last three decades, combinatorial optimization has emerged as
an alternative to traditional, heuristic algorithms for these two tasks.
Combinatorial optimization approaches can deliver optimal solutions according
to a model, can precisely capture trade-offs between conflicting decisions, and
are more flexible at the expense of increased compilation time.
This paper provides an exhaustive literature review and a classification of
combinatorial optimization approaches to register allocation and instruction
scheduling, with a focus on the techniques that are most applied in this
context: integer programming, constraint programming, partitioned Boolean
quadratic programming, and enumeration. Researchers in compilers and
combinatorial optimization can benefit from identifying developments, trends,
and challenges in the area; compiler practitioners may discern opportunities
and grasp the potential benefit of applying combinatorial optimization.
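The survey's contrast between heuristic and combinatorial approaches can be made concrete with a tiny enumeration-based register allocator: it searches all assignments of variables to registers or memory and returns one with provably minimal spill cost. The interference graph, spill costs, and register names below are made up for illustration; real combinatorial allocators use integer or constraint programming rather than brute force.

```python
from itertools import product

# Interference graph: an edge joins two variables live at the same time.
interference = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
variables = ["a", "b", "c", "d"]
registers = ["r0", "r1"]            # only two registers: something must spill
spill_cost = {"a": 4, "b": 3, "c": 5, "d": 1}

def cost(assignment):
    """Total spill cost, or infinity if two interfering variables
    share a register."""
    for u, v in interference:
        if assignment[u] == assignment[v] != "mem":
            return float("inf")
    return sum(spill_cost[v] for v in variables if assignment[v] == "mem")

# Enumerate every assignment (register or memory) and keep the cheapest:
# this is the "optimal according to a model" guarantee the survey describes.
best = min(
    (dict(zip(variables, choice))
     for choice in product(registers + ["mem"], repeat=len(variables))),
    key=cost,
)
print(best, cost(best))
```

Here a, b, c form a triangle in the interference graph, so with two registers one of them must spill; the enumeration correctly picks b, the cheapest of the three, which a greedy heuristic is not guaranteed to do.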
Silicon compilation
Silicon compilation is a term used for many different purposes. In this paper, we define silicon compilation as a mapping from some higher-level description into layout. We define the basic issues in structural and behavioral silicon compilation and some possible solutions to those issues. Finally, we define the concept of an intelligent silicon compiler, in which the compiler evaluates the quality of the generated design and attempts to improve it if it is not satisfactory.
On Longest Repeat Queries Using GPU
Repeat finding in strings has important applications in subfields such as
computational biology. The challenge of finding the longest repeats covering
particular string positions was recently proposed and solved by \.{I}leri et
al., using a total of $O(n)$ time and space, where $n$ is the
string size. However, their solution can only find the \emph{leftmost} longest
repeat for each string position. It is also not known how to
parallelize their solution. In this paper, we propose a new solution for
longest repeat finding which, although theoretically suboptimal in time, is
conceptually simpler, runs faster, and uses less memory in practice
than the optimal solution. Further, our solution can find \emph{all} longest
repeats of every string position, while still maintaining a faster processing
speed and less memory space usage. Moreover, our solution is
\emph{parallelizable} in the shared memory architecture (SMA), enabling it to
take advantage of modern multi-processor computing platforms such as
general-purpose graphics processing units (GPUs). We have implemented both the
sequential and parallel versions of our solution. Experiments with both
biological and non-biological data show that our sequential and parallel
solutions are faster than the optimal solution by a factor of 2--3.5 and 6--14,
respectively, and use less memory space.
Comment: 14 pages
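The problem the abstract addresses, finding all longest repeats covering a given string position, can be stated with a deliberately naive brute-force sketch. This is for illustrating the problem definition only; it is not the paper's algorithm, which relies on far more efficient string data structures.

```python
def occurrences(s, sub):
    """Count (possibly overlapping) occurrences of sub in s."""
    count, start = 0, 0
    while True:
        idx = s.find(sub, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

def longest_repeats_covering(s, i):
    """Return all longest substrings of s that cover position i and occur
    at least twice in s (occurrences may overlap). Brute force: try every
    substring spanning position i and keep the longest repeated ones."""
    best = []
    for start in range(i + 1):
        for end in range(i + 1, len(s) + 1):   # s[start:end] covers index i
            sub = s[start:end]
            if occurrences(s, sub) >= 2:
                if not best or len(sub) > len(best[0]):
                    best = [sub]
                elif len(sub) == len(best[0]) and sub not in best:
                    best.append(sub)
    return best

print(longest_repeats_covering("mississippi", 2))   # ['issi']
```

For "mississippi" and position 2, the unique answer is "issi", which occurs twice with its two occurrences overlapping, a case the overlap-aware counter above handles and a plain substring count would miss.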
SAFE FOOD AND WATER: PRODUCERS LOOK AT RISK
Environmental Economics and Policy
Where should livestock graze? Integrated modeling and optimization to guide grazing management in the Cañete basin, Peru
Integrated watershed management allows decision-makers to balance competing objectives, for example agricultural production and protection of water resources. Here, we developed a spatially-explicit approach to support such management in the Cañete watershed, Peru. We modeled the effect of grazing management on three services – livestock production, erosion control, and baseflow provision – and used an optimization routine to simulate landscapes providing the highest level of services. Over the entire watershed, there was a trade-off between livestock productivity and hydrologic services, and we identified locations that minimized this trade-off for a given set of preferences. Given the knowledge gaps in ecohydrology and practical constraints not represented in the optimizer, we assessed the robustness of spatial recommendations, i.e. revealing areas most often selected by the optimizer. We conclude with a discussion of the practical decisions involved in using optimization frameworks to inform watershed management programs, and the research needs to better inform the design of such programs.
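The weighted trade-off between livestock productivity and hydrologic services described above can be sketched as a per-parcel scoring rule. Everything here is invented for illustration (the parcel names, service values, and the `optimize` function are placeholders, not the paper's model), but the structure mirrors the abstract: score each location under each land use and sweep the preference weights to trace the trade-off.

```python
# Toy parcel data: (livestock value if grazed, erosion/baseflow value if rested).
# All numbers are made up for this sketch.
parcels = {
    "upland_1": (5.0, 9.0),
    "upland_2": (4.0, 8.5),
    "valley_1": (9.0, 2.0),
    "valley_2": (8.0, 3.0),
}

def optimize(w_livestock, w_hydro):
    """Pick graze/rest per parcel to maximize a weighted sum of services,
    a stand-in for the paper's optimization routine."""
    plan = {}
    for name, (graze_value, rest_value) in parcels.items():
        grazed_score = w_livestock * graze_value
        rested_score = w_hydro * rest_value
        plan[name] = "graze" if grazed_score >= rested_score else "rest"
    return plan

# Sweeping the weights traces the trade-off frontier; parcels assigned the
# same use under many weightings are the "robust" recommendations the
# abstract refers to.
print(optimize(w_livestock=1.0, w_hydro=1.0))
```

With equal weights, the hydrologically valuable uplands are rested and the productive valleys are grazed, the kind of spatial pattern the robustness analysis in the paper is meant to reveal.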
Gunrock: GPU Graph Analytics
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs, have presented two
significant challenges to developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We characterize the performance of
various optimization strategies and evaluate Gunrock's overall performance on
different GPU architectures on a wide range of graph primitives that span from
traversal-based algorithms and ranking algorithms, to triangle counting and
bipartite-graph-based algorithms. The results show that on a single GPU,
Gunrock has on average at least an order of magnitude speedup over Boost and
PowerGraph, comparable performance to the fastest GPU hardwired primitives and
CPU shared-memory graph libraries such as Ligra and Galois, and better
performance than any other GPU high-level graph library.
Comment: 52 pages, invited paper to ACM Transactions on Parallel Computing
(TOPC), an extended version of the PPoPP'16 paper "Gunrock: A High-Performance
Graph Processing Library on the GPU"