Sample-Parallel Execution of EBCOT in Fast Mode
JPEG 2000’s most computationally expensive building
block is the Embedded Block Coder with Optimized Truncation
(EBCOT). This paper evaluates how encoders targeting a parallel
architecture such as a GPU can increase their throughput in use
cases where very high data rates are used. The compression
efficiency in the less significant bit-planes is then often poor and
it is beneficial to enable the Selective Arithmetic Coding Bypass
style (fast mode) in order to trade a small loss in compression
efficiency for a reduction of the computational complexity. More
importantly, this style exposes a finer-grained parallelism
that can be exploited to execute the raw coding passes, including
bit-stuffing, in a sample-parallel fashion. For a latency- or
memory-critical application that encodes one frame at a time,
EBCOT’s tier-1 is sped up between 1.1x and 2.4x compared to an
optimized GPU-based implementation. When a low GPU
occupancy has already been addressed by encoding multiple
frames in parallel, the throughput can still be improved by 5%
for high-entropy images and 27% for low-entropy images. Best
results are obtained when enabling the fast mode after the fourth
significant bit-plane. For most of the test images, the compression
rate is within 1% of the original.
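The sample-parallel opportunity the abstract describes comes from the raw (bypass-mode) coding passes, whose main serial constraint is bit-stuffing. A minimal Python sketch of that packing rule, assuming the JPEG 2000 convention that a byte following 0xFF carries only 7 payload bits so no byte-aligned marker can appear in the codestream (the function name `pack_raw_bits` is illustrative, not from the paper):

```python
def pack_raw_bits(bits):
    """Pack raw (bypass-mode) bits MSB-first with JPEG 2000-style
    bit-stuffing: after emitting a 0xFF byte, the next byte holds only
    7 payload bits (its MSB stays 0), so 0xFF can never be followed by
    a byte >= 0x80 in the output."""
    out = bytearray()
    acc, cap, n = 0, 8, 0          # accumulator, capacity of current byte, bits filled
    for b in bits:
        acc = (acc << 1) | (b & 1)
        n += 1
        if n == cap:
            out.append(acc)
            cap = 7 if acc == 0xFF else 8   # stuffing rule
            acc, n = 0, 0
    if n:                           # flush a partial byte, left-aligned, zero-padded
        out.append(acc << (cap - n))
    return bytes(out)
```

For example, sixteen 1-bits pack to 0xFF 0x7F 0x80: the second byte carries only seven bits because it follows 0xFF. In a GPU encoder this dependency chain is what must be resolved (e.g. by a prefix computation over per-sample bit counts) to emit the raw passes sample-parallel.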
Gunrock: A High-Performance Graph Processing Library on the GPU
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs have been two
significant challenges for developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We evaluate Gunrock on five key graph
primitives and show that Gunrock has on average at least an order of magnitude
speedup over Boost and PowerGraph, comparable performance to the fastest GPU
hardwired primitives, and better performance than any other GPU high-level
graph library.
Comment: 14 pages, accepted by PPoPP'16.
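Gunrock's bulk-synchronous, frontier-centric abstraction can be illustrated with BFS: each step "advances" the current vertex frontier along outgoing edges, then "filters" out already-visited vertices. A sequential Python sketch of that pattern (Gunrock runs each operator data-parallel on the GPU; the helper names here are illustrative, not Gunrock's actual API):

```python
def advance(graph, frontier):
    """Gunrock-style 'advance' operator: expand the current vertex
    frontier along outgoing edges into an unfiltered candidate list."""
    out = []
    for v in frontier:
        out.extend(graph.get(v, []))
    return out

def filter_visited(candidates, visited, depth):
    """Gunrock-style 'filter' operator: drop visited vertices and
    label the survivors with the current BFS depth."""
    nxt = []
    for v in candidates:
        if v not in visited:
            visited[v] = depth
            nxt.append(v)
    return nxt

def bfs(graph, source):
    """Bulk-synchronous BFS as alternating advance/filter steps on a
    frontier -- the data-centric pattern the abstract describes."""
    visited = {source: 0}
    frontier, depth = [source], 0
    while frontier:
        depth += 1
        frontier = filter_visited(advance(graph, frontier), visited, depth)
    return visited
```

On `{0: [1, 2], 1: [3], 2: [3], 3: []}` starting from 0, this yields depths `{0: 0, 1: 1, 2: 1, 3: 2}`. The point of the abstraction is that new graph primitives reuse the same few operators, each with a tuned GPU implementation.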
Accelerating Reduction and Scan Using Tensor Core Units
Driven by deep learning, there has been a surge of specialized processors for
matrix multiplication, referred to as TensorCore Units (TCUs). These TCUs are
capable of performing matrix multiplications on small matrices (usually 4x4 or
16x16) to accelerate the convolutional and recurrent neural networks in deep
learning workloads. In this paper we leverage NVIDIA's TCU to express both
reduction and scan with matrix multiplication and show the benefits -- in terms
of program simplicity, efficiency, and performance. Our algorithm exercises the
NVIDIA TCUs which would otherwise be idle, achieves 89%-98% of peak memory copy
bandwidth, and is orders of magnitude faster (up to 100x for reduction and 3x
for scan) than state-of-the-art methods for small segment sizes -- common in
machine learning and scientific applications. Our algorithm achieves this while
decreasing the power consumption by up to 22% for reduction and 16% for scan.
Comment: In Proceedings of the ACM International Conference on Supercomputing
(ICS '19).
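The core trick, expressing reduction as matrix multiplication, can be sketched in NumPy: multiplying an (n/m, m) tile of the input by an m-by-1 column of ones sums each row in a single small matmul, which is the shape a TCU accelerates. A simplified illustration (the tile size and looping scheme are assumptions, not the paper's exact algorithm; a scan would instead multiply by a triangular matrix of ones):

```python
import numpy as np

def tcu_style_reduce(x, m=16):
    """Sum a vector using only small matrix multiplications, mimicking
    how a 16x16 Tensor Core matmul can be repurposed for reduction:
    (n/m, m) @ (m, 1 column of ones) sums m elements per row. Repeats
    until one value remains. m = 16 mirrors a TCU tile edge."""
    x = np.asarray(x, dtype=np.float32)
    ones = np.ones((m, 1), dtype=np.float32)
    while x.size > 1:
        pad = (-x.size) % m                    # zero-pad to a tile multiple
        x = np.concatenate([x, np.zeros(pad, dtype=np.float32)])
        x = (x.reshape(-1, m) @ ones).ravel()  # one matmul sums m values
    return float(x[0])
```

Each pass shrinks the input by a factor of m, so the work is dominated by matmuls the TCU would otherwise leave idle; the zero-padding is what makes short segments (the paper's target case) fit the fixed tile shape.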
Getting to the Point. Index Sets and Parallelism-Preserving Autodiff for Pointful Array Programming
We present a novel programming language design that attempts to combine the
clarity and safety of high-level functional languages with the efficiency and
parallelism of low-level numerical languages. We treat arrays as
eagerly-memoized functions on typed index sets, allowing abstract function
manipulations, such as currying, to work on arrays. In contrast to composing
primitive bulk-array operations, we argue for an explicit nested indexing style
that mirrors application of functions to arguments. We also introduce a
fine-grained typed effects system which affords concise and
automatically-parallelized in-place updates. Specifically, an associative
accumulation effect allows reverse-mode automatic differentiation of in-place
updates in a way that preserves parallelism. Empirically, we benchmark against
the Futhark array programming language, and demonstrate that aggressive
inlining and type-driven compilation allows array programs to be written in an
expressive, "pointful" style with little performance penalty.Comment: 31 pages with appendix, 11 figures. A conference submission is still
under revie
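The "arrays as eagerly-memoized functions on typed index sets" idea can be approximated in plain Python: an index set is an iterable type, an array is built by tabulating a function over it, and currying then falls out as nesting. A rough sketch of the concept only (class and function names are illustrative, not the language's actual syntax or API):

```python
class Fin:
    """A typed index set: the integers 0..n-1 (a 'Fin n'-style type)."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return iter(range(self.n))

def tabulate(index_set, f):
    """An array as an eagerly-memoized function on an index set:
    built by evaluating f at every index of the set."""
    return {i: f(i) for i in index_set}

def curry_array(flat, rows, cols):
    """Currying as an array manipulation: view a (rows x cols)-indexed
    array as a rows-indexed array of cols-indexed arrays."""
    return tabulate(rows, lambda i: tabulate(cols, lambda j: flat[(i, j)]))

# Build a 2x3 array over the product index set, then curry it.
rows, cols = Fin(2), Fin(3)
flat = tabulate([(i, j) for i in rows for j in cols],
                lambda ij: ij[0] * 3 + ij[1])
nested = curry_array(flat, rows, cols)
```

Because indexing mirrors function application (`nested[i][j]` corresponds to `flat[(i, j)]`), function-level manipulations transfer to arrays; the typed index sets are what let a compiler check shapes and parallelize the tabulation.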