Contract-Based General-Purpose GPU Programming
Using GPUs as general-purpose processors has revolutionized parallel
computing by offering, for a large and growing set of algorithms, massive
data-parallelization on desktop machines. An obstacle to widespread adoption,
however, is the difficulty of programming them and the low-level control of the
hardware required to achieve good performance. This paper suggests a
programming library, SafeGPU, that aims at striking a balance between
programmer productivity and performance, by making GPU data-parallel operations
accessible from within a classical object-oriented programming language. The
solution is integrated with the design-by-contract approach, which increases
confidence in functional program correctness by embedding executable program
specifications into the program text. We show that our library leads to modular
and maintainable code that is accessible to GPGPU non-experts, while providing
performance that is comparable with hand-written CUDA code. Furthermore,
runtime contract checking turns out to be feasible, as the contracts can be
executed on the GPU.
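For illustration, a minimal Python sketch of the contract-guarded data-parallel style the abstract describes; the class and method names are hypothetical, not SafeGPU's actual API, and NumPy stands in for the GPU backend:

```python
# Hypothetical sketch only: GpuVector and its methods are illustrative
# names, not SafeGPU's actual API; NumPy stands in for the GPU backend.
import numpy as np

class GpuVector:
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

    def scaled_sum(self, other, alpha):
        # Precondition (contract): operand lengths must match.
        assert self.data.shape == other.data.shape, "precondition violated"
        result = GpuVector(self.data + alpha * other.data)
        # Postcondition (contract): a whole-vector check; in SafeGPU's
        # setting, such data-parallel checks can run on the GPU itself.
        assert np.allclose(result.data - alpha * other.data, self.data)
        return result

v = GpuVector([1.0, 2.0, 3.0]).scaled_sum(GpuVector([4.0, 5.0, 6.0]), 2.0)
print(v.data)  # [ 9. 12. 15.]
```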
A GPU-based hyperbolic SVD algorithm
A one-sided Jacobi hyperbolic singular value decomposition (HSVD) algorithm,
using a massively parallel graphics processing unit (GPU), is developed. The
algorithm also serves as the final stage of solving a symmetric indefinite
eigenvalue problem. Numerical testing demonstrates the gains in speed and
accuracy over sequential and MPI-parallelized variants of similar Jacobi-type
HSVD algorithms. Finally, possibilities of hybrid CPU--GPU parallelism are
discussed.
Comment: Accepted for publication in BIT Numerical Mathematics.
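As background, a NumPy sketch of the one-sided Jacobi idea underlying such algorithms, shown here for the ordinary (trigonometric) case; the paper's hyperbolic variant replaces some plane rotations with hyperbolic ones to handle the indefinite inner product, and this sketch is not the GPU implementation:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    # Orthogonalize column pairs with plane rotations until the Gram
    # matrix is numerically diagonal; assumes A has full column rank.
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                app, aqq = A[:, p] @ A[:, p], A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue
                converged = False
                # Rotation angle that zeroes the (p, q) cross term.
                tau = (aqq - app) / (2.0 * apq)
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                G = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ G
                V[:, [p, q]] = V[:, [p, q]] @ G
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)   # column norms = singular values
    return A / sigma, sigma, V.T        # U, singular values, V^T

U, s, Vt = one_sided_jacobi_svd(np.random.rand(6, 4))
```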
Taming Numbers and Durations in the Model Checking Integrated Planning System
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear-time algorithm to compute the parallel plan bypasses
known NP-hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating-point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate, and parameterized optimization.
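To make the scheduling step concrete, a small Python sketch (hypothetical interface, not MIPS code) of turning a sequential plan plus precedence relations into a time-stamped parallel schedule; note that this naive version inspects all action pairs, whereas the paper's algorithm achieves linear time:

```python
def schedule_parallel(actions, duration, depends_on):
    # Assign each action of a sequential plan the earliest start time
    # consistent with its predecessors (critical-path scheduling).
    # Hypothetical interface; this naive version inspects all pairs,
    # whereas the paper's algorithm runs in linear time.
    start = {}
    for i, a in enumerate(actions):
        est = 0.0
        for b in actions[:i]:
            if depends_on(b, a):                  # b must finish before a
                est = max(est, start[b] + duration[b])
        start[a] = est
    return start  # makespan = max(start[a] + duration[a])

acts = ["load", "drive", "unload"]
dur = {"load": 2.0, "drive": 5.0, "unload": 2.0}
deps = {("load", "drive"), ("drive", "unload")}
print(schedule_parallel(acts, dur, lambda a, b: (a, b) in deps))
# {'load': 0.0, 'drive': 2.0, 'unload': 7.0}
```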
Summary of the functions and capabilities of the structural analysis system computer program
Functions and operations of the structural analysis system computer program.
Evaluating Component Assembly Specialization for 3D FFT
The Fast Fourier Transform (FFT) is a widely used building block for many high-performance scientific applications. Efficient computation of the FFT is paramount for the performance of these applications. This has led to many efforts to implement machine- and computation-specific optimizations. However, no existing FFT library is capable of easily integrating and automating the selection of new and/or unique optimizations.
To ease FFT specialization, this paper evaluates the use of component-based software engineering, a programming paradigm which consists in building applications by assembling small software units. Component models are known to have many software engineering benefits but usually have insufficient performance for high-performance scientific applications.
This paper uses the L2C model, a general-purpose high-performance component model, and studies its performance and adaptation capabilities on 3D FFTs. Experiments show that L2C, and components in general, enable easy handling of 3D FFT specializations while obtaining performance comparable to that of well-known libraries. However, a higher-level component model is needed to automatically generate an adequate L2C assembly.
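As a toy illustration of the component-assembly idea (hypothetical Python code, not the L2C API): a 3D FFT built by composing per-axis 1D-FFT components, so that any stage can be swapped for a machine-specific specialization:

```python
import numpy as np

# Toy illustration of component assembly (not the L2C API): each per-axis
# 1-D FFT is a swappable component, and the 3-D FFT is their composition.

def fft_axis(axis):
    return lambda data: np.fft.fft(data, axis=axis)

def assemble(*components):
    def pipeline(data):
        for component in components:
            data = component(data)
        return data
    return pipeline

# Any stage below could be replaced by a machine-specific specialization.
fft3d = assemble(fft_axis(0), fft_axis(1), fft_axis(2))

x = np.random.rand(8, 8, 8)
assert np.allclose(fft3d(x), np.fft.fftn(x))
```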
Searching by approximate personal-name matching
We discuss the design, building, and evaluation of a method to access the
information of a person, using their name as a search key, even if it has
deformations. We present a similarity function, the DEA function, based
on the probabilities of the edit operations according to the involved
letters and their position, and using a variable threshold. The efficacy
of DEA is quantitatively evaluated, without human relevance judgments,
and shown to be far superior to that of known methods. A very efficient
approximate search technique for the DEA function, based on a compacted
trie structure, is also presented.
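A Python sketch of the kind of position- and letter-aware edit cost the DEA function builds on; the cost model here is a hypothetical stand-in for the paper's learned edit-operation probabilities:

```python
def weighted_edit_distance(a, b, cost):
    # Standard edit-distance DP, but each operation's cost may depend on
    # the letters involved and their position, in the spirit of DEA.
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + cost('delete', a[i - 1], None, i)
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + cost('insert', None, b[j - 1], j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else cost('substitute', a[i - 1], b[j - 1], i)
            d[i][j] = min(d[i - 1][j] + cost('delete', a[i - 1], None, i),
                          d[i][j - 1] + cost('insert', None, b[j - 1], j),
                          d[i - 1][j - 1] + sub)
    return d[m][n]

def toy_cost(op, a_ch, b_ch, pos):
    # Hypothetical stand-in for DEA's per-letter, per-position
    # edit-operation probabilities: errors early in a name cost more.
    return 1.5 if pos <= 2 else 1.0

print(weighted_edit_distance("meyer", "meier", toy_cost))  # 1.0
```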
Sparse Tensor Transpositions
We present a new algorithm for transposing sparse tensors called Quesadilla.
The algorithm converts the sparse tensor data structure to a list of
coordinates and sorts it with a fast multi-pass radix algorithm that exploits
knowledge of the requested transposition and the tensor's input partial
coordinate ordering to provably minimize the number of parallel partial sorting
passes. We evaluate both a serial and a parallel implementation of Quesadilla
on a set of 19 tensors from the FROSTT collection, a set of tensors taken from
scientific and data analytic applications. We compare Quesadilla and a
generalization, Top-2-sadilla, to several state-of-the-art approaches, including
the tensor transposition routine used in the SPLATT tensor factorization
library. In serial tests, Quesadilla was the best strategy for 60% of all
tensor and transposition combinations and improved over SPLATT by at least 19%
in half of the combinations. In parallel tests, at least one of Quesadilla or
Top-2-sadilla was the best strategy for 52% of all tensor and transposition
combinations.
Comment: This work will be the subject of a brief announcement at the 32nd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '20).
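A Python sketch of the multi-pass radix-sort idea (hypothetical interface, not the Quesadilla implementation); the key point is that modes by which the input is already sorted cost no passes:

```python
def transpose_coords(coords, perm, dims, presorted_suffix=0):
    # Hypothetical interface, not the Quesadilla implementation.
    # coords: list of (index_tuple, value); perm: target mode order;
    # dims: extent of each original mode. presorted_suffix says how many
    # of the target order's least-significant modes the input is already
    # sorted by; each one saves a pass, which is the core of exploiting
    # the input's partial coordinate ordering.
    permuted = [(tuple(idx[m] for m in perm), v) for idx, v in coords]
    # LSD counting-sort passes, least-significant mode first; stability
    # of each pass preserves the order established by earlier passes.
    for mode in range(len(perm) - 1 - presorted_suffix, -1, -1):
        buckets = [[] for _ in range(dims[perm[mode]])]
        for idx, v in permuted:
            buckets[idx[mode]].append((idx, v))
        permuted = [entry for b in buckets for entry in b]
    return permuted

coo = [((0, 1, 2), 1.0), ((1, 0, 0), 2.0), ((0, 0, 1), 3.0)]
print(transpose_coords(coo, perm=(2, 0, 1), dims=(2, 2, 3)))
```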
Effective Function Choice in the R Scripting Language
This project examines the currently available work on the explicit and implicit parallelization of the R scripting language and reports experimental findings for the development of a model that predicts effective points for automatic parallelization, based upon input data sizes and function complexity. After finding or creating a series of custom benchmarks, an interval based on data size and time complexity where replacement becomes a viable option was found, specifically between O(N) and O(N^3), exclusive. As data size increases, the benefits of parallel processing become more apparent, and a point is reached where those benefits outweigh the costs in memory-transfer time. Based on our observations, this point can be predicted with a fair amount of accuracy using regression on a sample of approximately ten data sizes spread evenly between a system-determined minimum and maximum size.
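A Python sketch of the crossover-prediction idea with hypothetical timing data: fit a regression to serial and parallel timings at roughly ten data sizes, then report the first size at which the parallel fit wins:

```python
import numpy as np

def crossover_size(sizes, t_serial, t_parallel, degree=2):
    # Fit a polynomial to each timing curve and report the first size at
    # which the parallel fit dips below the serial one (None if never).
    fs = np.poly1d(np.polyfit(sizes, t_serial, degree))
    fp = np.poly1d(np.polyfit(sizes, t_parallel, degree))
    grid = np.linspace(min(sizes), max(sizes), 1000)
    faster = grid[fs(grid) > fp(grid)]
    return faster[0] if faster.size else None

# Hypothetical timings: parallel pays a fixed transfer cost but scales better.
sizes = np.linspace(1e3, 1e6, 10)
t_serial = 2e-6 * sizes
t_parallel = 0.4 + 5e-7 * sizes
print(crossover_size(sizes, t_serial, t_parallel))  # ~2.7e5 elements
```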