Generic design of Chinese remaindering schemes
We propose a generic design for Chinese remainder algorithms. A Chinese
remainder computation consists in reconstructing an integer value from its
residues modulo non coprime integers. We also propose an efficient linear data
structure, a radix ladder, for the intermediate storage and computations. Our
design is structured into three main modules: a black box residue computation
in charge of computing each residue; a Chinese remaindering controller in
charge of launching the computation and of the termination decision; an integer
builder in charge of the reconstruction computation. We then show that this
design enables many different forms of Chinese remaindering (e.g. deterministic, early-terminated, distributed), easy comparisons between these forms, and, for example, user-transparent parallelism at different parallel grains.
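As a concrete illustration of the reconstruction step, here is a minimal C++ sketch of an incremental integer builder with a naive early-termination check: residues are folded in one at a time, and the loop stops once the reconstructed value stabilises. The moduli, the data layout and the termination heuristic are illustrative assumptions, not the paper's radix ladder or its controller.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Modular inverse of a mod m via extended Euclid; assumes gcd(a, m) == 1.
static int64_t inv_mod(int64_t a, int64_t m) {
    int64_t old_r = a % m, r = m, old_s = 1, s = 0;
    while (r != 0) {
        int64_t q = old_r / r, t;
        t = old_r - q * r; old_r = r; r = t;
        t = old_s - q * s; old_s = s; s = t;
    }
    return ((old_s % m) + m) % m;
}

int main() {
    // Residues of the (unknown) value 123456 modulo small pairwise coprime moduli.
    const std::vector<int64_t> moduli = {101, 103, 107, 109, 113};
    std::vector<int64_t> residues;
    for (int64_t m : moduli) residues.push_back(123456 % m);

    // Integer builder: fold residues in one at a time; controller: stop early
    // once the reconstructed value stabilises. For these inputs the running
    // product of moduli fits comfortably in 64 bits.
    int64_t value = residues[0], modulus = moduli[0];
    for (size_t i = 1; i < moduli.size(); ++i) {
        int64_t prev = value;
        int64_t m = moduli[i];
        int64_t diff = ((residues[i] - value % m) % m + m) % m;     // target - current, mod m
        value += modulus * ((diff * inv_mod(modulus % m, m)) % m);  // lift to modulus * m
        modulus *= m;
        if (value == prev) {                                        // early termination
            std::cout << "stopped after " << i + 1 << " residues\n";
            break;
        }
    }
    std::cout << "reconstructed: " << value << "\n";                // prints 123456
}
```

The early exit plays the role of the controller's termination decision: once folding in further residues no longer changes the value, the remaining residues are redundant for this input.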
Property-Driven Fence Insertion using Reorder Bounded Model Checking
Modern architectures provide weaker memory consistency guarantees than
sequential consistency. These weaker guarantees allow programs to exhibit
behaviours where the program statements appear to have executed out of program
order. Fortunately, modern architectures provide memory barriers (fences) to
enforce the program order between a pair of statements if needed. Due to the
intricate semantics of weak memory models, the placement of fences is
challenging even for experienced programmers. Too few fences lead to bugs
whereas overuse of fences results in performance degradation. This motivates
automated placement of fences. Tools that restore sequential consistency in the
program may insert more fences than necessary for the program to be correct.
Therefore, we propose a property-driven technique that introduces
"reorder-bounded exploration" to identify the smallest number of program
locations for fence placement. We implemented our technique on top of CBMC;
however, in principle, our technique is generic enough to be used with any
model checker. Our experimental results show that our technique is faster and
solves more instances of relevant benchmarks as compared to earlier approaches.
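To make the fence-placement problem concrete, the following sketch shows the classic store-buffering (SB) litmus test in C++. With only relaxed accesses, both threads may read 0, because each store can linger in a store buffer while the subsequent load executes; a full fence between the store and the load in each thread rules that outcome out. This is the textbook motivating example only, not the paper's reorder-bounded exploration or its CBMC-based implementation.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Store-buffering (SB) litmus test. With relaxed accesses only, both threads
// may read 0; the two seq_cst fences forbid that outcome.
std::atomic<int> x{0}, y{0};
int r0 = -1, r1 = -1;

void thread0() {
    x.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);  // remove this fence (and the one
    r0 = y.load(std::memory_order_relaxed);               // below) and r0 == r1 == 0 is allowed
}

void thread1() {
    y.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    r1 = x.load(std::memory_order_relaxed);
}

int main() {
    std::thread t0(thread0), t1(thread1);
    t0.join(); t1.join();
    assert(!(r0 == 0 && r1 == 0));   // holds with the fences in place
}
```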
The best multicore-parallelization refactoring you've never heard of
In this short paper, we explore a new way to refactor a simple but
tricky-to-parallelize tree-traversal algorithm to harness multicore
parallelism. Crucially, the refactoring draws from some classic techniques from
programming-languages research, such as the continuation-passing-style
transform and defunctionalization. The algorithm we consider faces a
particularly acute granularity-control challenge, owing to the wide range of
inputs it has to deal with. Our solution achieves its efficiency through heartbeat scheduling, a recent approach to automatic granularity control. We present our
solution in a series of individually simple refactoring steps, starting from a
high-level, recursive specification of the algorithm. As such, our approach may
prove useful as a teaching tool, and perhaps be used for one-off
parallelizations, as the technique requires no special compiler support.
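The refactoring steps named above can be sketched on a toy example (a tree sum, which is not the algorithm considered in the paper): direct recursion is rewritten in continuation-passing style and then defunctionalized, so that the pending work becomes an explicit data structure a scheduler could split among cores when, say, a heartbeat fires.

```cpp
#include <iostream>
#include <vector>

struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
};

// Step 0: the high-level recursive specification.
long sum_rec(Node* n) {
    if (!n) return 0;
    return n->value + sum_rec(n->left) + sum_rec(n->right);
}

// Steps 1-2: after the continuation-passing-style transform, adding an
// accumulator lets every continuation be defunctionalized down to "a subtree
// still to visit"; the implicit call stack becomes an explicit worklist.
long sum_defun(Node* root) {
    std::vector<Node*> pending;           // defunctionalized continuations
    long acc = 0;
    if (root) pending.push_back(root);
    while (!pending.empty()) {
        Node* n = pending.back();
        pending.pop_back();
        acc += n->value;
        if (n->right) pending.push_back(n->right);
        if (n->left)  pending.push_back(n->left);
    }
    return acc;
}

int main() {
    // Tiny unbalanced tree: 1 -> (2 -> (4, -), 3)
    Node n4{4}, n3{3}, n2{2, &n4, nullptr}, n1{1, &n2, &n3};
    std::cout << sum_rec(&n1) << " == " << sum_defun(&n1) << "\n";   // 10 == 10
}
```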
A wide-spectrum language for verification of programs on weak memory models
Modern processors deploy a variety of weak memory models, which for
efficiency reasons may (appear to) execute instructions in an order different
to that specified by the program text. The consequences of instruction
reordering can be complex and subtle, and can make correctness difficult to ensure.
Previous work on the semantics of weak memory models has focussed on the
behaviour of assembler-level programs. In this paper we utilise that work to
extract some general principles underlying instruction reordering, and apply
those principles to a wide-spectrum language encompassing abstract data types
as well as low-level assembler code. The goal is to support reasoning about
implementations of data structures for modern processors with respect to an
abstract specification.
Specifically, we define an operational semantics, from which we derive some
properties of program refinement, and encode the semantics in the rewriting
engine Maude as a model-checking tool. The tool is used to validate the
semantics against the behaviour of a set of litmus tests (small assembler
programs) run on hardware, and also to model check implementations of data
structures from the literature against their abstract specifications.
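For readers unfamiliar with litmus tests, here is a minimal C++ rendering of one of the most common ones, message passing (MP): a writer publishes data and then raises a flag, and the reader must not observe the flag without also observing the data. The release/acquire annotations below make the assertion hold; this fragment is illustrative only and is unrelated to the paper's wide-spectrum language or its Maude encoding.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                        // plain, non-atomic payload
std::atomic<bool> flag{false};

void writer() {
    data = 42;
    flag.store(true, std::memory_order_release);      // publish after the data
}

void reader() {
    while (!flag.load(std::memory_order_acquire)) {}  // spin until published
    // Guaranteed by the release/acquire pair; with relaxed accesses the
    // reader could observe the flag and still see stale data.
    assert(data == 42);
}

int main() {
    std::thread w(writer), r(reader);
    w.join(); r.join();
}
```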
Shared Memory Parallel Subgraph Enumeration
The subgraph enumeration problem asks us to find all subgraphs of a target
graph that are isomorphic to a given pattern graph. Determining whether even
one such isomorphic subgraph exists is NP-complete---and therefore finding all
such subgraphs (if they exist) is a time-consuming task. Subgraph enumeration
has applications in many fields, including biochemistry and social networks,
and interestingly the fastest algorithms for solving the problem for
biochemical inputs are sequential. Since they depend on depth-first tree
traversal, an efficient parallelization is far from trivial. Nevertheless,
since important applications produce data sets with increasing difficulty,
parallelism seems beneficial.
We thus present here a shared-memory parallelization of the state-of-the-art
subgraph enumeration algorithms RI and RI-DS (a variant of RI for dense graphs)
by Bonnici et al. [BMC Bioinformatics, 2013]. Our strategy uses work stealing
and our implementation demonstrates a significant speedup on real-world
biochemical data---despite a highly irregular data access pattern. We also
improve RI-DS by pruning the search space better; this further improves the
empirical running times compared to the already highly tuned RI-DS.
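The depth-first structure that makes parallelization non-trivial is already visible in the simplest backtracking matcher. The sketch below (plain sequential C++, not RI or RI-DS, and without their vertex-ordering and pruning heuristics) counts embeddings of a pattern graph in a target graph; each recursive call is a node of the highly irregular search tree that the work-stealing parallelization has to distribute.

```cpp
#include <cstdio>
#include <vector>

// Adjacency-matrix graphs: adj[u][v] == true iff the edge {u, v} exists.
using Graph = std::vector<std::vector<bool>>;

// Depth-first backtracking: try to map pattern vertex `depth` onto every
// compatible target vertex, then recurse.
void enumerate(const Graph& pattern, const Graph& target,
               std::vector<int>& map, std::vector<bool>& used, long& count) {
    size_t depth = map.size();
    if (depth == pattern.size()) { ++count; return; }    // complete embedding
    for (int v = 0; v < (int)target.size(); ++v) {
        if (used[v]) continue;
        bool ok = true;
        for (size_t u = 0; u < depth; ++u)               // edges to already-mapped vertices
            if (pattern[depth][u] && !target[v][map[u]]) { ok = false; break; }
        if (!ok) continue;
        map.push_back(v); used[v] = true;
        enumerate(pattern, target, map, used, count);
        map.pop_back(); used[v] = false;                 // backtrack
    }
}

int main() {
    // Pattern: a triangle. Target: K4. Four triangles, each found in 3! = 6
    // vertex orderings, so 24 embeddings in total.
    Graph tri(3, std::vector<bool>(3, true)), k4(4, std::vector<bool>(4, true));
    for (int i = 0; i < 3; ++i) tri[i][i] = false;
    for (int i = 0; i < 4; ++i) k4[i][i] = false;
    std::vector<int> map;
    std::vector<bool> used(4, false);
    long count = 0;
    enumerate(tri, k4, map, used, count);
    std::printf("embeddings: %ld\n", count);             // embeddings: 24
}
```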
Lace: non-blocking split deque for work-stealing
Work-stealing is an efficient method to implement load balancing in fine-grained task parallelism. Typically, concurrent deques are used for this purpose. A disadvantage of many concurrent deques is that they require expensive memory fences for local deque operations.

In this paper, we propose a new non-blocking work-stealing deque based on the split task queue. Our design uses a dynamic split point between the shared and the private portions of the deque, and only requires memory fences when shrinking the shared portion.

We present Lace, an implementation of work-stealing based on this deque, with an interface similar to the work-stealing library Wool, and an evaluation of Lace based on several common benchmarks. We also implement a recent approach using private deques in Lace. We show that the split deque and the private deque in Lace have low overhead and high scalability similar to Wool's.
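A much-simplified, lock-based C++ illustration of the split idea follows; it is not Lace's non-blocking algorithm. The private part of the queue is touched only by the owner and needs no synchronization at all, while stealing and moving the split point go through a lock. In Lace, the analogous costly step, and the only place a memory fence is needed, is shrinking the shared portion.

```cpp
#include <cstdio>
#include <mutex>
#include <optional>
#include <vector>

// Fixed array with indices head <= split <= tail (capacity checks omitted).
// tasks[head, split) is the shared part, stolen from the head under the lock;
// tasks[split, tail) is private to the owner and needs no synchronization.
// Only the owner ever moves `split`.
class SplitQueue {
    std::vector<int> tasks;            // ints stand in for real task descriptors
    size_t head = 0, split = 0, tail = 0;
    std::mutex lock;                   // guards head, split and the shared part

public:
    explicit SplitQueue(size_t capacity) : tasks(capacity) {}

    void push(int task) {              // owner only: private part, no lock
        tasks[tail++] = task;
    }

    void grow_shared() {               // owner only: expose private tasks to thieves
        std::lock_guard<std::mutex> g(lock);
        split = tail;
    }

    std::optional<int> pop() {         // owner only
        if (tail > split)              // private part non-empty: no lock needed
            return tasks[--tail];
        std::lock_guard<std::mutex> g(lock);
        if (head == split) return std::nullopt;   // nothing left to reclaim
        --split; --tail;               // shrink the shared part by one task
        return tasks[tail];
    }

    std::optional<int> steal() {       // thieves
        std::lock_guard<std::mutex> g(lock);
        if (head == split) return std::nullopt;
        return tasks[head++];
    }
};

int main() {
    SplitQueue q(8);
    q.push(1); q.push(2); q.push(3);
    q.grow_shared();                   // all three become visible to thieves
    auto s = q.steal();                // a thief takes task 1
    q.push(4);                         // back to unsynchronized private pushes
    auto p = q.pop();                  // owner takes task 4 without the lock
    std::printf("stolen=%d popped=%d\n", *s, *p);   // stolen=1 popped=4
}
```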
Well-Structured Futures and Cache Locality
In fork-join parallelism, a sequential program is split into a directed
acyclic graph of tasks linked by directed dependency edges, and the tasks are
executed, possibly in parallel, in an order consistent with their dependencies.
A popular and effective way to extend fork-join parallelism is to allow threads
to create futures. A thread creates a future to hold the results of a
computation, which may or may not be executed in parallel. That result is
returned when some thread touches that future, blocking if necessary until the
result is ready.
Recent research has shown that while futures can, of course, enhance
parallelism in a structured way, they can have a deleterious effect on cache
locality. In the worst case, futures can incur $\Omega(PT_\infty + tT_\infty)$ deviations, which implies $\Omega(CPT_\infty + CtT_\infty)$ additional cache misses, where $C$ is the number of cache lines, $P$ is the number of processors, $t$ is the number of touches, and $T_\infty$ is the \emph{computation span}. Since cache locality has a large impact on software
performance on modern multicores, this result is troubling.
In this paper, however, we show that if futures are used in a simple,
disciplined way, then the situation is much better: if each future is touched
only once, either by the thread that created it, or by a thread to which the
future has been passed from the thread that created it, then parallel
executions with work stealing can incur at most $O(CPT_\infty^2)$ additional
cache misses, a substantial improvement. This structured use of futures is
characteristic of many (but not all) parallel applications.
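The single-touch discipline is easy to state in code. The sketch below uses std::async as a stand-in for a work-stealing runtime's futures: every future is created by one thread and touched exactly once, by that same thread, which is the structured usage pattern the improved bound applies to.

```cpp
#include <cstddef>
#include <future>
#include <iostream>
#include <vector>

// Divide-and-conquer sum in which every future is touched exactly once, by
// the thread that created it: the single-touch, well-structured pattern.
long sum(const std::vector<long>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo <= 1024) {                      // small range: stay sequential
        long s = 0;
        for (std::size_t i = lo; i < hi; ++i) s += v[i];
        return s;
    }
    std::size_t mid = lo + (hi - lo) / 2;
    std::future<long> left =                    // future for the left half
        std::async(std::launch::async, sum, std::cref(v), lo, mid);
    long right = sum(v, mid, hi);               // right half done locally
    return left.get() + right;                  // the single touch of the future
}

int main() {
    std::vector<long> v(1 << 16, 1);
    std::cout << sum(v, 0, v.size()) << "\n";   // prints 65536
}
```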
Defining correctness conditions for concurrent objects in multicore architectures
Correctness of concurrent objects is defined in terms of conditions that determine allowable relationships between histories of a concurrent object and those of the corresponding sequential object. Numerous correctness conditions have been proposed over the years, and more have been proposed recently as the algorithms implementing concurrent objects have been adapted to cope with multicore processors with relaxed memory architectures. We present a formal framework for defining correctness conditions for multicore architectures, covering both standard conditions for totally ordered memory and newer conditions for relaxed
memory, which allows them to be expressed in a uniform manner, simplifying comparison. Our framework distinguishes between order and commitment properties, which in turn enables a hierarchy of correctness conditions to be established. We consider the Total Store Order (TSO) memory model in detail, formalise known conditions for TSO using our framework, and develop sequentially consistent variations of these. We present a work-stealing deque for TSO memory that is not linearizable, but is correct with respect to these new conditions. Using our framework, we identify a new non-blocking compositional condition, fence consistency, which lies between known conditions for TSO, and aims to capture the intention of a programmer-specified fence.
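The need for TSO-specific conditions can be illustrated with a tiny operational model of store buffering (an illustrative sketch, not the paper's formal framework): each thread's writes drain lazily from a private buffer into shared memory, allowing an outcome that no interleaving of completed operations could produce, which is precisely the kind of behaviour linearizability cannot account for.

```cpp
#include <iostream>
#include <map>
#include <string>

// Toy TSO model: writes go to a per-thread store buffer and drain to shared
// memory later; reads consult the own buffer first (store forwarding), then
// memory. The run below produces r0 == r1 == 0, which is impossible under
// sequential consistency.
struct Tso {
    std::map<std::string, int> memory;
    std::map<std::string, int> buffer[2];        // one store buffer per thread

    void write(int tid, const std::string& x, int v) { buffer[tid][x] = v; }
    int  read (int tid, const std::string& x) {
        auto it = buffer[tid].find(x);           // own buffer first
        return it != buffer[tid].end() ? it->second : memory[x];
    }
    void flush(int tid) {                        // drain the buffer to memory
        for (auto& [x, v] : buffer[tid]) memory[x] = v;
        buffer[tid].clear();
    }
};

int main() {
    Tso m;
    m.write(0, "x", 1);          // thread 0: x = 1   (still buffered)
    m.write(1, "y", 1);          // thread 1: y = 1   (still buffered)
    int r0 = m.read(0, "y");     // thread 0 reads y == 0 from memory
    int r1 = m.read(1, "x");     // thread 1 reads x == 0 from memory
    m.flush(0); m.flush(1);
    std::cout << "r0=" << r0 << " r1=" << r1 << "\n";   // r0=0 r1=0
}
```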