Exploiting Monotone Convergence Functions in Parallel Programs
Scientific codes which use iterative methods are often difficult to
parallelize well. Such codes usually contain \code{while} loops which
iterate until they converge upon the solution. Problems arise since
the number of iterations cannot be determined at compile time, and
tests for termination usually require a global reduction and an
associated barrier. We present a method which allows us to avoid
performing global barriers and to exploit pipelined parallelism when
processors can detect non-convergence from local information.
(Also cross-referenced as UMIACS-TR-96-31.1)
On the space-time mapping of WHILE-loops
ABSTRACT: A WHILE-loop can be viewed as a FOR-loop with a dynamic upper bound. The computational model of convex polytopes is useful for the automatic parallelization of FOR-loops. We investigate its potential for the parallelization of WHILE-loops.

1. WHILE-loops as FOR-loops

We denote a FOR-loop as follows:

    FOR index := lower bound TO upper bound DO body

The step size (also called stride) of a FOR-loop is +1. (A FOR-loop with a different stride can easily be transformed to one with stride +1.) If the upper bound of the FOR-loop is smaller than the lower bound, the loop defines the empty statement. A WHILE-loop is commonly denoted as follows:

    WHILE condition DO body

One can view a WHILE-loop as a generalized FOR-loop, with a conditional upper bound that is re-evaluated after every iteration:

    FOR new index := 0 TO (IF condition THEN new index ELSE new index - 1) DO body

Here, new index is a new index variable. The upper bound of the loop is incremented at each iteration. When the condition is found to be violated, the upper bound is reduced to cause termination. We shall use the following syntax for a WHILE-loop written as a FOR-loop