The preprocessed doacross loop
Dependencies between loop iterations cannot always be characterized during program compilation. Doacross loops typically rely on a priori knowledge of inter-iteration dependencies to carry out the required synchronizations. A type of doacross loop is proposed that allows loop iterations to be scheduled among processors without advance knowledge of inter-iteration dependencies. The proposed method requires that parallelizable preprocessing and postprocessing steps be carried out during program execution.
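A minimal sketch of the flavor of run-time preprocessing such a loop relies on, under the simplifying assumption that each iteration i writes a single array location write_index[i] (the names and the single-write assumption are illustrative, not taken from the paper; the paper's preprocessing step is itself parallelizable, whereas this version is sequential):

    def preprocess(write_index, n_iters):
        # Record, for each iteration, the most recent earlier iteration that
        # writes the same location; -1 means no cross-iteration dependence.
        last_writer = {}              # array location -> latest writing iteration
        wait_for = [-1] * n_iters
        for i in range(n_iters):
            loc = write_index[i]
            if loc in last_writer:
                wait_for[i] = last_writer[loc]
            last_writer[loc] = i
        return wait_for

During execution, iteration i would then wait (for example, on a per-iteration completion flag) until iteration wait_for[i] has finished before running its body.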
Run-time parallelization and scheduling of loops
The class of problems that can be effectively compiled by parallelizing compilers is discussed. This is accomplished with the doconsider construct, which would allow these compilers to parallelize many problems in which substantial loop-level parallelism is available but cannot be detected by standard compile-time analysis. We describe and experimentally analyze mechanisms used to parallelize the work required for these types of loops. In each of these methods, a new loop structure is produced by modifying the loop to be parallelized. We also present the rules by which these loop transformations may be automated so that they can be included in language compilers. The main application area of the research is problems in scientific computation and engineering. The workload used in our experiments includes a mixture of real problems as well as synthetically generated inputs. From our extensive tests on the Encore Multimax/320, we conclude that for the types of workloads we have investigated, self-execution almost always performs better than pre-scheduling. Further, the performance improvement gained from global topological sorting of indices, as opposed to the less expensive local sorting, is not very significant in the case of self-execution.
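To make the distinction between the two scheduling strategies concrete, the following is a hedged sketch with illustrative names (pre_schedule, self_execute, body), not the paper's code: pre-scheduling assigns iterations to workers statically before the loop runs, while self-execution lets each worker claim the next iteration from a shared counter at run time.

    import threading

    def pre_schedule(n_iters, n_workers):
        # Pre-scheduling: iterations are statically dealt out to workers
        # before execution begins.
        return [list(range(w, n_iters, n_workers)) for w in range(n_workers)]

    def self_execute(n_iters, n_workers, body):
        # Self-execution: each worker grabs the next iteration index from a
        # shared counter at run time, balancing load dynamically.
        counter = [0]
        lock = threading.Lock()

        def worker():
            while True:
                with lock:
                    i = counter[0]
                    counter[0] += 1
                if i >= n_iters:
                    return
                body(i)

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()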
Run-time parallelization and scheduling of loops
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis; at run time, wavefronts of concurrently executable loop iterations are identified, and loop iterations are reordered for increased parallelism using this wavefront information. Symbolic transformation rules are used to produce inspector procedures, which perform the execution-time preprocessing, and executors, the transformed versions of the source loop structures that carry out the calculations planned by the inspectors. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
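A rough sketch of the inspector's wavefront computation under simplifying assumptions (dep[i] lists the earlier iterations that iteration i depends on; the names are illustrative, not the paper's transformation output):

    def build_wavefronts(dep, n_iters):
        # Assign each iteration the smallest wavefront number consistent with
        # its dependences; iterations in the same wavefront can run concurrently.
        level = [0] * n_iters
        for i in range(n_iters):
            for j in dep[i]:                  # j < i by construction
                level[i] = max(level[i], level[j] + 1)
        waves = {}
        for i, l in enumerate(level):
            waves.setdefault(l, []).append(i)
        return [waves[l] for l in sorted(waves)]

The executor would then run the loop body one wavefront at a time, executing the iterations within a wavefront in parallel and synchronizing between wavefronts.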
Run-time scheduling and execution of loops on message passing machines
Sparse system solvers and general-purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible; crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways, and naive implementations of programs requiring general loop partitions can be extremely inefficient on distributed memory machines. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data in an optimized manner. The design is presented for a general preprocessor and schedule executor whose structures do not vary, even though the details of the computation and of the type of information are problem dependent.
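The flavor of this preprocessing can be sketched as follows, with assumed names and a much-simplified interface (the preprocessor described here also builds the matching send lists and the execution template, which the sketch omits):

    def build_schedule(referenced, owner, my_rank):
        # Given the global indices this processor will reference and a map
        # from global index to owning processor, build a local numbering plus,
        # for each remote processor, the list of off-processor elements to
        # fetch before the loop executes.
        local_id = {}        # global index -> local buffer slot
        recv_from = {}       # remote rank -> global indices to request
        for g in referenced:
            if g in local_id:
                continue
            local_id[g] = len(local_id)
            p = owner[g]
            if p != my_rank:
                recv_from.setdefault(p, []).append(g)
        return local_id, recv_from

At run time, the recv_from lists are exchanged once, the corresponding data are gathered into the local buffer slots, and the loop is executed over the local numbering.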
Krylov methods preconditioned with incompletely factored matrices on the CM-2
The performance of the components of the key iterative kernel of a preconditioned Krylov-space iterative linear system solver is measured. In some sense, these numbers can be regarded as best-case timings for these kernels. Sweeps over meshes, sparse triangular solves, and inner products were timed on a large 3-D model problem over a cube-shaped domain discretized with a seven-point template. The performance of the CM-2 is highly dependent on the use of very specialized programs that map a regular problem domain onto the processor topology in a careful manner and use the optimized local NEWS communications network. The rather dramatic deterioration in performance when these ideal conditions no longer apply is documented. A synthetic workload generator was developed to produce and solve a parameterized family of increasingly irregular problems.
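For reference, the kernel classes being timed correspond to the pieces of a standard preconditioned conjugate-gradient iteration. The sketch below is generic rather than the CM-2 code: matvec stands for the mesh sweep, and lower_solve and upper_solve stand for the two sparse triangular solves that apply an incomplete factorization as the preconditioner.

    import numpy as np

    def pcg(matvec, lower_solve, upper_solve, b, tol=1e-8, max_it=200):
        x = np.zeros_like(b)
        r = b - matvec(x)
        z = upper_solve(lower_solve(r))   # preconditioner: two triangular solves
        p = z.copy()
        rz = r @ z                        # inner product
        for _ in range(max_it):
            Ap = matvec(p)                # sweep over the mesh (SpMV)
            alpha = rz / (p @ Ap)         # inner product
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = upper_solve(lower_solve(r))
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x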
A scheme for supporting automatic data migration on multicomputers
A data migration mechanism is proposed that allows an explicit and controlled mapping of data to memory. While read or write copies of each data element can be assigned to any processor's memory, longer-term storage of each data element is assigned to a specific location in the memory of a particular processor. Data are presented suggesting that the scheme may be a practical method for efficiently supporting data migration.
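The basic shape of the mapping can be illustrated with a small sketch (assumed names and a deliberately simplified consistency model; this is not the proposed mechanism):

    class MigratingStore:
        def __init__(self, home):
            self.home = home                            # element -> home processor
            self.home_mem = {p: {} for p in set(home.values())}  # long-term storage
            self.copies = {}                            # (element, processor) -> value

        def read(self, elem, proc):
            key = (elem, proc)
            if key not in self.copies:                  # fetch a copy from the home
                self.copies[key] = self.home_mem[self.home[elem]].get(elem)
            return self.copies[key]

        def write(self, elem, proc, val):
            self.copies[(elem, proc)] = val             # update the local copy

        def flush(self, proc):
            for (e, p), v in list(self.copies.items()):
                if p == proc:                           # migrate copies back home
                    self.home_mem[self.home[e]][e] = v
                    del self.copies[(e, p)]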
Analyzing and enhancing OSKI for sparse matrix-vector multiplication
Sparse matrix-vector multiplication (SpMxV) is a kernel operation widely used in iterative linear solvers, where the same sparse matrix is multiplied by a dense vector repeatedly. Matrices with irregular sparsity patterns make it difficult to exploit cache locality effectively in SpMxV computations. In this work, we investigate single- and multiple-SpMxV frameworks for exploiting cache locality in SpMxV computations. For the single-SpMxV framework, we propose two cache-size-aware top-down row/column-reordering methods based on 1D and 2D sparse matrix partitioning, utilizing the column-net and enhancing the row-column-net hypergraph models of sparse matrices. The multiple-SpMxV framework depends on splitting a given matrix into a sum of multiple nonzero-disjoint matrices so that the SpMxV operation is performed as a sequence of multiple input- and output-dependent SpMxV operations. For the effective matrix splitting required in this framework, we propose a cache-size-aware top-down approach based on 2D sparse matrix partitioning utilizing the row-column-net hypergraph model. The primary objective in all three methods is to maximize the exploitation of temporal locality. We evaluate the validity of our models and methods on a wide range of sparse matrices by performing actual runs using OSKI. Experimental results show that the proposed methods and models outperform state-of-the-art schemes.
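The structure of the multiple-SpMxV framework can be illustrated with a short sketch; the split below is a naive row-block split standing in only for the cache-size-aware hypergraph partitioning proposed here, and OSKI is not involved.

    import numpy as np
    from scipy.sparse import csr_matrix

    def split_by_rows(A, k):
        # Split A into k nonzero-disjoint matrices of the same shape, each
        # holding a contiguous block of rows.
        n = A.shape[0]
        bounds = np.linspace(0, n, k + 1, dtype=int)
        coo = A.tocoo()
        parts = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            mask = (coo.row >= lo) & (coo.row < hi)
            parts.append(csr_matrix((coo.data[mask],
                                     (coo.row[mask], coo.col[mask])),
                                    shape=A.shape))
        return parts

    def multiple_spmxv(parts, x):
        # y = A*x accumulated as a sequence of SpMxV operations on
        # nonzero-disjoint pieces sharing the input and output vectors.
        y = np.zeros(parts[0].shape[0])
        for Ak in parts:
            y += Ak @ x
        return y

Because the pieces are nonzero-disjoint, the accumulation reproduces y = Ax exactly; the role of the cache-size-aware partitioning is to choose pieces whose working sets of x and y entries stay cache-resident.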