
    Parallelizing the Sparse Matrix Transposition: Reducing the Programmer Effort Using Transactional Memory

    Abstract

    This work discusses the parallelization of an irregular scientific code, the transposition of a sparse matrix, comparing two multithreaded strategies on a multicore platform: a programmer-optimized parallelization and a semi-automatic parallelization using transactional memory (TM) support. Sparse matrix transposition features an irregular memory access pattern that depends on the input matrix, so its data dependencies cannot be known before execution. This situation demands a significant effort from the parallel programmer to develop an optimized parallel version of the code. The aim of this paper is to show how TM may greatly simplify the programmer's work in parallelizing the code while still obtaining a parallel version that is competitive in terms of performance. To this end, a TM solution intended to exploit concurrency from sequential programs has been developed by adding a fully distributed transaction commit manager to a well-known STM system. This manager is in charge of ordering transaction commits when required in order to preserve data dependencies.
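    The abstract does not include code, but the irregularity it refers to can be illustrated with a minimal sequential sketch of the underlying algorithm: transposing a matrix stored in CSR format. The struct layout and field names below (csr_t, row_ptr, col_idx, val) are illustrative assumptions, not the paper's implementation; the point is that the destination of each nonzero depends on the input's column indices, which is only known at run time.

    #include <stdlib.h>

    /* Minimal CSR sparse matrix (illustrative layout):
     * row_ptr has n_rows+1 entries, col_idx/val have nnz entries. */
    typedef struct {
        int n_rows, n_cols, nnz;
        int *row_ptr;
        int *col_idx;
        double *val;
    } csr_t;

    /* Sequential CSR transpose.  The write position of each nonzero
     * depends on the column indices of the input, so the access pattern
     * is irregular and unknown at compile time. */
    csr_t csr_transpose(const csr_t *A)
    {
        csr_t T = { A->n_cols, A->n_rows, A->nnz, NULL, NULL, NULL };
        T.row_ptr = calloc(T.n_rows + 1, sizeof(int));
        T.col_idx = malloc(T.nnz * sizeof(int));
        T.val     = malloc(T.nnz * sizeof(double));

        /* Count nonzeros per column of A (= per row of T). */
        for (int k = 0; k < A->nnz; k++)
            T.row_ptr[A->col_idx[k] + 1]++;

        /* Prefix sum yields the row pointers of T. */
        for (int r = 0; r < T.n_rows; r++)
            T.row_ptr[r + 1] += T.row_ptr[r];

        /* Scatter phase: the per-row counter next[j] is shared state.
         * If the outer loop over rows of A were run by several threads,
         * updates to next[j] and the scattered writes would conflict;
         * this is the kind of input-dependent conflict that either a
         * hand-crafted scheme or a TM runtime must resolve. */
        int *next = calloc(T.n_rows, sizeof(int));
        for (int i = 0; i < A->n_rows; i++) {
            for (int k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++) {
                int j = A->col_idx[k];
                int dst = T.row_ptr[j] + next[j]++;
                T.col_idx[dst] = i;
                T.val[dst] = A->val[k];
            }
        }
        free(next);
        return T;
    }

    In a TM-based parallelization along the lines described above, the body of the inner loop (the update of next[j] and the two scattered writes) would be executed inside a transaction, with the commit manager ordering commits only when dependencies require it; the exact transaction granularity used in the paper is not stated in this abstract.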