
    Topology-induced Enhancement of Mappings

    In this paper we propose a new method to enhance a mapping $\mu(\cdot)$ of a parallel application's computational tasks to the processing elements (PEs) of a parallel computer. The idea behind our method is to enhance such a mapping by drawing on the observation that many topologies take the form of a partial cube. This class of graphs includes all rectangular and cubic meshes, any such torus with even extensions in each dimension, all hypercubes, and all trees. Following previous work, we represent the parallel application and the parallel computer by graphs $G_a = (V_a, E_a)$ and $G_p = (V_p, E_p)$, respectively. $G_p$ being a partial cube allows us to label its vertices, the PEs, by bitvectors such that the cost of exchanging one unit of information between two vertices $u_p$ and $v_p$ of $G_p$ amounts to the Hamming distance between the labels of $u_p$ and $v_p$. By transferring these bitvectors from $V_p$ to $V_a$ via $\mu^{-1}(\cdot)$ and extending them to be unique on $V_a$, we can enhance $\mu(\cdot)$ by swapping labels of $V_a$ in a new way. Pairs of swapped labels are local with respect to the PEs, but not with respect to $G_a$. Moreover, permutations of the bitvectors' entries give rise to a plethora of hierarchies on the PEs. Through these hierarchies we turn our method into a hierarchical one for improving $\mu(\cdot)$ that is complementary to state-of-the-art methods for computing $\mu(\cdot)$ in the first place. In our experiments we use our method to enhance mappings of complex networks onto rectangular meshes and tori with 256 and 512 nodes, as well as onto hypercubes with 256 nodes. It turns out that common quality measures of mappings derived from state-of-the-art algorithms can be improved considerably.
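
    To make the cost model concrete, the following is a minimal sketch (not the authors' implementation; all function and variable names are illustrative) of how a mapping's communication cost can be evaluated once the PEs carry partial-cube bitvector labels: the cost of one unit of communication between two PEs is the Hamming distance of their labels, so the cost of a mapping $\mu(\cdot)$ is the weighted sum of these distances over the edges of $G_a$.

```python
# Sketch under the partial-cube assumption: each PE of G_p has a bitvector
# label such that the Hamming distance between two labels equals the cost
# of exchanging one unit of information between the corresponding PEs.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two bitvector labels stored as integers."""
    return bin(a ^ b).count("1")

def mapping_cost(app_edges, mu, pe_label) -> int:
    """
    app_edges: iterable of (u, v, w) weighted edges of the application graph G_a
    mu:        dict mapping each task in V_a to a PE in V_p
    pe_label:  dict mapping each PE in V_p to its partial-cube bitvector label
    Returns the weighted sum of Hamming distances induced by the mapping mu.
    """
    return sum(w * hamming(pe_label[mu[u]], pe_label[mu[v]])
               for u, v, w in app_edges)

# Example: a 2x2 mesh is a partial cube; labeling each PE with the
# concatenation of its row and column bits makes neighbouring PEs differ
# in exactly one bit, and diagonally opposite PEs in two bits.
pe_label = {(0, 0): 0b00, (0, 1): 0b01, (1, 0): 0b10, (1, 1): 0b11}
mu = {"a": (0, 0), "b": (0, 1), "c": (1, 1)}          # task -> PE
app_edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 1)]
print(mapping_cost(app_edges, mu, pe_label))           # 1*1 + 2*1 + 1*2 = 5
```

    A swap-based refinement in the spirit described above would repeatedly exchange the labels of two application vertices and keep the exchange whenever this cost decreases; the concrete swap strategy and the hierarchies induced by permuting bitvector entries are specific to the paper's method and are not reproduced here.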