
    Using Symmetry to Schedule Classical Matrix Multiplication

    Presented with a new machine with a specific interconnect topology, algorithm designers use intuition about the symmetry of the algorithm to design time- and communication-efficient schedules that map the algorithm onto the machine. Is there a systematic procedure for designing such schedules? We present a new technique for designing schedules for algorithms with no non-trivial dependencies, focusing on the classical matrix multiplication algorithm. We model the symmetry of an algorithm with instruction set X as the action of the group formed by compositions of bijections from X to itself. We model the machine as the action of the group N × Δ, where N and Δ represent the interconnect topology and time increments respectively, on the set P × T of processors iterated over time steps. We model schedules as symmetry-preserving equivariant maps between the set X, equipped with a subgroup of its symmetry group, and the set P × T, equipped with the symmetry N × Δ. Such equivariant maps are the solutions of a set of algebraic equations involving group homomorphisms. We associate time and communication costs with the solutions to these equations. We solve these equations for the classical matrix multiplication algorithm and show that the equivariant maps correspond to time- and communication-efficient schedules for many topologies. We recover well-known variants, including Cannon's algorithm and the communication-avoiding "2.5D" algorithm for toroidal interconnects, systolic computation for planar hexagonal VLSI arrays, recursive algorithms for fat-trees, the cache-oblivious algorithm for the ideal cache model, and the space-bounded schedule for the parallel memory hierarchy model. This suggests that the design of a schedule for a new class of machines can be motivated by solutions to algebraic equations.
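
    As an illustration of the kind of equivariant map the abstract describes, the following is a minimal Python sketch (names such as schedule and rho are illustrative, not taken from the paper) of a Cannon-style schedule on a p × p torus: each instruction (i, j, k), i.e. the update C[i,j] += A[i,k]*B[k,j], is mapped to a (processor, time) pair, and a brute-force check confirms that the map commutes with cyclic translations of the index cube through an induced group homomorphism.

        from itertools import product

        p = 4  # torus side length (kept small so the exhaustive check below is cheap)

        def schedule(i, j, k):
            # Cannon-style schedule: instruction (i, j, k) runs on processor
            # (i mod p, j mod p) at time step (k - i - j) mod p.
            return (i % p, j % p), (k - i - j) % p

        def rho(a, b, c):
            # Induced homomorphism from translations (a, b, c) of the index cube
            # to machine symmetries: a torus translation plus a time shift.
            return (a % p, b % p), (c - a - b) % p

        def act_machine(g, pt):
            # Action of a machine symmetry g on a (processor, time) pair.
            (da, db), dt = g
            (pi, pj), t = pt
            return ((pi + da) % p, (pj + db) % p), (t + dt) % p

        # Equivariance: schedule(g . x) == rho(g) . schedule(x) for all g and x.
        assert all(
            schedule(i + a, j + b, k + c)
            == act_machine(rho(a, b, c), schedule(i, j, k))
            for i, j, k, a, b, c in product(range(p), repeat=6)
        )

    The homomorphism rho here plays the role of the algebraic data that, in the abstract's terms, solves the equivariance equations for the toroidal topology; different topologies would call for different machine groups and hence different homomorphisms.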