
    Group-theoretic algorithms for matrix multiplication

    We further develop the group-theoretic approach to fast matrix multiplication introduced by Cohn and Umans, and for the first time use it to derive algorithms asymptotically faster than the standard algorithm. We describe several families of wreath product groups that achieve a matrix multiplication exponent less than 3, the asymptotically fastest of which achieves exponent 2.41. We present two conjectures regarding specific improvements, one combinatorial and the other algebraic. Either one would imply that the exponent of matrix multiplication is 2. Comment: 10 pages

    Group-theoretic algorithms for matrix multiplication

    The exponent of matrix multiplication is the smallest real number ω such that for all ε>0, O(n^(ω+ε)) arithmetic operations suffice to multiply two n×n matrices. The standard algorithm for matrix multiplication shows that ω≤3. Strassen's remarkable result [5] shows that ω≤2.81, and a sequence of further works culminating in the work of Coppersmith and Winograd [4] have improved this upper bound to ω≤2.376 (see [1] for a full history). Most researchers believe that in fact ω=2, but there have been no further improvements in the known upper bounds for the past fifteen years. It is known that several central linear algebra problems (for example, computing determinants, solving systems of equations, inverting matrices, computing LUP decompositions) have the same exponent as matrix multiplication, which makes ω a fundamental number for understanding algorithmic linear algebra. In addition, there are non-algebraic algorithms whose complexity is expressed in terms of ω.

    In this talk I will describe a new "group-theoretic" approach, proposed in [3], to devising algorithms for fast matrix multiplication. The basic idea is to reduce matrix multiplication to group algebra multiplication with respect to a suitable non-abelian group. The group algebra multiplication is performed in the Fourier domain, and using this scheme recursively yields upper bounds on ω. This general framework produces nontrivial matrix multiplication algorithms if one can construct finite groups with certain properties. In particular, a very natural embedding of matrix multiplication into C[G]-multiplication is possible when the group G has three subgroups H1, H2, H3 that satisfy the triple product property. I'll define this property and describe a construction that satisfies the triple product property with parameters that are necessary (but not yet sufficient) to achieve ω=2.

    In the next part of the talk I'll describe demands on the representation theory of the groups in order for the overall approach to yield non-trivial bounds on ω, namely, that the character degrees must be "small." Constructing families of groups together with subgroups satisfying the triple product property and for which the character degrees are sufficiently small has turned out to be quite challenging. In [2], we succeed in constructing groups meeting both requirements, resulting in non-trivial algorithms for matrix multiplication in this framework. I'll outline the basic construction, together with more sophisticated variants that achieve the bounds ω<2.48 and ω<2.41.

    In the final part of the talk I'll present two appealing conjectures, one combinatorial and the other algebraic. Either one would imply that the exponent of matrix multiplication is 2.
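
    For reference, the two ingredients named above (the triple product property and the character-degree requirement) can be stated compactly; the following LaTeX is a paraphrase of the definitions and the main bound from [2, 3], so the formulations in those papers should be preferred:

        % Triple product property: subsets S_1, S_2, S_3 of a finite group G, with
        % right quotient sets Q(S) = \{ s s'^{-1} : s, s' \in S \}, satisfy the TPP if
        \[
          q_1 q_2 q_3 = 1, \quad q_i \in Q(S_i) \;\Longrightarrow\; q_1 = q_2 = q_3 = 1 .
        \]
        % If subgroups H_1, H_2, H_3 of orders n, m, p satisfy the TPP, then G realizes
        % the matrix multiplication tensor <n, m, p>, and with d_1, d_2, \dots the
        % character degrees (dimensions of the irreducible representations) of G:
        \[
          (nmp)^{\omega/3} \;\le\; \sum_i d_i^{\,\omega} .
        \]
        % Since \sum_i d_i^2 = |G|, a nontrivial bound requires character degrees that
        % are small relative to |G| -- the "small character degrees" demand above.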

    Search and test algorithms for Triple Product Property triples

    In 2003 Cohn and Umans introduced a group-theoretic approach to fast matrix multiplication. This involves finding large subsets of a group G satisfying the Triple Product Property (TPP) as a means to bound the exponent ω of matrix multiplication. We present two new characterizations of the TPP, which are useful for theoretical considerations and for TPP test algorithms. With this we describe all known TPP tests and implement them as GAP algorithms. We also compare their runtimes. Furthermore we show that the search for subgroup TPP triples of nontrivial size in a nonabelian group can be restricted to the set of all nonnormal subgroups of that group. Finally we describe brute-force search algorithms for maximal subgroup and subset TPP triples. In addition we present the results of the subset brute-force search for all groups of order less than 25 and selected results of the subgroup brute-force search for 2-groups, SL(n,q) and PSL(2,q). Comment: 14 pages, 2 figures, 4 tables; ISSN (Online) 1869-6104, ISSN (Print) 1867-114
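
    As a toy illustration of what a brute-force subset TPP test involves, here is a minimal Python sketch written directly from the standard definition of the TPP; this is not the GAP code from the paper, and the example group, subsets, and helper names are illustrative only:

        # Brute-force Triple Product Property (TPP) test for subsets of a permutation
        # group, following the standard definition: S1, S2, S3 satisfy the TPP if,
        # with Q(S) = { s * t^(-1) : s, t in S }, q1*q2*q3 = identity with qi in Q(Si)
        # forces q1 = q2 = q3 = identity.
        from itertools import product

        def compose(p, q):
            """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
            return tuple(p[i] for i in q)

        def inverse(p):
            """Inverse of a permutation given as a tuple."""
            inv = [0] * len(p)
            for i, v in enumerate(p):
                inv[v] = i
            return tuple(inv)

        def quotient_set(S):
            """Right quotient set Q(S) = { s * t^(-1) : s, t in S }."""
            return {compose(s, inverse(t)) for s in S for t in S}

        def satisfies_tpp(S1, S2, S3, identity):
            """Exhaustive TPP check; exponential in subset sizes, for tiny examples only."""
            Q1, Q2, Q3 = quotient_set(S1), quotient_set(S2), quotient_set(S3)
            for q1, q2, q3 in product(Q1, Q2, Q3):
                if compose(compose(q1, q2), q3) == identity:
                    if not (q1 == identity and q2 == identity and q3 == identity):
                        return False
            return True

        if __name__ == "__main__":
            # Tiny example in S_3 (permutations of {0,1,2}): two order-2 subgroups and
            # the trivial subgroup form a TPP triple, so this prints True.
            e = (0, 1, 2)
            S1 = [e, (1, 0, 2)]
            S2 = [e]
            S3 = [e, (0, 2, 1)]
            print(satisfies_tpp(S1, S2, S3, e))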

    Uniquely Solvable Puzzles and Fast Matrix Multiplication

    In 2003 Cohn and Umans introduced a new group-theoretic framework for fast matrix multiplication, with several conjectures that would imply the matrix multiplication exponent ω is 2. Their methods have been used to match one of the fastest known algorithms, by Coppersmith and Winograd, which runs in O(n^{2.376}) time and implies that ω ≤ 2.376. This thesis discusses the framework developed by Cohn and Umans and presents some new results on constructing combinatorial objects called uniquely solvable puzzles, which were introduced in a 2005 follow-up paper and which play a crucial role in one of the ω = 2 conjectures.
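
    For context, the combinatorial object mentioned above has a short definition (paraphrased here from the 2005 follow-up paper, which also defines a "strong USP" variant used in the ω = 2 conjecture):

        % A puzzle of width k is a subset U of {1,2,3}^k. U is a uniquely solvable
        % puzzle (USP) if for all permutations \pi_1, \pi_2, \pi_3 of U, either
        % \pi_1 = \pi_2 = \pi_3, or there exist u \in U and i \in \{1,\dots,k\} such
        % that at least two of the following hold:
        \[
          (\pi_1(u))_i = 1, \qquad (\pi_2(u))_i = 2, \qquad (\pi_3(u))_i = 3 .
        \]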

    Faster Algorithms for Rectangular Matrix Multiplication

    Let α be the maximal value such that the product of an n × n^α matrix by an n^α × n matrix can be computed with n^{2+o(1)} arithmetic operations. In this paper we show that α > 0.30298, which improves the previous record α > 0.29462 by Coppersmith (Journal of Complexity, 1997). More generally, we construct a new algorithm for multiplying an n × n^k matrix by an n^k × n matrix, for any value k ≠ 1. The complexity of this algorithm is better than that of all known algorithms for rectangular matrix multiplication. In the case of square matrix multiplication (i.e., for k = 1), we recover exactly the complexity of the algorithm by Coppersmith and Winograd (Journal of Symbolic Computation, 1990). These new upper bounds can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. For example, we directly obtain an O(n^{2.5302})-time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, improving over the O(n^{2.575})-time algorithm by Zwick (JACM 2002), and we also improve the time complexity of sparse square matrix multiplication. Comment: 37 pages; v2: some additions in the acknowledgment
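
    For orientation, the quantities in this abstract are tied together by standard notation for rectangular matrix multiplication; the following is recalled from memory rather than taken from the abstract, so the cited papers should be consulted for the precise statements:

        % Write \omega(1, k, 1) for the exponent of multiplying an n x n^k matrix by an
        % n^k x n matrix, i.e. the infimum of \tau such that O(n^{\tau+\epsilon})
        % operations suffice for every \epsilon > 0. Then
        \[
          \alpha = \sup\{\, k : \omega(1, k, 1) = 2 \,\}, \qquad \omega = \omega(1, 1, 1).
        \]
        % Zwick's APSP algorithm for directed graphs with small integer weights runs in
        % \tilde{O}(n^{2+\mu}) time, where \mu satisfies \omega(1, \mu, 1) = 1 + 2\mu;
        % plugging improved rectangular bounds into this relation is what yields the
        % O(n^{2.5302}) figure quoted above.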