60 research outputs found

    Group-theoretic algorithms for matrix multiplication

    We further develop the group-theoretic approach to fast matrix multiplication introduced by Cohn and Umans, and for the first time use it to derive algorithms asymptotically faster than the standard algorithm. We describe several families of wreath product groups that achieve a matrix multiplication exponent less than 3, the asymptotically fastest of which achieves exponent 2.41. We present two conjectures regarding specific improvements, one combinatorial and the other algebraic. Either one would imply that the exponent of matrix multiplication is 2.
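
    For context (not taken from the abstract above): exponent figures like 2.41 are read exactly as in the classical recursive argument, where a bilinear scheme that multiplies d×d matrices using t block multiplications gives ω ≤ log_d t, and an exponent bound of 2.41 means O(n^(2.41+ε)) operations for every ε > 0. Below is a minimal Python sketch of that relation, with Strassen's scheme as the example; the group-theoretic constructions reach their exponents by a different (representation-theoretic) route, so this is background only.

        import math

        # Background sketch (not from the paper): a recursive bilinear algorithm
        # that multiplies d x d matrices using t block multiplications gives the
        # exponent bound omega <= log_d(t).
        def exponent_bound(d, t):
            return math.log(t, d)

        print(exponent_bound(2, 8))  # naive 2x2 blocking: 8 products, bound = 3
        print(exponent_bound(2, 7))  # Strassen: 7 products, bound ~ 2.807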

    Group-theoretic algorithms for matrix multiplication

    The exponent of matrix multiplication is the smallest real number ω such that for all ε > 0, O(n^(ω+ε)) arithmetic operations suffice to multiply two n×n matrices. The standard algorithm for matrix multiplication shows that ω ≤ 3. Strassen's remarkable result [5] shows that ω ≤ 2.81, and a sequence of further works culminating in the work of Coppersmith and Winograd [4] has improved this upper bound to ω ≤ 2.376 (see [1] for a full history). Most researchers believe that in fact ω = 2, but there have been no further improvements in the known upper bounds for the past fifteen years. It is known that several central linear algebra problems (for example, computing determinants, solving systems of equations, inverting matrices, and computing LUP decompositions) have the same exponent as matrix multiplication, which makes ω a fundamental quantity for understanding algorithmic linear algebra. In addition, there are non-algebraic algorithms whose complexity is expressed in terms of ω.

    In this talk I will describe a new "group-theoretic" approach, proposed in [3], to devising algorithms for fast matrix multiplication. The basic idea is to reduce matrix multiplication to group algebra multiplication with respect to a suitable non-abelian group. The group algebra multiplication is performed in the Fourier domain, and applying this scheme recursively yields upper bounds on ω. This general framework produces nontrivial matrix multiplication algorithms if one can construct finite groups with certain properties. In particular, a very natural embedding of matrix multiplication into C[G]-multiplication is possible when the group G has three subgroups H1, H2, H3 that satisfy the triple product property. I'll define this property and describe a construction that satisfies it with parameters that are necessary (but not yet sufficient) to achieve ω = 2.

    In the next part of the talk I'll describe the demands this approach places on the representation theory of the groups in order to yield non-trivial bounds on ω, namely, that the character degrees must be "small." Constructing families of groups together with subgroups satisfying the triple product property, and for which the character degrees are sufficiently small, has turned out to be quite challenging. In [2], we succeed in constructing groups meeting both requirements, resulting in non-trivial algorithms for matrix multiplication in this framework. I'll outline the basic construction, together with more sophisticated variants that achieve the bounds ω < 2.48 and ω < 2.41.

    In the final part of the talk I'll present two appealing conjectures, one combinatorial and the other algebraic. Either one would imply that the exponent of matrix multiplication is 2.
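
    The triple product property is easy to state and to check by brute force on small examples. Following the standard definition (stated here for arbitrary subsets; the talk uses subgroups): for a subset S of a group, write Q(S) = { s·t⁻¹ : s, t ∈ S }; then S1, S2, S3 satisfy the triple product property if q1·q2·q3 = 1 with each qi ∈ Q(Si) forces q1 = q2 = q3 = 1. The sketch below is my own illustration (the mul/inv/identity interface and the toy Z_8 instance are not from the talk); an abelian example like this only realizes the trivial embedding of 2×2 matrix multiplication and yields no non-trivial bound on ω.

        from itertools import product

        def quotient_set(S, mul, inv):
            """Q(S) = { s * t^{-1} : s, t in S }."""
            return {mul(s, inv(t)) for s in S for t in S}

        def triple_product_property(subsets, mul, inv, identity):
            """Check whether subsets S1, S2, S3 of a finite group satisfy the
            triple product property: q1*q2*q3 = identity with qi in Q(Si)
            forces q1 = q2 = q3 = identity."""
            Q1, Q2, Q3 = (quotient_set(S, mul, inv) for S in subsets)
            for q1, q2, q3 in product(Q1, Q2, Q3):
                if mul(mul(q1, q2), q3) == identity and (q1, q2, q3) != (identity,) * 3:
                    return False
            return True

        # Toy abelian example: in Z_8 (written additively) the subsets
        # {0,1}, {0,2}, {0,4} satisfy the property, realizing <2,2,2>.
        n = 8
        add = lambda x, y: (x + y) % n
        neg = lambda x: (-x) % n
        print(triple_product_property([{0, 1}, {0, 2}, {0, 4}], add, neg, 0))  # True

    Once such a construction exists, the "small character degrees" requirement mentioned above enters through a bound of the form (nmp)^(ω/3) ≤ Σ_i d_i^ω from [3], where ⟨n, m, p⟩ are the matrix dimensions realized in G and the d_i are the irreducible character degrees of G.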

    Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication

    Understanding the query complexity of testing linear-invariant properties has been a central open problem in the study of algebraic property testing. Triangle-freeness in Boolean functions is a simple such property whose testing complexity is unknown. Three Boolean functions f_1, f_2, f_3 : F_2^k → {0, 1} are said to be triangle free if there are no x, y ∈ F_2^k such that f_1(x) = f_2(y) = f_3(x + y) = 1. This property is known to be strongly testable (Green 2005), but the number of queries needed is upper-bounded only by a tower of twos whose height is polynomial in 1/ε, where ε is the distance between the tested function triple and triangle-freeness, i.e., the minimum fraction of function values that need to be modified to make the triple triangle free. A lower bound of (1/ε)^2.423 for any one-sided tester was given by Bhattacharyya and Xie (2010). In this work we improve this bound to (1/ε)^6.619. Interestingly, we prove this by way of a combinatorial construction called uniquely solvable puzzles, which was at the heart of Coppersmith and Winograd's renowned matrix multiplication algorithm.
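
    As a concrete illustration of the definition above (and only of the definition: the testers studied in the paper are query-efficient randomized algorithms, not exhaustive checks), the following sketch verifies triangle-freeness by brute force. Points of F_2^k are encoded as k-bit integers, so addition in F_2^k is bitwise XOR; the example functions are arbitrary choices of mine.

        from itertools import product

        def is_triangle_free(f1, f2, f3, k):
            """Exhaustively check the definition: no x, y in F_2^k with
            f1(x) = f2(y) = f3(x + y) = 1. Elements of F_2^k are k-bit
            integers and addition is bitwise XOR."""
            for x, y in product(range(2 ** k), repeat=2):
                if f1(x) and f2(y) and f3(x ^ y):
                    return False, (x, y)   # witness triangle found
            return True, None

        # Toy example with k = 3: f3 accepts exactly the sum of the points
        # accepted by f1 and f2, so the triple contains a triangle.
        f1 = lambda x: x == 0b001
        f2 = lambda y: y == 0b010
        f3 = lambda z: z == 0b011
        print(is_triangle_free(f1, f2, f3, 3))  # (False, (1, 2))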