53,410 research outputs found
On the complexity of integer matrix multiplication
Let M(n) denote the bit complexity of multiplying n-bit integers, let ω ∈ (2, 3] be an exponent for matrix multiplication, and let lg* n be the iterated logarithm. Assuming that log d = O(n) and that M(n) / (n log n) is increasing, we prove that d × d matrices with n-bit integer entries may be multiplied in O(d^2 M(n) + d^ω n 2^O(lg* n − lg* d) M(lg d) / lg d) bit operations. In particular, if n is large compared to d, say d = O(log n), then the complexity is only O(d^2 M(n)).
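To make the cost model concrete, here is a hedged baseline sketch (not the paper's algorithm): the schoolbook product of d × d matrices with huge integer entries performs d^3 big-integer multiplications, i.e. O(d^3 M(n)) bit operations for n-bit entries; the result above brings this down to O(d^2 M(n)) when d = O(log n).

```python
import random

def matmul(A, B):
    # Schoolbook d x d product: d^3 big-integer multiplications, each costing
    # M(n) bit operations when entries are n-bit integers.
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

d, nbits = 4, 1 << 12            # d tiny compared to the entry bit-length n
A = [[random.getrandbits(nbits) for _ in range(d)] for _ in range(d)]
B = [[random.getrandbits(nbits) for _ in range(d)] for _ in range(d)]
C = matmul(A, B)
```

Python's arbitrary-precision integers play the role of the n-bit entries here; the shape of the loop makes the d^3-versus-d^2 distinction in the bound visible.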
Faster Algorithms for Rectangular Matrix Multiplication
Let α be the maximal value such that the product of an n × n^α matrix by an n^α × n matrix can be computed with n^{2+o(1)} arithmetic operations. In this paper we show that α > 0.30298, which improves the previous record α > 0.29462 by Coppersmith (Journal of Complexity, 1997). More generally, we construct a new algorithm for multiplying an n × n^k matrix by an n^k × n matrix, for any value k ≠ 1. The complexity of this algorithm is better than all known algorithms for rectangular matrix multiplication. In the case of square matrix multiplication (i.e., for k = 1), we recover exactly the complexity of the algorithm by Coppersmith and Winograd (Journal of Symbolic Computation, 1990).
These new upper bounds can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. For example, we directly obtain an O(n^{2.5302})-time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, improving over the O(n^{2.575})-time algorithm by Zwick (JACM 2002), and we also improve the time complexity of sparse square matrix multiplication.
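For orientation, the naive rectangular product of an n × m matrix by an m × n matrix (m ≈ n^k) below costs n^2 · m scalar multiplications; the result above says that whenever m ≤ n^α with α > 0.30298, the product is computable in n^{2+o(1)} operations. A minimal baseline sketch:

```python
def rect_matmul(A, B):
    # Naive product of an n x m matrix A by an m x p matrix B:
    # n * p * m scalar multiplications.
    n, m = len(A), len(A[0])
    assert len(B) == m
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# 2 x 3 times 3 x 2: the "rectangular" shape the abstract is about.
C = rect_matmul([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1], [1, 1]])
assert C == [[4, 5], [10, 11]]
```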
Efficient Algorithms for Graph-Theoretic and Geometric Problems
This thesis studies several different algorithmic problems in graph theory and in geometry. The applications of the problems studied range from circuit design optimization to fast matrix multiplication. First, we study a graph-theoretical model of the so-called ''firefighter problem''. The objective is to save as much of an area as possible by appropriately placing firefighters. We provide both new exact algorithms for the case of general graphs and approximation algorithms for the case of planar graphs. Next, we study drawing graphs within a given polygon in the plane. We present asymptotically tight upper and lower bounds for this problem. Further, we study the problem of Subgraph Isomorphism, which amounts to deciding whether an input graph (the pattern) is isomorphic to a subgraph of another input graph (the host graph). We show several new bounds on the time complexity of detecting small pattern graphs. Among other things, we provide a new framework for detection by testing polynomials for non-identity with zero. Finally, we study the problem of partitioning a 3D histogram into a minimum number of 3D boxes and its applications to efficient computation of matrix products for positive integer matrices. We provide an efficient approximation algorithm for the partitioning problem and several algorithms for integer matrix multiplication. The multiplication algorithms are explicitly or implicitly based on an interpretation of positive integer matrices as 3D histograms and their partitions.
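The histogram interpretation can be made concrete with a toy example (illustrative only, not the thesis's partitioning algorithm): a positive integer matrix A is read as a 3D histogram with a column of height A[i][j] over cell (i, j), and a partition into axis-aligned boxes writes A as a sum of constant-height rectangles.

```python
def rebuild(boxes, nrows, ncols):
    # Sum the constant-height boxes back into a matrix; a valid partition
    # reproduces the original histogram exactly.
    M = [[0] * ncols for _ in range(nrows)]
    for (r0, r1), (c0, c1), h in boxes:        # inclusive row/col ranges
        for i in range(r0, r1 + 1):
            for j in range(c0, c1 + 1):
                M[i][j] += h
    return M

A = [[2, 2, 1],
     [2, 2, 1]]
# One partition into 2 boxes: rows 0-1 x cols 0-1 at height 2,
# and rows 0-1 x col 2 at height 1.
boxes = [((0, 1), (0, 1), 2), ((0, 1), (2, 2), 1)]
assert rebuild(boxes, 2, 3) == A
```

Fewer boxes mean fewer rank-one pieces to process, which is what connects the partitioning problem to fast integer matrix multiplication.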
An introspective algorithm for the integer determinant
We present an algorithm computing the determinant of an integer matrix A. The algorithm is introspective in the sense that it uses several distinct algorithms that run in a concurrent manner. During the course of the algorithm, partial results coming from distinct methods can be combined. Then, depending on the current running time of each method, the algorithm can emphasize a particular variant. With the use of very fast modular routines for linear algebra, our implementation is an order of magnitude faster than other existing implementations. Moreover, we prove that the expected complexity of our algorithm is only O(n^3 log^{2.5}(n ||A||)) bit operations in the dense case and O(Omega n^{1.5} log^2(n ||A||) + n^{2.5} log^3(n ||A||)) in the sparse case, where ||A|| is the largest entry in absolute value of the matrix and Omega is the cost of a matrix-vector multiplication in the case of a sparse matrix. Published in Transgressive Computing 2006, Granada, Spain (2006).
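The "fast modular routines" the abstract mentions build on the classical modular/CRT determinant scheme, which the following minimal sketch illustrates (a baseline only, not the paper's introspective combination of methods; the trial-division primality test is purely for brevity):

```python
from math import isqrt, prod

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

def det_mod_p(A, p):
    # Determinant mod p by Gaussian elimination over Z_p.
    A = [[x % p for x in row] for row in A]
    n, det = len(A), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det                      # row swap flips the sign
        det = det * A[c][c] % p
        inv = pow(A[c][c], -1, p)
        for r in range(c + 1, n):
            f = A[r][c] * inv % p
            A[r] = [(a - f * b) % p for a, b in zip(A[r], A[c])]
    return det % p

def det(A):
    # Hadamard bound on |det(A)|; take primes until their product exceeds
    # twice the bound, then recover the signed value by symmetric CRT.
    H = prod(isqrt(sum(x * x for x in row)) + 1 for row in A)
    M, res, p = 1, 0, 1 << 20
    while M <= 2 * H:
        p += 1
        while not is_prime(p):
            p += 1
        r = det_mod_p(A, p)
        res += M * ((r - res) * pow(M, -1, p) % p)   # CRT lift to mod M*p
        M *= p
    return res if res <= M // 2 else res - M
```

The introspective algorithm of the paper races schemes like this one against early-termination and sparse variants, keeping whichever is ahead.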
Fast algorithms for computing with integer matrices: normal forms and applications
The focus of this thesis is on fundamental computational problems in exact integer linear algebra. Specifically, for a nonsingular integer input matrix A of dimension n, we consider problems such as linear system solving and computing integer matrix normal forms.
Our goal is to design algorithms that have complexity about the same as the cost of multiplying together two integer matrices of the same dimension and size of entries as the input matrix A. If 2 ≤ ω ≤ 3 is a valid exponent for matrix multiplication, that is, if two n × n matrices can be multiplied in O(n^ω) basic operations from the domain of entries, then our target complexity is O(n^ω log ||A||) bit operations, up to some missing log n and loglog ||A|| factors. Here ||A|| denotes the largest entry in A in absolute value.
The first contribution is solving the problem of computing the Smith normal form S of a nonsingular matrix A, along with computing unimodular matrices U, V such that AV = US, within our target cost. The algorithm we give is a Las Vegas probabilistic algorithm, which means that we are able to verify the correctness of its output.
The second contribution of the thesis is with respect to linear system solving. We present a deterministic reduction to matrix multiplication for the problem of linear system solving: given as input a nonsingular matrix A and a vector b, solve the system Ax = b. The system solution x is computed within our target complexity.
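To fix ideas, here is a self-contained baseline far from the target complexity above: exact Gauss-Jordan elimination over the rationals using Fractions. It costs O(n^3) arithmetic operations but the fractions grow, which is precisely why the bit-complexity reductions of the thesis are nontrivial.

```python
from fractions import Fraction

def solve_exact(A, b):
    # Gauss-Jordan elimination over Q for a nonsingular integer matrix A.
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)  # exists: A nonsingular
        M[c], M[piv] = M[piv], M[c]
        pv = M[c][c]
        M[c] = [x / pv for x in M[c]]          # scale pivot row to 1
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]         # last column holds x

x = solve_exact([[2, 1], [1, 3]], [5, 10])
assert x == [1, 3]          # exact: 2*1 + 3 = 5 and 1 + 3*3 = 10
```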
Recent progress in linear algebra and lattice basis reduction (invited)
A general goal concerning fundamental linear algebra problems is to reduce the complexity estimates to essentially the same as that of multiplying two matrices (plus possibly a cost related to the input and output sizes). Among the bottlenecks one usually finds the questions of designing a recursive approach and of mastering the sizes of the intermediately computed data. In this talk we are interested in two special cases of lattice basis reduction. We consider bases given by square matrices over K[x] or Z, with, respectively, the notion of reduced form and LLL reduction. Our purpose is to introduce basic tools for understanding how to generalize the Lehmer and Knuth-Schönhage gcd algorithms to basis reduction. Over K[x] this generalization is a key ingredient for giving a basis reduction algorithm whose complexity estimate is essentially that of multiplying two polynomial matrices. No such relation between integer basis reduction and integer matrix multiplication is known. The topic receives a lot of attention, and recent results on the subject show that there might be room for progress on the question.
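The gcd analogy can already be seen in dimension two: Lagrange's (often attributed to Gauss) reduction of a 2D integer lattice basis is structurally the Euclidean algorithm with a nearest-integer quotient. A toy sketch of the slow version only, not the Lehmer/Knuth-Schönhage-style fast variants discussed in the talk:

```python
from fractions import Fraction

def lagrange_reduce(u, v):
    # Reduce a basis (u, v) of a 2D integer lattice: repeatedly subtract the
    # nearest-integer multiple of the shorter vector, as in Euclid's gcd.
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        q = round(Fraction(dot(u, v), dot(u, u)))   # nearest-integer "quotient"
        v = (v[0] - q * u[0], v[1] - q * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v        # reduced: u is a shortest nonzero lattice vector
        u, v = v, u

# (3,1) and (5,2) generate Z^2 (determinant 1); reduction finds unit vectors.
assert lagrange_reduce((3, 1), (5, 2)) == ((-1, 0), (0, 1))
```

The fast gcd algorithms mentioned above speed up exactly this kind of quotient sequence by working on truncated (high-order) parts of the operands.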
Reliable Linear, Sesquilinear and Bijective Operations On Integer Data Streams Via Numerical Entanglement
A new technique is proposed for fault-tolerant linear, sesquilinear and bijective (LSB) operations on integer data streams, such as: scaling, additions/subtractions, inner or outer vector products, permutations and convolutions. In the proposed method, the input integer data streams are linearly superimposed to form numerically-entangled integer data streams that are stored in place of the original inputs. A series of LSB operations can then be performed directly using these entangled data streams. The results are extracted from the entangled output streams by additions and arithmetic shifts. Any soft errors affecting any single disentangled output stream are guaranteed to be detectable via a specific post-computation reliability check. In addition, when utilizing a separate processor core for each of the streams, the proposed approach can recover all outputs after any single fail-stop failure. Importantly, unlike algorithm-based fault tolerance (ABFT) methods, the number of operations required for the entanglement, extraction and validation of the results is linearly related to the number of the inputs and does not depend on the complexity of the performed LSB operations. We have validated our proposal on an Intel processor (Haswell architecture with AVX2 support) via fast Fourier transforms, circular convolutions, and matrix multiplication operations. Our analysis and experiments reveal that the proposed approach incurs only a limited reduction in processing throughput for a wide variety of LSB operations. This overhead is 5 to 1000 times smaller than that of the equivalent ABFT method that uses a checksum stream. Thus, our proposal can be used in fault-generating processor hardware or safety-critical applications, where high reliability is required without the cost of ABFT or modular redundancy. To appear in IEEE Trans. on Signal Processing, 201
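A hedged toy illustration of the superposition idea only (the paper's actual entanglement and reliability check use a different construction): two bounded integer streams are packed into one big-integer stream, a linear operation is applied once to the packed stream, and both results are recovered by division/remainder, which for a power-of-two base are shifts. The base B and the magnitude bound are assumptions for the toy.

```python
B = 1 << 40          # packing base (power of two, so extraction is shifts);
                     # assumes all intermediate values stay well below B // 2

def entangle(a, b):
    # Superimpose two integer streams into one big-integer stream.
    return [x * B + y for x, y in zip(a, b)]

def disentangle(s):
    # Recover both sub-streams; the symmetric remainder handles negatives.
    a, b = [], []
    for v in s:
        y = ((v + B // 2) % B) - B // 2
        a.append((v - y) // B)
        b.append(y)
    return a, b

# One linear operation on the packed stream acts on both sub-streams at once.
a, b = [3, -7, 40], [1, 2, -5]
packed = [5 * v for v in entangle(a, b)]      # scale both streams by 5
ra, rb = disentangle(packed)
assert ra == [15, -35, 200] and rb == [5, 10, -25]
```

This captures why the packing/extraction overhead is linear in the number of inputs and independent of the LSB operation performed in between.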
Efficient Decomposition of Dense Matrices over GF(2)
In this work we describe an efficient implementation of a hierarchy of
algorithms for the decomposition of dense matrices over the field with two
elements (GF(2)). Matrix decomposition is an essential building block for
solving dense systems of linear and non-linear equations and thus much research
has been devoted to improve the asymptotic complexity of such algorithms. In
this work we discuss an implementation of both well-known and improved
algorithms in the M4RI library. The focus of our discussion is on a new variant of the M4RI algorithm, denoted MMPF in this work, which allows for considerable performance gains in practice when compared to the previously fastest implementation. We provide performance figures on x86_64 CPUs to demonstrate the viability of our approach.
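As a flavour of why dense GF(2) linear algebra is so amenable to word-level tricks (a minimal sketch, unrelated to MMPF's table-based strategy): packing each matrix row into a machine word or big integer turns row addition over GF(2) into a single XOR.

```python
def rank_gf2(rows, ncols):
    # Row-reduce a dense GF(2) matrix whose rows are bit-packed integers.
    rows = list(rows)
    r = 0
    for c in reversed(range(ncols)):            # pivot on the highest bit first
        bit = 1 << c
        piv = next((i for i in range(r, len(rows)) if rows[i] & bit), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] & bit:
                rows[i] ^= rows[r]              # row addition over GF(2) = XOR
        r += 1
    return r                                    # rank of the matrix

# 3 x 3 example: third row is the XOR (GF(2) sum) of the first two.
assert rank_gf2([0b110, 0b011, 0b101], 3) == 2
```

Real libraries such as M4RI layer blocking and precomputed-table ("method of four Russians") techniques on top of exactly this word-level representation.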