3 research outputs found

    Fast Computation of Smith Forms of Sparse Matrices Over Local Rings

    We present algorithms to compute the Smith Normal Form of matrices over two families of local rings. The algorithms use the black-box model, which is suitable for sparse and structured matrices. The algorithms depend on a number of tools, such as matrix rank computation over finite fields, for which the best-known time- and memory-efficient algorithms are probabilistic. For an n×n matrix A over the ring F[z]/(f^e), where f^e is a power of an irreducible polynomial f ∈ F[z] of degree d, our algorithm requires O(η de²n) operations in F, where our black-box is assumed to require O(η) operations in F to compute a matrix-vector product by a vector over F[z]/(f^e) (and η is assumed greater than den). The algorithm only requires additional storage for O(den) elements of F. In particular, if η = Õ(den), then our algorithm requires only Õ(n²d²e³) operations in F, which is an improvement on known dense methods for small d and e. For the ring Z/p^e Z, where p is a prime, we give an algorithm which is time- and memory-efficient when the number of nontrivial invariant factors is small. We describe a method for dimension reduction while preserving the invariant factors. The time complexity is essentially linear in μnre log p, where μ is the number of operations in Z/pZ to evaluate the black-box (assumed greater than n) and r is the total number of non-zero invariant factors. To avoid the practical cost of conditioning, we give a Monte Carlo certificate, which at low cost provides either a high probability of success or a proof of failure. The quest for a time- and memory-efficient solution without restrictions on the number of nontrivial invariant factors remains open. We offer a conjecture which may contribute toward that end.
    Comment: Preliminary version to appear at ISSAC 201
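    The black-box model the abstract refers to accesses the matrix only through matrix-vector products. As a minimal sketch (not the authors' algorithm), a sparse matrix stored in coordinate form supports such a product in O(η) = O(nnz) field operations; here over a small prime field GF(7) for illustration, whereas the paper works over F[z]/(f^e) and Z/p^e Z:

    ```python
    P = 7  # illustrative prime field GF(7); a stand-in for the paper's local rings

    def matvec(triples, n, v, p=P):
        """Apply a sparse matrix, given as (row, col, val) triples, to v mod p.

        Cost is one multiply-add per stored entry: this is the O(eta) black-box.
        """
        out = [0] * n
        for i, j, a in triples:
            out[i] = (out[i] + a * v[j]) % p
        return out

    # 3x3 example: the identity plus one off-diagonal entry A[0][2] = 3
    A = [(0, 0, 1), (1, 1, 1), (2, 2, 1), (0, 2, 3)]
    v = [1, 2, 3]
    print(matvec(A, 3, v))  # [3, 2, 3], since (1 + 3*3) mod 7 = 3
    ```

    Algorithms in this model (e.g. Wiedemann-style methods) never form the matrix explicitly, which is what keeps the additional storage down to O(den) elements.
    
    
    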

    Sparse Gaussian Elimination modulo p: an Update

    This paper considers elimination algorithms for sparse matrices over finite fields. We mostly focus on computing the rank, because it raises the same challenges as solving linear systems, while being slightly simpler. We developed a new sparse elimination algorithm inspired by the Gilbert-Peierls sparse LU factorization, which is well-known in the numerical computation community. We benchmarked it against the usual right-looking sparse Gaussian elimination and the Wiedemann algorithm using the Sparse Integer Matrix Collection of Jean-Guillaume Dumas. We obtain large speedups (1000× and more) in many cases. In particular, we are able to compute the rank of several large sparse matrices in seconds or minutes, compared to days with previous methods.
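    To make the baseline concrete, here is a toy dense illustration of rank computation over GF(p) by Gaussian elimination. This is the naive approach the paper improves on: its point is that on large sparse inputs, a Gilbert-Peierls-style sparse LU vastly outperforms this kind of elimination. The function name is illustrative, not the authors' code:

    ```python
    def rank_mod_p(rows, p):
        """Rank of a matrix (list of row lists) over GF(p), p prime."""
        rows = [list(r) for r in rows]
        rank, col = 0, 0
        ncols = len(rows[0]) if rows else 0
        while rank < len(rows) and col < ncols:
            # find a pivot in the current column
            piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
            if piv is None:
                col += 1
                continue
            rows[rank], rows[piv] = rows[piv], rows[rank]
            inv = pow(rows[rank][col], p - 2, p)  # inverse via Fermat's little theorem
            rows[rank] = [(x * inv) % p for x in rows[rank]]
            for i in range(len(rows)):
                if i != rank and rows[i][col] % p:
                    f = rows[i][col]
                    rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
            rank, col = rank + 1, col + 1
        return rank

    M = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]  # second row = 2 * first row
    print(rank_mod_p(M, 7))  # 2
    ```

    A real sparse implementation stores rows in compressed form and, as in Gilbert-Peierls, predicts the fill-in pattern of each column symbolically before doing any arithmetic, so the total cost is proportional to the arithmetic actually performed rather than to n².
    
    
    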

    Algorithms for fast linear system solving and rank profile computation

    We give randomized algorithms for linear algebra problems concerning an n*m input matrix A over a field K. We give an algorithm that simultaneously computes the row and column rank profiles of A in 2r^3 + (r^2+n+m+|A|)^{1+o(1)} field operations in K, where r is the rank of A and |A| denotes the number of nonzero entries of A. Here, the o(1) in our cost estimates captures some missing log n and log m factors. The rank profiles algorithm is randomized of the Monte Carlo type: the correct answer will be returned with probability at least 1/2. Given an n*1 vector b, we give an algorithm that either computes a particular solution vector x of dimension m*1 to the system Ax = b, or produces an inconsistency certificate vector u of dimension 1*n such that uA = 0 and ub is not equal to 0. The linear solver examines at most r+1 rows and r columns of A and has running time 2r^3 + (r^2+n + m + |R|+|C|)^{1+o(1)} field operations in K, where |R| and |C| are the number of nonzero entries in the rows and columns, respectively, that are examined. The solver is randomized of the Las Vegas type: an incorrect result is never returned, but the algorithm may report FAIL with probability at most 1/2. These cost estimates are achieved by making use of a novel randomized online data structure for the detection of linearly independent rows and columns. The leading term 2r^{3} in the cost estimate 2r^3 + (r^2+n+m+|A|)^{1+o(1)} of our rank profile algorithm arises from our use of an iterative algorithm to compute, for s=1,2,...,r, the inverse of the leading principal s*s submatrix B_s of an r*r matrix B that has generic rank profile, and whose rows are given from first to last, one at a time, for s=1,2,...,r. These inverses are used to compute a sequence of subsystem solutions B_{s}^{-1}b_s for s=1,2,...,r, where b_s is the leading subvector of b. 
We give a relaxed algorithm that computes the sequence B_1^{-1}b_1, B_2^{-1}b_2, ..., B_r^{-1}b_r in an online fashion in time O(r^ω), effectively allowing matrix multiplication to be incorporated into our rank profile algorithm. Together with a Toeplitz preconditioner, we can compute the row rank profile of a full column rank matrix A in time (r^ω + |A|)^{1+o(1)}. Combined with Cheung, Kwok and Lau's (2013) algorithm for computing a maximal rank subset of linearly independent columns, this gives a Monte Carlo algorithm that computes the row rank profile of A in time (r^ω + |A|)^{1+o(1)}.
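    The online detection of linearly independent rows described above can be sketched deterministically (the paper's actual data structure is randomized and asymptotically faster). Rows arrive one at a time; we keep an echelonized basis over GF(p) and record the indices of rows that increase the rank, which is exactly the row rank profile:

    ```python
    def row_rank_profile(rows, p):
        """Indices of rows, in order of arrival, that are linearly
        independent of all earlier rows, over GF(p) with p prime."""
        basis = {}    # pivot column -> row normalized to have a 1 there
        profile = []
        for idx, row in enumerate(rows):
            r = [x % p for x in row]
            # reduce the incoming row against the current basis
            for c, b in basis.items():
                if r[c]:
                    f = r[c]
                    r = [(x - f * y) % p for x, y in zip(r, b)]
            piv = next((c for c, x in enumerate(r) if x), None)
            if piv is not None:  # row is independent of those seen so far
                inv = pow(r[piv], p - 2, p)
                basis[piv] = [(x * inv) % p for x in r]
                profile.append(idx)
        return profile

    rows = [[1, 2, 0], [2, 4, 0], [0, 0, 5]]
    print(row_rank_profile(rows, 7))  # [0, 2]: row 1 is 2 * row 0 mod 7
    ```

    Each arriving row costs O(r · m) here; the paper's contribution is replacing this incremental reduction with a relaxed O(r^ω) scheme so that fast matrix multiplication applies.
    
    
    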