    Computational linear algebra over finite fields

    We present algorithms for the efficient solution of linear algebra problems over finite fields.
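
    As a point of reference for what such computations involve, the Python sketch below solves a small linear system over GF(p) by Gaussian elimination with modular inverses. It is only an illustrative toy assuming a prime modulus, not the algorithms of the paper; the function name solve_mod_p and the example are ours.

        # Minimal sketch: solve A x = b over GF(p) (p prime) by Gauss-Jordan
        # elimination with modular inverses. Illustrative only.

        def solve_mod_p(A, b, p):
            """A: list of rows, b: list. Returns x with A x = b mod p, or None."""
            n = len(A)
            # augmented matrix, reduced mod p
            M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
            for col in range(n):
                # find a pivot row with a nonzero entry in this column
                pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
                if pivot is None:
                    return None  # singular modulo p
                M[col], M[pivot] = M[pivot], M[col]
                inv = pow(M[col][col], -1, p)      # modular inverse (Python >= 3.8)
                M[col] = [(v * inv) % p for v in M[col]]
                for r in range(n):
                    if r != col and M[r][col]:
                        f = M[r][col]
                        M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
            return [M[r][n] for r in range(n)]

        # Example: over GF(7), solve [[2, 1], [1, 3]] x = [1, 2]  ->  [3, 2]
        print(solve_mod_p([[2, 1], [1, 3]], [1, 2], 7))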

    Exact Sparse Matrix-Vector Multiplication on GPU's and Multicore Architectures

    We propose different implementations of the sparse matrix–dense vector multiplication (SpMV) over finite fields and the rings $\mathbb{Z}/m\mathbb{Z}$. We take advantage of graphics processing units (GPUs) and multi-core architectures. Our aim is to improve the speed of SpMV in the LinBox library, and hence the speed of its black box algorithms. In addition, we use this, together with a new parallelization of the sigma-basis algorithm, in a parallel block Wiedemann rank implementation over finite fields.
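
    The core operation can be pictured with the following Python sketch of a CSR-format SpMV with entries reduced modulo m. It is only meant to show the arithmetic involved; LinBox's actual kernels are C++/CUDA code with carefully delayed modular reductions, and the names used here (spmv_mod, indptr, indices, data) are our own.

        # Sketch of sparse matrix (CSR) times dense vector over Z/mZ.

        def spmv_mod(indptr, indices, data, x, m):
            """y = A @ x mod m, with A stored in CSR (indptr, indices, data)."""
            n_rows = len(indptr) - 1
            y = [0] * n_rows
            for i in range(n_rows):
                acc = 0
                for k in range(indptr[i], indptr[i + 1]):
                    acc += data[k] * x[indices[k]]
                y[i] = acc % m            # one reduction per row (delayed reduction)
            return y

        # A = [[1, 0, 2],
        #      [0, 3, 0]]  over Z/5Z
        indptr, indices, data = [0, 2, 3], [0, 2, 1], [1, 2, 3]
        print(spmv_mod(indptr, indices, data, [1, 1, 1], 5))   # [3, 3]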

    NumGfun: a Package for Numerical and Analytic Computation with D-finite Functions

    This article describes the implementation in the software package NumGfun of classical algorithms that operate on solutions of linear differential equations or recurrence relations with polynomial coefficients, including what seems to be the first general implementation of the fast high-precision numerical evaluation algorithms of Chudnovsky & Chudnovsky. In some cases, our descriptions contain improvements over existing algorithms. We also provide references to relevant ideas not currently used in NumGfun.
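
    The idea behind the Chudnovsky & Chudnovsky fast evaluation can be illustrated on the simplest D-finite function: the Python sketch below sums the Taylor series of exp(p/q) by binary splitting with exact integer arithmetic. This is only a toy under our own naming (bsplit, exp_rational); NumGfun itself is a Maple package that handles general linear ODEs and recurrences together with rigorous error bounds.

        from fractions import Fraction
        import math

        # Binary splitting for exp(p/q) = sum_{n>=0} (p/q)^n / n!, truncated
        # after N terms, using exact integer arithmetic throughout.

        def bsplit(a, b, p, q):
            """Return (P, Q, T) for the term range [a, b):
               P/Q = t_b / t_a  and  sum_{n=a}^{b-1} t_n = t_a * T / Q,
               where t_n = (p/q)^n / n!."""
            if b - a == 1:
                P, Q = p, q * (a + 1)
                return P, Q, Q          # single term: T = Q
            m = (a + b) // 2
            P1, Q1, T1 = bsplit(a, m, p, q)
            P2, Q2, T2 = bsplit(m, b, p, q)
            return P1 * P2, Q1 * Q2, T1 * Q2 + P1 * T2

        def exp_rational(p, q, N=64):
            """Partial sum (N terms) of exp(p/q), as an exact Fraction."""
            _, Q, T = bsplit(0, N, p, q)
            return Fraction(T, Q)

        print(float(exp_rational(1, 2)), math.exp(0.5))   # both ~1.6487212707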

    Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix

    Given a nonsingular $n \times n$ matrix of univariate polynomials over a field $\mathbb{K}$, we give fast and deterministic algorithms to compute its determinant and its Hermite normal form. Our algorithms use $\widetilde{\mathcal{O}}(n^\omega \lceil s \rceil)$ operations in $\mathbb{K}$, where $s$ is bounded from above by both the average of the degrees of the rows and that of the columns of the matrix, and $\omega$ is the exponent of matrix multiplication. The soft-$\mathcal{O}$ notation indicates that logarithmic factors in the big-$\mathcal{O}$ are omitted, while the ceiling function indicates that the cost is $\widetilde{\mathcal{O}}(n^\omega)$ when $s = o(1)$. Our algorithms are based on a fast and deterministic triangularization method for computing the diagonal entries of the Hermite form of a nonsingular matrix.
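
    To make the size parameter $s$ concrete, the Python/SymPy sketch below computes the row and column degrees of a toy polynomial matrix, the resulting $s$, and checks the classical bound $\deg \det A \le n s$. It only illustrates the quantities appearing in the cost statement, not the triangularization algorithm of the paper, and assumes SymPy is available.

        import sympy as sp

        # Toy illustration of the size parameter s: s is at most the average
        # row degree and the average column degree, and deg(det A) <= n*s.

        x = sp.symbols('x')
        A = sp.Matrix([[x**2 + 1, 3*x,      1],
                       [2,        x**3 + x, x],
                       [x,        1,        x + 5]])
        n = A.rows

        def deg(e):
            # degree in x; nonzero constant entries count as degree 0
            return 0 if e.is_number else sp.degree(e, x)

        row_deg = [max(deg(A[i, j]) for j in range(n)) for i in range(n)]
        col_deg = [max(deg(A[i, j]) for i in range(n)) for j in range(n)]
        s = min(sum(row_deg), sum(col_deg)) / n

        d = sp.degree(sp.expand(A.det()), x)
        print(f"row degrees {row_deg}, column degrees {col_deg}, s = {s}")
        print(f"deg(det A) = {d} <= n*s = {n * s}")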

    Asymptotically fast polynomial matrix algorithms for multivariable systems

    We present the asymptotically fastest known algorithms for some basic problems on univariate polynomial matrices: rank, nullspace, determinant, generic inverse, and reduced form. We show that they can essentially be reduced to two computer algebra techniques, minimal basis computations and matrix fraction expansion/reconstruction, and to polynomial matrix multiplication. Such reductions eventually imply that all these problems can be solved in about the same amount of time as polynomial matrix multiplication.
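
    The primitive that everything reduces to is multiplication of polynomial matrices. Viewing $A(x) = \sum_k A_k x^k$ as a polynomial with matrix coefficients, the product is a convolution of the coefficient matrices; the naive Python sketch below does this schoolbook-style over $\mathbb{Z}/p\mathbb{Z}$, whereas the fast algorithms replace it with FFT-based and sub-cubic techniques. The function polymat_mul and the toy example are ours.

        # Naive polynomial matrix multiplication over Z/pZ: convolve the
        # lists of coefficient matrices, multiplying matrices schoolbook-style.

        def polymat_mul(A, B, p):
            """A, B: lists of n x n coefficient matrices (lists of lists) mod p.
               Returns the coefficient matrices of A(x) * B(x) mod p."""
            n = len(A[0])
            C = [[[0] * n for _ in range(n)] for _ in range(len(A) + len(B) - 1)]
            for i, Ai in enumerate(A):
                for j, Bj in enumerate(B):
                    for r in range(n):
                        for c in range(n):
                            C[i + j][r][c] = (C[i + j][r][c]
                                + sum(Ai[r][k] * Bj[k][c] for k in range(n))) % p
            return C

        # (1 + x) * I  times  (2 + 3x) * I  over Z/7Z  ->  (2 + 5x + 3x^2) * I
        I2 = [[1, 0], [0, 1]]
        A = [I2, I2]                                   # A(x) = I + I x
        B = [[[2, 0], [0, 2]], [[3, 0], [0, 3]]]       # B(x) = 2I + 3I x
        for k, Ck in enumerate(polymat_mul(A, B, 7)):
            print(f"x^{k}:", Ck)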