
    On Polynomial Multiplication in Chebyshev Basis

    In a recent paper, Lima, Panario and Wang provided a new method to multiply polynomials in the Chebyshev basis, aimed at reducing the total number of multiplications when the polynomials have small degree. Their idea is to apply Karatsuba's multiplication scheme to improve upon the naive method, but without removing its quadratic complexity. In this paper, we extend their result by providing a reduction scheme that allows polynomials in the Chebyshev basis to be multiplied using algorithms for the monomial basis, thereby achieving the same asymptotic complexity estimate. Our reduction allows any of these algorithms to be used without converting the input polynomials to the monomial basis, and thus provides a more direct reduction scheme than the one based on conversions. We also demonstrate that our reduction is efficient in practice, and even outperforms the best known algorithm for the Chebyshev basis when the polynomials have large degree. Finally, we establish a linear-time equivalence between the polynomial multiplication problem in the monomial basis and in the Chebyshev basis.
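    One classical way to carry out such a reduction (a sketch of the general idea; the paper's own scheme may differ in its details) uses the substitution x = (z + 1/z)/2, under which T_k(x) = (z^k + z^(-k))/2, so that one Chebyshev-basis product collapses to a single monomial-basis product of symmetric Laurent polynomials:

```python
# Sketch: multiply two polynomials given in the Chebyshev basis using one
# monomial-basis product, via x = (z + 1/z)/2 so that T_k(x) = (z^k + z^-k)/2.
# Illustrative only; any fast monomial-basis algorithm can replace the
# naive convolution below without touching the rest of the code.

def cheb_mul(a, b):
    """a, b: coefficient lists in the Chebyshev basis (a[k] multiplies T_k).
    Returns the coefficients of the product, also in the Chebyshev basis."""
    n, m = len(a) - 1, len(b) - 1
    # Symmetric Laurent representative of a, shifted by n to keep
    # exponents nonnegative: A[n + k] = A[n - k] = a[k] / 2, A[n] = a[0].
    A = [0.0] * (2 * n + 1)
    A[n] = a[0]
    for k in range(1, n + 1):
        A[n + k] = A[n - k] = a[k] / 2
    B = [0.0] * (2 * m + 1)
    B[m] = b[0]
    for k in range(1, m + 1):
        B[m + k] = B[m - k] = b[k] / 2
    # One monomial-basis multiplication (naive here).
    C = [0.0] * (len(A) + len(B) - 1)
    for i, ai in enumerate(A):
        for j, bj in enumerate(B):
            C[i + j] += ai * bj
    # The result is symmetric around index n + m; read the Chebyshev
    # coefficients off its nonnegative-exponent half.
    mid = n + m
    return [C[mid]] + [2 * C[mid + k] for k in range(1, mid + 1)]

# Example: T_1 * T_1 = (T_0 + T_2) / 2, i.e. x^2 = (1 + T_2(x)) / 2.
print(cheb_mul([0, 1], [0, 1]))  # [0.5, 0.0, 0.5]
```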

    Formal proof for delayed finite field arithmetic using floating point operators

    Formal proof checkers such as Coq are capable of validating proofs of correctness of algorithms for finite field arithmetic, but they require extensive training from potential users. The delayed solution of a triangular system over a finite field mixes operations on integers with operations on floating point numbers. In this report we focus on verifying proof obligations stating that no round-off error occurred in any of the floating point operations. We use a tool named Gappa, which can be learned in a matter of minutes, to generate proofs related to floating point arithmetic and to hide the technicalities of formal proof checkers. We found that three facilities are missing from existing tools. The first is the ability to use in Gappa new lemmas that cannot easily be expressed as rewriting rules. We coined the second ``variable interchange'', as it would be required to validate loop interchanges. The third facility handles massive loop unrolling and argument instantiation by generating traces of execution for a large number of cases. We hope that these facilities may sometime in the future be integrated into mainstream code validation. (8th Conference on Real Numbers and Computers, Santiago de Compostela, Spain, 2008.)
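    The arithmetic being verified rests on a simple fact: IEEE-754 double precision represents every integer of magnitude below 2^53 exactly, so modular reductions can be delayed as long as intermediate values stay under that bound. The following sketch (hypothetical parameter choices, not code from the report) shows the kind of obligation, "no rounding occurred in any floating point operation", that such a verifier is asked to discharge:

```python
# Delayed finite field arithmetic with floating point: all intermediate
# products and sums below are exact integers because they stay < 2**53.
# The modulus and the vector length bound are hypothetical choices.

P = 1_048_573   # a prime below 2**20

def delayed_dot(xs, ys, p=P):
    """Dot product over GF(p), reducing only once at the end.
    Safe while len(xs) * (p - 1)**2 < 2**53: every float product is an
    exact integer < 2**40 and every partial sum stays an exact integer."""
    assert len(xs) * (p - 1) ** 2 < 2 ** 53
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += float(x) * float(y)   # no round-off: values stay < 2**53
    return int(acc) % p              # single delayed reduction

print(delayed_dot([123456, 654321], [111111, 222222]))
```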

    Exact Sparse Matrix-Vector Multiplication on GPUs and Multicore Architectures

    We propose different implementations of the sparse matrix-dense vector multiplication (SpMV) over finite fields and rings Z/mZ. We take advantage of graphics card processors (GPUs) and multi-core architectures. Our aim is to improve the speed of SpMV in the LinBox library, and hence the speed of its black-box algorithms. Besides, we use this work and a new parallelization of the sigma-basis algorithm in a parallel block Wiedemann rank implementation over finite fields.
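    A minimal sequential sketch of the kernel in question (the paper's actual implementations are C/C++ and GPU code inside LinBox; this only shows the delayed-reduction structure on CSR storage):

```python
# CSR sparse matrix times dense vector over Z/mZ, with the modular
# reduction delayed to the end of each row.

def spmv_mod(rowptr, colind, values, x, m):
    """y = A x over Z/mZ, with A in CSR form (rowptr, colind, values)."""
    y = []
    for i in range(len(rowptr) - 1):
        acc = 0
        for k in range(rowptr[i], rowptr[i + 1]):
            acc += values[k] * x[colind[k]]   # delay the reduction
        y.append(acc % m)                     # reduce once per row
    return y

# 2x2 example over Z/7Z: A = [[1, 2], [0, 3]], x = (4, 5).
print(spmv_mod([0, 2, 3], [0, 1, 1], [1, 2, 3], [4, 5], 7))  # [0, 1]
```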

    Solving Sparse Integer Linear Systems

    We propose a new algorithm to solve sparse linear systems of equations over the integers. This algorithm is based on a p-adic lifting technique combined with the use of block matrices with structured blocks. It achieves sub-cubic complexity in terms of machine operations, subject to a conjecture on the effectiveness of certain sparse projections. A LinBox-based implementation of this algorithm is demonstrated, emphasizing the practical benefits of this new method over the previous state of the art.
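    For reference, the classical p-adic (Dixon) lifting loop that underlies such solvers looks as follows; this dense, unoptimized sketch is only meant to make the lifting structure concrete, and none of it is the paper's block-projection machinery:

```python
# Dixon p-adic lifting for A x = b over the integers (dense sketch).
# One inverse of A mod p, then each lifting step costs a few matrix-vector
# products; after k steps the solution is known mod p**k.

def solve_padic(A, b, p=10007, iters=30):
    n = len(A)
    # Inverse of A mod p by naive Gauss-Jordan elimination.
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        r = next(r for r in range(c, n) if M[r][c] % p)
        M[c], M[r] = M[r], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(v - f * w) % p for v, w in zip(M[r], M[c])]
    Cinv = [row[n:] for row in M]
    # Lifting: x = x_0 + x_1 p + x_2 p^2 + ..., updating an exact residual.
    x, pk, res = [0] * n, 1, list(b)
    for _ in range(iters):
        xi = [sum(Cinv[i][j] * res[j] for j in range(n)) % p
              for i in range(n)]
        x = [x[i] + xi[i] * pk for i in range(n)]
        res = [(res[i] - sum(A[i][j] * xi[j] for j in range(n))) // p  # exact
               for i in range(n)]
        pk *= p
    return x, pk   # solution mod p**iters; rational reconstruction omitted

A, b = [[2, 1], [1, 3]], [3, 4]           # integer solution x = (1, 1)
x, pk = solve_padic(A, b)
print([xi if xi < pk // 2 else xi - pk for xi in x])   # [1, 1]
```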

    Relaxing order basis computation

    The computation of an order basis (also called a sigma basis) is a fundamental tool for linear algebra with polynomial coefficients. Such a computation is one of the key ingredients in algorithms that reduce to polynomial matrix multiplication, as has been the case for column reduction or for minimal nullspace bases of polynomial matrices over a field. In this poster, we are interested in the application of order bases to the computation of minimal matrix generators of a linear matrix sequence. In particular, we focus on the linear matrix sequence used in the block Wiedemann algorithm.
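    In the scalar case (1-by-1 matrices), the minimal generator of a linear sequence is given by the classical Berlekamp-Massey algorithm; the sketch below (over a prime field, illustrative only) computes the connection polynomial that the order-basis approach generalizes to matrix sequences:

```python
# Berlekamp-Massey over GF(p): returns C with C[0] = 1 such that
# sum_j C[j] * s[n - j] == 0 (mod p) for all n >= len(C) - 1.

def berlekamp_massey(s, p):
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(s)):
        d = sum(C[j] * s[n - j] for j in range(L + 1)) % p  # discrepancy
        if d == 0:
            m += 1
            continue
        T, coef = C[:], d * pow(b, -1, p) % p
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for j, Bj in enumerate(B):
            C[j + m] = (C[j + m] - coef * Bj) % p
        if 2 * L <= n:            # the generator must grow
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C

# Fibonacci mod 101 satisfies s[n] = s[n-1] + s[n-2]:
print(berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21], 101))  # [1, 100, 100]
```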

    Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections

    Block projections were used in [Eberly et al. 2006] to obtain an efficient algorithm for finding solutions of sparse systems of linear equations. A bound of Õ(n^2.5) machine operations is obtained, assuming that the input matrix can be multiplied by a vector with constant-sized entries in Õ(n) machine operations. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections, which had only been conjectured. In this paper we establish the correctness of the algorithm from [Eberly et al. 2006] by proving the existence of efficient block projections over sufficiently large fields. We demonstrate the usefulness of these projections by deriving improved bounds for the cost of several matrix problems, considering in particular ``sparse'' matrices that can be multiplied by a vector using Õ(n) field operations. We show how to compute the inverse of a sparse matrix over a field F using an expected number of Õ(n^2.27) operations in F. A basis for the null space of a sparse matrix, and a certification of its rank, are obtained at the same cost. An application to Kaltofen and Villard's baby-steps/giant-steps algorithms for the determinant and Smith form of an integer matrix yields algorithms requiring Õ(n^2.66) machine operations. The derived algorithms are all probabilistic of the Las Vegas type.
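    To make the black-box model concrete: the matrix is touched only through matrix-vector products, and Wiedemann's scalar algorithm recovers a system solution from the minimal generator of a projected Krylov sequence (computed here by the berlekamp_massey sketch given above; the paper's block projections replace the single random vector u by a block of vectors). A hedged sketch, not the paper's algorithm:

```python
# Scalar Wiedemann solver over GF(p) in the black-box model: A is only
# available as the function apply_A.  Las Vegas: verify and retry if the
# random projection was unlucky.  Reuses berlekamp_massey from above.
import random

def blackbox_solve(apply_A, b, n, p):
    while True:
        u = [random.randrange(p) for _ in range(n)]
        seq, v = [], [bi % p for bi in b]
        for _ in range(2 * n + 1):      # projected Krylov sequence u^T A^i b
            seq.append(sum(ui * vi for ui, vi in zip(u, v)) % p)
            v = apply_A(v)
        c = berlekamp_massey(seq, p)    # connection polynomial, c[0] = 1
        while len(c) > 1 and c[-1] == 0:
            c.pop()
        L = len(c) - 1                  # min poly m(x) = sum_i c[L-i] x^i
        x, v = [0] * n, [bi % p for bi in b]
        for i in range(1, L + 1):       # accumulate sum_i m_i A^(i-1) b
            for k in range(n):
                x[k] = (x[k] + c[L - i] * v[k]) % p
            v = apply_A(v)
        inv = pow(-c[L] % p, -1, p)     # x = -(1/m_0) * accumulated sum
        x = [xi * inv % p for xi in x]
        if apply_A(x) == [bi % p for bi in b]:
            return x

# Example over GF(7): A = [[3, 0], [0, 5]] seen only as a black box.
A = [[3, 0], [0, 5]]
mv = lambda v: [sum(a * w for a, w in zip(row, v)) % 7 for row in A]
print(blackbox_solve(mv, [1, 1], 2, 7))   # [5, 3]: A [5,3] = [1,1] mod 7
```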

    Fast In-place Algorithms for Polynomial Operations: Division, Evaluation, Interpolation

    We consider space-saving versions of several important operations on univariate polynomials, namely power series inversion and division, division with remainder, multi-point evaluation, and interpolation. Now-classical results show that such problems can be solved in (nearly) the same asymptotic time as fast polynomial multiplication. However, these reductions, even when applied to an in-place variant of fast polynomial multiplication, yield algorithms which require at least a linear amount of extra space for intermediate results. We demonstrate new in-place algorithms for the aforementioned polynomial computations which require only constant extra space and achieve the same asymptotic running time as their out-of-place counterparts. We also provide a precise complexity analysis so that all constants are made explicit, parameterized by the space usage of the underlying multiplication algorithms.
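    For contrast with the in-place versions developed in the paper, here is the classical out-of-place reduction for one of these operations, power series inversion by Newton iteration, whose intermediate products are exactly the extra space the paper eliminates:

```python
# Power series inversion by Newton iteration: g <- g*(2 - f*g) mod x^k,
# doubling the precision k each round.  Out-of-place classical version;
# naive truncated multiplication used for clarity.
from fractions import Fraction

def mul_trunc(a, b, k):
    c = [Fraction(0)] * k
    for i, ai in enumerate(a[:k]):
        for j, bj in enumerate(b[:k - i]):
            c[i + j] += ai * bj
    return c

def series_inverse(f, n):
    """First n coefficients of 1/f, assuming f[0] != 0."""
    g, k = [Fraction(1, f[0])], 1
    while k < n:
        k = min(2 * k, n)
        fg = mul_trunc(f, g, k)                  # f*g = 1 + O(x^(k/2))
        t = [2 - fg[0]] + [-v for v in fg[1:]]   # 2 - f*g
        g = mul_trunc(g, t, k)                   # refined inverse mod x^k
    return g

# 1/(1 - x) = 1 + x + x^2 + ...
print(series_inverse([1, -1], 6))   # six coefficients, all equal to 1
```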

    Essentially Optimal Sparse Polynomial Multiplication

    We present a probabilistic algorithm to compute the product of two univariate sparse polynomials over a field with a number of bit operations that is quasi-linear in the size of the input and the output. Our algorithm works for any field of characteristic zero or larger than the degree. We mainly rely on sparse interpolation and on a new algorithm for verifying a sparse product, which also has quasi-linear time complexity. Using Kronecker substitution techniques, we extend our result to the multivariate case.
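    The verification idea admits a compact illustration: evaluate both sides at a random point and compare, which catches a wrong product with high probability by the usual degree argument. (This sketch conveys only the principle; the paper's verifier is more refined in order to reach quasi-linear bit complexity.)

```python
# Probabilistic verification of a sparse product c =? a*b over GF(p):
# a nonzero error polynomial of degree d has at most d roots, so a random
# evaluation point exposes a mismatch with probability >= 1 - d/(p-1).
import random

def eval_sparse(terms, r, p):
    """terms: list of (exponent, coefficient) pairs; value at r mod p."""
    return sum(c * pow(r, e, p) for e, c in terms) % p

def probably_equal_product(a, b, c, p, trials=20):
    for _ in range(trials):
        r = random.randrange(1, p)
        if eval_sparse(a, r, p) * eval_sparse(b, r, p) % p \
                != eval_sparse(c, r, p):
            return False   # certainly not equal
    return True            # equal with high probability

# (1 + x^1000)(1 + x^2000) = 1 + x^1000 + x^2000 + x^3000
a = [(0, 1), (1000, 1)]
b = [(0, 1), (2000, 1)]
c = [(0, 1), (1000, 1), (2000, 1), (3000, 1)]
print(probably_equal_product(a, b, c, 1_000_003))   # True
```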