    Faster arbitrary-precision dot product and matrix multiplication

    We present algorithms for real and complex dot product and matrix multiplication in arbitrary-precision floating-point and ball arithmetic. A low-overhead dot product is implemented on the level of GMP limb arrays; it is about twice as fast as previous code in MPFR and Arb at precision up to several hundred bits. Up to 128 bits, it is 3-4 times as fast, costing 20-30 cycles per term for floating-point evaluation and 40-50 cycles per term for balls. We handle large matrix multiplications even more efficiently via blocks of scaled integer matrices. The new methods are implemented in Arb and significantly speed up polynomial operations and linear algebra.
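The two ideas in the abstract — a dot product that accumulates exactly and rounds only once, and matrix multiplication through exact integer products of scaled matrices — can be sketched in pure Python. This is an illustrative toy (function names are mine), not the Arb implementation: the real code works on GMP limb arrays, chooses exponents per block, and propagates ball error bounds, all of which this sketch omits.

```python
from fractions import Fraction

def scaled_dot(xs, ys, prec):
    """Dot product in the spirit of a limb-level accumulator: scale each
    operand to an integer at 2^-prec resolution, sum the products exactly
    in one big integer, and convert back only once at the end."""
    s = 1 << prec
    acc = sum(round(Fraction(x) * s) * round(Fraction(y) * s)
              for x, y in zip(xs, ys))
    return Fraction(acc, s * s)

def scaled_matmul(A, B, prec):
    """Multiply real matrices by scaling entries to integers, performing
    an exact integer matrix product, then rescaling the result."""
    s = 1 << prec
    Ai = [[round(Fraction(a) * s) for a in row] for row in A]
    Bi = [[round(Fraction(b) * s) for b in row] for row in B]
    cols = list(zip(*Bi))
    Ci = [[sum(a * b for a, b in zip(row, col)) for col in cols]
          for row in Ai]
    return [[Fraction(c, s * s) for c in row] for row in Ci]
```

Because the inner products are computed over exact integers, the only rounding error is the initial quantization of each entry — the same reason the paper's blocked integer multiplication can outperform entrywise arbitrary-precision arithmetic.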

    Arbitrary-precision computation of the gamma function

    We discuss the best methods available for computing the gamma function Γ(z) in arbitrary-precision arithmetic with rigorous error bounds. We address different cases: rational, algebraic, real or complex arguments; large or small arguments; low or high precision; with or without precomputation. The methods also cover the log-gamma function log Γ(z), the digamma function ψ(z), and derivatives Γ^(n)(z) and ψ^(n)(z). Besides attempting to summarize the existing state of the art, we present some new formulas, estimates, bounds and algorithmic improvements and discuss implementation results.
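The survey covers many regimes; as a small self-contained illustration of one classical method in this family — the Stirling asymptotic series with upward argument shifting — here is a Python sketch using the stdlib decimal module. Function names are mine, and unlike the methods in the paper it tracks no rigorous error bound.

```python
from decimal import Decimal, getcontext
from fractions import Fraction
import math

# First Bernoulli numbers B_2, B_4, ..., B_12 for the Stirling series
BERNOULLI = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42),
             Fraction(-1, 30), Fraction(5, 66), Fraction(-691, 2730)]

def dec_pi():
    """pi to the current Decimal precision (recipe from the decimal docs)."""
    getcontext().prec += 2
    lasts, t, s, n, na, d, da = 0, Decimal(3), 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

def log_gamma(z, digits=30):
    """log Gamma(z) for real z > 0 via the Stirling series with upward
    argument shifting (a sketch; no rigorous error bound is tracked)."""
    getcontext().prec = digits + 10
    z = Decimal(z)
    shift = 0
    while z < 20:            # push z into the asymptotic regime
        shift += 1
        z += 1
    # (z - 1/2) ln z - z + ln(2 pi)/2 + sum B_2n / (2n (2n-1) z^(2n-1))
    s = (z - Decimal(1) / 2) * z.ln() - z + (2 * dec_pi()).ln() / 2
    zpow = z
    for n, b in enumerate(BERNOULLI, start=1):
        s += Decimal(b.numerator) / (Decimal(b.denominator)
                                     * 2 * n * (2 * n - 1) * zpow)
        zpow *= z * z
    for _ in range(shift):   # undo the shift: Gamma(z) = Gamma(z+1) / z
        z -= 1
        s -= z.ln()
    return s
```

Sanity checks: log_gamma(5) should agree with ln(4!) = ln 24, and log_gamma(0.5) with ln √π. Truncating the series at B_12 after shifting z above 20 leaves an error far below typical working precision here, but a rigorous implementation (as in the paper) must bound the tail explicitly.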