
    Quasi-GCD computations

    For univariate polynomials with real or complex coefficients and a given error bound ε > 0, h is called a quasi-gcd of f and g if h is an ε-approximate divisor of f and of g and if any (exact) common divisor of f, g is an approximate divisor of h. Extended quasi-gcd computation means finding such an h together with cofactors u, v such that |uf + vg − h| < ε|h| holds. Suitable “pivoting” leads to a numerically stable version of Euclid's algorithm for solving this task. Further refinements by a divide-and-conquer technique and by fast algorithms for polynomial arithmetic then yield the worst-case upper bound O(n² lg n (lg(1/ε) + n lg n)) of “pointer time” for nth-degree polynomials. In the particular case of integer polynomials, however, an immediate reduction to fast integer gcd computation is recommended instead.
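    As a concrete reference point, here is a minimal sketch of the underlying idea only, not the paper's pivoted, divide-and-conquer algorithm: run Euclid's remainder sequence on floating-point coefficient vectors and stop once a remainder is negligible relative to the current divisor. The function name quasi_gcd and the tolerance handling are illustrative choices, not taken from the paper.

```python
# Naive tolerance-based Euclidean remainder sequence for approximate
# polynomial GCD (illustrative only; NOT Schönhage's stabilised algorithm).
import numpy as np

def quasi_gcd(f, g, eps=1e-9):
    """f, g: coefficient arrays, highest degree first. Returns a monic
    polynomial that approximately divides both inputs, in the loose
    remainder-below-tolerance sense used here."""
    a = np.trim_zeros(np.asarray(f, float), 'f')
    b = np.trim_zeros(np.asarray(g, float), 'f')
    if len(a) < len(b):
        a, b = b, a
    while len(b) > 1:
        _, r = np.polydiv(a, b)
        r = np.trim_zeros(r, 'f')
        # treat a remainder that is tiny relative to b as approximately zero
        if len(r) == 0 or np.linalg.norm(r) < eps * np.linalg.norm(b):
            return b / b[0]            # normalise to a monic approximate gcd
        a, b = b, r
    return np.array([1.0])             # approximately coprime

# (x-1)(x-2) and (x-1)(x-3), the first slightly perturbed: quasi-gcd ~ x - 1
f = np.polymul([1, -1], [1, -2]) + 1e-12
g = np.polymul([1, -1], [1, -3])
print(quasi_gcd(f, g, eps=1e-6))       # ~ [1, -1]
```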

    An O(M(n) log n) algorithm for the Jacobi symbol

    The best known algorithm to compute the Jacobi symbol of two n-bit integers runs in time O(M(n) log n), using Schönhage's fast continued fraction algorithm combined with an identity due to Gauss. We give a different O(M(n) log n) algorithm based on the binary recursive gcd algorithm of Stehlé and Zimmermann. Our implementation, which to our knowledge is the first to run in time O(M(n) log n), is faster than GMP's quadratic implementation for inputs larger than about 10000 decimal digits. (Comment: Submitted to ANTS IX, Nancy, July 2010.)
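    For contrast with the subquadratic method described above, the following is the classical quadratic-time binary Jacobi algorithm, essentially the kind of implementation the abstract compares against; it is a standard textbook routine, not the paper's algorithm.

```python
# Classical binary Jacobi symbol algorithm (quadratic in the bit size).
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):        # (2/n) = -1 iff n = ±3 (mod 8)
                result = -result
        a, n = n, a                    # quadratic reciprocity for Jacobi symbols
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

assert jacobi(1001, 9907) == -1        # small sanity check
```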

    Finding the median

    An algorithm is described which determines the median of n elements using, in the worst case, a number of comparisons asymptotic to 3n.
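    For orientation, here is the classical median-of-medians selection routine, which achieves O(n) worst-case time with a larger comparison constant; it is a textbook baseline, not the roughly-3n-comparison algorithm of the paper.

```python
# Deterministic linear-time selection via medians of groups of five.
def select(items, k):
    """Return the k-th smallest element (0-based) of items in O(n) worst case."""
    items = list(items)
    if len(items) <= 5:
        return sorted(items)[k]
    # median of each group of five, then recurse to pick a good pivot
    medians = [sorted(items[i:i + 5])[len(items[i:i + 5]) // 2]
               for i in range(0, len(items), 5)]
    pivot = select(medians, len(medians) // 2)
    lower = [x for x in items if x < pivot]
    upper = [x for x in items if x > pivot]
    equal = len(items) - len(lower) - len(upper)
    if k < len(lower):
        return select(lower, k)
    if k < len(lower) + equal:
        return pivot
    return select(upper, k - len(lower) - equal)

def median(items):
    return select(items, (len(items) - 1) // 2)

print(median([9, 1, 7, 3, 5, 8, 2]))   # 5
```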

    How Fast Can We Multiply Large Integers on an Actual Computer?

    We provide two complexity measures that can be used to measure the running time of algorithms for multiplying long integers. The random access machine with unit or logarithmic cost is not adequate for measuring the complexity of a task like multiplication of long integers. The Turing machine is more useful here, but it fails to take into account the multiplication instruction for short integers, which is available on physical computing devices. An interesting outcome is that the proposed refined complexity measures do not rank the well known multiplication algorithms the same way as the Turing machine model. (Comment: To appear in the proceedings of Latin 2014, Springer LNCS 839.)
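    As one of the well known multiplication algorithms whose relative ranking depends on the machine model, here is a short Karatsuba sketch; the cutoff and names are illustrative choices, not anything from the paper.

```python
# Karatsuba multiplication: ~O(n^1.585) digit operations instead of O(n^2).
def karatsuba(x, y):
    """Multiply non-negative integers by splitting them in half."""
    if x < 2**64 or y < 2**64:                  # small operands: hardware multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(hi_x, hi_y)
    z0 = karatsuba(lo_x, lo_y)
    z1 = karatsuba(hi_x + lo_x, hi_y + lo_y) - z2 - z0   # the Karatsuba trick
    return (z2 << (2 * m)) + (z1 << m) + z0

import random
a, b = random.getrandbits(4096), random.getrandbits(4096)
assert karatsuba(a, b) == a * b
```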

    Elliptic periods for finite fields

    We construct two new families of bases for finite field extensions. Bases in the first family, the so-called elliptic bases, are not quite normal bases, but they allow very fast Frobenius exponentiation while preserving sparse multiplication formulas. Bases in the second family, the so-called normal elliptic bases, are normal bases and allow fast (quasi-linear) arithmetic. We prove that all extensions admit models of this kind.

    Accelerating the CM method

    Given a prime q and a negative discriminant D, the CM method constructs an elliptic curve E/F_q by obtaining a root of the Hilbert class polynomial H_D(X) modulo q. We consider an approach based on a decomposition of the ring class field defined by H_D, which we adapt to a CRT setting. This yields two algorithms, each of which obtains a root of H_D mod q without necessarily computing any of its coefficients. Heuristically, our approach uses asymptotically less time and space than the standard CM method for almost all D. Under the GRH, and reasonable assumptions about the size of log q relative to |D|, we achieve a space complexity of O((m+n) log q) bits, where mn = h(D), which may be as small as O(|D|^(1/4) log q). The practical efficiency of the algorithms is demonstrated using |D| > 10^16 and q ~ 2^256, and also |D| > 10^15 and q ~ 2^33220. These examples are both an order of magnitude larger than the best previous results obtained with the CM method. (Comment: 36 pages, minor edits, to appear in the LMS Journal of Computation and Mathematics.)
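    The final step alluded to above, turning a root of H_D mod q into a curve, can be illustrated with toy values. The discriminant D = -7 (with the precomputed class polynomial H_{-7}(X) = X + 3375) and the prime q = 11 below are hypothetical demo data chosen so that 4q = t² + 7v²; they are not the paper's record-size examples.

```python
# Toy CM construction: root of H_D mod q -> curve with that j-invariant.
q, D, t = 11, -7, 4                      # demo values: 4*11 = 4^2 + 7*2^2
j = (-3375) % q                          # the unique root of H_{-7} mod q

# standard construction of y^2 = x^3 + ax + b with prescribed j-invariant
k = j * pow(1728 - j, -1, q) % q
a, b = 3 * k % q, 2 * k % q

def point_count(a, b, q):
    """Naive count of points on y^2 = x^3 + ax + b over F_q (q an odd prime)."""
    count = 1                            # the point at infinity
    for x in range(q):
        f = (x**3 + a * x + b) % q
        if f == 0:
            count += 1                   # single point with y = 0
        elif pow(f, (q - 1) // 2, q) == 1:
            count += 2                   # f is a square: two y values
    return count

# the curve or its quadratic twist has q + 1 - t points, as CM predicts
assert point_count(a, b, q) in (q + 1 - t, q + 1 + t)
print(a, b, point_count(a, b, q))        # 5 7 16
```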

    Combining All Pairs Shortest Paths and All Pairs Bottleneck Paths Problems

    We introduce a new problem that combines the well known All Pairs Shortest Paths (APSP) problem and the All Pairs Bottleneck Paths (APBP) problem to compute the shortest paths for all pairs of vertices for all possible flow amounts. We call this new problem the All Pairs Shortest Paths for All Flows (APSP-AF) problem. We firstly solve the APSP-AF problem on directed graphs with unit edge costs and real edge capacities in Õ(√t · n^((ω+9)/4)) = Õ(√t · n^2.843) time, where n is the number of vertices, t is the number of distinct edge capacities (flow amounts) and O(n^ω) < O(n^2.373) is the time taken to multiply two n-by-n matrices over a ring. Secondly we extend the problem to graphs with positive integer edge costs and present an algorithm with Õ(√t · c^((ω+5)/4) · n^((ω+9)/4)) = Õ(√t · c^1.843 · n^2.843) worst case time complexity, where c is the upper bound on edge costs.
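    A brute-force baseline makes the APSP-AF problem statement concrete: for each distinct capacity value, discard smaller-capacity edges and run ordinary unit-cost APSP by BFS. This sketch only spells out the problem being solved; the paper's algorithms reach the Õ(·) bounds above via fast matrix multiplication.

```python
# Baseline APSP-AF: one unit-cost APSP per distinct capacity threshold.
from collections import deque

def apsp_af(n, edges):
    """edges: list of directed (u, v, capacity) with unit cost.
    Returns {t: dist} where dist[u][v] is the shortest path length
    using only edges of capacity >= t (None if unreachable)."""
    result = {}
    for t in sorted({c for _, _, c in edges}):
        adj = [[] for _ in range(n)]
        for u, v, c in edges:
            if c >= t:                              # keep only wide-enough edges
                adj[u].append(v)
        dist = [[None] * n for _ in range(n)]
        for s in range(n):                          # BFS from every source
            dist[s][s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if dist[s][v] is None:
                        dist[s][v] = dist[s][u] + 1
                        queue.append(v)
        result[t] = dist
    return result

# small flow 0->2 goes directly, but flow 3 must take the longer wide path
d = apsp_af(3, [(0, 1, 3), (1, 2, 3), (0, 2, 1)])
print(d[1][0][2], d[3][0][2])                       # 1 2
```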

    Rapid computation of L-functions for modular forms

    Let f be a fixed (holomorphic or Maass) modular cusp form, with L-function L(f,s). We describe an algorithm that computes the value L(f, 1/2 + iT) to any specified precision in time O(1 + |T|^(7/8)).

    A Randomized Sublinear Time Parallel GCD Algorithm for the EREW PRAM

    We present a randomized parallel algorithm that computes the greatest common divisor of two n-bit integers and, with probability 1-o(1), takes O(n log log n / log n) expected time using n^(6+ε) processors on the EREW PRAM parallel model of computation. We believe this to be the first randomized sublinear time algorithm on the EREW PRAM for this problem.
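    For scale, the sequential baseline such parallel algorithms are measured against is the standard binary GCD sketched below; this is textbook material, not the paper's randomized EREW PRAM construction.

```python
# Standard sequential binary GCD: shifts, comparisons and subtractions only.
import math

def binary_gcd(a, b):
    """gcd of two non-negative integers."""
    if a == 0 or b == 0:
        return a | b
    shift = ((a | b) & -(a | b)).bit_length() - 1    # common power of two
    a >>= shift
    b >>= shift
    a >>= (a & -a).bit_length() - 1                  # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1              # make b odd
        if a > b:
            a, b = b, a
        b -= a                                       # both odd, so the difference is even
    return a << shift

assert binary_gcd(123456789, 987654321) == math.gcd(123456789, 987654321)
```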