
    Chunky and Equal-Spaced Polynomial Multiplication

    Get PDF
    Finding the product of two polynomials is an essential and basic problem in computer algebra. While most previous results have focused on worst-case complexity, we instead employ the technique of adaptive analysis to give an improvement in many "easy" cases. We present two adaptive measures and corresponding methods for polynomial multiplication, and also show how to combine them effectively to gain both advantages. One useful feature of these algorithms is that they essentially provide a gradient between existing "sparse" and "dense" methods. We prove that these approaches provide significant improvements in many cases, while in the worst case remaining comparable to the fastest existing algorithms.
    Comment: 23 pages, pdflatex, accepted to the Journal of Symbolic Computation (JSC)
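    To make the "gradient" concrete, here is a minimal sketch of a chunky representation and its multiplication, assuming a toy encoding where a polynomial is a list of (offset, dense coefficient list) chunks; the names and encoding are illustrative, not the paper's actual data structure.

```python
# Toy "chunky" multiplication: each polynomial is a list of chunks
# (offset, coeffs), where coeffs is a dense coefficient list starting
# at degree `offset`. A very sparse polynomial becomes many tiny
# chunks, a dense one a single large chunk, giving a gradient between
# the two extremes.

def dense_mul(a, b):
    """Schoolbook product of two dense coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def chunky_mul(f, g):
    """Multiply chunky polynomials f and g; result as {degree: coeff}."""
    result = {}
    for off_f, cf in f:
        for off_g, cg in g:
            prod = dense_mul(cf, cg)   # dense product of one chunk pair
            base = off_f + off_g       # exponents add
            for k, c in enumerate(prod):
                if c:
                    result[base + k] = result.get(base + k, 0) + c
    return result

# (3 + 2x) * (x^5 + 4x^6)  ->  3x^5 + 14x^6 + 8x^7
f = [(0, [3, 2])]
g = [(5, [1, 4])]
print(sorted(chunky_mul(f, g).items()))
```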

    High Performance Sparse Multivariate Polynomials: Fundamental Data Structures and Algorithms

    Get PDF
    Polynomials may be represented sparsely in an effort to conserve memory and provide a succinct, natural representation. Moreover, polynomials which are themselves sparse (having very few non-zero terms) waste memory and computation time if represented, and operated on, densely. This waste is exacerbated as the number of variables increases. We provide practical implementations of sparse multivariate data structures focused on data locality and cache complexity. Using these sparse data structures, we develop high-performance algorithms and implementations of fundamental polynomial operations such as arithmetic (addition, subtraction, multiplication, and division) and interpolation. We revisit a sparse arithmetic scheme introduced by Johnson in 1974, adapting and optimizing these algorithms for modern computer architectures, with our implementations over the integers and rational numbers vastly outperforming the current widespread implementations. We develop a new algorithm for sparse pseudo-division based on the sparse polynomial division algorithm, with very encouraging results. Polynomial interpolation is explored through univariate, dense multivariate, and sparse multivariate methods. Arithmetic and interpolation together form a solid high-performance foundation from which many higher-level and more interesting algorithms can be built.
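    Johnson's scheme can be sketched as a heap-based merge that streams the partial products in exponent order, never materializing all pairwise term products at once. A minimal univariate sketch in Python (the thesis itself treats multivariate terms with carefully tuned data structures):

```python
import heapq

def sparse_mul(f, g):
    """Heap-based sparse product in the style of Johnson (1974).

    f, g: lists of (exponent, coeff) sorted by increasing exponent.
    Returns the product in the same form. The heap holds one pending
    product f[i]*g[j] per stream i, so its size is at most len(f).
    """
    if not f or not g:
        return []
    # One stream per term of f; each stream walks through g in order.
    heap = [(f[i][0] + g[0][0], i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    result = []
    while heap:
        exp, i, j = heapq.heappop(heap)
        coeff = f[i][1] * g[j][1]
        if result and result[-1][0] == exp:
            result[-1] = (exp, result[-1][1] + coeff)  # combine like terms
            if result[-1][1] == 0:
                result.pop()
        else:
            result.append((exp, coeff))
        if j + 1 < len(g):                             # advance stream i
            heapq.heappush(heap, (f[i][0] + g[j + 1][0], i, j + 1))
    return result

# (1 + x^3) * (2 + x^3) = 2 + 3x^3 + x^6
print(sparse_mul([(0, 1), (3, 1)], [(0, 2), (3, 1)]))
```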

    On exact division and divisibility testing for sparse polynomials

    Full text link
    No polynomial-time algorithm is known to test whether a sparse polynomial G divides another sparse polynomial F. While the quotient Q = F quo G can be computed in polynomial time with respect to the sparsities of F, G and Q, this is not yet sufficient to get a polynomial-time divisibility test in general. Indeed, the sparsity of the quotient Q can be exponentially larger than those of F and G. In the favorable case where the sparsity #Q of the quotient is polynomial, the best known algorithm to compute Q has a non-linear factor #G · #Q in its complexity, which is not optimal. In this work, we are interested in both aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of F, G and Q. Our approach relies on sparse interpolation and works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial-time algorithm when the divisor has a specific shape. More precisely, we reduce the problem to finding a polynomial S such that QS is sparse and testing divisibility by S can be done in polynomial time. We identify structural patterns in the divisor G for which such a polynomial S can be computed efficiently.
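    For context, the classical school-book quotient (the algorithm whose cost carries the #G · #Q factor mentioned above) is short enough to sketch. This is a toy univariate version over the rationals, not the paper's interpolation-based algorithm:

```python
from fractions import Fraction

def exact_div(F, G):
    """Classical sparse exact division: returns Q with F = Q*G.

    F, G: dicts {exponent: coeff}. Raises if the division is not exact.
    Each quotient term comes from the current leading term of the
    remainder, and producing it touches every term of G, which is the
    #G * #Q cost that the quasi-linear algorithm avoids.
    """
    R = {e: Fraction(c) for e, c in F.items() if c}
    gexp = max(G)                      # leading exponent of G
    gc = Fraction(G[gexp])
    Q = {}
    while R:
        rexp = max(R)                  # leading exponent of remainder
        if rexp < gexp:
            raise ValueError("not divisible")
        qe, qc = rexp - gexp, R[rexp] / gc
        Q[qe] = qc
        for e, c in G.items():         # subtract qc*x^qe * G from R
            ne = e + qe
            nc = R.get(ne, Fraction(0)) - qc * c
            if nc:
                R[ne] = nc
            else:
                R.pop(ne, None)
    return Q

# (x^4 - 1) / (x^2 - 1) = x^2 + 1
print(exact_div({4: 1, 0: -1}, {2: 1, 0: -1}))
```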

    Highly Scalable Multiplication for Distributed Sparse Multivariate Polynomials on Many-core Systems

    Full text link
    We present a highly scalable algorithm for multiplying sparse multivariate polynomials represented in a distributed format. This algorithm targets not only shared-memory multicore computers but also computer clusters and specialized hardware attached to a host computer, such as graphics processing units or many-core coprocessors. Scalability to a large number of cores is ensured by the absence of synchronization, locks, and false sharing during the main parallel step.
    Comment: 15 pages, 5 figures
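    A toy sketch of the underlying pattern: split one operand's terms across workers, let each compute an independent partial product with no shared state, and merge at the end. This uses a Python process pool rather than the paper's lock-free many-core machinery, and the helper names are illustrative:

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

def partial_product(args):
    """One worker's share: multiply a slice of f against all of g."""
    f_slice, g = args
    out = Counter()
    for ef, cf in f_slice:
        for eg, cg in g:
            out[ef + eg] += cf * cg
    return out

def parallel_sparse_mul(f, g, workers=4):
    """Split f's terms into `workers` chunks, multiply independently,
    then merge. No locks are needed during the parallel step because
    the partial products never share state."""
    chunks = [f[i::workers] for i in range(workers)]
    result = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(partial_product, [(c, g) for c in chunks]):
            result.update(part)        # merging is the only serial step
    return sorted((e, c) for e, c in result.items() if c)

if __name__ == "__main__":
    f = [(0, 1), (3, 2), (7, 1)]
    g = [(1, 5), (4, -1)]
    print(parallel_sparse_mul(f, g))
```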

    FORM version 4.0

    Full text link
    We present version 4.0 of the symbolic manipulation system FORM. The most important new features are the manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands have also been added; some of them are very general, while others are designed for building specific high-level packages, such as one for Groebner bases. Also new is the checkpoint facility, which allows for periodic backups during long calculations. Lastly, FORM 4.0 has become available as open source under the GNU General Public License version 3.
    Comment: 26 pages. Uses axodraw

    Some Recent Advances in Network Flows

    Get PDF
    The literature on network flow problems is extensive, and over the past 40 years researchers have made continuous improvements to algorithms for solving several classes of problems. However, the surge of activity on the algorithmic aspects of network flow problems over the past few years has been particularly striking. Several techniques have proven very successful in enabling these recent contributions: (i) scaling of the problem data; (ii) improved analysis of algorithms, especially amortized average-case performance and the use of potential functions; and (iii) enhanced data structures. In this survey, we illustrate some of these techniques and their usefulness in developing faster network flow algorithms. Our discussion focuses on the design of faster algorithms from the worst-case perspective, and we limit it to the following fundamental problems: the shortest path problem, the maximum flow problem, and the minimum cost flow problem. We consider several representative algorithms from each problem class, including the radix heap algorithm for the shortest path problem, preflow push algorithms for the maximum flow problem, and pseudoflow push algorithms for the minimum cost flow problem.
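    As one concrete example from the survey's scope, the basic generic preflow-push (push-relabel) maximum-flow method fits in a short sketch; this is the plain variant, without the ordering rules and scaling refinements that yield the faster bounds discussed in the survey:

```python
def max_flow_push_relabel(cap, s, t):
    """Generic preflow-push maximum flow.

    cap: dict-of-dicts of capacities, cap[u][v] >= 0.
    Returns the value of the maximum flow from s to t.
    """
    nodes = set(cap)
    for u in cap:
        nodes.update(cap[u])
    nodes = list(nodes)
    n = len(nodes)
    # Residual capacities, node heights, and excess flow.
    res = {u: dict(cap.get(u, {})) for u in nodes}
    for u in nodes:
        for v in nodes:
            res[u].setdefault(v, 0)
    height = {u: 0 for u in nodes}
    excess = {u: 0 for u in nodes}
    height[s] = n
    for v in nodes:               # saturate all edges out of the source
        if res[s][v] > 0:
            excess[v] += res[s][v]
            res[v][s] += res[s][v]
            res[s][v] = 0
    active = [u for u in nodes if u not in (s, t) and excess[u] > 0]
    while active:
        u = active[-1]
        pushed = False
        for v in nodes:           # look for an admissible edge (u, v)
            if res[u][v] > 0 and height[u] == height[v] + 1:
                d = min(excess[u], res[u][v])   # push excess downhill
                res[u][v] -= d
                res[v][u] += d
                excess[u] -= d
                excess[v] += d
                if v not in (s, t) and excess[v] > 0 and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if excess[u] == 0:
            active.pop()
        elif not pushed:          # no admissible edge: relabel (lift) u
            height[u] = 1 + min(height[v] for v in nodes if res[u][v] > 0)
    return excess[t]

cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
print(max_flow_push_relabel(cap, 's', 't'))   # 5
```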

    Extremal Optimization for Graph Partitioning

    Full text link
    Extremal optimization is a new general-purpose method for approximating solutions to hard optimization problems. We study the method in detail by way of the NP-hard graph partitioning problem. We discuss the scaling behavior of extremal optimization, focusing on the convergence of the average run as a function of runtime and system size. The method has a single free parameter, which we determine numerically and justify using a simple argument. Our numerical results demonstrate that on random graphs, extremal optimization maintains consistent accuracy for increasing system sizes, with an approximation error decreasing over runtime roughly as a power law t^(-0.4). On geometrically structured graphs, the scaling of results from the average run suggests that these are far from optimal, with large fluctuations between individual trials. But when only the best runs are considered, results consistent with theoretical arguments are recovered.
    Comment: 34 pages, RevTex4, 1 table and 20 ps-figures included; related papers available at http://www.physics.emory.edu/faculty/boettcher
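    The tau-EO update rule itself is simple to sketch: rank vertices from worst fitness to best, select rank k with probability proportional to k^(-tau), and unconditionally move the selected vertex (here, swapping it with a vertex of the opposite part to keep the bisection balanced). The parameter value, fitness definition, and uniform choice of swap partner below are illustrative simplifications, not the paper's exact setup:

```python
import random

def tau_eo_bipartition(adj, tau=1.4, steps=5000, seed=0):
    """Toy tau-EO for balanced graph bipartitioning.

    adj: dict vertex -> set of neighbours. Each step, vertices are
    ranked by fitness (fraction of a vertex's edges kept inside its
    own part); a low rank is favoured with probability ~ k^(-tau) and
    the chosen vertex is swapped with a random vertex of the other
    part. Moves are always accepted; only the best bisection is kept.
    """
    rng = random.Random(seed)
    verts = sorted(adj)
    side = {v: i % 2 for i, v in enumerate(verts)}   # balanced start

    def fitness(v):
        deg = len(adj[v]) or 1
        return sum(1 for u in adj[v] if side[u] == side[v]) / deg

    def cut_size():
        return sum(1 for v in adj for u in adj[v]
                   if u > v and side[u] != side[v])

    best_cut, best_side = cut_size(), dict(side)
    weights = [(k + 1) ** -tau for k in range(len(verts))]
    for _ in range(steps):
        ranked = sorted(verts, key=fitness)          # worst fitness first
        v = ranked[rng.choices(range(len(verts)), weights)[0]]
        u = rng.choice([w for w in verts if side[w] != side[v]])
        side[v], side[u] = side[u], side[v]          # balance-preserving swap
        cut = cut_size()
        if cut < best_cut:
            best_cut, best_side = cut, dict(side)
    return best_cut, best_side

# Two 4-cliques joined by one edge: the optimal balanced cut is 1.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
       4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6}}
print(tau_eo_bipartition(adj))
```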