
    Efficient Identity Testing and Polynomial Factorization in Nonassociative Free Rings

    In this paper we study arithmetic computations in the nonassociative, noncommutative free polynomial ring F{X}. Prior to this work, nonassociative arithmetic computation was considered by Hrubes, Wigderson, and Yehudayoff, who showed lower bounds and proved completeness results. We consider Polynomial Identity Testing and Polynomial Factorization in F{X} and show the following results. 1. Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give a deterministic polynomial-time algorithm to decide whether f is identically zero. Our result is obtained by a suitable adaptation of the PIT algorithm of Raz and Shpilka for noncommutative ABPs. 2. Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give a deterministic polynomial-time algorithm to compute circuits for the irreducible factors of f when F is the field of rationals. Over finite fields of characteristic p, our algorithm runs in time polynomial in the input size and p.
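
    As a point of reference only (this is not the paper's Raz-Shpilka-style circuit algorithm), the sketch below expands a nonassociative, noncommutative expression into canonical form and tests whether every coefficient vanishes. The representation of monomials as nested pairs and all function names are illustrative choices, and the expansion is exponential in general, which is exactly the blow-up the paper's algorithm avoids.

```python
# Brute-force zero testing in the free nonassociative ring F{X}: a polynomial
# is stored as a map from fully parenthesized monomials to coefficients.
from collections import defaultdict

def var(name):
    """A single variable has one monomial: the leaf `name`."""
    return {name: 1}

def add(f, g):
    h = defaultdict(int)
    for m, c in list(f.items()) + list(g.items()):
        h[m] += c
    return {m: c for m, c in h.items() if c != 0}

def mul(f, g):
    # Nonassociative product: each product monomial remembers the
    # parenthesization as a nested pair (left, right).
    h = defaultdict(int)
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            h[(m1, m2)] += c1 * c2
    return {m: c for m, c in h.items() if c != 0}

def is_identically_zero(f):
    return len(f) == 0

# Example: (x*y)*z and x*(y*z) are *different* monomials in F{X},
# so their difference is not identically zero.
x, y, z = var("x"), var("y"), var("z")
lhs = mul(mul(x, y), z)
rhs = mul(x, mul(y, z))
print(is_identically_zero(add(lhs, {m: -c for m, c in rhs.items()})))  # False
```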

    On the complexity of polynomial reduction

    In this paper, we present a new algorithm for reducing a multivariate polynomial with respect to an autoreduced tuple of other polynomials. In a suitable sparse complexity model, it is shown that the execution time is essentially the same (up to a logarithmic factor) as the time needed to verify that the result is correct. This is a first step towards taking advantage of fast sparse polynomial arithmetic for the computation of Gröbner bases.
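
    For readers unfamiliar with the operation being accelerated, here is a minimal sketch of classical multivariate polynomial reduction using SymPy's `reduced` routine. The paper's contribution is a faster algorithm in a sparse complexity model, not this textbook procedure, and the particular polynomials below are an arbitrary illustrative example.

```python
# Reduce f with respect to a tuple G: f = q1*G[0] + q2*G[1] + r, where no
# term of r is divisible by the leading term of any element of G.
from sympy import symbols, reduced, expand

x, y = symbols('x y')
f = x**3 * y + x * y**2 + y + 1
G = [x * y - 1, y**2 - 1]          # the tuple we reduce against

quotients, remainder = reduced(f, G, x, y, order='lex')

# Sanity check of the division identity.
assert expand(sum(q * g for q, g in zip(quotients, G)) + remainder - f) == 0
print(remainder)
```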

    On the efficient parallel computation of Legendre transforms

    In this article, we discuss a parallel implementation of efficient algorithms for the computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the accuracy, efficiency, and scalability of our implementation. The algorithms were implemented in ANSI C using the BSPlib communications library. We also present a new algorithm for computing the cosine transform of two vectors at the same time.
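
    The sketch below is a naive O(N^2) discrete Legendre transform, shown only to make concrete the sums that Driscoll-Healy-type algorithms such as the one discussed here compute in roughly O(N log^2 N) time. The choice of Gauss-Legendre nodes and weights is an assumption for the example, not necessarily the sampling used in the paper.

```python
# Naive discrete Legendre transform via the three-term recurrence.
import numpy as np

def naive_legendre_transform(f_vals, nodes, weights, lmax):
    """Return c_l = sum_j w_j * f(x_j) * P_l(x_j) for l = 0..lmax."""
    n = len(nodes)
    P = np.zeros((lmax + 1, n))
    P[0] = 1.0
    if lmax >= 1:
        P[1] = nodes
    for l in range(1, lmax):
        # Bonnet's recurrence: (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}
        P[l + 1] = ((2 * l + 1) * nodes * P[l] - l * P[l - 1]) / (l + 1)
    return P @ (weights * f_vals)

nodes, weights = np.polynomial.legendre.leggauss(16)
coeffs = naive_legendre_transform(nodes**3, nodes, weights, lmax=5)
print(np.round(coeffs, 6))   # x^3: only the l = 1 and l = 3 coefficients are nonzero
```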

    On the expressive power of planar perfect matching and permanents of bounded treewidth matrices

    Some 25 years ago, Valiant introduced an algebraic model of computation along with the complexity classes VP and VNP, which can be viewed as analogues of the classical classes P and NP. They are defined using non-uniform sequences of arithmetic circuits and provide a framework for studying the complexity of sequences of polynomials. Prominent examples of difficult (that is, VNP-complete) problems in this model include the permanent and Hamiltonian polynomials. While the permanent and Hamiltonian polynomials are in general difficult to evaluate, there has been research on which special cases of these polynomials admit efficient evaluation. For instance, Barvinok has shown that if the underlying matrix has bounded rank, both the permanent and the Hamiltonian polynomial can be evaluated in polynomial time, and thus are in VP. Courcelle, Makowsky and Rotics have shown that for matrices of bounded treewidth several difficult problems (including evaluating the permanent and Hamiltonian polynomials) can be solved efficiently. An earlier result of this flavour is Kasteleyn's theorem, which states that the sum of weights of perfect matchings of a planar graph can be computed in polynomial time, and thus is also in VP. For general graphs this problem is VNP-complete. In this paper we investigate the expressive power of the above results. We show that the permanent and Hamiltonian polynomials for matrices of bounded treewidth are both equivalent to arithmetic formulas. Also, arithmetic weakly skew circuits are shown to be equivalent to the sum of weights of perfect matchings of planar graphs.
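
    For comparison with the bounded-treewidth results above, the following sketch evaluates the permanent of a general matrix with Ryser's inclusion-exclusion formula, which takes exponential time; the paper's point is that for bounded-treewidth matrices the permanent polynomial collapses to a polynomial-size arithmetic formula. The code is a standard textbook baseline, not a construction from the paper.

```python
# Ryser's formula: perm(A) = (-1)^n * sum over nonempty S of
# (-1)^|S| * prod_i (sum_{j in S} a_ij).  Runs in O(2^n * n^2).
from itertools import combinations

def permanent(A):
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```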

    High Performance Sparse Multivariate Polynomials: Fundamental Data Structures and Algorithms

    Polynomials may be represented sparsely in an effort to conserve memory usage and provide a succinct and natural representation. Moreover, polynomials which are themselves sparse (that is, have very few non-zero terms) waste memory and computation time if represented, and operated on, densely. This waste is exacerbated as the number of variables increases. We provide practical implementations of sparse multivariate data structures focused on data locality and cache complexity. Using these sparse data structures, we develop high-performance algorithms and implementations of fundamental polynomial operations such as arithmetic (addition, subtraction, multiplication, and division) and interpolation. We revisit a sparse arithmetic scheme introduced by Johnson in 1974, adapting and optimizing these algorithms for modern computer architectures, with our implementations over the integers and rational numbers vastly outperforming the current widespread implementations. We develop a new algorithm for sparse pseudo-division based on the sparse polynomial division algorithm, with very encouraging results. Polynomial interpolation is explored through univariate, dense multivariate, and sparse multivariate methods. Arithmetic and interpolation together form a solid high-performance foundation from which many higher-level and more interesting algorithms can be built.
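
    A minimal sketch of the Johnson-style heap-based sparse multiplication scheme referred to above, assuming terms are stored as (exponent vector, coefficient) pairs sorted by exponent; it illustrates the classical idea of combining like terms as they are produced from a min-heap, not the authors' optimized data structures or memory layout.

```python
import heapq

def sparse_mul(f, g):
    """f, g: lists of (exponent tuple, coeff), sorted by exponent, coeff != 0."""
    if not f or not g:
        return []
    # One heap entry per term of f, pointing at its current partner term in g.
    heap = [(tuple(a + b for a, b in zip(f[i][0], g[0][0])), i, 0)
            for i in range(len(f))]
    heapq.heapify(heap)
    result = []
    while heap:
        exp, i, j = heapq.heappop(heap)
        coeff = f[i][1] * g[j][1]
        if result and result[-1][0] == exp:
            result[-1] = (exp, result[-1][1] + coeff)   # combine like terms
        else:
            result.append((exp, coeff))
        if j + 1 < len(g):
            nxt = tuple(a + b for a, b in zip(f[i][0], g[j + 1][0]))
            heapq.heappush(heap, (nxt, i, j + 1))
    return [(e, c) for e, c in result if c != 0]

# (x + y) * (x + 2y)  ->  x^2 + 3xy + 2y^2
f = [((0, 1), 1), ((1, 0), 1)]          # exponents (x_deg, y_deg), ascending
g = [((0, 1), 2), ((1, 0), 1)]
print(sparse_mul(f, g))   # [((0, 2), 2), ((1, 1), 3), ((2, 0), 1)]
```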

    Interpolation in Valiant's theory

    We investigate the following question: if a polynomial can be evaluated at rational points by a polynomial-time boolean algorithm, does it have a polynomial-size arithmetic circuit? We argue that this question is certainly difficult. Answering it negatively would indeed imply that the constant-free versions of the algebraic complexity classes VP and VNP defined by Valiant are different. Answering it positively would imply a transfer theorem from boolean to algebraic complexity. Our proof method relies on Lagrange interpolation and on recent results connecting the (boolean) counting hierarchy to algebraic complexity classes. As a byproduct we obtain two additional results: (i) The constant-free, degree-unbounded version of Valiant's hypothesis that VP and VNP differ implies the degree-bounded version. This result was previously known to hold only for fields of positive characteristic. (ii) If exponential sums of easy-to-compute polynomials can be computed efficiently, then the same is true of exponential products. We point out an application of this result to the P = NP problem in the Blum-Shub-Smale model of computation over the field of complex numbers.
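
    Since the proof method relies on Lagrange interpolation, a minimal sketch of exact Lagrange interpolation over the rationals is included below; the example polynomial and the function name are illustrative, and the point is simply that a degree-d polynomial is determined by its values at d+1 points.

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial through the given (x_i, y_i) pairs."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Values of p(t) = t^3 - 2t + 1 at t = 0..3 determine p everywhere.
pts = [(t, t**3 - 2 * t + 1) for t in range(4)]
print(lagrange_eval(pts, 10))   # 981 = 1000 - 20 + 1
```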

    The complexity of class polynomial computation via floating point approximations

    We analyse the complexity of computing class polynomials, which are an important ingredient for CM constructions of elliptic curves, via complex floating-point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. It runs in time $O(|D| \log^5 |D| \log \log |D|) = O(|D|^{1 + \epsilon}) = O(h^{2 + \epsilon})$ for any $\epsilon > 0$, where $D$ is the CM discriminant and $h$ is the degree of the class polynomial. Another fast algorithm uses multipoint evaluation techniques known from symbolic computation; its asymptotic complexity is worse by a factor of $\log |D|$. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary-quadratic order and on a rigorously proven upper bound for the height of class polynomials.
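
    The arithmetic-geometric mean iteration at the core of Dupont's evaluation technique is simple to state; the sketch below shows only the real-valued AGM at fixed decimal precision, whereas the actual algorithm applies it to complex arguments at high precision inside Newton iterations.

```python
# Quadratically convergent AGM iteration: replace (a, b) by their arithmetic
# and geometric means until they agree to the requested number of digits.
from decimal import Decimal, getcontext

def agm(a, b, digits=50):
    getcontext().prec = digits + 10
    a, b = Decimal(a), Decimal(b)
    while abs(a - b) > Decimal(10) ** (-digits):
        a, b = (a + b) / 2, (a * b).sqrt()
    return +a   # unary plus rounds to the working precision

# Gauss's classical example: agm(1, sqrt(2)) relates to the lemniscate constant.
getcontext().prec = 60
print(agm(1, Decimal(2).sqrt()))
```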

    Computing the partition function of the Sherrington-Kirkpatrick model is hard on average

    We establish the average-case hardness of the algorithmic problem of exact computation of the partition function associated with the Sherrington-Kirkpatrick model of spin glasses with Gaussian couplings and random external field. In particular, we establish that unless $P = \#P$, there does not exist a polynomial-time algorithm to exactly compute the partition function on average. This is done by showing that if there exists a polynomial-time algorithm which exactly computes the partition function for an inverse-polynomial fraction ($1/n^{O(1)}$) of all inputs, then there is a polynomial-time algorithm which exactly computes the partition function for all inputs, with high probability, yielding $P = \#P$. The computational model that we adopt is {\em finite-precision arithmetic}, where the algorithmic inputs are first truncated to a certain level $N$ of digital precision. The ingredients of our proof include the random and downward self-reducibility of the partition function with random external field; an argument of Cai et al. \cite{cai1999hardness} for establishing the average-case hardness of computing the permanent of a matrix; a list-decoding algorithm of Sudan \cite{sudan1996maximum} for reconstructing polynomials intersecting a given list of numbers at sufficiently many points; and near-uniformity of the log-normal distribution modulo a large prime $p$. To the best of our knowledge, our result is the first to establish provable hardness of a model arising in the field of spin glasses. Furthermore, we extend our result to the same problem under a different {\em real-valued} computational model, e.g. using a Blum-Shub-Smale machine \cite{blum1988theory} operating over real-valued inputs.
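
    The quantity whose exact computation is shown hard can be written down directly; the brute-force sketch below enumerates all 2^n spin configurations, and the 1/sqrt(n) coupling scale and the way the external field enters follow one common convention rather than the paper's precise finite-precision setup.

```python
# Brute-force SK partition function: Z = sum over sigma in {-1,+1}^n of
# exp(beta * (H_couplings(sigma)/sqrt(n) + h . sigma)).  Exponential time.
import itertools, math, random

def sk_partition_function(J, h, beta=1.0):
    n = len(h)
    Z = 0.0
    for sigma in itertools.product([-1, 1], repeat=n):
        energy = sum(J[i][j] * sigma[i] * sigma[j]
                     for i in range(n) for j in range(i + 1, n))
        energy = energy / math.sqrt(n) + sum(h[i] * sigma[i] for i in range(n))
        Z += math.exp(beta * energy)
    return Z

n = 10
J = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
h = [random.gauss(0, 1) for _ in range(n)]
print(sk_partition_function(J, h))
```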