
    Lacunaryx: Computing bounded-degree factors of lacunary polynomials

    In this paper, we report on an implementation in the free software Mathemagix of lacunary factorization algorithms, distributed as a library called Lacunaryx. These algorithms take as input a polynomial in sparse representation, that is, as a list of nonzero monomials, and an integer $d$, and compute its irreducible factors of degree at most $d$. The complexity of these algorithms is polynomial in the sparse size of the input polynomial and in $d$.
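
    The following is a minimal Python/SymPy sketch of the task the library solves, not of the lacunary algorithms themselves (those run in time polynomial in the sparse size, whereas SymPy factors densely); the function and variable names are illustrative.

```python
# Illustrative sketch only: builds the polynomial densely and factors it with SymPy.
# The actual Lacunaryx algorithms avoid this and run in time polynomial in the
# sparse size of the input and in the degree bound d.
from sympy import symbols, Poly, factor_list

x = symbols('x')

def bounded_degree_factors(sparse_monomials, d):
    """sparse_monomials: list of (coefficient, exponent) pairs; returns the
    irreducible factors of degree <= d together with their multiplicities."""
    f = sum(c * x**e for c, e in sparse_monomials)
    _, factors = factor_list(f, x)
    return [(g, m) for g, m in factors if Poly(g, x).degree() <= d]

# Example: x^100 - x has only two nonzero monomials but several small-degree factors.
print(bounded_degree_factors([(1, 100), (-1, 1)], 2))
# -> factors such as x, x - 1 and x**2 + x + 1
```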

    Development of symbolic algorithms for certain algebraic processes

    This study investigates the problem of computing the exact greatest common divisor of two polynomials relative to an orthogonal basis, defined over the rational number field. The main objective of the study is to design and implement an effective and efficient symbolic algorithm for the general class of dense polynomials, given the rational-number terms defining their basis. From a general algorithm using the comrade matrix approach, nonmodular and modular techniques are prescribed. If the coefficients of the generalized polynomials are multiprecision integers, multiprecision arithmetic is required in the construction of the comrade matrix and of the corresponding systems coefficient matrix. In addition, applying the nonmodular elimination technique to this coefficient matrix makes extensive use of multiprecision rational-number operations. The modular technique is employed to minimize the complexity of such computations. A divisor test algorithm that enables the detection of an unlucky reduction is a crucial device for an effective implementation of the modular technique. With the bound of the true solution not known a priori, the test is devised and carefully incorporated into the modular algorithm. The results show that the modular algorithm performs best for the class of relatively prime polynomials. The empirical computing-time results show that the modular algorithm is markedly superior to the nonmodular algorithms for sufficiently dense Legendre basis polynomials with a small GCD solution. For dense Legendre basis polynomials with a large GCD solution, the modular algorithm is significantly superior to the nonmodular algorithms at higher polynomial degrees. For more definitive conclusions, the computing-time functions of the algorithms presented in this report have been worked out, and further investigations have also been suggested.
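
    A small SymPy sketch of the problem setting follows; it converts Legendre-basis inputs to the monomial basis and takes an ordinary GCD there, which is precisely the naive route the comrade-matrix and modular algorithms of the study are designed to avoid. The coefficient lists are made-up examples.

```python
# Sketch of the problem setting only: the study's algorithms work with the comrade
# matrix (and modular arithmetic) instead of converting to the monomial basis.
from sympy import symbols, legendre, gcd, expand, Rational

x = symbols('x')

def from_legendre(coeffs):
    """Polynomial with the given rational coefficients in the basis P_0, P_1, ..."""
    return expand(sum(Rational(c) * legendre(i, x) for i, c in enumerate(coeffs)))

# Hypothetical inputs sharing the common factor (x - 1).
a = expand(from_legendre([1, 2, 3]) * (x - 1))
b = expand(from_legendre([0, 1, 1, 2]) * (x - 1))
print(gcd(a, b))   # expected: a rational multiple of x - 1
```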

    On the asymptotic and practical complexity of solving bivariate systems over the reals

    This paper is concerned with exact real solving of well-constrained, bivariate polynomial systems. The main problem is to isolate all common real roots in rational rectangles, and to determine their intersection multiplicities. We present three algorithms and analyze their asymptotic bit complexity, obtaining a bound of \sOB(N^{14}) for the purely projection-based method, and \sOB(N^{12}) for two subresultant-based methods, where $N$ bounds the degree and the bitsize of the polynomials and the \sOB notation ignores polylogarithmic factors. The previous record bound was \sOB(N^{14}). Our main tool is signed subresultant sequences. We exploit recent advances on the complexity of univariate root isolation, and extend them to sign evaluation of bivariate polynomials over two algebraic numbers, and to real root counting for polynomials over an extension field. Our algorithms apply to the problem of simultaneous inequalities; they also compute the topology of real plane algebraic curves in \sOB(N^{12}), whereas the previous bound was \sOB(N^{14}). All algorithms have been implemented in MAPLE, in conjunction with numeric filtering. We compare them against FGB/RS, system solvers from SYNAPS, and the MAPLE libraries INSULATE and TOP, which compute curve topology. Our software is among the most robust, and its runtimes are comparable, or within a small constant factor, with respect to the C/C++ libraries. Key words: real solving, polynomial systems, complexity, MAPLE software
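
    A toy SymPy sketch of the projection idea (eliminate one variable with a resultant, then isolate real roots and match fibers) follows; it ignores multiplicities, certification and the signed-subresultant machinery behind the \sOB(N^{12}) bound, and the example system is hypothetical.

```python
# Toy projection-based solver: eliminate y with a resultant, isolate the real
# x-coordinates, then recover matching y values on each fiber. The paper's
# algorithms use signed subresultant sequences and certified root isolation.
from sympy import symbols, resultant, real_roots, gcd, Poly

x, y = symbols('x y')

def common_real_roots(f, g):
    rx = resultant(f, g, y)                    # projection onto the x-axis
    points = []
    for xr in set(real_roots(Poly(rx, x))):
        fy, gy = Poly(f.subs(x, xr), y), Poly(g.subs(x, xr), y)
        for yr in set(real_roots(gcd(fy, gy))):
            points.append((xr, yr))
    return points

# Hypothetical example: the unit circle and the line x + y = 1 meet at (0, 1) and (1, 0).
print(common_real_roots(x**2 + y**2 - 1, x + y - 1))
```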

    Sparse implicitization by interpolation: Characterizing non-exactness and an application to computing discriminants

    We revisit implicitization by interpolation in order to examine its properties in the context of sparse elimination theory. Based on the computation of a superset of the implicit support, implicitization is reduced to computing the nullspace of a numeric matrix. The approach is applicable to polynomial and rational parameterizations of curves and (hyper)surfaces of any dimension, including the case of parameterizations with base points. Our support prediction is based on sparse (or toric) resultant theory, in order to exploit the sparsity of the input and the output. Our method may yield a multiple of the implicit equation: we characterize and quantify this situation by relating the nullspace dimension to the predicted support and its geometry. In this case, we obtain more than one multiple of the implicit equation; the implicit equation itself can then be recovered via multivariate polynomial GCD (or factoring). All of the above techniques extend to the case of approximate computation, thus yielding a method of sparse approximate implicitization, which is important in tackling larger problems. We discuss our publicly available Maple implementation through several examples, including the benchmark of the bicubic surface. As a novel application, we focus on computing the discriminant of a multivariate polynomial, which characterizes the existence of multiple roots and generalizes the resultant of a polynomial system. This yields an efficient, output-sensitive algorithm for computing the discriminant polynomial.
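
    A minimal NumPy sketch of the interpolation step follows, using the unit circle as a hypothetical example and a generic degree-2 support instead of the sparse-resultant support prediction; it is not the authors' Maple implementation.

```python
# Minimal sketch of implicitization by interpolation: fix a (super)set of monomials,
# evaluate them at sampled points of the parameterization, and read the implicit
# equation off the nullspace of the resulting numeric matrix.
import numpy as np

def curve(t):
    # Hypothetical rational parameterization of the unit circle.
    return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

support = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]   # exponents of x^i * y^j
points = [curve(t) for t in np.linspace(0.1, 2.0, 12)]

# One row per sample point, one column per support monomial.
M = np.array([[px**i * py**j for i, j in support] for px, py in points])

# A vector in the (here one-dimensional) numeric nullspace gives the implicit coefficients.
_, _, vt = np.linalg.svd(M)
coeffs = vt[-1] / np.max(np.abs(vt[-1]))          # normalize for readability
for (i, j), c in zip(support, coeffs):
    if abs(c) > 1e-8:
        print(f"{c:+.3f} * x^{i} * y^{j}")
# Expect coefficients proportional to x^2 + y^2 - 1.
```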