
    Implicitization of curves and (hyper)surfaces using predicted support

    We reduce implicitization of rational planar parametric curves and (hyper)surfaces to linear algebra by interpolating the coefficients of the implicit equation. To predict the implicit support, we focus on methods that exploit input and output structure in the sense of sparse (or toric) elimination theory, namely by computing the Newton polytope of the implicit polynomial via sparse resultant theory. Our algorithm works even in the presence of base points, but in that case the implicit equation is obtained as a factor of the produced polynomial. We implement our methods in Maple, and some in Matlab as well, and study their numerical stability and efficiency on several classes of curves and surfaces. We apply our approach to approximate implicitization and quantify the accuracy of the approximate output, which turns out to be satisfactory on all tested examples; we also relate our measures to the Hausdorff distance. In building a square or rectangular matrix, an important issue is (over)sampling the given curve or surface: we conclude that complex numbers of unit modulus offer the best tradeoff between speed and accuracy when numerical methods are employed, namely SVD, whereas for exact kernel computation random integers are the method of choice. We compare our prototype to existing software and find that it is rather competitive.
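
    The matrix-and-kernel step described above can be illustrated by a minimal Python/NumPy sketch (not the authors' Maple/Matlab code): a toy rational parametrization of the unit circle and the dense degree-2 support stand in for the predicted Newton polytope, the curve is oversampled, and the implicit coefficients are read off the singular vector of the smallest singular value.

        import numpy as np

        # Rational parametrization of the unit circle; its implicit equation
        # x^2 + y^2 - 1 = 0 is recovered from the kernel of an interpolation matrix.
        def sample(t):
            return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

        # Assumed superset of the implicit support: all monomials x^i y^j with i + j <= 2.
        support = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]

        # Oversample the curve and evaluate every support monomial at each sample point.
        t_vals = np.linspace(-5, 5, 40)
        rows = []
        for t in t_vals:
            x, y = sample(t)
            rows.append([x**i * y**j for (i, j) in support])
        M = np.array(rows)

        # The implicit coefficients span the numerical kernel of M: take the right
        # singular vector belonging to the smallest singular value.
        _, sing_vals, Vt = np.linalg.svd(M)
        coeffs = Vt[-1]
        print(dict(zip(support, np.round(coeffs / coeffs[0], 6))))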

    On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection

    Given a zero-dimensional polynomial system consisting of n integer polynomials in n variables, we propose a certified and complete method to compute all complex solutions of the system, as well as a corresponding separating linear form l with coefficients of small bit size. For computing l, we only need to project the solutions onto one dimension along O(n) distinct directions, with no further algebraic manipulation. The solutions are then reconstructed directly from the considered projections. The first step is deterministic, whereas the second uses randomization and is thus Las Vegas. Our theoretical analysis shows that the overall cost of the two problems considered above is dominated by the cost of carrying out the projections. We also give bounds on the bit complexity of our algorithms that are stated exclusively in terms of the number of variables, the total degree, and the bit size of the input polynomials.
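
    A toy Python/SymPy sketch of the projection idea (not the paper's certified algorithm): the solutions of a small assumed system are projected along three directions by univariate resultants, and the points are reconstructed by matching the projections up to a chosen numerical tolerance.

        import sympy as sp

        x, y, u = sp.symbols('x y u')
        # A toy zero-dimensional system standing in for the general setting.
        f = x**2 + y**2 - 5
        g = x*y - 2

        def projection(shift):
            """Approximate the values x + shift*y taken at the solutions of
            {f = g = 0}: substitute x = u - shift*y and eliminate y by a resultant."""
            fs, gs = f.subs(x, u - shift*y), g.subs(x, u - shift*y)
            return [complex(r) for r in sp.Poly(sp.resultant(fs, gs, y), u).nroots()]

        p0, p1, p2 = projection(0), projection(1), projection(2)

        # Reconstruct (x, y): a candidate built from directions 0 and 1 is kept only
        # if its projection along direction 2 also occurs (up to tolerance).
        solutions = []
        for a in p0:
            for b in p1:
                cand = (a, b - a)                      # x = a, y = b - a
                if any(abs(cand[0] + 2*cand[1] - c) < 1e-6 for c in p2) and \
                   not any(abs(cand[0] - s[0]) + abs(cand[1] - s[1]) < 1e-6 for s in solutions):
                    solutions.append(cand)
        print(solutions)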

    Computing the common zeros of two bivariate functions via Bézout resultants

    The common zeros of two bivariate functions can be computed by finding the common zeros of their polynomial interpolants expressed in a tensor Chebyshev basis. From here we develop a bivariate rootfinding algorithm based on the hidden variable resultant method and Bézout matrices with polynomial entries. Using techniques including domain subdivision, Bézoutian regularization, and local refinement, we are able to reliably and accurately compute the simple common zeros of two smooth functions with polynomial interpolants of very high degree (≥ 1000). We analyze the resultant method and its conditioning by noting that the Bézout matrices are matrix polynomials. Our robust algorithm is implemented in the roots command in Chebfun2, a software package written in object-oriented MATLAB for computing with bivariate functions.
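
    A hedged sketch of the hidden-variable Bézout idea, in Python/SymPy rather than the MATLAB of Chebfun2: two toy polynomials stand in for the Chebyshev interpolants, the Bézout (Cayley) matrix in the hidden variable is formed, its determinant acts as a resultant in the remaining variable, and the real common zeros are recovered; the subdivision, regularization and interpolation machinery of the paper is omitted.

        import sympy as sp

        x, y, t = sp.symbols('x y t')
        # Two low-degree polynomials standing in for the tensor-Chebyshev interpolants.
        p = x**2 + y**2 - 1
        q = y - x**2

        # Bezout (Cayley) matrix in the hidden variable y; its determinant is the
        # resultant of p and q with respect to y, a polynomial in x alone.
        cayley = sp.expand(sp.cancel((p * q.subs(y, t) - p.subs(y, t) * q) / (y - t)))
        n = int(max(sp.degree(p, y), sp.degree(q, y)))
        B = sp.Matrix(n, n, lambda i, j: cayley.coeff(y, i).coeff(t, j))

        res_x = sp.factor(B.det())          # vanishes at x-coordinates of common zeros
        for x0 in sp.Poly(res_x, x).nroots():
            if abs(sp.im(x0)) > 1e-8:
                continue                    # keep only real common zeros, as in Chebfun2's roots
            for y0 in sp.Poly(q.subs(x, x0), y).nroots():
                if abs(sp.im(y0)) < 1e-8 and abs(complex(p.subs({x: x0, y: y0}))) < 1e-8:
                    print(float(sp.re(x0)), float(sp.re(y0)))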

    Discovering the roots: Uniform closure results for algebraic classes under factoring

    Newton iteration (NI) is an almost 350-year-old recursive formula that approximates a simple root of a polynomial quite rapidly. We generalize it to a matrix recurrence (allRootsNI) that approximates all the roots simultaneously. In this form, the process yields a better circuit complexity in the case when the number of roots $r$ is small but the multiplicities are exponentially large. Our method sets up a linear system in $r$ unknowns and iteratively builds the roots as formal power series. For an algebraic circuit $f(x_1,\ldots,x_n)$ of size $s$ we prove that each factor has size at most a polynomial in $s$ and the degree of the squarefree part of $f$. Consequently, if $f_1$ is a $2^{\Omega(n)}$-hard polynomial then any nonzero multiple $\prod_i f_i^{e_i}$ is equally hard for arbitrary positive $e_i$'s, assuming that $\sum_i \deg(f_i)$ is at most $2^{O(n)}$. It is an old open question whether the class of poly($n$)-sized formulas (resp. algebraic branching programs) is closed under factoring. We show that given a polynomial $f$ of degree $n^{O(1)}$ and formula (resp. ABP) size $n^{O(\log n)}$ we can find a similar size formula (resp. ABP) factor in randomized poly($n^{\log n}$)-time. Consequently, if the determinant requires $n^{\Omega(\log n)}$ size formulas, then the same can be said about any of its nonzero multiples. As part of our proofs, we identify a new property of multivariate polynomial factorization. We show that under a random linear transformation $\tau$, $f(\tau\overline{x})$ completely factors via power series roots. Moreover, the factorization adapts well to circuit complexity analysis. This, together with allRootsNI, provides the techniques that help us make progress towards the old open problems, supplementing the large body of classical results and concepts in algebraic circuit factorization (e.g. Zassenhaus, J. Number Theory 1969; Kaltofen, STOC 1985-87; Bürgisser, FOCS 2001).
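
    As a toy illustration of the basic ingredient, lifting a simple root to a formal power-series root by Newton iteration, here is a Python/SymPy sketch; it is not the paper's allRootsNI matrix recurrence, and the example polynomial is an assumption for demonstration.

        import sympy as sp

        x, y = sp.symbols('x y')

        def newton_series_root(f, x0, prec):
            """Lift a simple root x0 of f(x, 0) to a power-series root x(y) of
            f(x, y) = 0, truncated at order y**prec, by Newton iteration."""
            root = sp.Integer(x0)
            fx = sp.diff(f, x)
            order = 1
            while order < prec:
                order = min(2 * order, prec)   # quadratic convergence: precision doubles
                update = root - f.subs(x, root) / fx.subs(x, root)
                root = sp.expand(sp.series(update, y, 0, order).removeO())
            return root

        # Example: the root of x**2 - (1 + y) near x = 1, i.e. the expansion of
        # sqrt(1 + y): 1 + y/2 - y**2/8 + y**3/16 - 5*y**4/128 + 7*y**5/256.
        print(newton_series_root(x**2 - (1 + y), 1, 6))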

    Sparse implicitization by interpolation: Characterizing non-exactness and an application to computing discriminants

    We revisit implicitization by interpolation in order to examine its properties in the context of sparse elimination theory. Based on the computation of a superset of the implicit support, implicitization is reduced to computing the nullspace of a numeric matrix. The approach is applicable to polynomial and rational parameterizations of curves and (hyper)surfaces of any dimension, including the case of parameterizations with base points. Our support prediction is based on sparse (or toric) resultant theory, in order to exploit the sparsity of the input and the output. Our method may yield a multiple of the implicit equation: we characterize and quantify this situation by relating the nullspace dimension to the predicted support and its geometry. In this case, we obtain more than one multiple of the implicit equation, and the implicit equation itself can be recovered via multivariate polynomial gcd (or factoring). All of the above techniques extend to the case of approximate computation, thus yielding a method of sparse approximate implicitization, which is important in tackling larger problems. We discuss our publicly available Maple implementation through several examples, including the benchmark bicubic surface. As a novel application, we focus on computing the discriminant of a multivariate polynomial, which characterizes the existence of multiple roots and generalizes the resultant of a polynomial system. This yields an efficient, output-sensitive algorithm for computing the discriminant polynomial.
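
    For context on the discriminant mentioned in the last sentences: in the classical univariate case it reduces to a resultant of the polynomial and its derivative. The Python/SymPy sketch below shows that baseline definition only; the paper's algorithm instead interpolates the (multivariate) discriminant on a predicted support.

        import sympy as sp

        x, a, b, c, d = sp.symbols('x a b c d')

        def discriminant_via_resultant(f, var):
            """Classical formula: Disc(f) = (-1)**(n*(n-1)/2) * Res(f, f') / lc(f)."""
            n = int(sp.degree(f, var))
            lc = sp.LC(f, var)
            return sp.factor((-1)**(n*(n - 1)//2) * sp.resultant(f, sp.diff(f, var), var) / lc)

        print(discriminant_via_resultant(a*x**2 + b*x + c, x))   # -> b**2 - 4*a*c
        print(sp.discriminant(a*x**3 + b*x**2 + c*x + d, x))     # SymPy's built-in, for comparison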

    The Multivariate Resultant is NP-hard in any Characteristic

    The multivariate resultant is a fundamental tool of computational algebraic geometry. It can in particular be used to decide whether a system of n homogeneous equations in n variables is satisfiable (the resultant is a polynomial in the system's coefficients which vanishes if and only if the system is satisfiable). In this paper we present several NP-hardness results for testing whether a multivariate resultant vanishes, or equivalently for deciding whether a square system of homogeneous equations is satisfiable. Our main result is that testing the resultant for zero is NP-hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension). We also observe that in characteristic zero, this problem is in the Arthur-Merlin class AM if the generalized Riemann hypothesis holds true. In positive characteristic, the best upper bound remains PSPACE.
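
    A minimal Python/SymPy illustration of the vanishing criterion in the smallest case n = 2: for two binary forms with nonzero leading coefficients in x, the multivariate resultant coincides with the classical Sylvester resultant of their dehomogenizations. The hardness results above concern much larger systems; the example forms are assumptions for illustration.

        import sympy as sp

        x, y = sp.symbols('x y')

        # For n = 2, and forms of full degree in x, the multivariate resultant is
        # the Sylvester resultant of the dehomogenizations at y = 1.
        def binary_form_resultant(f, g):
            return sp.resultant(f.subs(y, 1), g.subs(y, 1), x)

        print(binary_form_resultant(x**2 - y**2, x - y))   # 0: common projective zero (1 : 1)
        print(binary_form_resultant(x**2 + y**2, x - y))   # 2: only the trivial common zero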

    Counting and computing regions of D-decomposition: algebro-geometric approach

    New methods for D-decomposition analysis are presented. They are based on the topology of real algebraic varieties and computational real algebraic geometry. An estimate of the number of root-invariant regions for parametric families of polynomials and matrices is given. For the case of a two-parameter family, a sharper estimate is proven. The theoretical results are supported by various numerical simulations that show the higher precision of the presented methods compared to traditional ones. The presented methods are inherently global and can be applied to study the D-decomposition of the parameter space as a whole, instead of some prescribed regions. For symbolic computations, the Maple v.14 software and its RegularChains package are used.
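
    A crude numerical illustration, in Python/NumPy, of what a D-decomposition is: the parameter plane is partitioned by the root-location count of a family. The characteristic polynomial below is an assumed example, and grid sampling is only a baseline; the paper's algebro-geometric methods compute the regions exactly via real algebraic geometry and Maple's RegularChains.

        import numpy as np

        def unstable_root_count(k1, k2):
            """Number of roots in the open right half-plane for the assumed
            family s**3 + k1*s**2 + k2*s + 1."""
            roots = np.roots([1.0, k1, k2, 1.0])
            return int(np.sum(roots.real > 1e-9))

        # Sample the parameter plane on a grid; parameter points with equal counts
        # lie in the same root-invariant region of the D-decomposition.
        k1_vals = np.linspace(-4.0, 4.0, 161)
        k2_vals = np.linspace(-4.0, 4.0, 161)
        regions = np.array([[unstable_root_count(k1, k2) for k1 in k1_vals] for k2 in k2_vals])
        print(np.unique(regions))   # the distinct root-location classes found on the grid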