    Implicitization of curves and (hyper)surfaces using predicted support

    We reduce implicitization of rational planar parametric curves and (hyper)surfaces to linear algebra by interpolating the coefficients of the implicit equation. To predict the implicit support, we focus on methods that exploit input and output structure in the sense of sparse (or toric) elimination theory, namely by computing the Newton polytope of the implicit polynomial via sparse resultant theory. Our algorithm works even in the presence of base points, but in this case the implicit equation is obtained as a factor of the produced polynomial. We implement our methods in Maple, and some in Matlab as well, and study their numerical stability and efficiency on several classes of curves and surfaces. We apply our approach to approximate implicitization and quantify the accuracy of the approximate output, which turns out to be satisfactory on all tested examples; we also relate our measures to the Hausdorff distance. In building a square or rectangular matrix, an important issue is (over)sampling the given curve or surface: we conclude that unitary complex numbers offer the best tradeoff between speed and accuracy when numerical methods (namely SVD) are employed, whereas for exact kernel computation random integers are the method of choice. We compare our prototype to existing software and find that it is rather competitive.
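    To make the pipeline concrete, here is a minimal NumPy sketch (not the authors' Maple/Matlab code) of the interpolation approach on the rational parameterization of the unit circle, sampled at unit-modulus complex parameters as the abstract recommends for SVD-based kernels. The support is guessed directly here, rather than predicted from a Newton polytope via sparse resultants.

```python
import numpy as np

rng = np.random.default_rng(0)

def param(t):
    # Rational parameterization of the unit circle.
    return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

# Guessed (here: exact) support -- all monomials of total degree <= 2.
support = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

# Oversample at unit-modulus complex parameters, keeping the argument away
# from pi/2 so the denominator 1 + t^2 stays well-conditioned.
thetas = rng.uniform(0.1, 1.4, size=4 * len(support))
M = np.array([[x**a * y**b for a, b in support]
              for x, y in map(param, np.exp(1j * thetas))])

# The implicit coefficients span the numerical kernel of M: take the right
# singular vector belonging to the smallest singular value.
v = np.linalg.svd(M)[2][-1]
v /= v[np.argmax(np.abs(v))]
for (a, b), c in zip(support, v.real):
    if abs(c) > 1e-8:
        print(f"{c:+.3f} * x^{a} * y^{b}")
# Prints a scalar multiple of x^2 + y^2 - 1.
```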

    Sparse implicitization by interpolation: Characterizing non-exactness and an application to computing discriminants

    We revisit implicitization by interpolation in order to examine its properties in the context of sparse elimination theory. Based on the computation of a superset of the implicit support, implicitization is reduced to computing the nullspace of a numeric matrix. The approach is applicable to polynomial and rational parameterizations of curves and (hyper)surfaces of any dimension, including parameterizations with base points. Our support prediction is based on sparse (or toric) resultant theory, in order to exploit the sparsity of the input and the output. Our method may yield a multiple of the implicit equation: we characterize and quantify this situation by relating the nullspace dimension to the predicted support and its geometry. In this case we obtain more than one multiple of the implicit equation, and the implicit equation itself can be recovered via multivariate polynomial gcd (or factoring). All of the above techniques extend to approximate computation, thus yielding a method of sparse approximate implicitization, which is important in tackling larger problems. We discuss our publicly available Maple implementation through several examples, including the benchmark bicubic surface. As a novel application, we focus on computing the discriminant of a multivariate polynomial, which characterizes the existence of multiple roots and generalizes the resultant of a polynomial system. This yields an efficient, output-sensitive algorithm for computing the discriminant polynomial.
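    The non-exact case can be reproduced in a few lines of SymPy (an illustrative sketch, not the paper's Maple package): deliberately predict a support superset (degree 3 instead of 2 for the circle), observe a nullspace of dimension greater than one, and recover the implicit equation as a gcd of kernel polynomials. Exact arithmetic with random integer parameter values is used, matching the previous abstract's recommendation for exact kernels.

```python
import random
import sympy as sp

x, y, t = sp.symbols('x y t')
px, py = (1 - t**2) / (1 + t**2), 2*t / (1 + t**2)  # unit circle

# Support superset: all monomials of total degree <= 3 (true support is <= 2).
support = [x**a * y**b for a in range(4) for b in range(4) if a + b <= 3]

random.seed(1)
samples = random.sample(range(2, 100), 2 * len(support))
rows = []
for tv in samples:
    xv, yv = px.subs(t, tv), py.subs(t, tv)
    rows.append([m.subs({x: xv, y: yv}) for m in support])
M = sp.Matrix(rows)

kernel = M.nullspace()                 # exact nullspace over the rationals
print("nullspace dimension:", len(kernel))   # 3: multipliers 1, x, y
polys = [sp.expand(sum(c * m for c, m in zip(vec, support)))
         for vec in kernel[:2]]
# Each kernel polynomial is a multiple of the implicit equation; their gcd
# strips the extraneous factor.
print("implicit equation:", sp.expand(sp.gcd(polys[0], polys[1])))
# Prints a rational multiple of x**2 + y**2 - 1.
```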

    TR-2012001: Algebraic Algorithms

    Improved algorithms for computing determinants and resultants

    Our first contribution is a substantial acceleration of randomized computation of scalar, univariate, and multivariate matrix determinants, in terms of output-sensitive bit operation complexity bounds, including computation modulo a product of random primes from a fixed range. This acceleration is dramatic in a critical application, namely solving polynomial systems (and related studies) via computing the resultant. It is achieved by combining our techniques with the primitive-element method, which leads to an effective implicit representation of the roots. We systematically examine quotient formulae of Sylvester-type resultant matrices, including matrix polynomials and the u-resultant. We reduce the known bit operation complexity bounds by almost an order of magnitude, in terms of the resultant matrix dimension. Our theoretical and practical improvements cover the highly important cases of sparse and degenerate systems.
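    A minimal sketch of the underlying modular technique (not the paper's output-sensitive algorithm): compute an integer determinant modulo several random primes by Gaussian elimination over GF(p), then reconstruct it by Chinese remaindering, with Hadamard's bound deciding how many primes suffice.

```python
import math
import random
import sympy as sp

def det_mod_p(A, p):
    """Determinant of an integer matrix A via Gaussian elimination over GF(p)."""
    n = len(A)
    M = [[a % p for a in row] for row in A]
    det = 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det                      # row swap flips the sign
        det = det * M[i][i] % p
        inv = pow(M[i][i], -1, p)
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            M[r] = [(M[r][j] - f * M[i][j]) % p for j in range(n)]
    return det % p

random.seed(2)
A = [[random.randint(-9, 9) for _ in range(6)] for _ in range(6)]

# Hadamard's bound |det A| <= prod of row norms tells us when to stop.
bound = math.prod(math.isqrt(sum(a * a for a in row)) + 1 for row in A)

residue, modulus = 0, 1
while modulus <= 2 * bound:
    p = sp.randprime(10**6, 10**7)
    if modulus % p == 0:                    # skip a repeated prime
        continue
    r = det_mod_p(A, p)
    # Incremental CRT: lift (residue mod modulus) to (... mod modulus * p).
    h = (r - residue) * pow(modulus, -1, p) % p
    residue += modulus * h
    modulus *= p

det = residue if residue <= modulus // 2 else residue - modulus  # symmetric lift
print(det)
```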

    Sparse Gröbner Bases: the Unmixed Case

    Toric (or sparse) elimination theory is a framework developed during the last decades to exploit monomial structures in systems of Laurent polynomials. Roughly speaking, this amounts to computing in a semigroup algebra, i.e. an algebra generated by a subset of Laurent monomials. In order to solve sparse systems symbolically, we introduce sparse Gröbner bases, an analog of classical Gröbner bases for semigroup algebras, and we propose sparse variants of the $F_5$ and FGLM algorithms to compute them. Our prototype "proof-of-concept" implementation shows large speed-ups (more than 100 for some examples) compared to optimized (classical) Gröbner bases software. Moreover, in the case where the generating subset of monomials corresponds to the points with integer coordinates in a normal lattice polytope $\mathcal{P} \subset \mathbb{R}^n$, and under regularity assumptions, we prove complexity bounds which depend on the combinatorial properties of $\mathcal{P}$. These bounds yield new estimates on the complexity of solving 0-dimensional systems where all polynomials share the same Newton polytope (the unmixed case). For instance, we generalize the bound $\min(n_1, n_2) + 1$ on the maximal degree in a Gröbner basis of a 0-dimensional bilinear system with blocks of variables of sizes $(n_1, n_2)$ to the multilinear case: $\sum n_i - \max(n_i) + 1$. We also propose a variant of Fröberg's conjecture which allows us to estimate the complexity of solving overdetermined sparse systems. (Comment: 20 pages; Corollary 6.1 has been corrected. ISSAC 2014, Kobe, Japan.)
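    The quoted degree bound can be probed experimentally. The small sketch below (using SymPy's classical Buchberger-based Gröbner engine, not the paper's sparse-$F_5$ variant) builds a random affine bilinear system with blocks of sizes $(n_1, n_2) = (2, 2)$ and compares the maximal total degree in a grevlex basis against the bound $\min(n_1, n_2) + 1 = 3$; the specific coefficient range and seed are arbitrary choices.

```python
import random
import sympy as sp

random.seed(3)
n1, n2 = 2, 2
xs = sp.symbols(f'x1:{n1 + 1}')
ys = sp.symbols(f'y1:{n2 + 1}')
# Affine bilinear monomials: constant, both blocks of variables, and the
# cross products x_i * y_j (bidegree at most (1, 1)).
monoms = [1, *xs, *ys] + [xi * yj for xi in xs for yj in ys]

def random_bilinear():
    return sum(random.randint(-5, 5) * m for m in monoms)

system = [random_bilinear() for _ in range(n1 + n2)]   # square, generically 0-dim
G = sp.groebner(system, *xs, *ys, order='grevlex')
maxdeg = max(sp.Poly(g, *xs, *ys).total_degree() for g in G.exprs)
print("max degree in basis:", maxdeg, "| bound:", min(n1, n2) + 1)
```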

    Numerical Instability of Resultant Methods for Multidimensional Rootfinding

    Hidden-variable resultant methods are a class of algorithms for solving multidimensional polynomial rootfinding problems. In two dimensions, when significant care is taken, they are competitive practical rootfinders. However, in higher dimensions they are known to miss zeros, calculate roots to low precision, and introduce spurious solutions. We show that the hidden-variable resultant method based on the Cayley (Dixon or Bézout) matrix is inherently and spectacularly numerically unstable, by a factor that grows exponentially with the dimension. We also show that the Sylvester matrix for solving bivariate polynomial systems can square the condition number of the problem. In other words, two popular hidden-variable resultant methods are numerically unstable, and this mathematically explains the difficulties that are frequently reported by practitioners. Regardless of how the constructed polynomial eigenvalue problem is solved, severe numerical difficulties will be present. Along the way, we prove that the Cayley resultant is a generalization of Cramer's rule for solving linear systems, and we generalize Clenshaw's algorithm to an evaluation scheme for polynomials expressed in a degree-graded polynomial basis.
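    As a reference point for the last claim: Clenshaw's classical algorithm evaluates a polynomial in the Chebyshev basis (a standard degree-graded basis) from its three-term recurrence $T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x)$, without forming the basis polynomials explicitly. The paper's generalization replaces this fixed recurrence with that of an arbitrary degree-graded basis; the sketch below shows only the classical Chebyshev instance.

```python
import numpy as np

def clenshaw_chebyshev(coeffs, x):
    """Evaluate sum_k coeffs[k] * T_k(x) by Clenshaw's backward recurrence."""
    b1 = b2 = 0.0
    for c in coeffs[:0:-1]:              # c_n, ..., c_1
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]       # final step uses T_1 = x, T_0 = 1

# Sanity check against NumPy's Chebyshev evaluator.
coeffs = [1.0, -0.5, 0.25, 2.0]
x = 0.3
print(clenshaw_chebyshev(coeffs, x),
      np.polynomial.chebyshev.chebval(x, coeffs))
```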