
    An Output-sensitive Algorithm for Computing Projections of Resultant Polytopes

    We develop an incremental algorithm to compute the Newton polytope of the resultant, also known as the resultant polytope, or its projection along a given direction. The resultant is fundamental in algebraic elimination and in implicitization of parametric hypersurfaces. Our algorithm exactly computes vertex- and halfspace-representations of the desired polytope using an oracle producing resultant vertices in a given direction. It is output-sensitive, as it uses one oracle call per vertex. We overcome the bottleneck of determinantal predicates by hashing, thus accelerating execution from 18 to 100 times. We implement our algorithm using the experimental CGAL package triangulation. A variant of the algorithm computes successively tighter inner and outer approximations: when these polytopes have, respectively, 90% and 105% of the true volume, runtime is reduced up to 25 times. Our method computes instances of 5-, 6- or 7-dimensional polytopes with 35K, 23K or 500 vertices, respectively, within 2hr. Compared to tropical geometry software, ours is faster up to dimension 5 or 6, and competitive in higher dimensions.
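
    The hashing idea can be sketched apart from CGAL: an incremental convex-hull computation asks for the same orientation determinants many times, so caching the sign, keyed by the tuple of point indices, avoids recomputation. The C++ sketch below is a minimal illustration of such memoization, not the authors' implementation; it uses plain doubles and Gaussian elimination, whereas the actual predicates are evaluated exactly.

        #include <cmath>
        #include <map>
        #include <vector>

        // Sign of the orientation determinant of pts[idx[0]], ..., pts[idx[d]] in
        // dimension d, via Gaussian elimination with partial pivoting applied to
        // the matrix whose columns are pts[idx[j]] - pts[idx[0]], j = 1..d.
        int det_sign(const std::vector<std::vector<double>>& pts,
                     const std::vector<int>& idx) {
            const int d = static_cast<int>(idx.size()) - 1;
            std::vector<std::vector<double>> m(d, std::vector<double>(d));
            for (int j = 0; j < d; ++j)
                for (int i = 0; i < d; ++i)
                    m[i][j] = pts[idx[j + 1]][i] - pts[idx[0]][i];
            int sign = 1;
            for (int col = 0; col < d; ++col) {
                int piv = col;                        // partial pivoting
                for (int r = col + 1; r < d; ++r)
                    if (std::fabs(m[r][col]) > std::fabs(m[piv][col])) piv = r;
                if (m[piv][col] == 0.0) return 0;     // singular: degenerate tuple
                if (piv != col) { std::swap(m[piv], m[col]); sign = -sign; }
                for (int r = col + 1; r < d; ++r) {
                    const double f = m[r][col] / m[col][col];
                    for (int c = col; c < d; ++c) m[r][c] -= f * m[col][c];
                }
            }
            for (int i = 0; i < d; ++i)
                if (m[i][i] < 0.0) sign = -sign;      // det = product of pivots
            return sign;
        }

        // Hashed predicate: repeated queries on the same index tuple hit the
        // cache instead of re-running the elimination.
        int orientation(const std::vector<std::vector<double>>& pts,
                        const std::vector<int>& idx,
                        std::map<std::vector<int>, int>& cache) {
            auto it = cache.find(idx);
            if (it != cache.end()) return it->second;
            const int s = det_sign(pts, idx);
            cache.emplace(idx, s);
            return s;
        }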

    A software framework for computing Newton polytopes of resultants and (reduced) discriminants

    We present new software for computing Newton polytopes of resultant and discriminant polynomials, and illustrate its use with a number of examples.

    Implicitization of curves and (hyper)surfaces using predicted support

    We reduce implicitization of rational planar parametric curves and (hyper)surfaces to linear algebra, by interpolating the coefficients of the implicit equation. To predict the implicit support, we focus on methods that exploit input and output structure in the sense of sparse (or toric) elimination theory, namely by computing the Newton polytope of the implicit polynomial via sparse resultant theory. Our algorithm works even in the presence of base points, but in this case the implicit equation is obtained as a factor of the produced polynomial. We implement our methods in Maple, and some in Matlab as well, and study their numerical stability and efficiency on several classes of curves and surfaces. We apply our approach to approximate implicitization and quantify the accuracy of the approximate output, which turns out to be satisfactory on all tested examples; we also relate our measures to the Hausdorff distance. In building a square or rectangular matrix, an important issue is (over)sampling the given curve or surface: we conclude that unitary complexes offer the best tradeoff between speed and accuracy when numerical methods, namely SVD, are employed, whereas for exact kernel computation sampling at random integers is the method of choice. We compare our prototype to existing software and find that it is rather competitive.
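
    Once the support is predicted, the interpolation step is ordinary numerical linear algebra. A minimal C++ sketch, assuming the Eigen library (all names are illustrative, not the authors' Maple/Matlab code): the unit circle, parameterized by x = (1-t^2)/(1+t^2), y = 2t/(1+t^2), is implicitized by building an oversampled matrix of monomial evaluations over the predicted support {1, x, y, xy, x^2, y^2} and reading the coefficients off its kernel via SVD. For simplicity it samples real parameter values rather than the unitary complexes recommended above.

        #include <Eigen/Dense>
        #include <iostream>

        int main() {
            // Predicted implicit support (a superset also works): 1, x, y, xy, x^2, y^2.
            const int nmon = 6, nsamples = 12;        // oversample: more rows than columns
            Eigen::MatrixXd M(nsamples, nmon);
            for (int k = 0; k < nsamples; ++k) {
                double t = -2.0 + 4.0 * k / (nsamples - 1);
                double x = (1 - t * t) / (1 + t * t); // rational parameterization
                double y = 2 * t / (1 + t * t);       // of the unit circle
                M.row(k) << 1, x, y, x * y, x * x, y * y;
            }
            // The implicit coefficients span the kernel of M: take the right
            // singular vector belonging to the smallest singular value.
            Eigen::JacobiSVD<Eigen::MatrixXd> svd(M, Eigen::ComputeThinV);
            Eigen::VectorXd c = svd.matrixV().col(nmon - 1);
            std::cout << c.transpose() << "\n";       // ~ multiple of (-1,0,0,0,1,1),
        }                                             // i.e. x^2 + y^2 - 1 = 0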

    Enumerating a subset of the integer points inside a Minkowski sum

    Sparse elimination exploits the structure of algebraic equations in order to obtain tighter bounds on the number of roots and better complexity in numerically approximating them. The model of sparsity is of a combinatorial nature, thus leading to certain problems in general-dimensional convex geometry. This work addresses one such problem, namely the computation of a certain subset of integer points in the interior of integer convex polytopes. These polytopes are Minkowski sums, but avoiding their explicit construction is precisely one of the main features of the algorithm. Complexity bounds for our algorithm are derived under certain hypotheses, in terms of the output size and the sparsity parameters. A public-domain implementation is described and its performance studied. Linear optimization lies at the inner loop of the algorithm, hence we analyze the structure of the linear programs and compare different implementations.
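
    To make the underlying geometric problem concrete, the toy C++ program below does what the algorithm above deliberately avoids: it constructs a planar Minkowski sum explicitly from all pairwise vertex sums, takes a convex hull, and enumerates the integer points inside by half-plane tests. This brute-force route only scales to tiny dimensions, which is precisely the motivation for the LP-based approach; the code is illustrative and unrelated to the paper's implementation.

        #include <algorithm>
        #include <iostream>
        #include <vector>

        struct P { long long x, y; };

        long long cross(P o, P a, P b) {
            return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
        }

        // Andrew's monotone chain: convex hull in counter-clockwise order.
        std::vector<P> hull(std::vector<P> pts) {
            auto lt = [](P a, P b) { return a.x < b.x || (a.x == b.x && a.y < b.y); };
            auto eq = [](P a, P b) { return a.x == b.x && a.y == b.y; };
            std::sort(pts.begin(), pts.end(), lt);
            pts.erase(std::unique(pts.begin(), pts.end(), eq), pts.end());
            if (pts.size() < 3) return pts;
            std::vector<P> h;
            for (int pass = 0; pass < 2; ++pass) {    // lower hull, then upper hull
                const size_t base = h.size();
                for (P p : pts) {
                    while (h.size() >= base + 2 &&
                           cross(h[h.size() - 2], h[h.size() - 1], p) <= 0)
                        h.pop_back();
                    h.push_back(p);
                }
                h.pop_back();                         // endpoint repeats in the next chain
                std::reverse(pts.begin(), pts.end());
            }
            return h;
        }

        bool inside(const std::vector<P>& h, P q) {   // boundary counts as inside
            for (size_t i = 0; i < h.size(); ++i)
                if (cross(h[i], h[(i + 1) % h.size()], q) < 0) return false;
            return true;
        }

        int main() {
            std::vector<P> A = {{0,0},{2,0},{0,2}};   // a triangle and a unit square
            std::vector<P> B = {{0,0},{1,0},{0,1},{1,1}};
            std::vector<P> S;                         // explicit Minkowski sum A + B
            for (P a : A) for (P b : B) S.push_back({a.x + b.x, a.y + b.y});
            std::vector<P> H = hull(S);
            for (long long x = 0; x <= 3; ++x)        // scan the bounding box [0,3]^2
                for (long long y = 0; y <= 3; ++y)
                    if (inside(H, {x, y})) std::cout << x << " " << y << "\n";
        }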

    Sparse implicitization by interpolation: Characterizing non-exactness and an application to computing discriminants

    We revisit implicitization by interpolation in order to examine its properties in the context of sparse elimination theory. Based on the computation of a superset of the implicit support, implicitization is reduced to computing the nullspace of a numeric matrix. The approach is applicable to polynomial and rational parameterizations of curves and (hyper)surfaces of any dimension, including the case of parameterizations with base points. Our support prediction is based on sparse (or toric) resultant theory, in order to exploit the sparsity of the input and the output. Our method may yield a multiple of the implicit equation: we characterize and quantify this situation by relating the nullspace dimension to the predicted support and its geometry. In this case we obtain more than one multiple of the implicit equation, which can then be recovered via multivariate polynomial gcd (or factoring). All of the above techniques extend to the case of approximate computation, thus yielding a method of sparse approximate implicitization, which is important in tackling larger problems. We discuss our publicly available Maple implementation through several examples, including the benchmark bicubic surface. As a novel application, we focus on computing the discriminant of a multivariate polynomial, which characterizes the existence of multiple roots and generalizes the resultant of a polynomial system. This yields an efficient, output-sensitive algorithm for computing the discriminant polynomial.
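
    For intuition on the discriminant application, recall the univariate identity it generalizes (standard material, not specific to this paper): for f of degree d with leading coefficient a_d,

        \mathrm{disc}(f) = \frac{(-1)^{d(d-1)/2}}{a_d}\,\mathrm{Res}(f, f').

    For f = ax^2 + bx + c the Sylvester resultant of f and f' = 2ax + b gives

        \mathrm{Res}(f, f') = \det\begin{pmatrix} a & b & c \\ 2a & b & 0 \\ 0 & 2a & b \end{pmatrix} = -a(b^2 - 4ac),
        \qquad
        \mathrm{disc}(f) = \frac{(-1)^{1}}{a}\,\mathrm{Res}(f, f') = b^2 - 4ac,

    which vanishes exactly when f has a double root.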

    Faster Geometric Algorithms via Dynamic Determinant Computation

    The computation of determinants or their signs is the core procedure in many important geometric algorithms, such as convex hull, volume, and point-location computations. As the dimension of the computation space grows, a higher percentage of the total computation time is consumed by these operations. In this paper we study the sequences of determinants that appear in geometric algorithms. The computation of a single determinant is accelerated by reusing information from the previous computations in the sequence. We propose two dynamic determinant algorithms with quadratic arithmetic complexity when employed in convex hull and volume computations, and with linear arithmetic complexity when used in point-location problems. We implement the proposed algorithms and perform an extensive experimental analysis. On the one hand, our analysis serves as a performance study of state-of-the-art determinant algorithms and implementations. On the other hand, we demonstrate the superiority of our methods over state-of-the-art implementations of determinant and geometric algorithms. Our experimental results include 20- and 78-fold speed-ups in volume and point-location computations in dimensions 6 and 11, respectively.
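
    The core update behind the quadratic complexity can be sketched in a few lines. A minimal sketch, assuming the Eigen library (the actual implementation works with exact number types inside geometric predicates): maintain det A together with A^{-1}; replacing one column of A is a rank-one change, so the Sherman-Morrison formula refreshes both in O(n^2) rather than recomputing from scratch in O(n^3).

        #include <Eigen/Dense>
        #include <iostream>

        // Replace column i of a matrix whose inverse `inv` and determinant `det`
        // are known, updating both in O(n^2) via Sherman-Morrison.
        // Returns false if the updated matrix would be singular.
        bool update_column(Eigen::MatrixXd& inv, double& det, int i,
                           const Eigen::VectorXd& c) {
            Eigen::VectorXd w = inv * c;      // w(i) = det(A') / det(A)
            if (w(i) == 0.0) return false;    // exact test; use a tolerance in practice
            det *= w(i);
            Eigen::VectorXd u = w;
            u(i) -= 1.0;                      // u = inv * (c - old column i)
            Eigen::RowVectorXd ri = inv.row(i);
            inv -= (u / w(i)) * ri;           // rank-one correction of the inverse
            return true;
        }

        int main() {
            Eigen::MatrixXd A(3, 3);
            A << 2, 0, 1,
                 1, 3, 0,
                 0, 1, 1;
            Eigen::MatrixXd inv = A.inverse();
            double det = A.determinant();
            Eigen::VectorXd c(3);
            c << 5, -1, 2;                    // new column 1
            update_column(inv, det, 1, c);
            A.col(1) = c;                     // compare against recomputing from scratch
            std::cout << det << " vs " << A.determinant() << "\n";
        }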