
    Computing GCRDs of Approximate Differential Polynomials

    Differential (Ore) type polynomials with approximate polynomial coefficients are introduced. These provide a useful representation of approximate differential operators with a strong algebraic structure, which has been used successfully in the exact, symbolic, setting. We then present an algorithm for the approximate Greatest Common Right Divisor (GCRD) of two approximate differential polynomials, which intuitively is the differential operator whose solutions are those common to the two input operators. More formally, given approximate differential polynomials $f$ and $g$, we show how to find "nearby" polynomials $\widetilde{f}$ and $\widetilde{g}$ which have a non-trivial GCRD. Here "nearby" is measured under a suitably defined norm. The algorithm is a generalization of the SVD-based method of Corless et al. (1995) for the approximate GCD of regular polynomials. We work on an appropriately "linearized" differential Sylvester matrix, to which we apply a block SVD. The algorithm has been implemented in Maple and a demonstration of its robustness is presented.
    Comment: To appear, Workshop on Symbolic-Numeric Computing (SNC'14), July 2014
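    The paper's own construction works on a linearized differential Sylvester matrix with a block SVD; the commutative special case it generalizes, the Corless et al. SVD rank test on an ordinary Sylvester matrix, can be sketched as follows. This is a minimal illustration rather than the paper's algorithm: the function names, the coefficient-list convention, and the relative tolerance are assumptions.

```python
import numpy as np

def sylvester_matrix(f, g):
    """Classical (commutative) Sylvester matrix of two univariate
    polynomials given as coefficient lists, highest degree first."""
    f, g = np.asarray(f, dtype=float), np.asarray(g, dtype=float)
    m, n = len(f) - 1, len(g) - 1          # degrees of f and g
    S = np.zeros((m + n, m + n))
    for i in range(n):                     # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                     # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

def approx_gcd_degree(f, g, tol=1e-8):
    """Degree of an approximate GCD, read off as the numerical rank
    deficiency of the Sylvester matrix (count of small singular values)."""
    sigma = np.linalg.svd(sylvester_matrix(f, g), compute_uv=False)
    return int(np.sum(sigma < tol * sigma[0]))

# f = (x - 1)(x - 2) and a small perturbation of (x - 1)(x + 3):
# one near-common root, so the estimated GCD degree is 1.
print(approx_gcd_degree([1, -3, 2], [1, 2.000001, -3.000001]))
```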

    Over-constrained Weierstrass iteration and the nearest consistent system

    Full text link
    We propose a generalization of the Weierstrass iteration for over-constrained systems of equations, and we prove that the proposed method is the Gauss-Newton iteration to find the nearest system which has at least $k$ common roots and which is obtained via a perturbation of prescribed structure. In the univariate case we show the connection of our method to the optimization problem formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate case we generalize the expressions of Karmarkar and Lakshman, and give explicitly several iteration functions to compute the optimum. The arithmetic complexity of the iterations is detailed.
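    The method generalizes the classical Weierstrass (Durand-Kerner) iteration, which simultaneously refines all roots of a single univariate polynomial. For background only, here is a minimal sketch of that classical iteration; the function name, the initial-guess scheme, and the stopping tolerance are illustrative assumptions, and the paper's over-constrained, structured Gauss-Newton variant is not reproduced here.

```python
import numpy as np

def weierstrass_roots(coeffs, max_iters=200, tol=1e-12):
    """Classical Weierstrass (Durand-Kerner) iteration: update every
    root estimate z_i by  z_i -= p(z_i) / prod_{j != i} (z_i - z_j).
    coeffs: coefficients of a monic polynomial, highest degree first."""
    p = np.poly1d(coeffs)
    n = len(coeffs) - 1
    # standard initial guesses: powers of a point off the real axis
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(max_iters):
        delta = np.array([p(z[i]) / np.prod(z[i] - np.delete(z, i))
                          for i in range(n)])
        z = z - delta
        if np.max(np.abs(delta)) < tol:
            break
    return z

# roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort_complex(weierstrass_roots([1, -6, 11, -6])))
```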

    Sparse implicitization by interpolation: Characterizing non-exactness and an application to computing discriminants

    We revisit implicitization by interpolation in order to examine its properties in the context of sparse elimination theory. Based on the computation of a superset of the implicit support, implicitization is reduced to computing the nullspace of a numeric matrix. The approach is applicable to polynomial and rational parameterizations of curves and (hyper)surfaces of any dimension, including the case of parameterizations with base points. Our support prediction is based on sparse (or toric) resultant theory, in order to exploit the sparsity of the input and the output. Our method may yield a multiple of the implicit equation: we characterize and quantify this situation by relating the nullspace dimension to the predicted support and its geometry. In this case, we obtain more than one multiple of the implicit equation; the latter can then be recovered via multivariate polynomial GCD (or factoring). All of the above techniques extend to the case of approximate computation, thus yielding a method of sparse approximate implicitization, which is important in tackling larger problems. We discuss our publicly available Maple implementation through several examples, including the benchmark bicubic surface. As a novel application, we focus on computing the discriminant of a multivariate polynomial, which characterizes the existence of multiple roots and generalizes the resultant of a polynomial system. This yields an efficient, output-sensitive algorithm for computing the discriminant polynomial.
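    Since the paper reduces implicitization to the nullspace of a numeric interpolation matrix, the core linear-algebra step can be illustrated on a toy example. This is a rough sketch under assumed names (implicitize_by_interpolation, the sampling range, the support ordering); the paper's sparse-resultant support prediction is taken as given, with the support hard-coded for the unit circle.

```python
import numpy as np

def implicitize_by_interpolation(param, support, n_samples=50, seed=0):
    """Evaluate the predicted support monomials x^i * y^j at sampled
    parameter values and return a nullspace vector of the resulting
    numeric matrix as coefficients of the implicit equation."""
    rng = np.random.default_rng(seed)
    pts = [param(t) for t in rng.uniform(-1.0, 1.0, n_samples)]
    M = np.array([[x ** i * y ** j for (i, j) in support] for (x, y) in pts])
    # the right singular vector of the smallest singular value spans
    # the (here one-dimensional) nullspace
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

# Rational parameterization of the unit circle; predicted support =
# all monomials of total degree <= 2, ordered 1, x, y, x^2, xy, y^2.
support = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
param = lambda t: ((1 - t**2) / (1 + t**2), 2 * t / (1 + t**2))
print(np.round(implicitize_by_interpolation(param, support), 3))
# up to sign and scale, this is [-1, 0, 0, 1, 0, 1], i.e. x^2 + y^2 - 1
```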