
    The computation of multiple roots of a Bernstein basis polynomial

    This paper describes the algorithms of Musser and Gauss for the computation of multiple roots of a theoretically exact Bernstein basis polynomial f̂(y) when the coefficients of its given form f(y) are corrupted by noise. The exact roots of f(y) can therefore be assumed to be simple, and thus the problem reduces to the calculation of multiple roots of a polynomial f̃(y) that is near f(y), such that the backward error is small. The algorithms require many greatest common divisor (GCD) computations and polynomial deconvolutions, both of which are implemented by a structure-preserving matrix method. The motivation of these algorithms arises from the unstructured and structured condition numbers of a multiple root of a polynomial. These condition numbers have an elegant interpretation in terms of the pejorative manifold of f̂(y), which allows the geometric significance of the GCD computations and polynomial deconvolutions to be considered. A variant of the Sylvester resultant matrix is used for the GCD computations because it yields better results than the standard form of this matrix, and the polynomial deconvolutions can be computed in several different ways, sequentially or simultaneously, and with the inclusion or omission of the preservation of the structure of the coefficient matrix. It is shown that Gauss’ algorithm yields better results than Musser’s algorithm, and the reason for these superior results is explained.
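    As an aside, the square-free factorisation underlying these root computations can be sketched in a few lines. The sketch below implements Musser's algorithm with exact symbolic GCDs and quotients from sympy, whereas the paper performs the GCD computations and deconvolutions with structure-preserving matrix methods on noisy Bernstein-basis coefficients; the function name musser_square_free and the example polynomial are illustrative only.

```python
# A minimal exact-arithmetic sketch of Musser's square-free factorisation.
# Symbolic GCDs and quotients from sympy stand in for the structure-preserving
# Sylvester-matrix computations of the paper (illustration only).
import sympy as sp

def musser_square_free(f, y):
    """Return [(factor, multiplicity), ...] for a polynomial f in the variable y."""
    factors = []
    c = sp.gcd(f, sp.diff(f, y))        # gcd(f, f') collects every repeated factor
    w = sp.quo(f, c, y)                 # product of the distinct factors of f
    i = 1
    while sp.degree(c, y) > 0:
        g = sp.gcd(w, c)
        a = sp.quo(w, g, y)             # the factors of multiplicity exactly i
        if sp.degree(a, y) > 0:
            factors.append((sp.expand(a), i))
        w, c = g, sp.quo(c, g, y)
        i += 1
    if sp.degree(w, y) > 0:
        factors.append((sp.expand(w), i))
    return factors

y = sp.symbols('y')
f = sp.expand((y - 1)**3 * (y + 2)**2 * (y - 4))
print(musser_square_free(f, y))         # [(y - 4, 1), (y + 2, 2), (y - 1, 3)]
```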

    The computation of the degree of an approximate greatest common divisor of two Bernstein polynomials

    This paper considers the computation of the degree t of an approximate greatest common divisor d(y) of two Bernstein polynomials f(y) and g(y), which are of degrees m and n respectively. The value of t is computed from the QR decomposition of the Sylvester resultant matrix S(f, g) and its subresultant matrices Sk(f, g), k = 2, ..., min(m, n), where S1(f, g) = S(f, g). It is shown that the computation of t is significantly more complicated than its equivalent for two power basis polynomials because (a) Sk(f, g) can be written in several forms that differ in the complexity of the computation of their entries, (b) different forms of Sk(f, g) may yield different values of t, and (c) the binomial terms in the entries of Sk(f, g) may cause the ratio of its entry of maximum magnitude to its entry of minimum magnitude to be large, which may lead to numerical problems. It is shown that the QR decomposition and singular value decomposition (SVD) of the Sylvester matrix and its subresultant matrices yield better results than the SVD of the Bézout matrix, and that f(y) and g(y) must be processed before computations are performed on these resultant and subresultant matrices in order to obtain good results.
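    To make the rank-based idea concrete, the sketch below estimates the degree of the GCD as the rank loss of the Sylvester matrix, but in the power basis, using numpy's SVD and a simple relative threshold; the paper works with the Bernstein-basis Sylvester matrix and its subresultants, with preprocessing and without a user-supplied threshold. The function names and the tolerance are assumptions made for illustration.

```python
# Hedged sketch: the degree of the GCD equals the rank deficiency of the
# Sylvester matrix. Shown here in the power basis with numpy's SVD and a
# simple relative threshold; the paper uses the Bernstein-basis Sylvester
# matrix and its subresultants, with preprocessing and no such threshold.
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f (degree m) and g (degree n); coefficients are
    ordered from the highest power downwards."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                      # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                      # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

def gcd_degree(f, g, tol=1e-10):
    """Estimate deg gcd(f, g) = (m + n) - rank S(f, g)."""
    S = sylvester(f, g)
    sv = np.linalg.svd(S, compute_uv=False)
    rank = int(np.sum(sv > tol * sv[0]))
    return S.shape[0] - rank

# f = (y - 1)^2 (y + 3) and g = (y - 1)^2 (y - 5) have a GCD of degree 2
f = np.poly([1.0, 1.0, -3.0])
g = np.poly([1.0, 1.0, 5.0])
print(gcd_degree(f, g))                     # 2
```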

    A non-linear structure-preserving matrix method for the computation of the coefficients of an approximate greatest common divisor of two Bernstein polynomials

    This paper describes a non-linear structure-preserving matrix method for the computation of the coefficients of an approximate greatest common divisor (AGCD) of degree t of two Bernstein polynomials f(y) and g(y). This method is applied to a modified form St(f, g)Qt of the t-th subresultant matrix St(f, g) of the Sylvester resultant matrix S(f, g) of f(y) and g(y), where Qt is a diagonal matrix of combinatorial terms. This modified subresultant matrix has significant computational advantages with respect to the standard subresultant matrix St(f, g), and it yields better results for AGCD computations. It is shown that f(y) and g(y) must be processed by three operations before St(f, g)Qt is formed, and the consequence of these operations is the introduction of two parameters, α and θ, such that the entries of St(f, g)Qt are non-linear functions of α, θ and the coefficients of f(y) and g(y). The values of α and θ are optimised, and it is shown that these optimal values allow an AGCD that has a small error, and a structured low rank approximation of S(f, g), to be computed.
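    The role of α and θ can be illustrated with a much simplified model of the entries of St(f, g)Qt: if an entry derived from f is modelled as |ai|θ^i and an entry derived from g as α|bj|θ^j, then minimising the ratio of the largest to the smallest entry magnitude is a linear programme in log α and log θ. The sketch below sets up this linear programme with scipy; the entry model, the function name optimise_alpha_theta and the sample coefficients are assumptions, and the true entries also carry binomial terms.

```python
# Hedged sketch: choosing alpha and theta by linear programming in log space.
# The entries are modelled very crudely as |a_i|*theta^i for f and
# alpha*|b_j|*theta^j for g, and the ratio of the largest to the smallest
# modelled entry is minimised (illustration of the idea only).
import numpy as np
from scipy.optimize import linprog

def optimise_alpha_theta(a, b):
    """a, b: nonzero coefficient arrays of f(y) and g(y)."""
    # variables x = [s, t, phi, mu]: s ~ log(max entry), t ~ log(min entry),
    # phi = log(theta), mu = log(alpha); minimise s - t
    rows, rhs = [], []
    for i, ai in enumerate(a):
        rows.append([-1, 0, i, 0])          # s >= log|a_i| + i*phi
        rhs.append(-np.log(abs(ai)))
        rows.append([0, 1, -i, 0])          # t <= log|a_i| + i*phi
        rhs.append(np.log(abs(ai)))
    for j, bj in enumerate(b):
        rows.append([-1, 0, j, 1])          # s >= log|b_j| + j*phi + mu
        rhs.append(-np.log(abs(bj)))
        rows.append([0, 1, -j, -1])         # t <= log|b_j| + j*phi + mu
        rhs.append(np.log(abs(bj)))
    res = linprog(c=[1, -1, 0, 0], A_ub=rows, b_ub=rhs,
                  bounds=[(None, None)] * 4, method="highs")
    s, t, phi, mu = res.x
    return np.exp(mu), np.exp(phi)          # alpha, theta

alpha, theta = optimise_alpha_theta([1e-3, 2.0, 5e2], [4.0, 1e-2, 3e3])
print(alpha, theta)
```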

    An approximate factorisation of three bivariate Bernstein basis polynomials defined in a triangular domain

    This paper considers an approximate factorisation of three bivariate Bernstein basis polynomials that are defined in a triangular domain. This problem is important for the computation of the intersection points of curves in computer-aided design systems, and it reduces to the determination of an approximate greatest common divisor (AGCD) d(y) of the polynomials. The Sylvester matrix and its subresultant matrices of these three polynomials are formed, and it is shown that there are four forms of these matrices. The most difficult part of the computation is the determination of the degree of d(y) because it reduces to the determination of the rank loss of these matrices. This computation is made harder by the presence of trinomial terms in the Bernstein basis functions because they cause the entries of the matrices to span many orders of magnitude. The adverse numerical effects of this wide range of magnitudes of the entries of the four forms of the Sylvester matrix and its subresultant matrices are mitigated by processing the polynomials before these matrices are formed. It is shown that significantly improved results are obtained if the polynomials are processed before computations are performed on their Sylvester matrices and subresultant matrices.
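    The magnitude problem caused by the trinomial terms can be seen directly: the sketch below (an illustration, not the paper's method) computes the ratio of the largest to the smallest trinomial coefficient m!/(i! j! k!) of total degree m, and this ratio grows rapidly with m, which is why the matrix entries span many orders of magnitude before preprocessing.

```python
# Illustration (not the paper's method): the trinomial coefficients
# m!/(i! j! k!) with i + j + k = m, which appear in the triangular-domain
# Bernstein basis, span a range that grows rapidly with the degree m.
from math import factorial

def trinomial_spread(m):
    """Ratio of the largest to the smallest trinomial coefficient of degree m."""
    coeffs = [factorial(m) // (factorial(i) * factorial(j) * factorial(m - i - j))
              for i in range(m + 1) for j in range(m + 1 - i)]
    return max(coeffs) / min(coeffs)

for m in (5, 10, 20):
    print(m, trinomial_spread(m))           # the spread of entry magnitudes
```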

    Over-constrained Weierstrass iteration and the nearest consistent system

    We propose a generalization of the Weierstrass iteration for over-constrained systems of equations and we prove that the proposed method is the Gauss-Newton iteration to find the nearest system which has at least k common roots and which is obtained via a perturbation of prescribed structure. In the univariate case we show the connection of our method to the optimization problem formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate case we generalize the expressions of Karmarkar and Lakshman, and give explicitly several iteration functions to compute the optimum. The arithmetic complexity of the iterations is detailed.
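    For reference, the classical univariate Weierstrass (Durand-Kerner) iteration that the paper generalises updates all root estimates simultaneously by z_k <- z_k - p(z_k)/prod_{j != k}(z_k - z_j). The sketch below implements only this standard form for a single monic polynomial with numpy; the starting values, iteration count and tolerance are conventional choices, not taken from the paper.

```python
# The classical univariate Weierstrass (Durand-Kerner) iteration: every root
# estimate is updated simultaneously by
# z_k <- z_k - p(z_k) / prod_{j != k} (z_k - z_j).
import numpy as np

def weierstrass_roots(coeffs, iters=200, tol=1e-12):
    """Simultaneous roots of a monic polynomial (coefficients, highest power first)."""
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)        # standard distinct starting values
    for _ in range(iters):
        p = np.polyval(coeffs, z)           # p(z_k) for every k at once
        diff = z[:, None] - z[None, :]      # z_k - z_j
        np.fill_diagonal(diff, 1.0)         # exclude j = k from the product
        delta = p / diff.prod(axis=1)
        z = z - delta
        if np.max(np.abs(delta)) < tol:
            break
    return z

# roots of y^3 - 6y^2 + 11y - 6 = (y - 1)(y - 2)(y - 3)
print(np.sort_complex(weierstrass_roots([1.0, -6.0, 11.0, -6.0])))
```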

    Statistical and structured optimisation: methods for the approximate GCD problem.

    The computation of polynomial greatest common divisors (GCDs) is a fundamental problem in algebraic computing and has important widespread applications in areas such as computing theory, control, image processing, signal processing and computer-aided design (CAD).

    Structure-Preserving Matrix Methods for Computations on Univariate and Bivariate Bernstein Polynomials

    Curve and surface intersection finding is a fundamental problem in computer-aided geometric design (CAGD). This practical problem motivates the undertaken study into methods for computing the square-free factorisation of univariate and bivariate polynomials in Bernstein form. It will be shown how these two problems are intrinsically linked and how finding univariate polynomial roots and bivariate polynomial factors is equivalent to finding curve and surface intersection points. The multiplicities of a polynomial’s factors are maintained through the use of a square-free factorisation algorithm, and this is analogous to the maintenance of smooth intersections between curves and surfaces, an important property in curve and surface design. Several aspects of the univariate and bivariate polynomial factorisation problem will be considered. This thesis examines the structure of the greatest common divisor (GCD) problem within the context of the square-free factorisation problem. It is shown that an accurate approximation of the GCD can be computed from inexact polynomials even in the presence of significant levels of noise. Polynomial GCD computations are ill-posed, in that noise in the coefficients of two polynomials which have a common factor typically causes the polynomials to become coprime. Therefore, a method for determining the approximate greatest common divisor (AGCD) is developed, where the AGCD is defined to have the same degree as the GCD and its coefficients are sufficiently close to those of the exact GCD. The algorithms proposed assume no prior knowledge of the level of noise added to the exact polynomials, differentiating this method from others which require derived threshold values in the GCD computation. The methods of polynomial factorisation devised in this thesis utilise the Sylvester matrix and a sequence of subresultant matrices for the GCD finding component. The classical definition of the Sylvester matrix is extended to compute the GCD of two and three bivariate polynomials defined in Bernstein form, and a new method of GCD computation is devised specifically for bivariate polynomials in Bernstein form which have been defined over a rectangular domain. These extensions are necessary for the computation of the factorisation of bivariate polynomials defined in the Bernstein form.
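    As background on the Bernstein form used throughout the thesis, the sketch below evaluates a polynomial given by its Bernstein coefficients on [0, 1] with the de Casteljau recurrence, the standard stable evaluation scheme in CAGD; it illustrates the representation only and is not one of the thesis's factorisation algorithms.

```python
# Background illustration of the Bernstein form: evaluation by the
# de Casteljau recurrence (repeated linear interpolation).
import numpy as np

def de_casteljau(bernstein_coeffs, y):
    """Evaluate sum_i b_i B_{i,m}(y) at a point y in [0, 1]."""
    b = np.array(bernstein_coeffs, dtype=float)
    for _ in range(len(b) - 1):
        b = (1.0 - y) * b[:-1] + y * b[1:]  # one level of the recurrence
    return b[0]

# f(y) = (1 - y)^2 has Bernstein coefficients [1, 0, 0] of degree 2
print(de_casteljau([1.0, 0.0, 0.0], 0.25))  # 0.5625 = 0.75**2
```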

    The computation of multiple roots of a polynomial using structure preserving matrix methods.

    Solving polynomial equations is a fundamental problem in several engineering and science fields. The problem has been studied by many researchers and excellent algorithms have been proposed for it. The computation of the roots of ill-conditioned polynomials is, however, still drawing the attention of several researchers. In particular, a small round-off error due to floating point arithmetic is sufficient to break up a multiple root of a polynomial into a cluster of simple, closely spaced roots. The problem becomes more complicated if the neighbouring roots are closely spaced. This thesis develops a root finder to compute the multiple roots of an inexact polynomial whose coefficients are corrupted by noise. The theoretical development of this root solver involves the use of structured matrix methods, optimising parameters using linear programming, and solving least squares equality (LSE) and nonlinear least squares problems. The developed root solver differs from classical methods because it first computes the multiplicities of the roots, after which the roots themselves are computed. The experimental results show that the developed root solver gives very good results without the need for prior knowledge about the noise level imposed on the coefficients of the polynomial.
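    The idea of computing the multiplicities before the roots can be sketched with the classical GCD chain f, gcd(f, f'), the GCD of that with its derivative, and so on: the drops in degree along the chain determine how many distinct roots have each multiplicity. The sketch below uses exact sympy arithmetic in place of the structured matrix methods, optimisation and least squares solvers of the thesis; the function name multiplicity_structure and the example polynomial are illustrative only.

```python
# Hedged sketch of reading the multiplicity structure off the GCD chain
# f, gcd(f, f'), gcd(of that, its derivative), ... using exact sympy
# arithmetic (illustration only).
import sympy as sp

def multiplicity_structure(f, y):
    """Return {multiplicity: number of distinct roots} deduced from the
    degrees of the successive GCDs."""
    degrees = []
    g = f
    while sp.degree(g, y) > 0:
        degrees.append(sp.degree(g, y))
        g = sp.gcd(g, sp.diff(g, y))
    degrees.append(0)
    # w[i] = deg f_i - deg f_{i+1} counts the distinct roots of multiplicity >= i+1;
    # the number of roots of multiplicity exactly i+1 is w[i] - w[i+1]
    w = [degrees[i] - degrees[i + 1] for i in range(len(degrees) - 1)]
    counts = {}
    for i in range(len(w)):
        exact = w[i] - (w[i + 1] if i + 1 < len(w) else 0)
        if exact > 0:
            counts[i + 1] = exact
    return counts

y = sp.symbols('y')
f = sp.expand((y - 2)**4 * (y + 1)**2 * (y - 7))
print(multiplicity_structure(f, y))         # {1: 1, 2: 1, 4: 1}
```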