
    A non-linear structure-preserving matrix method for the computation of the coefficients of an approximate greatest common divisor of two Bernstein polynomials

    This paper describes a non-linear structure-preserving matrix method for the computation of the coefficients of an approximate greatest common divisor (AGCD) of degree t of two Bernstein polynomials f(y) and g(y). This method is applied to a modified form S_t(f, g)Q_t of the t-th subresultant matrix S_t(f, g) of the Sylvester resultant matrix S(f, g) of f(y) and g(y), where Q_t is a diagonal matrix of combinatorial terms. This modified subresultant matrix has significant computational advantages with respect to the standard subresultant matrix S_t(f, g), and it yields better results for AGCD computations. It is shown that f(y) and g(y) must be processed by three operations before S_t(f, g)Q_t is formed, and the consequence of these operations is the introduction of two parameters, α and θ, such that the entries of S_t(f, g)Q_t are non-linear functions of α, θ and the coefficients of f(y) and g(y). The values of α and θ are optimised, and it is shown that these optimal values allow an AGCD that has a small error, and a structured low rank approximation of S(f, g), to be computed.
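
    For orientation, the minimal sketch below builds the classical Sylvester resultant matrix S(f, g) of two power-basis polynomials and shows its rank deficiency when the polynomials share a root; the paper itself works with Bernstein-basis polynomials and the combinatorially scaled subresultant S_t(f, g)Q_t, which this sketch does not reproduce. The function name sylvester_matrix and the example polynomials are illustrative only.

```python
import numpy as np

def sylvester_matrix(f, g):
    """Classical Sylvester matrix S(f, g) of two power-basis polynomials.

    f and g are coefficient arrays ordered from highest to lowest degree,
    of lengths m + 1 and n + 1.  S(f, g) is the (m + n) x (m + n) matrix
    whose first n rows hold shifted copies of f and whose last m rows
    hold shifted copies of g."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n rows built from f
        S[i, i:i + m + 1] = f
    for i in range(m):                 # m rows built from g
        S[n + i, i:i + n + 1] = g
    return S

# f(y) = (y - 1)(y - 2) and g(y) = (y - 1)(y + 3) share the root y = 1,
# so S(f, g) is rank deficient and its determinant (the resultant) is ~0.
f = np.array([1.0, -3.0, 2.0])
g = np.array([1.0, 2.0, -3.0])
print(np.linalg.det(sylvester_matrix(f, g)))   # ~0
```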

    A non-linear structure preserving matrix method for the low rank approximation of the Sylvester resultant matrix

    A non-linear structure preserving matrix method for the computation of a structured low rank approximation S(f̄, ḡ) of the Sylvester resultant matrix S(f, g) of two inexact polynomials f = f(y) and g = g(y) is considered in this paper. It is shown that considerably improved results are obtained when f(y) and g(y) are processed prior to the computation of S(f̄, ḡ), and that these preprocessing operations introduce two parameters. These parameters can either be held constant during the computation of S(f̄, ḡ), which leads to a linear structure preserving matrix method, or they can be incremented during the computation of S(f̄, ḡ), which leads to a non-linear structure preserving matrix method. It is shown that the non-linear method yields a better structured low rank approximation S(f̄, ḡ), and that the assignment of f(y) and g(y) is important because S(f̄, ḡ) may be a good structured low rank approximation of S(f, g) but a poor structured low rank approximation of S(g, f), because its numerical rank is not defined. Examples that illustrate the differences between the linear and non-linear structure preserving matrix methods, and the importance of the assignment of f(y) and g(y), are shown.
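
    As a point of comparison, the sketch below computes the unstructured nearest rank-deficient matrix to a small Sylvester matrix by zeroing its smallest singular value; the structured low rank approximation discussed in the paper additionally constrains the perturbed matrix to remain the Sylvester matrix of two perturbed polynomials, which this sketch does not attempt. The matrix entries are made-up illustrative values.

```python
import numpy as np

# Sylvester matrix of f(y) = (y - 1)(y - 2) and a perturbed
# g(y) ~ (y - 1)(y + 3): the exact common root is destroyed by the
# perturbation, so S has full numerical rank.
S = np.array([[1.0, -3.0,   2.0,    0.0],
              [0.0,  1.0,  -3.0,    2.0],
              [1.0,  2.001, -2.999, 0.0],
              [0.0,  1.0,   2.001, -2.999]])

# Unstructured nearest rank-deficient matrix: zero the smallest singular
# value and rebuild.  The result is generally NOT a Sylvester matrix,
# which is exactly what the structured methods in the paper enforce.
U, s, Vt = np.linalg.svd(S)
print("singular values:", s)
s[-1] = 0.0
S_lowrank = U @ np.diag(s) @ Vt
print("rank of approximation:", np.linalg.matrix_rank(S_lowrank))   # 3
```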

    The computation of multiple roots of a Bernstein basis polynomial

    This paper describes the algorithms of Musser and Gauss for the computation of multiple roots of a theoretically exact Bernstein basis polynomial f̂(y) when the coefficients of its given form f(y) are corrupted by noise. The exact roots of f(y) can therefore be assumed to be simple, and thus the problem reduces to the calculation of multiple roots of a polynomial f̃(y) that is near f(y), such that the backward error is small. The algorithms require many greatest common divisor (GCD) computations and polynomial deconvolutions, both of which are implemented by a structure-preserving matrix method. The motivation of these algorithms arises from the unstructured and structured condition numbers of a multiple root of a polynomial. These condition numbers have an elegant interpretation in terms of the pejorative manifold of f̂(y), which allows the geometric significance of the GCD computations and polynomial deconvolutions to be considered. A variant of the Sylvester resultant matrix is used for the GCD computations because it yields better results than the standard form of this matrix, and the polynomial deconvolutions can be computed in several different ways, sequentially or simultaneously, and with the inclusion or omission of the preservation of the structure of the coefficient matrix. It is shown that Gauss’ algorithm yields better results than Musser’s algorithm, and the reason for these superior results is explained.
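
    The GCD chain that underlies both algorithms can be illustrated in exact arithmetic with sympy, as in the sketch below; the paper's contribution is to carry out the GCD computations and deconvolutions reliably for noisy Bernstein coefficients using structure-preserving matrix methods, which an exact symbolic gcd does not address. The function name multiplicity_chain and the example polynomial are illustrative.

```python
import sympy as sp

y = sp.symbols('y')

def multiplicity_chain(f):
    """GCD chain q_0 = f, q_{i+1} = gcd(q_i, q_i'), continued until the
    GCD is a constant.  The degree drop at each step encodes the
    multiplicity structure of the roots of f."""
    chain = [sp.Poly(f, y)]
    while chain[-1].degree() > 0:
        q = chain[-1].as_expr()
        chain.append(sp.Poly(sp.gcd(q, sp.diff(q, y)), y))
    return chain

# f has a triple root at y = 1 and a simple root at y = -2.
f = (y - 1)**3 * (y + 2)
chain = multiplicity_chain(f)
print([q.degree() for q in chain])          # [4, 2, 1, 0]

# Deconvolutions h_i = q_{i-1}/q_i; the quotients h_i/h_{i+1} (and the
# last h) are the square-free factors whose roots have multiplicity i.
h = [sp.quo(chain[i - 1], chain[i]) for i in range(1, len(chain))]
w = [sp.quo(h[i - 1], h[i]) for i in range(1, len(h))] + [h[-1]]
print([p.as_expr() for p in w])             # [y + 2, 1, y - 1]
```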

    The computation of the degree of an approximate greatest common divisor of two Bernstein polynomials

    This paper considers the computation of the degree t of an approximate greatest common divisor d(y) of two Bernstein polynomials f(y) and g(y), which are of degrees m and n respectively. The value of t is computed from the QR decomposition of the Sylvester resultant matrix S(f, g) and its subresultant matrices S_k(f, g), k = 2, ..., min(m, n), where S_1(f, g) = S(f, g). It is shown that the computation of t is significantly more complicated than its equivalent for two power basis polynomials because (a) S_k(f, g) can be written in several forms that differ in the complexity of the computation of their entries, (b) different forms of S_k(f, g) may yield different values of t, and (c) the binomial terms in the entries of S_k(f, g) may cause the ratio of its entry of maximum magnitude to its entry of minimum magnitude to be large, which may lead to numerical problems. It is shown that the QR decomposition and singular value decomposition (SVD) of the Sylvester matrix and its subresultant matrices yield better results than the SVD of the Bézout matrix, and that f(y) and g(y) must be processed before computations are performed on these resultant and subresultant matrices in order to obtain good results.
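
    A crude numerical stand-in for these rank tests is sketched below for power-basis polynomials: the degree of the AGCD is read off from the numerical rank of S(f, g), estimated from the largest gap in its singular values. The paper's method for Bernstein polynomials, with its preprocessing and its choice among different forms of S_k(f, g), is considerably more careful; the helper names and test polynomials here are illustrative.

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of two power-basis polynomials (coefficients
    ordered from highest to lowest degree)."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

def agcd_degree(f, g):
    """Estimate deg(AGCD) as t = m + n - rank(S(f, g)), where the
    numerical rank is read off from the largest gap in the singular
    values -- a rough substitute for the QR-based tests in the paper."""
    m, n = len(f) - 1, len(g) - 1
    s = np.linalg.svd(sylvester(f, g), compute_uv=False)
    gaps = s[:-1] / s[1:]               # ratios of consecutive singular values
    rank = int(np.argmax(gaps)) + 1     # rank = position of the largest gap
    return m + n - rank

# f and g share the factor (y - 1)(y + 2); coefficients perturbed by ~1e-8.
f = np.poly([1.0, -2.0, 3.0]) + 1e-8 * np.array([0.0, 1.0, -2.0, 1.0])
g = np.poly([1.0, -2.0, 0.5]) + 1e-8 * np.array([0.0, -1.0, 1.0, 2.0])
print(agcd_degree(f, g))   # 2
```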

    Over-constrained Weierstrass iteration and the nearest consistent system

    We propose a generalization of the Weierstrass iteration for over-constrained systems of equations and we prove that the proposed method is the Gauss-Newton iteration to find the nearest system which has at least k common roots and which is obtained via a perturbation of prescribed structure. In the univariate case we show the connection of our method to the optimization problem formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate case we generalize the expressions of Karmarkar and Lakshman, and give explicitly several iteration functions to compute the optimum. The arithmetic complexity of the iterations is detailed.
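
    For readers unfamiliar with the Weierstrass iteration, the sketch below implements its classical square-system form (often called the Durand-Kerner method), which finds all roots of one univariate polynomial simultaneously; the paper generalises this to over-constrained systems and identifies the result with a Gauss-Newton iteration, which the sketch does not attempt. The starting values and iteration count are conventional choices, not taken from the paper.

```python
import numpy as np

def weierstrass_roots(coeffs, iters=50):
    """Classical Weierstrass (Durand-Kerner) iteration for all roots of a
    monic polynomial; coeffs are ordered from highest to lowest degree
    with coeffs[0] == 1."""
    n = len(coeffs) - 1
    # standard distinct complex starting points
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)
    for _ in range(iters):
        for i in range(n):
            p = np.polyval(coeffs, z[i])                 # p(z_i)
            q = np.prod(z[i] - np.delete(z, i))          # prod_{j != i} (z_i - z_j)
            z[i] -= p / q                                # Weierstrass update
    return z

# roots of y^3 - 6y^2 + 11y - 6 = (y - 1)(y - 2)(y - 3)
print(np.sort_complex(weierstrass_roots([1.0, -6.0, 11.0, -6.0])))
```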

    The Sylvester and Bézout resultant matrices for blind image deconvolution

    Blind image deconvolution (BID) is one of the most important problems in image processing, and it requires the determination of an exact image F from a degraded form G of it when little or no information about F and the point spread function (PSF) H is known. Several methods have been developed for the solution of this problem, and one class of methods considers F, G and H to be bivariate polynomials in which the polynomial computations are implemented by the Sylvester or Bézout resultant matrices. This paper compares these matrices for the solution of the problem of BID, and it is shown that it reduces to a comparison of their effectiveness for greatest common divisor (GCD) computations. This is a difficult problem because the determination of the degree of the GCD of two polynomials requires the calculation of the rank of a matrix, and this rank determines the size of the PSF. It is shown that although the Bézout matrix is symmetric (unlike the Sylvester matrix) and smaller than the Sylvester matrix, both of which are computational advantages, it yields consistently worse results than the Sylvester matrix for the size and coefficients of the PSF. Computational examples of blurred and deblurred images obtained with the Sylvester and Bézout matrices are shown, and the superior results obtained with the Sylvester matrix are evident.
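
    The connection between deblurring and GCD computation can be seen in one dimension with exact arithmetic, as in the hedged sketch below: blurring is a convolution, convolution of coefficient sequences is polynomial multiplication, so a PSF shared by two blurred signals appears as a common polynomial factor. The paper works with noisy bivariate polynomials, where an approximate GCD must be computed from the Sylvester or Bézout matrices; the signals and PSF below are made-up values.

```python
import numpy as np
import sympy as sp

y = sp.symbols('y')

psf  = [1, 4, 6, 4, 1]            # the unknown blur (here a binomial filter)
row1 = [2, 5, 1, 7]               # two rows of the exact image
row2 = [3, 1, 4, 2]

blur1 = np.convolve(row1, psf)    # what the camera records
blur2 = np.convolve(row2, psf)

p1 = sp.Poly(list(map(int, blur1)), y)
p2 = sp.Poly(list(map(int, blur2)), y)

# In exact arithmetic the PSF is (up to scale) the GCD of the two blurred
# polynomials; with noisy data an AGCD must be computed from the Sylvester
# (or Bezout) resultant matrix instead, as compared in the paper.
print(sp.gcd(p1, p2).all_coeffs())   # [1, 4, 6, 4, 1]
```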

    TR-2013009: Algebraic Algorithms


    Structure-Preserving Matrix Methods for Computations on Univariate and Bivariate Bernstein Polynomials

    Curve and surface intersection finding is a fundamental problem in computer-aided geometric design (CAGD). This practical problem motivates the study undertaken here of methods for computing the square-free factorisation of univariate and bivariate polynomials in Bernstein form. It will be shown how these two problems are intrinsically linked and how finding univariate polynomial roots and bivariate polynomial factors is equivalent to finding curve and surface intersection points. The multiplicities of a polynomial’s factors are maintained through the use of a square-free factorisation algorithm, and this is analogous to the maintenance of smooth intersections between curves and surfaces, an important property in curve and surface design. Several aspects of the univariate and bivariate polynomial factorisation problem will be considered. This thesis examines the structure of the greatest common divisor (GCD) problem within the context of the square-free factorisation problem. It is shown that an accurate approximation of the GCD can be computed from inexact polynomials even in the presence of significant levels of noise. Polynomial GCD computations are ill-posed, in that noise in the coefficients of two polynomials which have a common factor typically causes the polynomials to become coprime. Therefore, a method for determining the approximate greatest common divisor (AGCD) is developed, where the AGCD is defined to have the same degree as the GCD and its coefficients are sufficiently close to those of the exact GCD. The algorithms proposed assume no prior knowledge of the level of noise added to the exact polynomials, differentiating this method from others which require derived threshold values in the GCD computation. The methods of polynomial factorisation devised in this thesis utilise the Sylvester matrix and a sequence of subresultant matrices for the GCD finding component. The classical definition of the Sylvester matrix is extended to compute the GCD of two and three bivariate polynomials defined in Bernstein form, and a new method of GCD computation is devised specifically for bivariate polynomials in Bernstein form which have been defined over a rectangular domain. These extensions are necessary for the computation of the factorisation of bivariate polynomials defined in the Bernstein form.
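
    As a small, concrete reminder of what a polynomial in Bernstein form is in this context, the sketch below evaluates one with the de Casteljau recurrence. This is background notation only, not the structure-preserving AGCD or square-free factorisation machinery developed in the thesis, and the coefficient values are illustrative.

```python
import numpy as np

def de_casteljau(b, y):
    """Evaluate a polynomial given by its Bernstein coefficients b on [0, 1]
    at the point y, using the numerically stable de Casteljau recurrence."""
    b = np.array(b, dtype=float)
    for _ in range(len(b) - 1):
        b = (1.0 - y) * b[:-1] + y * b[1:]   # repeated convex combinations
    return b[0]

# (y - 1/2)^2 on [0, 1] has Bernstein coefficients [1/4, -1/4, 1/4].
print(de_casteljau([0.25, -0.25, 0.25], 0.5))   # 0.0
```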