
    Matrix representation of the shifting operation and numerical properties of the ERES method for computing the greatest common divisor of sets of many polynomials

    The Extended-Row-Equivalence and Shifting (ERES) method is a matrix-based method developed for the computation of the greatest common divisor (GCD) of sets of many polynomials. In this paper we formulate the shifting operation as a matrix product, which allows us to study the fundamental theoretical and numerical properties of the ERES method by introducing its complete algebraic representation. We then analyse in depth its overall numerical stability in finite precision arithmetic. Numerical examples and a comparison with other methods are also presented.
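
    The shifting step mentioned above can be illustrated concretely. A minimal sketch (my own NumPy illustration, not the paper's formulation): shifting a row so that its leading zeros move to the end is right-multiplication by a permutation matrix, so the operation is linear and composes with the other row transformations of the method.

```python
import numpy as np

def shift_matrix(n, k):
    """Permutation matrix that cyclically shifts an n-vector left by k places.

    Illustrative only: for a row with k leading zeros, row @ shift_matrix(n, k)
    moves those zeros to the end of the row, which is the usual description of
    the ERES shifting step.
    """
    S = np.zeros((n, n))
    for j in range(n):
        S[j, (j - k) % n] = 1.0
    return S

row = np.array([0.0, 0.0, 3.0, 1.0, 2.0])   # a row with two leading zeros
print(row @ shift_matrix(len(row), 2))       # [3. 1. 2. 0. 0.]
```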

    Structured Total Least Squares for Approximate Polynomial Operations

    This thesis presents techniques for accurately computing a number of fundamental operations on approximate polynomials. The general goal is to determine nearby polynomials which have a non-trivial result for the operation. We proceed by first translating each of the polynomial operations into a particular structured matrix system, constructed to represent dependencies in the polynomial coefficients. Perturbing this matrix system to a nearby system of reduced rank yields the nearby polynomials that have a non-trivial result. The translation from polynomial operation to matrix system permits the use of emerging methods for solving sophisticated least squares problems. These methods introduce the required dependencies into the system in a structured way, ensuring a certain minimization is met; this minimization ensures the determined polynomials are close to the original input. We present translations for the following operations on approximate polynomials: Division, Greatest Common Divisor (GCD), Bivariate Factorization, and Decomposition.

    The least squares problems considered include classical Least Squares (LS), Total Least Squares (TLS) and Structured Total Least Squares (STLS). In particular, we make use of some recent developments in the formulation of STLS to perturb the matrix system while maintaining the structure of the original matrix. This allows reconstruction of the resulting polynomials without applying any heuristics or iterative refinements, and guarantees a result for the operation with zero residual. Underlying the methods for the LS, TLS and STLS problems are varying uses of the Singular Value Decomposition (SVD). This decomposition is also a vital tool for determining appropriate matrix rank, and we spend some time establishing the accuracy of the SVD. We present an algorithm for a relatively accurate SVD recently introduced in [8], which is then used to solve the LS and TLS problems. The result is confidence in the use of LS and TLS for the polynomial operations, providing a fair contrast with STLS. The SVD is also used to provide the starting point for our STLS algorithm, with the prescribed guaranteed accuracy.

    Finally, we present a generalized implementation of the Riemannian SVD (RiSVD), which can be applied to any structured matrix to determine the result for STLS. This has the advantage of being applicable to all of our polynomial operations, with the penalty of decreased efficiency. We also include a novel, yet naive, improvement that relies on randomization to increase the efficiency by converting a rectangular system to one that is square. The results for each of the polynomial operations are presented in detail, and the benefits of each of the least squares solutions are considered. We also present distance bounds that confirm our solutions are within an acceptable tolerance.
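
    One concrete instance of the structured matrix systems described above is the GCD operation: two polynomials have a non-trivial GCD exactly when their Sylvester matrix is rank deficient, and the rank deficiency equals the degree of the GCD. A minimal sketch of that viewpoint (my own NumPy illustration, not the thesis's STLS code):

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists with the
    highest-degree coefficient first. Its rank deficiency equals deg(gcd(f, g))."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                       # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                       # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

# f = (x - 1)(x - 2) and g = (x - 1)(x + 3) share the factor (x - 1)
f = [1.0, -3.0, 2.0]
g = [1.0, 2.0, -3.0]
print(np.linalg.svd(sylvester(f, g), compute_uv=False))
# the smallest singular value is (numerically) zero: rank deficiency 1,
# so the GCD has degree 1
```

    An STLS formulation then seeks the smallest structured perturbation of the coefficients that makes such a matrix exactly rank deficient, which is what produces nearby polynomials with a non-trivial result.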

    Computing Approximate GCRDs of Differential Polynomials

    We generalize the approximate greatest common divisor problem to the non-commutative, approximate Greatest Common Right Divisor (GCRD) problem of differential polynomials. Algorithms for performing arithmetic on approximate differential polynomials are presented, along with certification results and the corresponding number of flops required. Under reasonable assumptions the approximate GCRD problem is well-posed. In particular, we show that an approximate GCRD exists under these assumptions and provide counterexamples when they are not satisfied. We introduce algorithms for computing nearby differential polynomials with a GCRD. These differential polynomials are improved through a post-refinement Newton iteration. It is shown that Newton iteration will converge to a unique, optimal solution when the residual is sufficiently small. Furthermore, if our computed solution is not optimal, it is shown to be reasonably close to the optimal solution.
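
    The arithmetic referred to above is non-commutative because the derivation D does not commute with variable coefficients: D·b(x) = b(x)·D + b'(x). A small sketch of this multiplication rule (my own SymPy illustration under that standard convention, not the thesis's algorithms):

```python
import sympy as sp
from math import comb

x = sp.Symbol('x')

def diffop_mul(A, B):
    """Multiply differential operators given as coefficient lists of D^0, D^1, ...
    (coefficients are SymPy expressions in x), using the non-commutative rule
    D * b(x) = b(x) * D + b'(x), generalised by Leibniz:
    D^i * b = sum_k C(i, k) * b^(k) * D^(i - k)."""
    out = [sp.Integer(0)] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            for k in range(i + 1):
                out[i - k + j] += comb(i, k) * a * sp.diff(b, x, k)
    return [sp.expand(c) for c in out]

# (D - 1) * (D + x)  =  D^2 + (x - 1) D + (1 - x)
print(diffop_mul([sp.Integer(-1), sp.Integer(1)], [x, sp.Integer(1)]))
```

    A GCRD computation builds on this arithmetic; the approximate setting additionally has to cope with inexact coefficients, which is where the well-posedness results and the Newton refinement described above come in.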

    Blind Image Deconvolution Using The Sylvester Matrix

    Blind image deconvolution refers to the process of determining both the exact image and the blurring function from a given inexact (degraded) image. This thesis presents a solution of the blind image deconvolution problem using polynomial computations. The proposed solution does not require prior knowledge of the blurring function or noise level. Blind image deconvolution is needed in many applications, such as astronomy, remote sensing and medical X-ray imaging, where noise is present in the exact image and blurring function. It is shown that the Sylvester resultant matrix enables the blurring function to be calculated using approximate greatest common divisor computations, rather than exact greatest common divisor computations. A method developed for the computation of an approximate greatest common divisor of two inexact univariate polynomials is employed here to identify arbitrary forms of the blurring function. The deblurred image is then calculated by deconvolving the computed blurring function from the degraded image, using polynomial division. Moreover, high performance computing is considered to speed up the calculations performed in the spatial domain. The effectiveness of the proposed solution is demonstrated by experimental results for the deblurred image and the blurring function, and the results are compared with a state-of-the-art image deblurring algorithm.
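
    The underlying polynomial view is that blurring a row (or column) of an image is convolution with the point spread function, i.e. polynomial multiplication, so the blur divides every blurred row and deblurring is polynomial division. A noise-free sketch of that structure (illustrative only; the thesis recovers the blur from inexact data via approximate GCD computations on the Sylvester matrix):

```python
import numpy as np

psf  = np.array([0.25, 0.5, 0.25])        # hypothetical 1-D blurring function
row1 = np.array([1.0, 4.0, 2.0, 7.0])     # two rows of the exact image
row2 = np.array([3.0, 0.0, 5.0, 1.0])

blur1 = np.convolve(row1, psf)            # blurring = polynomial multiplication
blur2 = np.convolve(row2, psf)            # the psf divides both blurred rows

# once the blurring function is known, deblurring is polynomial division
quotient, remainder = np.polydiv(blur1, psf)
print(quotient, np.allclose(remainder, 0))   # recovers row1 with zero remainder
```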

    Explicit computations with modular Galois representations


    Structure-Preserving Matrix Methods for Computations on Univariate and Bivariate Bernstein Polynomials

    Curve and surface intersection finding is a fundamental problem in computer-aided geometric design (CAGD). This practical problem motivates the undertaken study into methods for computing the square-free factorisation of univariate and bivariate polynomials in Bernstein form. It will be shown how these two problems are intrinsically linked and how finding univariate polynomial roots and bivariate polynomial factors is equivalent to finding curve and surface intersection points. The multiplicities of a polynomial’s factors are maintained through the use of a square-free factorisation algorithm, and this is analogous to the maintenance of smooth intersections between curves and surfaces, an important property in curve and surface design.

    Several aspects of the univariate and bivariate polynomial factorisation problem will be considered. This thesis examines the structure of the greatest common divisor (GCD) problem within the context of the square-free factorisation problem. It is shown that an accurate approximation of the GCD can be computed from inexact polynomials even in the presence of significant levels of noise. Polynomial GCD computations are ill-posed, in that noise in the coefficients of two polynomials which have a common factor typically causes the polynomials to become coprime. Therefore, a method for determining the approximate greatest common divisor (AGCD) is developed, where the AGCD is defined to have the same degree as the GCD and coefficients sufficiently close to those of the exact GCD. The algorithms proposed assume no prior knowledge of the level of noise added to the exact polynomials, differentiating this method from others which require derived threshold values in the GCD computation.

    The methods of polynomial factorisation devised in this thesis utilise the Sylvester matrix and a sequence of subresultant matrices for the GCD-finding component. The classical definition of the Sylvester matrix is extended to compute the GCD of two and three bivariate polynomials defined in Bernstein form, and a new method of GCD computation is devised specifically for bivariate polynomials in Bernstein form which have been defined over a rectangular domain. These extensions are necessary for the computation of the factorisation of bivariate polynomials defined in Bernstein form.
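
    The role of the GCD in square-free factorisation is that repeated factors of a polynomial p survive in gcd(p, p'), so dividing them out strips multiplicities. A small sketch of this step in the power basis (my own SymPy illustration; the thesis performs the analogous computation, robustly, for inexact polynomials in Bernstein form):

```python
import sympy as sp

x = sp.Symbol('x')
p = sp.expand((x - 1)**3 * (x + 2)**2 * (x - 5))

g = sp.gcd(p, sp.diff(p, x))     # carries (x - 1)^2 * (x + 2): each multiplicity minus one
square_free = sp.quo(p, g)       # (x - 1)(x + 2)(x - 5): each root exactly once
print(sp.factor(g), sp.factor(square_free))
```

    Iterating this GCD step recovers each factor together with its multiplicity, which is why an accurate approximate GCD is the critical component when the coefficients are inexact.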