194 research outputs found

    The computation of multiple roots of a Bernstein basis polynomial

    This paper describes the algorithms of Musser and Gauss for the computation of multiple roots of a theoretically exact Bernstein basis polynomial f̂(y) when the coefficients of its given form f(y) are corrupted by noise. The exact roots of f(y) can therefore be assumed to be simple, and thus the problem reduces to the calculation of multiple roots of a polynomial f̃(y) that is near f(y), such that the backward error is small. The algorithms require many greatest common divisor (GCD) computations and polynomial deconvolutions, both of which are implemented by a structure-preserving matrix method. The motivation of these algorithms arises from the unstructured and structured condition numbers of a multiple root of a polynomial. These condition numbers have an elegant interpretation in terms of the pejorative manifold of f̂(y), which allows the geometric significance of the GCD computations and polynomial deconvolutions to be considered. A variant of the Sylvester resultant matrix is used for the GCD computations because it yields better results than the standard form of this matrix, and the polynomial deconvolutions can be computed in several different ways, sequentially or simultaneously, and with the inclusion or omission of the preservation of the structure of the coefficient matrix. It is shown that Gauss’ algorithm yields better results than Musser’s algorithm, and the reason for these superior results is explained.
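
    As a hedged illustration of the square-free structure these GCD computations exploit, the following Python sketch runs Musser-style square-free factorisation with exact arithmetic; the function name musser_square_free and the use of sympy in the power basis are assumptions for illustration, not the paper's structure-preserving Bernstein-basis method.

        import sympy as sp

        y = sp.symbols('y')

        def musser_square_free(f):
            """Square-free factors f1, f2, ... with f = f1 * f2**2 * f3**3 * ..."""
            factors = []
            g = sp.gcd(f, sp.diff(f, y))       # g carries every repeated factor of f
            w = sp.quo(f, g)                   # w = f / g is the square-free part
            while sp.degree(g, y) > 0:
                h = sp.gcd(w, g)
                factors.append(sp.quo(w, h))   # factor of the current multiplicity
                w, g = h, sp.quo(g, h)
            factors.append(w)
            return factors

        f = sp.expand((y - 1)**3 * (y + 2)**2)
        print(musser_square_free(f))   # [1, y + 2, y - 1]: multiplicities 2 and 3 recovered

    A noisy version of f would defeat the exact GCDs used here, which is precisely why the paper replaces them with structured AGCD computations.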

    The Sylvester and Bézout resultant matrices for blind image deconvolution

    Blind image deconvolution (BID) is one of the most important problems in image processing and it requires the determination of an exact image F from a degraded form of it G when little or no information about F and the point spread function (PSF) H is known. Several methods have been developed for the solution of this problem, and one class of methods considers F, G and H to be bivariate polynomials in which the polynomial computations are implemented by the Sylvester or Bézout resultant matrices. This paper compares these matrices for the solution of the problem of BID, and it is shown that it reduces to a comparison of their effectiveness for greatest common divisor (GCD) computations. This is a difficult problem because the determination of the degree of the GCD of two polynomials requires the calculation of the rank of a matrix, and this rank determines the size of the PSF. It is shown that although the Bézout matrix is symmetric (unlike the Sylvester matrix) and smaller than the Sylvester matrix, both of which are computational advantages, it yields consistently worse results than the Sylvester matrix for the size and coefficients of the PSF. Computational examples of blurred and deblurred images obtained with the Sylvester and Bézout matrices are shown and the superior results obtained with the Sylvester matrix are evident.
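
    The rank test at the heart of this comparison can be sketched in a few lines of Python; the fragment below builds the standard power-basis Sylvester matrix and reads the GCD degree off its numerical rank. The tolerance tol and the helper names are illustrative assumptions, and the paper itself works with bivariate polynomials and refined variants of this matrix.

        import numpy as np

        def sylvester(f, g):
            """Sylvester matrix of f (degree m) and g (degree n), coefficients highest first."""
            m, n = len(f) - 1, len(g) - 1
            S = np.zeros((m + n, m + n))
            for i in range(n):                 # n shifted copies of f
                S[i, i:i + m + 1] = f
            for i in range(m):                 # m shifted copies of g
                S[n + i, i:i + n + 1] = g
            return S

        def gcd_degree(f, g, tol=1e-8):
            S = sylvester(f, g)
            sv = np.linalg.svd(S, compute_uv=False)
            rank = int(np.sum(sv > tol * sv[0]))
            return S.shape[0] - rank           # deg GCD = m + n - rank(S)

        f = np.polymul([1, -1], [1, 2])        # (x - 1)(x + 2)
        g = np.polymul([1, -1], [1, 3])        # (x - 1)(x + 3)
        print(gcd_degree(f, g))                # 1: the common factor (x - 1)

    In the BID setting the size of the PSF is exactly this degree, which is why the reliability of the rank estimate, and hence the choice between the Sylvester and Bézout matrices, matters.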

    Structure-Preserving Matrix Methods for Computations on Univariate and Bivariate Bernstein Polynomials

    Curve and surface intersection finding is a fundamental problem in computer-aided geometric design (CAGD). This practical problem motivates the study undertaken here into methods for computing the square-free factorisation of univariate and bivariate polynomials in Bernstein form. It will be shown how these two problems are intrinsically linked and how finding univariate polynomial roots and bivariate polynomial factors is equivalent to finding curve and surface intersection points. The multiplicities of a polynomial’s factors are maintained through the use of a square-free factorisation algorithm and this is analogous to the maintenance of smooth intersections between curves and surfaces, an important property in curve and surface design. Several aspects of the univariate and bivariate polynomial factorisation problem will be considered. This thesis examines the structure of the greatest common divisor (GCD) problem within the context of the square-free factorisation problem. It is shown that an accurate approximation of the GCD can be computed from inexact polynomials even in the presence of significant levels of noise. Polynomial GCD computations are ill-posed, in that noise in the coefficients of two polynomials which have a common factor typically causes the polynomials to become coprime. Therefore, a method for determining the approximate greatest common divisor (AGCD) is developed, where the AGCD is defined to have the same degree as the GCD and its coefficients are sufficiently close to those of the exact GCD. The algorithms proposed assume no prior knowledge of the level of noise added to the exact polynomials, differentiating this method from others which require derived threshold values in the GCD computation. The methods of polynomial factorisation devised in this thesis utilise the Sylvester matrix and a sequence of subresultant matrices for the GCD finding component. The classical definition of the Sylvester matrix is extended to compute the GCD of two and three bivariate polynomials defined in Bernstein form, and a new method of GCD computation is devised specifically for bivariate polynomials in Bernstein form which have been defined over a rectangular domain. These extensions are necessary for the computation of the factorisation of bivariate polynomials defined in the Bernstein form.
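
    For readers unfamiliar with the Bernstein form that recurs throughout, the short Python sketch below evaluates a polynomial directly from its Bernstein coefficients by de Casteljau's algorithm; this is background illustration only, chosen because it avoids an ill-conditioned conversion to the power basis, and is not the thesis's structure-preserving method.

        import numpy as np

        def de_casteljau(b, t):
            """Evaluate sum_i b[i] * B_{i,n}(t) on [0, 1] by repeated linear interpolation."""
            b = np.asarray(b, dtype=float)
            while b.size > 1:
                b = (1.0 - t) * b[:-1] + t * b[1:]
            return b[0]

        # Bernstein coefficients of (t - 1/2)**2 on [0, 1] are [1/4, -1/4, 1/4]:
        print(de_casteljau([0.25, -0.25, 0.25], 0.5))   # 0.0, the double root at t = 1/2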

    Blind Image Deconvolution using Approximate Greatest Common Divisor and Approximate Polynomial Factorisation

    Images play a significant and important role in diverse areas of everyday modern life. Examples of the areas where the use of images is routine include medicine, forensic investigations, engineering applications and astronomical science. The procedures and methods that depend on image processing would benefit considerably from images that are free of blur. Most images are unfortunately affected by noise and blur that result from the practical limitations of image acquisition systems. The blurring and noise effects render the image less useful. An efficient method for image restoration is hence important for many applications. Restoration of true images from blurred images is the inverse of the naturally occurring problem of true image convolution through a blurring function. The deconvolution of images from blurred images is a non-trivial task. One challenge is that the computation of the mathematical function that represents the blurring process, which is known as the point spread function (PSF), is an ill-posed problem, i.e. an infinite number of solutions are possible for given inexact data. The blind image deconvolution (BID) problem is the central subject of this thesis. There are a number of approaches for solving the BID problem, including statistical methods and linear algebraic methods. The approach adopted in this research study for solving this problem falls within the class of linear algebraic methods. Polynomial linear algebra offers a way of computing the PSF size and its components without requiring any prior knowledge about the true image and the blurring PSF. This research study has developed a BID method for image restoration based on the approximate greatest common divisor (AGCD) algorithms, specifically, the approximate polynomial factorisation (APF) algorithm of two polynomials. The developed method uses the Sylvester resultant matrix algorithm in the computation of the AGCD and the QR decomposition for computing the degree of the AGCD. It is shown that the AGCD is equal to the PSF and the deblurred image can be computed from the coprime polynomials. In practice, the PSF can be spatially variant or invariant. PSF spatial invariance means that the blurred image pixels are the convolution of the true image pixels and the same PSF. Some of the PSF bivariate functions, in particular, separable functions, can be further simplified to the product of two univariate polynomials. This research study is focused on the invariant separable and non-separable PSF cases. The performance of state-of-the-art image restoration methods varies in terms of computational speed and accuracy. In addition, most of these methods require prior knowledge about the true image and the blurring function, which in a significant number of applications is an impractical requirement. The development of image restoration methods that require no prior knowledge about the true image and the blurring functions is hence desirable. Previous attempts at developing BID methods resulted in methods that have a robust performance against noise perturbations; however, their good performance is limited to blurring functions of small size. In addition, even for blurring functions of small size, these methods require the size of the blurring functions to be known and an estimate of the noise level present in the blurred image.
    The developed method has better performance than all the other state-of-the-art methods; in particular, it determines the correct size and coefficients of the PSF and then uses it to recover the original image. It does not require any prior knowledge about the PSF, which is a prerequisite for all the other methods.
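
    The polynomial view of blurring described above can be made concrete with a small sketch; assuming a noise-free, separable PSF, each blurred image row is the coefficient vector of a polynomial product, and deblurring a row is exact polynomial division. All names below are illustrative.

        import numpy as np

        h_row = np.array([0.25, 0.5, 0.25])          # one univariate factor of a separable PSF
        row = np.array([1.0, 4.0, 2.0, 3.0, 1.0])    # one row of the true image

        blurred = np.convolve(row, h_row)            # convolution == polynomial multiplication

        # Deconvolve by polynomial division; the remainder vanishes without noise.
        recovered, remainder = np.polydiv(blurred, h_row)
        print(np.allclose(recovered, row), np.allclose(remainder, 0.0))   # True True

    With noise the division no longer terminates cleanly, which is why the method computes an AGCD first rather than dividing by a guessed PSF.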

    Blind Image Deconvolution Using The Sylvester Matrix

    Blind image deconvolution refers to the process of determining both an exact image and the blurring function from a degraded form of the image. This thesis presents a solution of the blind image deconvolution problem using polynomial computations. The proposed solution does not require prior knowledge of the blurring function or noise level. Blind image deconvolution is needed in many applications, such as astronomy, remote sensing and medical X-ray imaging, where noise is present in the exact image and blurring function. It is shown that the Sylvester resultant matrix enables the blurring function to be calculated using approximate greatest common divisor computations, rather than greatest common divisor computations. A developed method for the computation of an approximate greatest common divisor of two inexact univariate polynomials is employed here, to identify arbitrary forms of the blurring function. The deblurred image is then calculated by deconvolving the computed blurring function from the degraded image, using polynomial division. Moreover, high performance computing is considered to speed up the calculations performed in the spatial domain. The effectiveness of the proposed solution is demonstrated by experimental results for the deblurred image and the blurring function, and the results are compared with a state-of-the-art image deblurring algorithm.
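
    A toy, noise-free version of the idea can be written down directly: two rows blurred by the same function share that function as a common polynomial factor, so a GCD computation exposes it. Exact sympy arithmetic stands in below for the Sylvester-matrix AGCD computations the thesis actually applies to inexact data; the polynomials are invented for illustration.

        import sympy as sp

        x = sp.symbols('x')
        psf = sp.Poly([1, 2, 1], x)           # unknown blurring function, (x + 1)**2
        row1 = sp.Poly([3, 1, 4], x)          # two rows of the exact image,
        row2 = sp.Poly([2, 7, 3], x)          # coprime to each other and to the PSF

        g1, g2 = row1 * psf, row2 * psf       # blurring == polynomial multiplication
        blur = sp.gcd(g1, g2)                 # the common factor is the blurring function
        print(blur.as_expr())                 # x**2 + 2*x + 1
        print(sp.div(g1, blur)[0].as_expr())  # deblurred row1, by polynomial division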

    Weak Gravitational Lensing by Large-Scale Structures: A Tool for Constraining Cosmology

    There is now very strong evidence that our Universe is undergoing an accelerated expansion period as if it were under the influence of a gravitationally repulsive “dark energy” component. Furthermore, most of the mass of the Universe seems to be in the form of non-luminous matter, the so-called “dark matter”. Together, these “dark” components, whose nature remains unknown today, represent around 96% of the matter-energy budget of the Universe. Unraveling the true nature of the dark energy and dark matter has thus, obviously, become one of the primary goals of present-day cosmology. Weak gravitational lensing, or weak lensing for short, is the effect whereby light emitted by distant galaxies is slightly deflected by the tidal gravitational fields of intervening foreground structures. Because it only relies on the physics of gravity, weak lensing has the unique ability to probe the distribution of mass in a direct and unbiased way. This technique is at present routinely used to study the dark matter, typical applications being the mass reconstruction of galaxy clusters and the study of the properties of dark halos surrounding galaxies. Another and more recent application of weak lensing, on which we focus in this thesis, is the analysis of the cosmological lensing signal induced by large-scale structures, the so-called “cosmic shear”. This signal can be used to measure the growth of structures and the expansion history of the Universe, which makes it particularly relevant to the study of dark energy. Of all weak lensing effects, the cosmic shear is the most subtle and its detection requires the accurate analysis of the shapes of millions of distant, faint galaxies in the near infrared. So far, the main factor limiting cosmic shear measurement accuracy has been the relatively small sky areas covered. The next generation of wide-field, multicolor surveys will, however, overcome this hurdle by covering a much larger portion of the sky with improved image quality. The resulting statistical errors will then become subdominant compared to systematic errors, the latter becoming instead the main source of uncertainty. In fact, uncovering key properties of dark energy will only be achievable if these systematics are well understood and reduced to the required level. The major sources of uncertainty reside in the shape measurement algorithm used, the convolution of the original image by the instrumental and possibly atmospheric point spread function (PSF), the pixelation effect caused by the integration of light falling on the detector pixels and the degradation caused by various sources of noise. Measuring the cosmic shear thus entails solving the difficult inverse problem of recovering the shear signal from blurred, pixelated and noisy galaxy images while keeping errors within the limits demanded by future weak lensing surveys. Reaching this goal is not without challenges. In fact, the best available shear measurement methods would need a tenfold improvement in accuracy to match the requirements of a space mission like Euclid from ESA, scheduled at the end of this decade. Significant progress has nevertheless been made in the last few years, with substantial contributions from initiatives such as the GREAT (GRavitational lEnsing Accuracy Testing) challenges. The main objective of these open competitions is to foster the development of new and more accurate shear measurement methods.
    We start this work with a quick overview of modern cosmology: its fundamental tenets, achievements and the challenges it faces today. We then review the theory of weak gravitational lensing and explain how it can make use of cosmic shear observations to place constraints on cosmology. The last part of this thesis focuses on the practical challenges associated with the accurate measurement of the cosmic shear. After a review of the subject we present the main contributions we have made in this area: the development of the gfit shear measurement method, new algorithms for point spread function (PSF) interpolation and image denoising. The gfit method emerged as one of the top performers in the GREAT10 Galaxy Challenge. It essentially consists in fitting two-dimensional elliptical Sérsic light profiles to observed galaxy images in order to produce estimates for the shear power spectrum. PSF correction is automatic and an efficient shape-preserving denoising algorithm can be optionally applied prior to fitting the data. PSF interpolation is also an important issue in shear measurement because the PSF is only known at star positions while PSF correction has to be performed at any position on the sky. We developed innovative PSF interpolation algorithms for the GREAT10 Star Challenge, a competition dedicated to the PSF interpolation problem. Our participation was very successful since one of our interpolation methods won the Star Challenge while the remaining four achieved the next highest scores of the competition. Finally we have participated in the development of a wavelet-based, shape-preserving denoising method particularly well suited to weak lensing analysis.
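
    As a hedged sketch of the model-fitting idea behind gfit, the Python fragment below fits an elliptical Sérsic profile to a synthetic galaxy stamp and reads the ellipticity off the fitted parameters. The parametrisation, the b_n approximation and every name here are illustrative assumptions, not the actual gfit implementation.

        import numpy as np
        from scipy.optimize import least_squares

        def sersic(params, x, y):
            """Elliptical Sersic profile I(r) = I_e * exp(-b_n * ((r / r_e)**(1 / n) - 1))."""
            I_e, r_e, n, e1, e2 = params
            b_n = 2.0 * n - 1.0 / 3.0              # common approximation for b_n
            xp = (1.0 - e1) * x - e2 * y           # shear the coordinates by the
            yp = -e2 * x + (1.0 + e1) * y          # ellipticity (e1, e2), to first order
            r = np.hypot(xp, yp)
            return I_e * np.exp(-b_n * ((r / r_e)**(1.0 / n) - 1.0))

        # Synthetic noisy galaxy stamp on a 32 x 32 pixel grid.
        yy, xx = np.mgrid[-16:16, -16:16].astype(float)
        truth = [1.0, 3.0, 1.5, 0.08, -0.03]       # I_e, r_e, n, e1, e2
        rng = np.random.default_rng(0)
        stamp = sersic(truth, xx, yy) + 0.01 * rng.standard_normal(xx.shape)

        def residual(p):
            return (sersic(p, xx, yy) - stamp).ravel()

        fit = least_squares(residual, x0=[0.5, 2.0, 1.0, 0.0, 0.0],
                            bounds=([0.01, 0.1, 0.3, -0.7, -0.7],
                                    [10.0, 20.0, 8.0, 0.7, 0.7]))
        print(fit.x)                               # close to truth; (e1, e2) estimate the shape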

    Statistical and structured optimisation: methods for the approximate GCD problem.

    The computation of polynomial greatest common divisors (GCDs) is a fundamental problem in algebraic computing and has important, widespread applications in areas such as computing theory, control, image processing, signal processing and computer-aided design (CAD).