
    Quadrature Strategies for Constructing Polynomial Approximations

    Finding suitable points for multivariate polynomial interpolation and approximation is a challenging task. Yet, despite this challenge, there has been tremendous research dedicated to this singular cause. In this paper, we begin by reviewing classical methods for finding suitable quadrature points for polynomial approximation in both the univariate and multivariate settings. Then, we categorize recent advances into those that propose a new sampling approach and those centered on an optimization strategy. The sampling approaches yield a favorable discretization of the domain, while the optimization methods pick a subset of the discretized samples that minimizes certain objectives. While not all strategies follow this two-stage approach, most do. Sampling techniques covered include subsampling quadratures, Christoffel, induced, and Monte Carlo methods. Optimization methods discussed range from linear programming ideas and Newton's method to greedy procedures from numerical linear algebra. Our exposition is aided by examples that implement some of the aforementioned strategies.
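    As a rough illustration of the two-stage approach described in this abstract, the sketch below (not taken from the paper) draws Monte Carlo candidate points, builds a Legendre Vandermonde matrix, and then applies QR with column pivoting, one of the greedy numerical-linear-algebra procedures mentioned, to select a well-conditioned subset of points for interpolation. The degree, candidate count, and test function are arbitrary choices made for the demonstration.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.linalg import qr

def candidate_vandermonde(x, degree):
    # Rows: candidate points; columns: Legendre basis up to 'degree'.
    return legendre.legvander(x, degree)

rng = np.random.default_rng(0)
degree = 10
n_candidates = 2000

# Stage 1 (sampling): discretize the domain with Monte Carlo candidates.
x_cand = rng.uniform(-1.0, 1.0, n_candidates)
A = candidate_vandermonde(x_cand, degree)

# Stage 2 (optimization): greedily pick degree+1 well-conditioned points
# via QR with column pivoting applied to A^T, a standard numerical
# linear algebra heuristic for subset selection.
_, _, piv = qr(A.T, pivoting=True)
x_sel = np.sort(x_cand[piv[:degree + 1]])

# Interpolate f(x) = exp(x) at the selected points and check the error.
f = np.exp
coeffs = np.linalg.solve(candidate_vandermonde(x_sel, degree), f(x_sel))
x_test = np.linspace(-1.0, 1.0, 500)
err = np.max(np.abs(legendre.legval(x_test, coeffs) - f(x_test)))
print(f"max interpolation error: {err:.2e}")
```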

    Gradimir Milovanović - a master in approximation and computation. Part II


    Application of exponential fitting techniques to numerical methods for solving differential equations

    Ever since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise spontaneously in physics, engineering, chemistry, biology, economics and a lot of fields in between. From the motion of a pendulum, studied by high-school students, to the wave functions of a quantum system, studied by brave scientists: differential equations are common and unavoidable. It is therefore no surprise that a large number of mathematicians have studied, and still study, these equations. The better the techniques for solving DEs, the faster the fields where they appear can advance. Sadly, however, mathematicians have yet to find a technique (or a combination of techniques) that can solve all DEs analytically. Luckily, in the meanwhile, for a lot of applications approximate solutions are also sufficient. The numerical methods studied in this work compute such approximations. Instead of providing the hypothetical scientist with an explicit, continuous recipe for the solution to their problem, these methods give them an approximation of the solution at a number of discrete points. Numerical methods of this type have been the topic of research since the days of Leonhard Euler, and still are. Nowadays, however, the computations are performed by digital processors, which are well suited for these methods, even though many of the ideas predate the modern digital computer by nearly two centuries. The ever-increasing power of even the smallest processor allows us to devise newer and more elaborate methods. In this work, we will look at a few well-known numerical methods for the solution of differential equations. These methods are combined with a technique called exponential fitting, which produces exponentially fitted methods: classical methods with modified coefficients. The original idea behind this technique is to improve the performance on problems with oscillatory solutions.
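    To make the exponential-fitting idea concrete, here is a minimal toy sketch (not from the thesis): on the oscillatory test problem y' = iωy, the single forward Euler coefficient is replaced by a frequency-dependent factor so that each step reproduces exp(iωh) exactly. Real exponentially fitted methods adjust several coefficients of Runge-Kutta or multistep formulas, and the frequency ω must normally be estimated; everything below is illustrative.

```python
import numpy as np

def euler(f, y0, t):
    # Classical forward Euler: y_{n+1} = y_n + h * f(t_n, y_n).
    y = np.empty(len(t), dtype=complex)
    y[0] = y0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        y[n + 1] = y[n] + h * f(t[n], y[n])
    return y

def ef_euler(f, y0, t, omega):
    # Exponentially fitted Euler: the coefficient of f is rescaled so that
    # each step is exact on solutions proportional to exp(i*omega*t).
    y = np.empty(len(t), dtype=complex)
    y[0] = y0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        b = (np.exp(1j * omega * h) - 1.0) / (1j * omega * h)
        y[n + 1] = y[n] + h * b * f(t[n], y[n])
    return y

# Oscillatory test problem y' = i*omega*y with exact solution exp(i*omega*t).
omega = 20.0
f = lambda t, y: 1j * omega * y
t = np.linspace(0.0, 1.0, 201)
exact = np.exp(1j * omega * t)

print("classical Euler error:", np.max(np.abs(euler(f, 1.0, t) - exact)))
print("fitted Euler error:   ", np.max(np.abs(ef_euler(f, 1.0, t, omega) - exact)))
```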

    Solution of second kind Fredholm integral equations by means of Gauss and anti-Gauss quadrature rules

    This paper is concerned with the numerical approximation of Fredholm integral equations of the second kind. A Nyström method based on the anti-Gauss quadrature formula is developed and investigated in terms of stability and convergence in appropriate weighted spaces. The Nyström interpolants corresponding to the Gauss and the anti-Gauss quadrature rules are proved to furnish upper and lower bounds for the solution of the equation, under suitable assumptions which are easily verified for a particular weight function. Hence, an error estimate is available, and the accuracy of the solution can be improved by approximating it by an averaged Nyström interpolant. The effectiveness of the proposed approach is illustrated through different numerical tests.
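    The Gauss-rule half of the construction can be sketched as follows (illustrative kernel and right-hand side, not from the paper; the anti-Gauss rule, the weighted spaces, and the averaged interpolant are omitted): the integral operator is replaced by an n-point Gauss-Legendre sum, the resulting linear system is solved at the nodes, and the Nyström interpolant extends the solution to the whole interval.

```python
import numpy as np

def nystrom_solve(kernel, g, n):
    # Gauss-Legendre nodes and weights on [-1, 1].
    x, w = np.polynomial.legendre.leggauss(n)
    # Discretize (I - K)f = g:  f(x_i) - sum_j w_j k(x_i, x_j) f(x_j) = g(x_i).
    A = np.eye(n) - kernel(x[:, None], x[None, :]) * w[None, :]
    f_nodes = np.linalg.solve(A, g(x))
    # Nystrom interpolant: f(t) = g(t) + sum_j w_j k(t, x_j) f(x_j).
    def interpolant(t):
        t = np.atleast_1d(t)
        return g(t) + (kernel(t[:, None], x[None, :]) * w[None, :]) @ f_nodes
    return interpolant

# Illustrative data (not from the paper): smooth kernel and right-hand side.
kernel = lambda x, y: 0.25 * np.exp(x * y)
g = lambda x: np.cos(x)

t = np.linspace(-1.0, 1.0, 200)
coarse = nystrom_solve(kernel, g, 10)(t)
fine = nystrom_solve(kernel, g, 40)(t)
print("max difference between 10- and 40-point interpolants:",
      np.max(np.abs(coarse - fine)))
```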

    Numerical methods for solving ODE flow


    Numerical analysis of some integral equations with singularities

    In this thesis we consider new approaches to the numerical solution of a class of Volterra integral equations, which contain a kernel with a singularity of non-standard type. The kernel is singular in both arguments at the origin, resulting in multiple solutions, one of which is differentiable at the origin. We consider numerical methods to approximate any of the (infinitely many) solutions of the equation. We go on to show that the use of product integration over a short primary interval, combined with the careful use of extrapolation to improve the order, may be linked to any suitable standard method away from the origin. The resulting split-interval algorithm is shown to be reliable and flexible, capable of achieving good accuracy, with convergence to the one particular smooth solution. Supported by a college bursary from the University of Chester.
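    The thesis treats a kernel that is singular in both arguments at the origin; as a generic illustration only, the sketch below applies product (rectangle) integration to a standard weakly singular Volterra equation of the second kind, holding the unknown piecewise constant while the singular weight is integrated exactly on each subinterval. The test problem is manufactured so that the exact solution is y(t) = t; all names and parameters are illustrative.

```python
import numpy as np

def product_euler(g, K, T, n):
    # Product (rectangle) rule for
    #   y(t) = g(t) + int_0^t (t - s)**(-1/2) * K(t, s) * y(s) ds:
    # y is held piecewise constant on each subinterval while the singular
    # weight (t - s)**(-1/2) is integrated exactly.
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1)
    y[0] = g(t[0])  # the integral term vanishes at t = 0
    for i in range(1, n + 1):
        acc = 0.0
        for j in range(i):
            # Exact integral of (t_i - s)**(-1/2) over [t_j, t_{j+1}].
            w = 2.0 * (np.sqrt(t[i] - t[j]) - np.sqrt(t[i] - t[j + 1]))
            acc += w * K(t[i], t[j]) * y[j]
        y[i] = g(t[i]) + acc
    return t, y

# Manufactured test: with K = 1 and g(t) = t - (4/3) t^(3/2),
# the exact solution is y(t) = t.
g = lambda t: t - (4.0 / 3.0) * t ** 1.5
K = lambda t, s: 1.0
t, y = product_euler(g, K, T=1.0, n=200)
print("max error:", np.max(np.abs(y - t)))
```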