Numerical Solution of Linear Ordinary Differential Equations and Differential-Algebraic Equations by Spectral Methods
This thesis involves the implementation of spectral methods for the numerical solution of linear Ordinary Differential Equations (ODEs) and linear Differential-Algebraic Equations (DAEs). First we consider ODEs through some standard problems, and then focus on problems in which the solution or some of the coefficient functions have singularities. After weighing the strengths and weaknesses of spectral methods for such problems, we propose a modified pseudo-spectral method that is more efficient than other spectral methods, and test it on several examples.
We extend the pseudo-spectral method to systems of linear ODEs and to linear DAEs, and compare it with other methods, such as Backward Differentiation Formulae (BDF) and implicit Runge-Kutta (RK) methods, on several numerical examples. Furthermore, by an appropriate choice of Gauss-Chebyshev-Radau points, we show through examples that this method can solve a linear DAE even when some of the coefficient functions have singularities. We also take problems previously treated by other authors with finite difference methods and compare their results with ours.
Finally, we present a short survey of the properties of DAE problems and of numerical methods for solving them, and extend the pseudo-spectral method to DAE problems with variable coefficient functions. Our numerical experience shows that spectral and pseudo-spectral methods, and their modified versions, are very promising for linear ODE and linear DAE problems whose solution or coefficient functions have singularities.
The modified method for solving an ODE introduced in Section 3.2 is new work; its extension to a DAE or a system of ODEs, explained in Section 4.6, is likewise a new idea that has not appeared previously.
In all chapters, wherever we speak of an ODE or a DAE, we mean a linear one.
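The collocation idea behind a pseudo-spectral method can be sketched on a single linear ODE: differentiate on Chebyshev points with a differentiation matrix and impose the boundary condition as one row of the resulting linear system. This is a minimal illustration, not the thesis's modified method; the function name and the test problem u' = -u are our own choices.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (Trefethen's construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points, descending
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick for diagonal
    return D, x

# Collocation for u'(x) = -u(x) on [-1, 1] with u(-1) = 1 (exact: exp(-(x+1))).
N = 16
D, x = cheb(N)
A = D + np.eye(N + 1)      # rows enforce u' + u = 0 at the collocation points
b = np.zeros(N + 1)
A[N, :] = 0.0              # x[N] = -1 is the last (leftmost) point
A[N, N] = 1.0              # replace that row by the boundary condition
b[N] = 1.0
u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.exp(-(x + 1))))
print(f"max error with N = {N}: {err:.2e}")   # spectral (geometric) accuracy
```

Even N = 16 resolves this smooth problem to near machine precision, which is the accuracy behavior the thesis exploits.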
Quadrature Strategies for Constructing Polynomial Approximations
Finding suitable points for multivariate polynomial interpolation and approximation is a challenging task. Yet, despite this challenge, there has been tremendous research dedicated to this singular cause. In this paper, we begin by reviewing classical methods for finding suitable quadrature points for polynomial approximation in both the univariate and multivariate setting. Then, we categorize recent advances into those that propose a new sampling approach and those centered on an optimization strategy. The sampling approaches yield a favorable discretization of the domain, while the optimization methods pick a subset of the discretized samples that minimize certain objectives. While not all strategies follow this two-stage approach, most do. Sampling techniques covered include subsampling quadratures, Christoffel, induced and Monte Carlo methods. Optimization methods discussed range from linear programming ideas and Newton's method to greedy procedures from numerical linear algebra. Our exposition is aided by examples that implement some of the aforementioned strategies.
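Why the choice of points matters can be seen in one dimension already: interpolating the Runge function at equispaced nodes diverges, while quadrature-based nodes such as Gauss-Legendre points behave well. A small sketch (the test function and degree are our own choices):

```python
import numpy as np

# Degree-n interpolation of the Runge function at two node families.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 20
xe = np.linspace(-1.0, 1.0, n + 1)                    # equispaced nodes
xg, _ = np.polynomial.legendre.leggauss(n + 1)        # Gauss-Legendre nodes

xx = np.linspace(-1.0, 1.0, 1000)                     # evaluation grid
errs = {}
for name, nodes in [("equispaced", xe), ("Gauss-Legendre", xg)]:
    # chebfit with n+1 nodes and degree n is interpolation in a stable basis
    c = np.polynomial.chebyshev.chebfit(nodes, f(nodes), n)
    errs[name] = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, c) - f(xx)))
    print(f"{name:15s} max error: {errs[name]:.2e}")
```

The equispaced interpolant exhibits the Runge phenomenon (error growing with n near the endpoints), while the Gauss-Legendre interpolant converges; this is the gap the sampling strategies in the paper aim to close in higher dimensions.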
Application of exponential fitting techniques to numerical methods for solving differential equations
Ever since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise spontaneously in physics, engineering, chemistry, biology, economics, and many fields in between. From the motion of a pendulum, studied by high-school students, to the wave functions of a quantum system, studied by brave scientists: differential equations are common and unavoidable. It is therefore no surprise that a large number of mathematicians have studied, and still study, these equations. The better the techniques for solving DEs, the faster the fields in which they appear can advance.
Sadly, however, mathematicians have yet to find a technique (or a combination of techniques) that can solve all DEs analytically. Fortunately, for many applications, approximate solutions are sufficient. The numerical methods studied in this work compute such approximations. Instead of providing the hypothetical scientist with an explicit, continuous recipe for the solution to their problem, these methods give them an approximation of the solution at a number of discrete points. Numerical methods of this type have been the topic of research since the days of Leonhard Euler, and still are. Nowadays, however, the computations are performed by digital processors, which are well suited for these methods, even though many of the ideas predate the modern digital computer by centuries. The ever-increasing power of even the smallest processor allows us to devise newer and more elaborate methods.
In this work, we will look at a few well-known numerical methods for the solution of differential equations. These methods are combined with a technique called exponential fitting, which produces exponentially fitted methods: classical methods with modified coefficients. The original idea behind this technique is to improve performance on problems with oscillatory solutions.
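The coefficient-modification idea can be sketched on the simplest case: forward Euler for y' = iωy. Replacing the classical step weight h by φ = (e^{iωh} - 1)/(iω) makes the method exact on exponentials with the fitted frequency ω. This is a minimal illustration of the principle, not one of the specific methods developed in the work; the test problem and names are ours.

```python
import numpy as np

# Classical vs. exponentially fitted Euler on y' = i*omega*y, y(0) = 1,
# whose exact solution is exp(i*omega*t).
omega = 10.0
lam = 1j * omega
h, T = 0.01, 1.0
nsteps = int(round(T / h))

phi = (np.exp(lam * h) - 1.0) / lam   # fitted weight replacing h
y_euler = 1.0 + 0.0j
y_fit = 1.0 + 0.0j
for _ in range(nsteps):
    y_euler += h * lam * y_euler      # classical forward Euler step
    y_fit += phi * lam * y_fit        # exponentially fitted step

exact = np.exp(lam * T)
print(f"Euler error : {abs(y_euler - exact):.2e}")
print(f"Fitted error: {abs(y_fit - exact):.2e}")
```

Since (1 + φλ) = e^{λh}, the fitted recursion reproduces the exact solution up to rounding, while classical Euler both amplifies the amplitude and drifts in phase for this oscillatory problem.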
Solution of second kind Fredholm integral equations by means of Gauss and anti-Gauss quadrature rules
This paper is concerned with the numerical approximation of Fredholm integral equations of the second kind. A Nyström method based on the anti-Gauss quadrature formula is developed and investigated in terms of stability and convergence in appropriate weighted spaces. The Nyström interpolants corresponding to the Gauss and the anti-Gauss quadrature rules are proved to furnish upper and lower bounds for the solution of the equation, under suitable assumptions which are easily verified for a particular weight function. Hence, an error estimate is available, and the accuracy of the solution can be improved by approximating it by an averaged Nyström interpolant. The effectiveness of the proposed approach is illustrated through different numerical tests.
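The bracketing-and-averaging mechanism can be sketched at the quadrature level before any integral equation enters: for the Legendre weight, Laurie's anti-Gauss rule comes from the Jacobi matrix of one extra order with its last recurrence coefficient doubled, and its error is (approximately) the negative of the Gauss error, so the two rules bracket the integral and their average is markedly more accurate. A small sketch under these assumptions (function names are ours):

```python
import numpy as np

def gauss_legendre(n):
    """n-point Gauss rule for weight 1 on [-1, 1] via Golub-Welsch."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)    # off-diagonal of the Jacobi matrix
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, 2.0 * V[0, :]**2           # mu_0 = 2 for the Legendre weight

def anti_gauss_legendre(n):
    """(n+1)-point anti-Gauss rule: last beta doubled (Laurie's construction)."""
    k = np.arange(1, n + 1)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)
    beta[-1] *= np.sqrt(2.0)                  # beta_n -> 2*beta_n in the matrix
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, 2.0 * V[0, :]**2

f = np.exp
exact = np.e - 1.0 / np.e                     # integral of e^x over [-1, 1]
xg, wg = gauss_legendre(4)
xa, wa = anti_gauss_legendre(4)
Ig, Ia = wg @ f(xg), wa @ f(xa)
avg = 0.5 * (Ig + Ia)                         # averaged rule
print(f"Gauss error     : {Ig - exact:+.2e}")
print(f"anti-Gauss error: {Ia - exact:+.2e}")
print(f"averaged error  : {avg - exact:+.2e}")
```

The opposite-sign errors give the computable bounds the paper exploits, and the averaged value plays the role of the averaged Nyström interpolant.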
Numerical analysis of some integral equations with singularities
In this thesis we consider new approaches to the numerical solution of a class of Volterra integral equations which contain a kernel with a singularity of non-standard type. The kernel is singular in both arguments at the origin, resulting in multiple solutions, one of which is differentiable at the origin. We consider numerical methods to approximate any of the (infinitely many) solutions of the equation. We go on to show that the use of product integration over a short primary interval, combined with the careful use of extrapolation to improve the order, may be linked to any suitable standard method away from the origin. The resulting split-interval algorithm is shown to be reliable and flexible, capable of achieving good accuracy, with convergence to the one particular smooth solution. Supported by a college bursary from the University of Chester.
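The product-integration ingredient can be illustrated on the classical Abel kernel (t - s)^{-1/2} rather than the thesis's non-standard kernel: the singular factor is integrated exactly against a piecewise-linear interpolant of the unknown, so the singularity never has to be sampled. A sketch under that simplification, with a manufactured solution; the function name and test problem are ours, and this is not the split-interval algorithm itself.

```python
import numpy as np

def solve_abel(g, T, n):
    """Product-trapezoidal scheme for y(t) = g(t) + int_0^t (t-s)^{-1/2} y(s) ds."""
    h = T / n
    t = h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[0] = g(0.0)                              # the integral vanishes at t = 0
    for m in range(1, n + 1):
        w = np.zeros(m + 1)
        for j in range(m):
            a, b = t[m] - t[j + 1], t[m] - t[j]
            m0 = 2.0 * (np.sqrt(b) - np.sqrt(a))            # int u^{-1/2} du
            m1 = t[m] * m0 - (2.0 / 3.0) * (b**1.5 - a**1.5)  # int u^{-1/2} s du
            # exact moments of the kernel against the two hat functions
            w[j] += (t[j + 1] * m0 - m1) / h
            w[j + 1] += (m1 - t[j] * m0) / h
        # y_m appears on both sides; solve the scalar implicit equation
        y[m] = (g(t[m]) + w[:m] @ y[:m]) / (1.0 - w[m])
    return t, y

# Manufactured solution y(t) = t^2, for which g(t) = t^2 - (16/15) t^(5/2).
t, y = solve_abel(lambda s: s**2 - (16.0 / 15.0) * s**2.5, T=1.0, n=200)
print(f"max error: {np.max(np.abs(y - t**2)):.2e}")
```

Because the moments m0 and m1 are computed in closed form, the scheme keeps its accuracy right up to the singular point, which is the property the thesis combines with extrapolation near the origin.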