
    Exponentially-fitted Gauss-Laguerre quadrature rule for integrals over an unbounded interval

    New quadrature formulae are introduced for the computation of integrals over the whole positive semiaxis when the integrand has an oscillatory behavior with a decaying envelope. The new formulae are derived by exponential fitting, and they represent a generalization of the usual Gauss-Laguerre formulae. Their weights and nodes depend on the frequency of oscillation in the integrand, and thus the accuracy is greatly increased. Rules with one to six nodes are treated in detail. Numerical illustrations are also presented.
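    As a point of reference for the generalization described above, the classical (frequency-independent) Gauss-Laguerre rule can be sketched in a few lines of Python; the exponentially-fitted nodes and weights of the paper are not reproduced here.

```python
import numpy as np

# Classical Gauss-Laguerre rule: approximates integral_0^inf exp(-x) f(x) dx
# by sum_i w_i f(x_i). The paper's exponentially-fitted rules replace these
# frequency-independent nodes and weights with ones tuned to the oscillation
# frequency of the integrand (not reproduced here).
nodes, weights = np.polynomial.laguerre.laggauss(6)

def gauss_laguerre(f):
    """Six-node approximation of integral_0^infinity exp(-x) f(x) dx."""
    return np.dot(weights, f(nodes))

# A 6-node rule is exact for polynomials up to degree 11, e.g.
# integral_0^infinity exp(-x) x^2 dx = Gamma(3) = 2.
approx = gauss_laguerre(lambda x: x**2)
```

    For an oscillatory integrand such as cos(omega*x) with large omega, this classical rule degrades rapidly, which is exactly the regime the fitted rules target.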

    Two classes of linearly implicit numerical methods for stiff problems: analysis and MATLAB software

    The purpose of this work is the writing of efficient and optimized MATLAB codes implementing two classes of promising linearly implicit numerical schemes that can be used to accurately and stably solve stiff Ordinary Differential Equations (ODEs), and also Partial Differential Equations (PDEs) through the Method Of Lines (MOL). These classes of methods are the Runge-Kutta (RK) methods [28] and the Peer methods [17], and they have been constructed using a variant of the Exponential Fitting (EF) technique [27]. We carry out numerical tests to compare the two methods with each other, and also with the well-known and widely used Gaussian RK method, from the point of view of stability, accuracy, and computational cost, in order to demonstrate their advantages.
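    A minimal sketch of what "linearly implicit" means in practice, using the simplest member of the class (a Rosenbrock-Euler step) rather than the paper's EF Runge-Kutta or Peer schemes: each step costs one linear solve, with no Newton iteration.

```python
import numpy as np

def lin_implicit_euler(f, jac, y0, t0, t1, n_steps):
    """Linearly implicit (Rosenbrock-)Euler: y_{n+1} = y_n + h (I - h J)^{-1} f.
    One linear solve per step and no nonlinear iteration -- the defining
    feature of the class of methods discussed in the paper; their EF RK and
    Peer schemes are higher-order refinements of this idea."""
    h = (t1 - t0) / n_steps
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    I = np.eye(y.size)
    t = t0
    for _ in range(n_steps):
        J = jac(t, y)                       # Jacobian of f at the current state
        k = np.linalg.solve(I - h * J, f(t, y))
        y = y + h * k
        t += h
    return y

# Illustrative stiff problem (an assumption, not from the paper):
# y' = lam * (y - cos(t)) with lam = -1000; the slow solution tracks cos(t).
lam = -1000.0
f = lambda t, y: lam * (y - np.cos(t))
jac = lambda t, y: np.array([[lam]])
y_end = lin_implicit_euler(f, jac, [1.0], 0.0, 1.0, 200)
```

    With h = 0.005 we have h*lam = -5, far outside the explicit Euler stability interval; the linearly implicit step remains stable because only a linear system is solved against I - h*J.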

    Review of Summation-by-parts schemes for initial-boundary-value problems

    High-order finite difference methods are efficient, easy to program, scale well in multiple dimensions, and can be modified locally for various reasons (such as shock treatment, for example). The main drawback has been the complicated, and sometimes even mysterious, stability treatment at boundaries and interfaces required for a stable scheme. The research on summation-by-parts operators and weak boundary conditions during the last 20 years has removed this drawback and has now reached a mature state. It is now possible to construct stable and high-order accurate multi-block finite difference schemes in a systematic, building-block-like manner. In this paper we review this development, point out the main contributions, and speculate about the next lines of research in this area.
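    The defining algebraic structure of a summation-by-parts operator, D = H^{-1} Q with Q + Q^T = B = diag(-1, 0, ..., 0, 1), can be illustrated with the classical second-order SBP first-derivative operator (a standard textbook construction, not taken from this review):

```python
import numpy as np

def sbp_2nd_order(n, h):
    """Classical 2nd-order SBP first-derivative operator D = H^{-1} Q on a
    uniform grid: H is the diagonal norm (trapezoidal weights) and Q is
    skew-symmetric except for its boundary entries, so Q + Q^T = B."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2
    Q = np.zeros((n, n))
    for i in range(n - 1):          # central-difference interior stencil
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                  # one-sided boundary closures
    Q[-1, -1] = 0.5
    D = np.linalg.solve(H, Q)
    return D, H

n, h = 11, 0.1
D, H = sbp_2nd_order(n, h)
x = np.linspace(0.0, 1.0, n)

# Discrete integration by parts: (u, Dv)_H + (Du, v)_H = u_N v_N - u_0 v_0,
# i.e. H D + (H D)^T equals the boundary matrix B exactly.
B = np.zeros((n, n)); B[0, 0] = -1.0; B[-1, -1] = 1.0
sbp_residual = np.max(np.abs(H @ D + D.T @ H - B))
```

    It is this exact mimicry of integration by parts that, combined with weak boundary conditions, yields the systematic energy-stability proofs the review describes.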

    A fast lattice Green's function method for solving viscous incompressible flows on unbounded domains

    A computationally efficient method for solving three-dimensional, viscous, incompressible flows on unbounded domains is presented. The method formally discretizes the incompressible Navier–Stokes equations on an unbounded staggered Cartesian grid. Operations are limited to a finite computational domain through a lattice Green's function technique. This technique obtains solutions to inhomogeneous difference equations through the discrete convolution of source terms with the fundamental solutions of the discrete operators. The differential algebraic equations describing the temporal evolution of the discrete momentum equation and incompressibility constraint are numerically solved by combining an integrating factor technique for the viscous term and a half-explicit Runge–Kutta scheme for the convective term. A projection method that exploits the mimetic and commutativity properties of the discrete operators is used to efficiently solve the system of equations that arises in each stage of the time integration scheme. Linear complexity, fast computation rates, and parallel scalability are achieved using recently developed fast multipole methods for difference equations. The accuracy and physical fidelity of solutions are verified through numerical simulations of vortex rings.
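    The integrating-factor idea used for the viscous term can be sketched in a simplified setting; the example below assumes a 1D periodic heat equation handled in Fourier space, rather than the paper's unbounded staggered grid and lattice Green's functions:

```python
import numpy as np

# Integrating-factor sketch: in Fourier space, u_t = -nu*k^2*u + N(u).
# Substituting v = exp(nu*k^2*t) * u removes the stiff viscous term exactly,
# leaving only the (nonlinear) term N to be stepped explicitly -- the same
# idea the paper applies to its discrete operators. Minimal illustration
# with N = 0: the IF step then reproduces the exact heat-equation decay.
nu, n, T = 0.01, 64, 1.0
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0 = np.sin(3 * x)                          # single Fourier mode, k = 3

k = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
u_hat = np.fft.fft(u0)
u_hat *= np.exp(-nu * k**2 * T)             # integrating factor over [0, T]
u = np.real(np.fft.ifft(u_hat))

exact = np.exp(-nu * 9 * T) * np.sin(3 * x)
```

    Because the viscous term is absorbed exactly, the time step of the explicit Runge–Kutta part is constrained only by the convective term, not by the diffusive stiffness.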

    Application of exponential fitting techniques to numerical methods for solving differential equations

    Ever since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise naturally in physics, engineering, chemistry, biology, economics, and many fields in between. From the motion of a pendulum, studied by high-school students, to the wave functions of a quantum system, studied by brave scientists: differential equations are common and unavoidable. It is therefore no surprise that a large number of mathematicians have studied, and still study, these equations. The better the techniques for solving DEs, the faster the fields in which they appear can advance. Sadly, however, mathematicians have yet to find a technique (or a combination of techniques) that can solve all DEs analytically. Luckily, in the meantime, for a lot of applications approximate solutions are sufficient. The numerical methods studied in this work compute such approximations. Instead of providing the hypothetical scientist with an explicit, continuous recipe for the solution to their problem, these methods give an approximation of the solution at a number of discrete points. Numerical methods of this type have been a topic of research since the days of Leonhard Euler, and still are. Nowadays, however, the computations are performed by digital processors, which are well suited to these methods, even though many of the ideas predate the modern digital computer by centuries. The ever-increasing power of even the smallest processor allows us to devise newer and more elaborate methods. In this work, we look at a few well-known numerical methods for the solution of differential equations. These methods are combined with a technique called exponential fitting, which produces exponentially fitted methods: classical methods with modified coefficients. The original idea behind this technique is to improve the performance on problems with oscillatory solutions.
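    A minimal sketch of the exponential-fitting idea, applied here to forward Euler for illustration (the thesis treats more elaborate methods): the classical coefficient is replaced by one chosen so that exponential solutions are reproduced exactly.

```python
import numpy as np

def euler(f, y0, h, steps, phi=1.0):
    """Forward Euler with a tunable coefficient phi.
    phi = 1.0 gives the classical method; the exponentially fitted variant
    uses phi = (exp(h*lam) - 1) / (h*lam), chosen so that solutions
    proportional to exp(lam*t) are followed exactly."""
    y = y0
    for _ in range(steps):
        y = y + h * phi * f(y)
    return y

# Oscillatory model problem (an illustrative choice): y' = i*omega*y.
omega = 5.0
lam = 1j * omega
f = lambda y: lam * y
h, steps = 0.1, 10

phi_fit = (np.exp(h * lam) - 1.0) / (h * lam)   # fitted coefficient
y_classical = euler(f, 1.0 + 0j, h, steps)
y_fitted = euler(f, 1.0 + 0j, h, steps, phi=phi_fit)
exact = np.exp(lam * h * steps)
```

    Each fitted step satisfies y_{n+1} = (1 + h*lam*phi) y_n = exp(h*lam) y_n, so the oscillation is tracked exactly, whereas the classical step with h*omega = 0.5 spuriously amplifies the solution.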

    The Magnus expansion and some of its applications

    Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting in which to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial re-summation of infinitely many terms, with the important additional property of preserving, at any order, certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations, and related non-perturbative expansions. Second, to provide a bridge to its implementation as a generator of special-purpose numerical integration methods, a field of intense activity during the last decade. Third, to illustrate with examples the kind of results one can expect from the Magnus expansion in comparison with those from both perturbative schemes and standard numerical integrators. We buttress this issue with a revision of the wide range of physical applications the Magnus expansion has found in the literature.
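    A minimal sketch of a Magnus-based integrator, assuming the standard second-order truncation with a midpoint evaluation (the notation and test problem are illustrative, not taken from the review):

```python
import numpy as np
from scipy.linalg import expm

def magnus2(A, Y0, t0, t1, n_steps):
    """Second-order Magnus integrator for Y' = A(t) Y: the Magnus series for
    the exponent is truncated after its first term, approximated by the
    midpoint rule. Because each update is a matrix exponential, the numerical
    solution stays in the same group as the exact flow (e.g. orthogonal for
    skew-symmetric A) -- the structure preservation the review highlights."""
    h = (t1 - t0) / n_steps
    Y = Y0
    for n in range(n_steps):
        tm = t0 + (n + 0.5) * h            # midpoint of the current step
        Y = expm(h * A(tm)) @ Y
    return Y

# Skew-symmetric (hence structure-preserving) example: A(t) = t * J.
A = lambda t: np.array([[0.0, t], [-t, 0.0]])
Y = magnus2(A, np.eye(2), 0.0, 1.0, 50)
```

    For this particular A(t) the matrices at different times commute, so the exact solution is expm of the integral of A; in the general non-commuting case the higher Magnus terms (nested commutators) supply the corrections, order by order.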