1,073 research outputs found

    Implicit Runge-Kutta formulae for the numerical integration of ODEs


    The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
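The simplest member of the Cowell/Störmer family of multistep methods for special second-order equations y'' = f(t, y) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function name and test problem are assumptions.

```python
import math

def stormer(f, t0, y0, v0, h, n):
    """Integrate y'' = f(t, y) with the two-step Stormer rule
    y_{k+1} = 2 y_k - y_{k-1} + h^2 f(t_k, y_k),
    the simplest member of the Cowell family of multistep methods
    for special second-order ODEs (no first derivative on the RHS)."""
    # Start-up step from a second-order Taylor expansion.
    ys = [y0, y0 + h * v0 + 0.5 * h * h * f(t0, y0)]
    for k in range(1, n):
        t = t0 + k * h
        ys.append(2.0 * ys[k] - ys[k - 1] + h * h * f(t, ys[k]))
    return ys

# Harmonic oscillator y'' = -y, y(0) = 0, y'(0) = 1, exact solution sin(t).
h, n = 0.001, 1000
ys = stormer(lambda t, y: -y, 0.0, 0.0, 1.0, h, n)
err = abs(ys[-1] - math.sin(n * h))
```

Higher-order Cowell methods extend this two-step stencil with longer difference combinations of past f-values; the cyclic and modified variants in the abstract vary the coefficients from step to step.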

    A Sixth-Order Extension to the MATLAB Package bvp4c of J. Kierzenka and L. Shampine

    A new two-point boundary value problem algorithm based upon the MATLAB bvp4c package of Kierzenka and Shampine is described. The algorithm, implemented in a new package bvp6c, uses the residual control framework of bvp4c (suitably modified for a more accurate finite difference approximation) to maintain a user-specified accuracy. The new package is demonstrated to be as robust as the existing software, but more efficient for most problems, requiring fewer internal mesh points and evaluations to achieve the required accuracy.
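bvp4c and bvp6c are MATLAB packages, but the same residual-controlled collocation workflow is available in SciPy's `solve_bvp`, which can serve as a hedged illustration of the interface such solvers expose; the test problem below is an assumption, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Two-point BVP y'' = -y, y(0) = 0, y(pi/2) = 1; exact solution y = sin(x).
def fun(x, y):
    # Rewrite as a first-order system: y[0]' = y[1], y[1]' = -y[0].
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Boundary residuals: y(0) = 0 and y(pi/2) = 1.
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 5)        # coarse initial mesh
y = np.zeros((2, x.size))                 # trivial initial guess
sol = solve_bvp(fun, bc, x, y, tol=1e-8)  # residual-controlled; mesh refined automatically
err = np.max(np.abs(sol.sol(x)[0] - np.sin(x)))
```

As in bvp4c/bvp6c, the solver adapts the mesh until a residual tolerance is met rather than controlling the global error directly.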

    Diagonal-implicitly iterated Runge-Kutta methods on distributed memory multiprocessors

    We investigate the parallel implementation of the diagonal-implicitly iterated Runge-Kutta (DIIRK) method, an iteration method based on a predictor-corrector scheme. This method is appropriate for the solution of stiff systems of ordinary differential equations (ODEs) and provides embedded formulae to control the stepsize. We discuss different strategies for the implementation of the DIIRK method on distributed memory multiprocessors, which mainly differ in the order of independent computations and the data distribution. In particular, we consider a consecutive implementation that executes the steps of each corrector iteration in sequential order and distributes the resulting equation systems among all available processors, and a group implementation that executes the steps in parallel by independent groups of processors. The performance of these implementations depends on the right-hand side of the ODE system: for sparse functions, the group implementation is superior and achieves medium-range speedup values; for dense functions, the consecutive implementation is better and achieves good speedup values.
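The predictor-corrector iteration at the heart of such iterated Runge-Kutta schemes can be sketched on the implicit midpoint rule. This is a minimal serial illustration using plain fixed-point correction of the stage equation; the actual DIIRK method instead solves diagonally implicit systems per iteration and distributes the stages across processors.

```python
import math

def midpoint_iterated(f, t, y, h, iters=3):
    """One step of the implicit midpoint rule for y' = f(t, y):
    the stage Y satisfies Y = y + (h/2) f(t + h/2, Y), and is obtained
    by a predictor (explicit half step) followed by fixed-point
    corrector iterations, mimicking the iterated-RK structure."""
    Y = y + 0.5 * h * f(t, y)                # predictor
    for _ in range(iters):                   # corrector iterations
        Y = y + 0.5 * h * f(t + 0.5 * h, Y)
    return y + h * f(t + 0.5 * h, Y)

# y' = -y, y(0) = 1, exact solution exp(-t).
y, h = 1.0, 0.01
for k in range(100):
    y = midpoint_iterated(lambda t, u: -u, k * h, y, h)
err = abs(y - math.exp(-1.0))
```

In a DIIRK-style parallel setting, each corrector iteration over the stages is what gets distributed, either consecutively over all processors or concurrently over processor groups, as the abstract describes.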

    High-order convergent deferred correction schemes based on parameterized Runge-Kutta-Nyström methods for second-order boundary value problems

    Iterated deferred correction is a widely used approach to the numerical solution of first-order systems of nonlinear two-point boundary value problems. Normally, the orders of accuracy of the various methods used in a deferred correction scheme differ by 2 and, as a direct result, each time deferred correction is used the order of the overall scheme is increased by a maximum of 2. In [16], however, it has been shown that there exist schemes based on parameterized Runge–Kutta methods which allow a higher increase of the overall order. A first example of such a high-order convergent scheme, which allows an increase of 4 orders per deferred correction, was based on two mono-implicit Runge–Kutta methods. In the present paper, we investigate the possibility of high-order convergence of schemes for the numerical solution of second-order nonlinear two-point boundary value problems not containing the first derivative. Two examples of such high-order convergent schemes, based on parameterized Runge–Kutta–Nyström methods of orders 4 and 8, are analysed and discussed.
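The basic order-raising mechanism of deferred correction can be sketched on the linear model problem y'' = y. This is an illustrative second-to-fourth-order correction sweep with a central-difference base scheme, not the parameterized Runge–Kutta–Nyström schemes analysed in the paper.

```python
import numpy as np

# Model BVP y'' = y, y(0) = 0, y(1) = sinh(1); exact solution y = sinh(x).
# A second-order central-difference solve is followed by one deferred
# correction sweep that estimates the leading truncation error
# (h^2/12) y'''' from the computed solution, lifting the order from 2 to 4.
N = 20                                  # number of interior points
h = 1.0 / (N + 1)
x = np.linspace(0.0, 1.0, N + 2)
ya, yb = 0.0, np.sinh(1.0)

# Discrete operator for (y_{i-1} - 2 y_i + y_{i+1})/h^2 - y_i = rhs_i.
A = (np.diag(np.full(N, -2.0 / h**2 - 1.0))
     + np.diag(np.full(N - 1, 1.0 / h**2), 1)
     + np.diag(np.full(N - 1, 1.0 / h**2), -1))

def solve(rhs):
    b = rhs.copy()
    b[0] -= ya / h**2                   # fold boundary values into the RHS
    b[-1] -= yb / h**2
    return np.linalg.solve(A, b)

y0 = solve(np.zeros(N))                 # base second-order solution
full = np.concatenate([[ya], y0, [yb]])
fvals = full                            # here f(x, y) = y, so y'''' = f''
# (h^2/12) y'''' is approximated by the second difference of f over 12.
tau = (fvals[:-2] - 2.0 * fvals[1:-1] + fvals[2:]) / 12.0
y1 = solve(tau)                         # corrected, fourth-order solution
err = np.max(np.abs(y1 - np.sinh(x[1:-1])))
```

Each sweep of this classical construction gains 2 orders; the point of the paper is that suitably parameterized Runge–Kutta–Nyström correctors can gain 4 orders per sweep.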

    The Magnus expansion and some of its applications

    Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting in which to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial re-summation of infinitely many terms, with the important additional property of preserving at any order certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations, and related non-perturbative expansions. Second, to provide a bridge to its implementation as a generator of special-purpose numerical integration methods, a field of intense activity during the last decade. Third, to illustrate with examples the kind of results one can expect from the Magnus expansion in comparison with those from both perturbative schemes and standard numerical integrators. We buttress this discussion with a review of the wide range of physical applications of the Magnus expansion found in the literature.
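Truncating the Magnus series after its first term and applying midpoint quadrature gives the simplest of the numerical integrators the review refers to, the exponential midpoint rule. A minimal sketch for y' = A(t) y, assuming SciPy's `expm` for the matrix exponential (the function name and test problem are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def magnus2(A, y0, t0, t1, n):
    """Second-order Magnus integrator (exponential midpoint rule) for
    y' = A(t) y: the Magnus series for the exponent is truncated after
    its first term, Omega ~ h A(t + h/2), and each step applies
    y <- expm(Omega) y, preserving Lie-group structure of the flow."""
    h = (t1 - t0) / n
    y = np.asarray(y0, dtype=float)
    for k in range(n):
        y = expm(h * A(t0 + (k + 0.5) * h)) @ y
    return y

# y' = t J y with J a rotation generator; here A(t) commutes with itself
# at different times, so the exact flow is a rotation by angle t^2 / 2
# and the commutator terms of the Magnus series vanish.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = magnus2(lambda t: t * J, [1.0, 0.0], 0.0, 2.0, 200)
theta = 2.0**2 / 2.0
exact = np.array([np.cos(theta), -np.sin(theta)])
```

Higher-order Magnus integrators add commutator terms of the series and higher-order quadratures for the exponent, at the cost of extra matrix exponentials or commutator evaluations per step.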