Parallel Diagonally Implicit Runge-Kutta Methods For Solving Ordinary Differential Equations
This thesis focuses on the derivation of diagonally implicit Runge-Kutta (DIRK) methods that can be implemented in parallel. Several new methods are proposed whose sparsity patterns enable parallelization. In the first part of the thesis, a fifth order DIRK method suitable for parallel execution on two processors, and fourth and fifth order DIRK methods suitable for three processors, are proposed. These methods are executed with fixed stepsizes on a set of nonstiff problems. The regions of stability are presented, and numerical results are compared with those of existing methods. Parallel computation shows a significant reduction in time when solving large systems of nonstiff ordinary differential equations (ODEs).
The subsequent part of the thesis discusses embedded DIRK methods suitable for two-processor implementations. Two 4(3) and two 5(4) embedded DIRK methods with adequate stability regions for solving stiff ODEs are proposed. Numerical experiments on stiff test problems are carried out using a variable stepsize strategy. An existing code for solving stiff ODEs, designed for embedded DIRK methods with equal diagonal elements, is modified to accommodate the new methods with differing diagonal elements. Comparisons of numerical results with existing methods show competitive efficiency when solving small systems of stiff ODEs.
A parallel code with the same capabilities as the modified sequential code is developed to handle stiff ODEs, both linear and nonlinear. All algorithms are written in C, and the parallel code is implemented on a Sun Fire V1280 distributed-memory system. Three large-scale stiff ODE systems are used to measure the parallel performance of the new embedded methods. Results show that speedup increases as the dimension of the problem grows, a significant contribution to reducing the cost of computation.
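The payoff of equal diagonal elements, namely that one Jacobian factorisation can be reused for every stage, can be sketched on a scalar problem. The code below is only an illustration of the DIRK stage-solving mechanics, not one of the thesis's new methods: it applies a standard two-stage, second order SDIRK scheme (gamma = 1 - sqrt(2)/2) with fixed stepsizes to a stiff test problem chosen here for demonstration.

```python
import math

def sdirk2_step(f, t, y, h, newton_iters=8):
    # Two-stage, second-order SDIRK method with gamma = 1 - sqrt(2)/2.
    # Both diagonal entries are equal, which is what lets a stiff code
    # reuse a single Jacobian factorisation for every stage.
    g = 1.0 - math.sqrt(2.0) / 2.0

    def solve_stage(ts, base):
        # Solve k = f(ts, base + h*g*k) by scalar Newton iteration,
        # using a finite-difference derivative of the residual.
        k = f(ts, base)
        for _ in range(newton_iters):
            r = k - f(ts, base + h * g * k)
            eps = 1e-8
            dr = ((k + eps - f(ts, base + h * g * (k + eps))) - r) / eps
            k -= r / dr
        return k

    k1 = solve_stage(t + g * h, y)
    k2 = solve_stage(t + h, y + h * (1.0 - g) * k1)
    return y + h * ((1.0 - g) * k1 + g * k2)

# Fixed-stepsize integration of the stiff test problem
# y' = -50*(y - cos(t)), y(0) = 0, up to t = 1.
f = lambda t, y: -50.0 * (y - math.cos(t))
t, y, h = 0.0, 0.0, 0.05
for _ in range(20):
    y = sdirk2_step(f, t, y, h)
    t += h
```

Because the method is L-stable, the fast transient is damped even at a stepsize fifty times larger than the problem's fast time scale, and the computed value stays close to the slow solution.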
Parallel-iterated Runge-Kutta methods for stiff ordinary differential equations
For the numerical integration of a stiff ordinary differential equation, fully implicit Runge-Kutta methods offer nice properties, like a high classical order and high stage order as well as excellent stability behaviour. However, such methods require the solution of a set of highly coupled equations for the stage values, which is a considerable computational task. This paper discusses an iteration scheme to tackle this problem. By means of a suitable choice of the iteration parameters, the implicit relations for the stage values, as they occur in each iteration, can be uncoupled so that they can be solved in parallel. The resulting scheme can be cast into the class of Diagonally Implicit Runge-Kutta (DIRK) methods and, like these methods, requires only one LU factorization per step (per processor). The stability as well as the computational efficiency of the process strongly depends on the particular choice of the iteration parameters and on the number of iterations performed. We discuss several choices to obtain good stability and fast convergence. Based on these approaches, we wrote two codes possessing local error control and stepsize variation. We have implemented both codes on an ALLIANT FX/4 machine (four parallel vector processors and shared memory) and measured their speedup factors for a number of test problems. Furthermore, the performance of these codes is compared with the performance of the best stiff ODE codes for sequential computers, such as SIMPLE, LSODE and RADAU5.
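The uncoupling described above can be sketched on the scalar test equation y' = lam*y. The code below is an illustrative reconstruction, not the paper's implementation: it iterates the stage equations of the two-stage Gauss method using a diagonal matrix D (here simply the diagonal of A, one possible choice of iteration parameters), so that within each sweep the stages are solved independently of one another.

```python
import math

# Two-stage Gauss-Legendre method (classical order 4, fully implicit).
S3 = math.sqrt(3.0) / 6.0
A = [[0.25, 0.25 - S3], [0.25 + S3, 0.25]]
B = [0.5, 0.5]
D = [A[0][0], A[1][1]]  # diagonal iteration parameters (a simple choice)

def pdirk_step(lam, y, h, sweeps):
    """One step for y' = lam*y.  In each sweep the two stage equations
    are uncoupled -- the off-diagonal coupling enters only through the
    previous iterate -- so they could run on two processors, each needing
    just one 'LU factorisation' (here a scalar division)."""
    z = h * lam
    Y = [y, y]  # trivial predictor for the stage values
    for _ in range(sweeps):
        Y = [(y + z * (sum(A[i][j] * Y[j] for j in range(2)) - D[i] * Y[i]))
             / (1.0 - z * D[i]) for i in range(2)]
    return y + z * sum(B[i] * Y[i] for i in range(2))

# With enough sweeps the iterate converges to the fully implicit
# Gauss solution, recovering its fourth-order accuracy.
y1 = pdirk_step(-1.0, 1.0, 0.1, 12)
```

The list comprehension deliberately builds the new iterate from the old one in Jacobi fashion; that is exactly the structure that makes the two stage solves independent and hence parallelizable.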
Computer solution of non-linear integration formula for solving initial value problems
This thesis is concerned with the numerical solution of initial value problems in ordinary differential equations and covers the various aspects of single step integration methods. Specifically, its main focus is to study numerical methods based on non-linear integration formulae with a variety of means: the Contraharmonic mean (CoM) (Evans and Yaakub [1995]), the Centroidal mean (CeM) (Yaakub and Evans [1995]) and the Root-Mean-Square (RMS) (Yaakub and Evans [1993]) for solving initial value problems. It includes a study of the applications of the second order CoM method for parallel implementation of extrapolation methods for ordinary differential equations with the ExDaTa schedule by Bahoshy [1992]. Another important topic presented in this thesis is that a fifth order five-stage explicit Runge-Kutta method, or weighted Runge-Kutta formula (Evans and Yaakub [1996]), exists, which is contrary to Butcher [1987] and the theorem in Lambert ([1991], pp. 181).
The thesis is organized as follows. An introduction to initial value problems in ordinary differential equations, and to parallel computers and software, is given in Chapter 1. The basic preliminaries and fundamental concepts in mathematics, an algebraic manipulation package (Mathematica) and basic parallel processing techniques are discussed in Chapter 2. Chapter 3 is a survey of single step methods for solving ordinary differential equations; several single step methods, including the Taylor series method, the Runge-Kutta method and a linear multistep method for non-stiff and stiff problems, are considered.
Chapter 4 gives new Runge-Kutta formulae for solving initial value problems using the Contraharmonic mean (CoM), the Centroidal mean (CeM) and the Root-Mean-Square (RMS). An error and stability analysis for this variety of means, together with numerical examples, is also presented. Chapter 5 discusses the parallel implementation, on the Sequent 8000 parallel computer, of the Runge-Kutta contraharmonic mean (CoM) method with extrapolation procedures using explicit data task assignment scheduling (EXDATA) strategies. A new Runge-Kutta RK(4,4) method is introduced, and the theory and analysis of its properties, investigated and compared with the more popular RKF(4,5) method, are given in Chapter 6. Chapter 7 presents a new integration method
with error control for the solution of a special class of
second order ODEs. In Chapter 8, a new weighted Runge-Kutta
fifth order method with 5 stages is introduced. By
comparison with the currently recommended RK4(5) Merson
and RK5(6) Nystrom methods, the new method gives improved
results. Chapter 9 proposes a new fifth order Runge-Kutta
type method for solving oscillatory problems by the use
of trigonometric polynomial interpolation which extends
the earlier work of Gautschi [1961]. An analysis of the
convergence and stability of the new method is given with
comparison with the standard Runge-Kutta methods.
Finally, Chapter 10 summarises and presents conclusions on the topics discussed throughout the thesis.
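The replacement of the arithmetic mean of slopes by other means can be shown with a simple second order formula. The sketch below is indicative only and is not the thesis's actual CoM, CeM or RMS formulae: it plugs different means into a Heun-type step, on a problem with positive slopes (the contraharmonic mean needs k1 + k2 != 0, and the RMS mean assumes non-negative slopes).

```python
import math

def rk2_with_mean(f, t, y, h, mean):
    """Heun-type step y_{n+1} = y_n + h * mean(k1, k2); the classical
    method uses the arithmetic mean of the two slopes."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h * mean(k1, k2)

arithmetic     = lambda a, b: (a + b) / 2.0
contraharmonic = lambda a, b: (a * a + b * b) / (a + b)        # needs a + b != 0
rms            = lambda a, b: math.sqrt((a * a + b * b) / 2.0)  # needs a, b >= 0

# Compare the means on y' = y, y(0) = 1 over [0, 1] (exact answer: e).
results = {}
for name, m in [("AM", arithmetic), ("CoM", contraharmonic), ("RMS", rms)]:
    t, y, h = 0.0, 1.0, 0.01
    for _ in range(100):
        y = rk2_with_mean(lambda t, y: y, t, y, h, mean=m)
        t += h
    results[name] = y
```

For this smooth problem all three means give second order accuracy; the interest of the non-linear means in the literature lies in their different error constants and stability behaviour.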
A parallel nearly implicit time-stepping scheme
Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they also parallelize very well.
The purpose of this article is to give an overall assessment of the parallelism of the method.
Multi-Adaptive Time-Integration
Time integration of ODEs or time-dependent PDEs that must resolve the
fastest time scales of the system can be very costly if the system
exhibits multiple time scales of different magnitudes. If the different time
scales are localised to different components, corresponding to localisation in
space for a PDE, efficient time integration thus requires that we use different
time steps for different components.
We present an overview of the multi-adaptive Galerkin methods mcG(q) and
mdG(q) recently introduced in a series of papers by the author. In these
methods, the time step sequence is selected individually and adaptively for
each component, based on an a posteriori error estimate of the global error.
The multi-adaptive methods require the solution of large systems of nonlinear
algebraic equations which are solved using explicit-type iterative solvers
(fixed point iteration). If the system is stiff, these iterations may fail to
converge, corresponding to the well-known fact that standard explicit methods
are inefficient for stiff systems. To resolve this problem, we present an
adaptive strategy for explicit time integration of stiff ODEs, in which the
explicit method is adaptively stabilised by a small number of small,
stabilising time steps.
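The idea of individual time steps per component can be illustrated with a deliberately simple multirate scheme. The sketch below is not the mcG(q)/mdG(q) Galerkin machinery of the paper, only a toy forward Euler variant in which the fast component takes many substeps inside each macro step of the slow one, whose rate is frozen over the macro step.

```python
import math

def multirate_euler(f, y, h_slow, n_fast, t_end):
    """Toy multirate forward Euler: the fast component y[0] takes n_fast
    substeps inside each macro step of the slow component y[1], whose
    rate is frozen at the start of the macro step."""
    t = 0.0
    while t < t_end - 1e-12:
        h = min(h_slow, t_end - t)
        slow_rate = f(t, y)[1]          # freeze the slow rate
        hf = h / n_fast
        for k in range(n_fast):         # small steps for the fast part
            y[0] += hf * f(t + k * hf, y)[0]
        y[1] += h * slow_rate           # one big step for the slow part
        t += h
    return y

# Fast component relaxes quickly towards the slowly decaying one:
# y0' = -50*(y0 - y1),  y1' = -y1,  y(0) = [0, 1].
f = lambda t, y: [-50.0 * (y[0] - y[1]), -y[1]]
y = multirate_euler(f, [0.0, 1.0], h_slow=0.02, n_fast=20, t_end=1.0)
```

The slow component takes 50 steps while the fast one takes 1000; a single global step size small enough for the fast scale would have forced 1000 steps on both components.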
Extrapolation-Based Implicit-Explicit Peer Methods with Optimised Stability Regions
In this paper we investigate a new class of implicit-explicit (IMEX) two-step
methods of Peer type for systems of ordinary differential equations with both
non-stiff and stiff parts included in the source term. An extrapolation
approach based on already computed stage values is applied to construct IMEX
methods with favourable stability properties. Optimised IMEX-Peer methods of
order p = 2, 3, 4 are given as the result of a search algorithm carefully
designed to balance the size of the stability regions and the extrapolation errors.
Numerical experiments and a comparison to other implicit-explicit methods are
included.
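The IMEX idea, implicit on the stiff part and explicit on the non-stiff part, can be shown with the simplest such scheme, IMEX Euler, rather than the two-step Peer methods of the paper. In this sketch the stiff term is linear, so the implicit solve reduces to a scalar division; the problem and stepsize are chosen purely for illustration.

```python
import math

def imex_euler(y, t, h, lam):
    """One IMEX-Euler step for y' = lam*y + cos(t): the stiff linear part
    is treated implicitly, the non-stiff forcing explicitly, so only a
    linear solve (here a scalar division) is needed per step."""
    return (y + h * math.cos(t)) / (1.0 - h * lam)

# Stiff problem y' = -100*y + cos(t), y(0) = 0.  With h = 0.1 a fully
# explicit Euler step would be violently unstable (|1 + h*lam| = 9),
# but the IMEX step stays stable and tracks the slow quasi-steady
# solution y ~ cos(t)/100.
lam, h = -100.0, 0.1
t, y = 0.0, 0.0
for _ in range(20):
    y = imex_euler(y, t, h, lam)
    t += h
```

The stepsize is dictated by the smooth forcing alone, which is precisely the point of treating only the stiff part implicitly.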
Extrapolation-Based Super-Convergent Implicit-Explicit Peer Methods with A-stable Implicit Part
In this paper, we extend the implicit-explicit (IMEX) methods of Peer type
recently developed in [Lang, Hundsdorfer, J. Comp. Phys., 337:203--215, 2017]
to a broader class of two-step methods that allow the construction of
super-convergent IMEX-Peer methods with A-stable implicit part. IMEX schemes
combine the necessary stability of implicit and low computational costs of
explicit methods to efficiently solve systems of ordinary differential
equations with both stiff and non-stiff parts included in the source term. To
construct super-convergent IMEX-Peer methods with favourable stability
properties, we derive necessary and sufficient conditions on the coefficient
matrices and apply an extrapolation approach based on already computed stage
values. Optimised super-convergent IMEX-Peer methods of order s+1 for s = 2, 3, 4
stages are given as the result of a search algorithm carefully designed to balance
the size of the stability regions and the extrapolation errors. Numerical
experiments and a comparison to other IMEX-Peer methods are included.
Task-based adaptive multiresolution for time-space multi-scale reaction-diffusion systems on multi-core architectures
A new solver featuring time-space adaptation and error control has been
recently introduced to tackle the numerical solution of stiff
reaction-diffusion systems. Based on operator splitting, finite volume adaptive
multiresolution and high order time integrators with specific stability
properties for each operator, this strategy yields high computational
efficiency for large multidimensional computations on standard architectures
such as powerful workstations. However, the data structure of the original
implementation, based on trees of pointers, provides limited opportunities for
efficiency enhancements, while posing serious challenges in terms of parallel
programming and load balancing. The present contribution proposes a new
implementation of the whole set of numerical methods including Radau5 and
ROCK4, relying on a fully different data structure together with the use of a
specific library, TBB, for shared-memory, task-based parallelism with
work-stealing. The performance of our implementation is assessed in a series of
test-cases of increasing difficulty in two and three dimensions on multi-core
and many-core architectures, demonstrating high scalability.
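The operator-splitting backbone of the strategy can be sketched with Strang splitting on a scalar model. The code below is illustrative only and does not reproduce the paper's solver: each sub-flow of a logistic equation is solved exactly, and the half step / full step / half step composition recovers second order accuracy, just as the paper composes dedicated integrators (Radau5, ROCK4) per operator.

```python
import math

def strang_step(y, h, lam):
    """One Strang splitting step for y' = lam*y - y**2: a half step of the
    linear flow, a full step of the nonlinear flow (both solved exactly),
    then another half step of the linear flow."""
    y = y * math.exp(0.5 * h * lam)   # half step of y' = lam*y
    y = y / (1.0 + h * y)             # full step of y' = -y**2 (exact flow)
    y = y * math.exp(0.5 * h * lam)   # half step of y' = lam*y
    return y

# Logistic equation y' = y - y**2, y(0) = 0.5, integrated to t = 1.
lam, h = 1.0, 0.1
y = 0.5
for _ in range(10):
    y = strang_step(y, h, lam)

# Closed-form logistic solution for comparison: y(1) = e/(1+e).
exact = math.e / (1.0 + math.e)
```

The symmetric arrangement is what lifts the plain Lie splitting from first to second order; the same structure carries over when the sub-flows are themselves approximated by high order stiff integrators.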