
    Parallel Block Methods for Solving Higher Order Ordinary Differential Equations Directly

    Numerous problems encountered in various branches of science and engineering involve ordinary differential equations (ODEs). Some of these problems require lengthy computation and immediate solutions. With the availability of parallel computers, these demands can now be met. However, most of the existing methods for solving ODEs directly, particularly those of higher order, are sequential in nature. These methods approximate the numerical solution at one point at a time and therefore do not fully exploit the capability of parallel computers. Hence, the development of parallel algorithms suited to these machines becomes essential. In this thesis, new explicit and implicit parallel block methods for solving a single ODE directly using constant step size and back values are developed. These methods, which calculate the numerical solution at more than one point simultaneously, are parallel in nature. The programs implementing the methods are run on a shared-memory Sequent Symmetry S27 parallel computer. The numerical results show that the new methods reduce the total number of steps and the execution time. The accuracy of the parallel block and 1-point methods is comparable, particularly when finer step sizes are used. A new parallel algorithm for solving systems of ODEs using variable step size and order is also developed. The strategies used to design this method are based on both the Direct Integration (DI) and parallel block methods. The results demonstrate the superiority of the new method in terms of the total number of steps and execution times, especially with finer tolerances. In conclusion, the new methods developed can be used as viable alternatives for solving higher order ODEs directly.
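
    The block idea can be made concrete with a small sketch. The snippet below is a hypothetical illustration on a first-order test equation y' = f(x, y) rather than the thesis's direct higher-order formulation: each step produces two new points from back values only, using second-order Adams-type coefficients obtained by integrating the linear interpolant of f through (x_{n-1}, f_{n-1}) and (x_n, f_n). Because neither formula depends on the other, the two points could be evaluated on separate processors.

```python
# Minimal sketch of an explicit 2-point block step (illustrative, not the
# thesis's higher-order method): both new points use only back values.
import numpy as np

def explicit_2point_block(f, x0, y0, h, n_blocks):
    # One starter step (Heun) to obtain the back value f_{n-1}.
    xs, ys = [x0], [np.asarray(y0, dtype=float)]
    k1 = f(x0, ys[0]); k2 = f(x0 + h, ys[0] + h * k1)
    xs.append(x0 + h); ys.append(ys[0] + h * (k1 + k2) / 2)
    for _ in range(n_blocks):
        xn, yn = xs[-1], ys[-1]
        fn, fnm1 = f(xn, yn), f(xs[-2], ys[-2])
        # Both block points depend only on (y_n, f_n, f_{n-1}), so they are
        # independent and could be computed simultaneously.
        y1 = yn + h * (3 * fn - fnm1) / 2      # y_{n+1}
        y2 = yn + 2 * h * (2 * fn - fnm1)      # y_{n+2}
        xs += [xn + h, xn + 2 * h]; ys += [y1, y2]
    return np.array(xs), np.array(ys)

# Example: y' = -y, y(0) = 1, exact solution exp(-x).
x, y = explicit_2point_block(lambda x, y: -y, 0.0, 1.0, 0.01, 50)
print(abs(y[-1] - np.exp(-x[-1])))             # small global error expected
```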

    High-order implicit palindromic discontinuous Galerkin method for kinetic-relaxation approximation

    We construct a high order discontinuous Galerkin method for solving general hyperbolic systems of conservation laws. The method is CFL-less, matrix-free, has the complexity of an explicit scheme, and can be of arbitrary order in space and time. The construction is based on: (a) the representation of the system of conservation laws by a kinetic vectorial representation with a stiff relaxation term; (b) a matrix-free, CFL-less implicit discontinuous Galerkin transport solver; and (c) a stiffly accurate composition method for time integration. The method is validated on several one-dimensional test cases. It is then applied to two-dimensional and three-dimensional test cases: flow past a cylinder, magnetohydrodynamics and multifluid sedimentation.
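
    As a rough illustration of ingredient (a), the sketch below applies the two-velocity kinetic (vectorial) representation to a single 1D conservation law (Burgers' equation), with a plain exact-shift transport step standing in for the paper's implicit discontinuous Galerkin solver and the relaxation taken in its infinitely stiff limit (projection onto equilibrium). The kinetic speed lam, the flux and the grid are illustrative choices; the palindromic composition is not shown.

```python
# Hypothetical kinetic-relaxation sketch: u is represented by two distributions
# f+ and f- moving at speeds +lam and -lam; transport them freely, then relax
# (project) onto the equilibrium that encodes the flux.
import numpy as np

N, lam = 400, 1.5                       # cells, kinetic speed (>= max |F'(u)|)
dx = 1.0 / N
dt = dx / lam                           # exact transport: shift by one cell
x = (np.arange(N) + 0.5) * dx
flux = lambda u: 0.5 * u**2             # Burgers' flux

u = np.where(x < 0.5, 1.0, 0.0)         # Riemann initial data
fp = 0.5 * u + flux(u) / (2 * lam)      # equilibrium distributions
fm = 0.5 * u - flux(u) / (2 * lam)

for _ in range(150):
    # Free transport of each kinetic variable at speed +/- lam (periodic grid).
    fp = np.roll(fp, 1)
    fm = np.roll(fm, -1)
    # Stiff relaxation in the instantaneous limit: recover u and project back
    # onto the equilibrium associated with the transported macroscopic state.
    u = fp + fm
    fp = 0.5 * u + flux(u) / (2 * lam)
    fm = 0.5 * u - flux(u) / (2 * lam)
```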

    One step hybrid block methods with generalised off-step points for solving directly higher order ordinary differential equations

    Real-life problems, particularly in science and engineering, can be expressed as differential equations in order to analyse and understand physical phenomena. These differential equations involve rates of change with respect to one or more independent variables. Initial value problems of higher order ordinary differential equations are conventionally solved by first converting them into equivalent systems of first order ordinary differential equations; appropriate existing numerical methods are then employed to solve the resulting equations. However, this approach enlarges the number of equations, so the computational complexity increases and the accuracy of the solution may be jeopardised. In order to overcome these setbacks, direct methods were employed. Nevertheless, most of these methods approximate the numerical solution at only one point at a time. Block methods were therefore introduced with the aim of approximating numerical solutions at many points simultaneously. Subsequently, hybrid block methods were introduced to overcome the zero-stability barrier that occurs in block methods. However, the existing one step hybrid block methods focus only on specific off-step point(s). Hence, this study proposed new one step hybrid block methods with generalised off-step point(s) for solving higher order ordinary differential equations. In developing these methods, a power series was used as an approximate solution to the problems of ordinary differential equations of order g. The power series was interpolated at g points, while its highest derivative was collocated at all points in the selected interval. The properties of the new methods, such as order, error constant, zero-stability, consistency, convergence and region of absolute stability, were also investigated. Several initial value problems of higher order ordinary differential equations were then solved using the newly developed methods. The numerical results revealed that the new methods produced more accurate solutions than the existing methods when solving the same problems. Hence, the new methods are viable alternatives for solving initial value problems of higher order ordinary differential equations directly.
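
    The interpolation-and-collocation construction described above can be sketched symbolically. The snippet below is a hypothetical derivation for g = 2 (a second order equation) with a single generalised off-step point c, assuming the power series is interpolated at x_n and x_{n+c} and its second derivative is collocated at x_n, x_{n+c} and x_{n+1}; the symbol names and the exact choice of interpolation points are illustrative, not necessarily the thesis's.

```python
# Hypothetical sympy sketch: derive a one-step hybrid formula for y'' = f with
# one generalised off-step point c by interpolation/collocation of a power series.
import sympy as sp

x, h, c = sp.symbols('x h c', positive=True)
xn = sp.Symbol('x_n')
yn, yc = sp.symbols('y_n y_c')            # interpolation data at x_n, x_{n+c}
fn, fc, f1 = sp.symbols('f_n f_c f_1')    # collocation data for y''

a = sp.symbols('a0:5')                    # power series coefficients (degree 4)
p = sum(a[j] * (x - xn)**j for j in range(5))

eqs = [
    sp.Eq(p.subs(x, xn), yn),                       # interpolate y at x_n
    sp.Eq(p.subs(x, xn + c*h), yc),                 # interpolate y at x_{n+c}
    sp.Eq(sp.diff(p, x, 2).subs(x, xn), fn),        # collocate y'' at x_n
    sp.Eq(sp.diff(p, x, 2).subs(x, xn + c*h), fc),  # collocate y'' at x_{n+c}
    sp.Eq(sp.diff(p, x, 2).subs(x, xn + h), f1),    # collocate y'' at x_{n+1}
]
sol = sp.solve(eqs, a, dict=True)[0]

# Main block formula: y_{n+1} = p(x_{n+1}); a companion formula such as
# y'_{n+1} = p'(x_{n+1}) would close the one-step block.
y_next = sp.simplify(p.subs(sol).subs(x, xn + h))
print(sp.collect(sp.expand(y_next), [yn, yc, fn, fc, f1]))
```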

    Direct Block Methods for Solving Special Second Order Ordinary Differential Equations and Their Parallel Implementations

    This thesis focuses mainly on deriving block methods of constant step size for solving special second order ODEs. The first part of the thesis concerns the construction and derivation of block methods using a linear difference operator. The regions of stability for both the explicit and implicit block methods are presented. The numerical results of the methods are compared with existing methods and suggest a significant improvement in the efficiency of the new methods. The second part of the thesis describes the derivation of r-point block methods based on the Newton-Gregory backward interpolation formula. The numerical results of the explicit and implicit r-point block methods are presented to illustrate the effectiveness of the methods in terms of the total number of steps taken, accuracy and execution time. Both the explicit and implicit methods are more efficient compared to the existing method. The r-point block methods, which calculate the solution at r points simultaneously, are suitable for parallel implementation. Parallel codes of the block methods for the solution of large systems of ODEs are developed; hence the last part of the thesis discusses the parallel execution of these codes. The parallel algorithms are written in the C language and implemented on a Sun Fire V1280 distributed memory system. A fine-grained strategy is used to divide the computation into smaller parts and assign them to different processors. The performance of the r-point block methods using sequential and parallel codes is compared in terms of the total number of steps, execution time, speedup and efficiency. The parallel implementation of the new codes produces better speedup as the number of equations increases, and the parallel codes achieve better speedup and efficiency than the sequential codes.
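
    The fine-grained division of work can be sketched as follows, assuming a shared-memory thread pool as a stand-in for the thesis's C implementation on the distributed-memory Sun Fire: the right-hand side of a large special second order system y'' = f(y) is split into chunks evaluated by different workers, and the solution is advanced with a simple Störmer-type update (an r-point block method would produce several such points per step from back values). The model problem, chunk count and step size are illustrative.

```python
# Hypothetical sketch of fine-grained parallel evaluation of f(y) for a large
# system y'' = f(y) (here f is a second difference, a discretised string).
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N, h, workers = 100_000, 1e-3, 4

def f_chunk(y, lo, hi):
    """Second difference of y on indices [lo, hi) with zero boundary values."""
    ypad = np.concatenate(([0.0], y, [0.0]))
    return ypad[lo:hi] - 2.0 * ypad[lo + 1:hi + 1] + ypad[lo + 2:hi + 2]

def f_parallel(y, pool):
    # Each worker evaluates its own slice of the right-hand side.
    bounds = np.linspace(0, N, workers + 1, dtype=int)
    parts = pool.map(lambda b: f_chunk(y, b[0], b[1]), zip(bounds[:-1], bounds[1:]))
    return np.concatenate(list(parts))

y_prev = np.sin(np.pi * np.linspace(0, 1, N))   # y_{n-1}
y_curr = y_prev.copy()                          # y_n (zero initial velocity)
with ThreadPoolExecutor(max_workers=workers) as pool:
    for _ in range(100):
        # Stormer update y_{n+1} = 2 y_n - y_{n-1} + h^2 f(y_n); a block method
        # would advance several such points per step, each from back values only.
        y_next = 2.0 * y_curr - y_prev + h**2 * f_parallel(y_curr, pool)
        y_prev, y_curr = y_curr, y_next
```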

    On the parallel solution of parabolic equations

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using a polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Padé and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems, and thus offers the potential for increased time parallelism in time-dependent problems. Experimental results from the Alliant FX/8 and Cray Y-MP/832 vector multiprocessors are also presented.
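
    The partial-fraction route to parallelism can be sketched briefly. Assuming a rational approximation R(z) of exp(z) written as R(z) = k0 + sum_j r_j / (z - p_j), one has exp(-tA) u0 ~ k0*u0 + sum_j r_j * (-tA - p_j I)^{-1} u0, and the shifted solves are mutually independent, so each can be assigned to its own processor. The snippet below is an illustrative realisation; the (2,2) Padé order, the 1D Laplacian and the SciPy helpers used to obtain poles and residues are choices of this sketch, not of the paper.

```python
# Hypothetical sketch: Pade approximation of the matrix exponential evaluated
# through its partial-fraction decomposition (independent shifted solves).
import numpy as np
from scipy.interpolate import pade
from scipy.signal import residue
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

n, t = 200, 0.01
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') * (n + 1)**2
u0 = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])   # lowest Laplacian mode

# (2,2) Pade approximant of exp(z) from its Taylor coefficients, then its
# partial-fraction form R(z) = k0 + sum_j r_j / (z - p_j).
taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
p, q = pade(taylor, 2)
r, poles, k = residue(p.coeffs, q.coeffs)
k0 = k[0] if len(k) else 0.0

# Evaluate R(-t A) u0: each shifted solve below is independent of the others
# and could be handled by a different processor.
I = identity(n, format='csc')
u = k0 * u0.astype(complex)
for rj, pj in zip(r, poles):
    u += rj * spsolve(-t * A - pj * I, u0.astype(complex))
u = u.real
print(np.linalg.norm(u))   # approximates ||exp(-tA) u0||
```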

    GPU Accelerated Explicit Time Integration Methods for Electro-Quasistatic Fields

    Electro-quasistatic field problems involving nonlinear materials are commonly discretized in space using finite elements. In this paper, it is proposed to solve the resulting system of ordinary differential equations by an explicit Runge-Kutta-Chebyshev time-integration scheme. This avoids the Newton-Raphson iterations that are necessary within fully implicit time integration schemes. However, the electro-quasistatic system of ordinary differential equations has a Laplace-type mass matrix, so that parts of the explicit time-integration scheme remain implicit. An iterative solver with a constant preconditioner is shown to solve the resulting multiple right-hand-side problem efficiently. This approach allows an efficient parallel implementation on a system featuring multiple graphics processing units.
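
    The combination of an explicit stabilized scheme with an implicit mass-matrix part can be sketched as follows. The snippet uses an undamped first-order Chebyshev recursion (the simplest Runge-Kutta-Chebyshev-type scheme; practical RKC adds damping and second-order coefficients), and every stage performs one solve with the constant Laplace-type matrix by preconditioned CG, with a preconditioner built once and reused for all right-hand sides. The matrices, the cubic nonlinearity and the stage count are illustrative, and the GPU implementation is not shown.

```python
# Hypothetical sketch: Chebyshev (RKC-type) recursion for Mlap du/dt = f(u),
# with each stage solving Mlap x = f(u) by CG and a constant preconditioner.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spilu

n, s, h = 500, 10, 1e-4
Mlap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')  # Laplace-type
K = sp.identity(n, format='csc')

ilu = spilu(Mlap)                                           # constant preconditioner
prec = LinearOperator((n, n), matvec=ilu.solve)

def F(u):
    # Explicit right-hand side in mass-matrix form: solve Mlap x = f(u).
    rhs = -(K @ u) - 0.1 * u**3                             # mildly nonlinear source
    x, info = cg(Mlap, rhs, M=prec)
    return x

u = np.ones(n)
for _ in range(100):
    # Undamped Chebyshev recursion: stability interval grows like s^2,
    # without any Newton-Raphson iterations.
    Y_prev, Y = u, u + (h / s**2) * F(u)
    for _ in range(2, s + 1):
        Y_prev, Y = Y, 2.0 * Y - Y_prev + (2.0 * h / s**2) * F(Y)
    u = Y
```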

    Parallel implementation of explicit 2 and 3-point block methods for solving system of special second order ODEs directly

    In this paper, explicit 2- and 3-point block methods for directly solving large systems of special second order ODEs are discussed. Codes based on the methods are executed sequentially and in parallel. The numerical results show the advantage of the parallel implementation over its sequential counterpart when solving large systems of special second order ODEs.

    Task-based adaptive multiresolution for time-space multi-scale reaction-diffusion systems on multi-core architectures

    A new solver featuring time-space adaptation and error control has been recently introduced to tackle the numerical solution of stiff reaction-diffusion systems. Based on operator splitting, finite volume adaptive multiresolution and high order time integrators with specific stability properties for each operator, this strategy yields high computational efficiency for large multidimensional computations on standard architectures such as powerful workstations. However, the data structure of the original implementation, based on trees of pointers, provides limited opportunities for efficiency enhancements, while posing serious challenges in terms of parallel programming and load balancing. The present contribution proposes a new implementation of the whole set of numerical methods, including Radau5 and ROCK4, relying on a completely different data structure together with the use of a specific library, TBB, for shared-memory, task-based parallelism with work-stealing. The performance of our implementation is assessed in a series of test cases of increasing difficulty in two and three dimensions on multi-core and many-core architectures, demonstrating high scalability.
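
    The splitting strategy itself is easy to sketch. The snippet below is a hypothetical shared-memory analogue of the approach: within one Strang step, the pointwise reaction term is integrated cell by cell with a stiff implicit solver (SciPy's 'Radau', standing in for Radau5), each cell being an independent task handed to a thread pool (standing in for TBB's work-stealing tasks), while diffusion is advanced with a plain explicit step in place of ROCK4; adaptive multiresolution is omitted and the scalar reaction u(1-u), grid and step sizes are illustrative.

```python
# Hypothetical Strang-splitting sketch: task-parallel stiff reaction per cell,
# explicit diffusion step, as a stand-in for the paper's Radau5/ROCK4 + TBB setup.
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.integrate import solve_ivp

N, dx, D, dt = 200, 1.0 / 200, 1e-3, 1e-3
u = np.exp(-((np.linspace(0, 1, N) - 0.5) / 0.05) ** 2)   # initial profile

def react_cell(u0, tau):
    # Stiff-capable local integration of du/dt = u(1-u) in a single cell.
    return solve_ivp(lambda t, y: y * (1 - y), (0.0, tau), [u0], method='Radau').y[0, -1]

def diffuse(u, tau):
    # Explicit diffusion step (placeholder for a stabilised ROCK4-type integrator).
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    return u + tau * D * lap

with ThreadPoolExecutor() as pool:
    for _ in range(10):
        # Strang splitting: half reaction, full diffusion, half reaction;
        # every cell's reaction substep is an independent task.
        u = np.fromiter(pool.map(lambda v: react_cell(v, dt / 2), u), dtype=float, count=N)
        u = diffuse(u, dt)
        u = np.fromiter(pool.map(lambda v: react_cell(v, dt / 2), u), dtype=float, count=N)
```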