8 research outputs found

    Adapted block hybrid method for the numerical solution of Duffing equations and related problems

    Full text link
    Non-linear equations used to model real-life phenomena have a long history in science and engineering. One of the most popular such non-linear equations is the Duffing equation. An adapted block hybrid numerical integrator that depends on a fixed frequency and a fixed step length is proposed for the integration of Duffing equations. The stability and convergence of the method are demonstrated, and its accuracy and efficiency are also established.
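    For context, the forced, damped Duffing equation is commonly written as x'' + δx' + αx + βx³ = γ cos(ωt). The abstract does not reproduce the adapted block hybrid integrator itself, so the sketch below only sets up this test problem and integrates it with a generic fixed-step classical Runge-Kutta scheme; the parameter values and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def duffing_rhs(t, u, delta=0.2, alpha=1.0, beta=1.0, gamma=0.3, omega=1.2):
    """Duffing equation x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t),
    rewritten as the first-order system u = (x, v) with illustrative parameters."""
    x, v = u
    return np.array([v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)])

def rk4_fixed_step(f, t0, u0, h, n_steps):
    """Generic fixed-step classical RK4 integrator (not the paper's block hybrid method)."""
    t, u = t0, np.asarray(u0, dtype=float)
    trajectory = [(t, u.copy())]
    for _ in range(n_steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        trajectory.append((t, u.copy()))
    return trajectory

# Example: integrate over [0, 50] with step h = 0.01 from x(0) = 1, x'(0) = 0.
solution = rk4_fixed_step(duffing_rhs, 0.0, [1.0, 0.0], 0.01, 5000)
print(solution[-1])
```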

    Exploring efficient numerical methods for differential equations

    Get PDF
    Numerical analysis is a way of solving higher mathematical problems on a computer, a technique widely used by scientists and engineers. A major advantage of numerical analysis is that a numerical answer can be obtained even when a problem has no “analytical” solution. Results from numerical analysis are approximations, which can be made as accurate as desired. The analysis of errors in numerical methods is a critically important part of the study of numerical analysis; hence, in this research, computation of the error is essential, as it is the way to measure the efficiency of the numerical methods developed. Numerical methods require highly tedious and repetitive computations that can only be done using a computer, so computer programs must be written for their implementation. In early related research the computer language used was Fortran; subsequently more and more programs were written in C, and computations can now also be carried out using software such as MATLAB, MATHEMATICA and MAPLE.
    Many physical problems that arise from ordinary differential equations (ODEs) have eigenvalues whose magnitudes vary greatly, and such systems are commonly known as stiff systems. Stiff systems usually contain a transient solution, that is, a solution which varies rapidly at the beginning of the integration. This phase is referred to as the transient phase, during which accuracy rather than stability restricts the stepsize of the numerical methods used. The structure of the solutions therefore generally suggests applying methods for non-stiff equations in the transient phase and methods for stiff equations in the steady-state phase, in a manner whereby computational costs can be reduced. Consequently, in this research we developed embedded Runge-Kutta methods for solving stiff differential equations so that variable stepsize codes can be used in their implementation. We have also included intervalwise partitioning, whereby the system is first treated as non-stiff and solved using the method with simple iterations; once stiffness is detected, the system is solved using the same method, but with Newton iterations. By using a variable stepsize code and intervalwise partitioning, we have been able to reduce the computational costs.
    With the aim of increasing the computational efficiency of Runge-Kutta methods, we have also developed methods of higher order with fewer stages or function evaluations. The method used is an extension of the classical Runge-Kutta method, and the approximation at the current point is based on the information at the current internal stage as well as the previous internal stage. This is the idea underlying the construction of Improved Runge-Kutta methods, so that the resulting methods give better accuracy. Usually higher order ordinary differential equations are solved by converting them into a system of first order ODEs and using numerical methods suitable for first order ODEs. However, it is more efficient, in terms of accuracy, number of function evaluations and computational time, if the higher order ODEs can be solved directly, without being converted to a system of first order ODEs. In this research we developed numerical methods, particularly Runge-Kutta type methods, which can directly solve special third order and fourth order ODEs.
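    As a minimal illustration of the embedded-pair, variable step-size idea described above, the sketch below pairs Heun's second-order method with Euler's first-order method and uses their difference as a local error estimate to adapt the step. It is only a generic example: the actual embedded methods, stiffness detection and Newton-iteration switching developed in the research are not reproduced here, and the test problem and tolerances are assumptions.

```python
import numpy as np

def embedded_heun_euler(f, t0, y0, t_end, h0=0.1, tol=1e-6):
    """Adaptive integration with the embedded Heun(2)/Euler(1) pair.

    The difference between the second- and first-order solutions serves as a
    local error estimate that drives the step-size controller."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)               # do not step past the end point
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                  # first-order (Euler) solution
        y_high = y + h / 2 * (k1 + k2)      # second-order (Heun) solution
        err = np.linalg.norm(y_high - y_low)
        if err <= tol:                      # accept the step
            t, y = t + h, y_high
            ts.append(t)
            ys.append(y.copy())
        # standard controller for an order-1 error estimate: err ~ C*h**2
        h *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return np.array(ts), np.array(ys)

# Mildly stiff linear test problem y' = -50*(y - cos(t)), y(0) = 0 (an assumption).
ts, ys = embedded_heun_euler(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, [0.0], 2.0)
print(len(ts), ys[-1])
```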
    A special second order ODE is one whose right-hand side does not depend on the first derivative. The solutions of this type of ODE often exhibit a pronounced oscillatory character. It is well known that it is difficult to obtain accurate numerical results if the ODEs are oscillatory in nature. In order to address this problem, a lot of research has been focused on developing methods which have high algebraic order, reduced phase-lag or dispersion, and reduced dissipation. Phase-lag is the angle between the true and the approximate solution, while dissipation is the difference between the approximate solution and the standard cyclic solution. If a method has high algebraic order and high order of dispersion and dissipation, then the numerical solutions obtained will be very accurate. Hence in this research we have developed numerical methods, specifically hybrid methods, which have all of the above-mentioned properties. If the solutions are oscillatory in nature, they will have components which are trigonometric functions, that is, sine and cosine functions. In order to obtain accurate numerical solutions we therefore phase-fitted the methods using trigonometric functions. In this research it is proven that trigonometrically fitting the hybrid methods and applying them to oscillatory delay differential equations yields better numerical results. These are the highlights of my research journey, though a lot of work has also been done on developing multistep methods for solving higher order ODEs, as well as on the implementation of methods for solving fuzzy differential equations and partial differential equations, which are not covered here.
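    The phase-lag and dissipation properties mentioned above are usually defined through the scalar test equation. The formulation below is a common one in the literature and is given only as a sketch; the exact definitions and normalizations used in the work itself may differ.

```latex
% Test equation y'' = -\omega^{2} y, step size h, v = \omega h.
% A two-step (hybrid) method applied to it gives the difference equation
\[
  y_{n+1} - 2A(v)\,y_{n} + B(v)\,y_{n-1} = 0 ,
\]
% whereas the exact solution satisfies
\[
  y(x_{n+1}) - 2\cos(v)\,y(x_{n}) + y(x_{n-1}) = 0 .
\]
% Phase-lag (dispersion) and dissipation are then commonly defined as
\[
  \phi(v) = v - \arccos\!\left(\frac{A(v)}{\sqrt{B(v)}}\right),
  \qquad
  d(v) = 1 - \sqrt{B(v)} ,
\]
% and a phase-fitted / trigonometrically fitted method chooses its free
% coefficients so that \phi (and, if desired, d) vanishes at the fitted frequency.
```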

    High Order Multistep Methods with Improved Phase-Lag Characteristics for the Integration of the Schr\"odinger Equation

    Full text link
    In this work we introduce a new family of twelve-step linear multistep methods for the integration of the Schr\"odinger equation. The new methods are constructed by adopting a new methodology which improves the phase-lag characteristics by making both the phase-lag function and its first derivative vanish at a specific frequency. This reduces the sensitivity of the integration method to the estimated frequency of the problem. The efficiency of the new family of methods is proved via error analysis and numerical applications. Comment: 36 pages, 6 figures
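    For orientation, the radial Schr\"odinger equation targeted by such integrators, and the frequency-fitting conditions described in the abstract, can be sketched as follows; the notation is generic and not taken from the paper, and \phi denotes the phase-lag function as sketched earlier.

```latex
% Radial Schr\"odinger equation in the standard form used for these tests:
\[
  y''(x) = \left( \frac{l(l+1)}{x^{2}} + V(x) - E \right) y(x),
\]
% so that in the asymptotic region the solution oscillates with frequency
% roughly \sqrt{E}. With v = h\sqrt{E} an estimate of the true frequency,
% the new methods impose
\[
  \phi(v_{0}) = 0 , \qquad \phi'(v_{0}) = 0
\]
% at the chosen v_{0}, i.e. the phase-lag function and its first derivative
% both vanish there. By a Taylor argument, \phi(v) = O\big((v - v_{0})^{2}\big),
% which is why an inaccurate estimate of the frequency degrades the method less.
```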

    CONTINUOUS IMPLICIT HYBRID ONE-STEP METHODS FOR THE SOLUTION OF INITIAL VALUE PROBLEMS OF GENERAL SECOND-ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Get PDF
    The numerical solution of initial value problems of general second order ordinary differential equations is studied in this work. A new class of continuous implicit hybrid one-step methods capable of solving initial value problems of general second order ordinary differential equations has been developed using the collocation and interpolation technique on a power series approximate solution. The one-step method is augmented by the introduction of off-step points in order to circumvent the Dahlquist zero-stability barrier and upgrade the order of consistency of the methods. The new class of continuous implicit hybrid one-step methods has the advantage of easy change of step length and evaluation of functions at off-step points. The block method used to implement the main method guarantees that each discrete method obtained from the simultaneous solution of the block has the same order of accuracy as the main method. Hence, the new class of one-step methods gives high order of accuracy with very low error constants, has large intervals of absolute stability, and is zero stable and convergent. Sample linear, nonlinear and stiff problems have been used to test the performance of the methods and to compare the computed results with the exact solutions, and the associated errors with those of existing methods of comparable step number and order of accuracy, using efficient computer codes written for the purpose.
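    The collocation-and-interpolation construction referred to above can be sketched generically as follows; the number of interpolation points, the off-step points and the basis degree shown here are assumptions for illustration, not those of the paper.

```latex
% Approximate the solution of y'' = f(x, y, y') on a step by a power series
\[
  y(x) \approx \sum_{j=0}^{p} a_{j} x^{j} .
\]
% Interpolate y at two points (e.g. x_{n} and an off-step point x_{n+u}) and
% collocate y'' = f at the remaining grid and off-step points:
\[
  \sum_{j=0}^{p} a_{j} x_{n+s}^{j} = y_{n+s}, \qquad
  \sum_{j=2}^{p} j(j-1)\, a_{j} x_{n+r}^{\,j-2} = f_{n+r} .
\]
% Solving this linear system for the coefficients a_{j} and substituting back
% gives a continuous scheme; evaluating it and its derivative at the grid and
% off-step points produces the discrete hybrid block methods.
```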

    A Family of Trigonometrically Fitted Enright Second Derivative Methods for Stiff and Oscillatory Initial Value Problems

    Get PDF
    A family of Enright’s second derivative formulas with trigonometric basis functions is derived using a multistep collocation method. The continuous schemes obtained are used to generate complementary methods. The stability properties of the methods are discussed. The methods, which can be applied in predictor-corrector form, are implemented in block form as simultaneous numerical integrators over non-overlapping intervals. Numerical results obtained using the proposed block form reveal that the new methods are efficient and highly competitive with existing methods in the literature.
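    For orientation, Enright-type second derivative formulas have the general shape sketched below; a trigonometrically fitted family replaces the polynomial basis by one containing sine and cosine terms. The form and basis shown here are indicative only and are not the coefficients derived in the paper.

```latex
% Enright-type k-step second derivative formula for y' = f(x, y):
\[
  y_{n+k} = y_{n+k-1} + h \sum_{j=0}^{k} \beta_{j} f_{n+j}
            + h^{2} \gamma_{k}\, g_{n+k},
  \qquad g = \frac{\mathrm{d}f}{\mathrm{d}x} = f_{x} + f\, f_{y} .
\]
% In a trigonometrically fitted variant the coefficients \beta_{j}, \gamma_{k}
% are chosen so that the formula is exact for a basis such as
\[
  \{\, 1,\; x,\; x^{2}, \dots,\; \sin(\omega x),\; \cos(\omega x) \,\}
\]
% for a prescribed fitting frequency \omega, instead of for polynomials only.
```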

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Get PDF
    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.

    Generalized averaged Gaussian quadrature and applications

    Get PDF
    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
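    As background on the averaged rules mentioned above, the standard construction can be sketched as follows; this is the basic averaged rule of Laurie's anti-Gauss type, not necessarily the optimal generalized variant studied in the paper.

```latex
% Let G_{n} be the n-point Gauss rule for a positive measure d\mu. The
% anti-Gauss rule H_{n+1} is the (n+1)-point rule whose error is the negative
% of the Gauss error on polynomials of degree up to 2n+1:
\[
  \int p \,\mathrm{d}\mu - H_{n+1}(p)
    = -\Big( \int p \,\mathrm{d}\mu - G_{n}(p) \Big),
  \qquad \deg p \le 2n+1 .
\]
% The averaged rule
\[
  A_{2n+1} = \tfrac{1}{2}\big( G_{n} + H_{n+1} \big)
\]
% has degree of exactness at least 2n+1, and |A_{2n+1}(f) - G_{n}(f)| serves as
% an inexpensive estimate of the error of G_{n}(f), e.g. when a Gauss-Kronrod
% extension with real positive nodes does not exist. A rule is called internal
% when all of its nodes lie in the support interval of d\mu.
```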