73 research outputs found

    A New Trigonometrically Fitted Two-Derivative Runge-Kutta Method for the Numerical Solution of the Schrödinger Equation and Related Problems

    A new trigonometrically fitted fifth-order two-derivative Runge-Kutta method with variable nodes is developed for the numerical solution of the radial Schrödinger equation and related oscillatory problems. Linear stability and phase properties of the new method are examined. Numerical results are reported to show the robustness and competitiveness of the new method compared with some highly efficient methods in the recent literature.
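    For orientation, the special explicit two-derivative Runge-Kutta (TDRK) class to which such methods belong is commonly written, for y' = f(y) with g := f'f, in the generic form below (a textbook sketch; the specific fifth-order coefficients and variable nodes of the paper are not reproduced here):

        Y_i = y_n + c_i h f(y_n) + h^2 \sum_{j=1}^{s} \hat{a}_{ij}\, g(Y_j),
        y_{n+1} = y_n + h f(y_n) + h^2 \sum_{i=1}^{s} \hat{b}_i\, g(Y_i).

    Trigonometric fitting then requires the update to be exact for y(t) = e^{\pm i\omega t} (equivalently \sin\omega t and \cos\omega t), so the weights \hat{b}_i, and here also the nodes c_i, become functions of H = \omega h.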

    Analytical and Numerical Methods for Differential Equations and Applications

    The book is a printed version of the Special Issue "Analytical and Numerical Methods for Differential Equations and Applications", published in Frontiers in Applied Mathematics and Statistics.

    Exploring efficient numerical methods for differential equations

    Numerical analysis is a way to solve advanced mathematical problems on a computer, a technique widely used by scientists and engineers. A major advantage of numerical analysis is that a numerical answer can be obtained even when a problem has no "analytical" solution. Results from numerical analysis are approximations, which can be made as accurate as desired. The analysis of errors in numerical methods is a critically important part of the study of numerical analysis. Hence, we will see in this research that computation of the error is essential, as it is the way to measure the efficiency of the numerical methods developed. Numerical methods require highly tedious and repetitive computations that can only be done using a computer. Hence, in this research it is shown that computer programs must be written for the implementation of numerical methods. In the early part of the related research the computer language used was Fortran; subsequently more and more computer programs used the C programming language. Nowadays computations can also be carried out using software such as MATLAB, MATHEMATICA and MAPLE.

    Many physical problems give rise to ordinary differential equations (ODEs) whose eigenvalues vary greatly in magnitude; such systems are commonly known as stiff systems. Stiff systems usually possess a transient solution, that is, a solution which varies rapidly at the beginning of the integration. This phase is referred to as the transient phase, and during this phase accuracy rather than stability restricts the stepsize of the numerical methods used. Thus, in general, the structure of the solutions suggests applying methods for non-stiff equations in the transient phase and methods for stiff equations during the steady-state phase, in a manner whereby computational costs can be reduced. Consequently, in this research we developed embedded Runge-Kutta methods for solving stiff differential equations so that variable stepsize codes can be used in their implementation. We have also included intervalwise partitioning, whereby the system is first treated as non-stiff and solved using the method with simple iterations; once stiffness is detected, the system is solved using the same method, but with Newton iterations. By using variable stepsize codes and intervalwise partitioning, we have been able to reduce the computational costs.

    With the aim of increasing the computational efficiency of Runge-Kutta methods, we have also developed methods of higher order with fewer stages or function evaluations. The approach is an extension of the classical Runge-Kutta method in which the approximation at the current point is based on the information at the current internal stage as well as the previous internal stage. This is the idea underlying the construction of Improved Runge-Kutta methods, so that the resulting methods give better accuracy. Usually higher order ordinary differential equations are solved by converting them into a system of first order ODEs and using numerical methods suitable for first order ODEs. However, it is more efficient, in terms of accuracy, number of function evaluations and computational time, if the higher order ODEs can be solved directly (without being converted to a system of first order ODEs). In this research we developed numerical methods, particularly Runge-Kutta type methods, which can directly solve special third order and fourth order ODEs.
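    The embedded-pair, variable-stepsize idea described above can be illustrated with a minimal Python sketch. It uses the generic explicit Heun-Euler 2(1) pair with a standard error-controlled stepsize update; it is not one of the embedded methods developed in this research (which target stiff problems and switch to Newton iterations once stiffness is detected), and the tolerance, safety factors and test problem below are arbitrary illustrative choices.

        import numpy as np

        def heun_euler_adaptive(f, t0, y0, t_end, tol=1e-6, h=1e-2):
            # Explicit Heun-Euler 2(1) embedded pair with proportional stepsize control.
            t, y = t0, np.asarray(y0, dtype=float)
            ts, ys = [t], [y.copy()]
            while t < t_end:
                h = min(h, t_end - t)
                k1 = f(t, y)
                k2 = f(t + h, y + h * k1)
                y_high = y + 0.5 * h * (k1 + k2)      # 2nd-order (Heun) solution
                y_low = y + h * k1                    # 1st-order (Euler) solution
                err = np.linalg.norm(y_high - y_low)  # local error estimate
                if err <= tol:                        # accept the step
                    t, y = t + h, y_high
                    ts.append(t); ys.append(y.copy())
                # safety-factor stepsize update; exponent 1/2 matches the estimator order
                h *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
            return np.array(ts), np.array(ys)

        # Hypothetical mildly stiff test problem: y' = -50 (y - cos t), y(0) = 1
        ts, ys = heun_euler_adaptive(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, [1.0], 2.0)
        print(len(ts), "accepted steps; y(2) =", ys[-1, 0])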
A special second order ODE is one in which the right-hand side does not depend on the first derivative. The solutions of this type of ODE often exhibit a pronounced oscillatory character. It is well known that it is difficult to obtain accurate numerical results if the ODEs are oscillatory in nature. To address this problem, much research has focused on developing methods which have high algebraic order, reduced phase-lag or dispersion, and reduced dissipation. Phase-lag is the angle between the true and the approximate solution, while dissipation is the difference between the approximate solution and the standard cyclic solution. If a method has high algebraic order together with high orders of dispersion and dissipation, then the numerical solutions obtained will be very accurate. Hence, in this research we have developed numerical methods, specifically hybrid methods, which have all of the above-mentioned properties. If the solutions are oscillatory in nature, they will have components which are trigonometric functions, that is, sine and cosine functions. In order to obtain accurate numerical solutions we therefore phase-fitted the methods using trigonometric functions. In this research it is shown that trigonometrically fitting the hybrid methods and applying them to oscillatory delay differential equations yields better numerical results. These are the highlights of my research journey, though much work has also been done on developing multistep methods for solving higher order ODEs, as well as on the implementation of methods for solving fuzzy differential equations and partial differential equations, which are not covered here.
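    The informal definitions of phase-lag and dissipation above are usually made precise with the standard test equation y'' = -\omega^2 y. The following is the generic textbook formulation for two-step hybrid methods, not a statement about the particular methods developed in this research. Applying such a method with stepsize h and H = \omega h yields a difference equation

        y_{n+1} - S(H^2)\, y_n + P(H^2)\, y_{n-1} = 0,

    and the phase-lag (dispersion) and dissipation are defined as

        \phi(H) = H - \arccos\!\left( \frac{S(H^2)}{2\sqrt{P(H^2)}} \right),  \qquad  d(H) = 1 - \sqrt{P(H^2)},

    so that dispersion order q and dissipation order r mean \phi(H) = O(H^{q+1}) and d(H) = O(H^{r+1}). Trigonometric (phase) fitting makes the method exact for \sin\omega t and \cos\omega t, so both quantities vanish at the fitted frequency.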

    Geometric Integrators for Schrödinger Equations

    The celebrated Schrödinger equation is the key to understanding the dynamics of quantum mechanical particles and comes in a variety of forms. Its numerical solution poses numerous challenges, some of which are addressed in this work. Arguably the most important problem in quantum mechanics is the so-called harmonic oscillator, due to its good approximation properties for trapping potentials. In Chapter 2, an algebraic correspondence technique is introduced and applied to construct efficient splitting algorithms, based solely on fast Fourier transforms, which solve quadratic potentials in any number of dimensions exactly, including the important case of rotating particles and non-autonomous trappings after averaging by Magnus expansions. The results are shown to transfer smoothly to the Gross-Pitaevskii equation in Chapter 3. Additionally, the notion of modified nonlinear potentials is introduced and it is shown how to compute them efficiently using Fourier transforms. It is shown how to apply complex-coefficient splittings to this nonlinear equation, and numerical results corroborate the findings.

    In the semiclassical limit, the evolution operator becomes highly oscillatory and standard splitting methods suffer from exponentially increasing complexity when raising the order of the method. Algorithms with only quadratic order-dependence of the computational cost are found using the Zassenhaus algorithm. In contrast to classical splittings, special commutators are allowed to appear in the exponents. By construction, they are rapidly decreasing in size with the semiclassical parameter and can be exponentiated using only a few Lanczos iterations. For completeness, an alternative technique based on Hagedorn wavepackets is revisited and interpreted in the light of Magnus expansions, and minor improvements are suggested. In the presence of explicit time-dependencies in the semiclassical Hamiltonian, the Zassenhaus algorithm requires a special initiation step. Distinguishing the cases of smooth and fast frequencies, it is shown how to adapt the mechanism to obtain an efficiently computable decomposition of an effective Hamiltonian obtained after Magnus expansion, without having to resolve the oscillations by taking a prohibitively small time step.

    Chapter 5 considers the Schrödinger eigenvalue problem, which can be formulated as an initial value problem after Wick-rotating the Schrödinger equation to imaginary time. The elliptic nature of the evolution operator restricts standard splittings to low order, p < 3, because of the unavoidable appearance of negative fractional time steps that correspond to the ill-posed integration backwards in time. The inclusion of modified potentials lifts the order barrier up to p < 5. Both restrictions can be circumvented using complex fractional time steps with positive real part, and sixth-order methods optimized for near-integrable Hamiltonians are presented. Conclusions and pointers to further research are detailed in Chapter 6, with a special focus on optimal quantum control.

    Bader, PK. (2014). Geometric Integrators for Schrödinger Equations [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/38716
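    A minimal concrete illustration of the splitting-plus-FFT approach discussed in the abstract above is the generic Strang split-step Fourier method sketched below in Python. It is not one of the thesis's exact or Zassenhaus-based algorithms; the grid, time step and displaced-Gaussian initial state are arbitrary illustrative choices.

        import numpy as np

        # Strang splitting for i u_t = -(1/2) u_xx + V(x) u with V(x) = x^2 / 2.
        N, L = 256, 20.0
        x = (np.arange(N) - N // 2) * (L / N)            # uniform grid on [-L/2, L/2)
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
        V = 0.5 * x**2
        u = np.exp(-(x - 1.0) ** 2).astype(complex)      # displaced Gaussian state
        u /= np.sqrt(np.trapz(np.abs(u) ** 2, x))        # normalise the wavefunction

        dt, steps = 1e-3, 1000
        half_V = np.exp(-0.5j * dt * V)                  # exp(-i (dt/2) V)
        kin = np.exp(-0.5j * dt * k**2)                  # exp(-i dt k^2 / 2)
        for _ in range(steps):
            u = half_V * u                               # half step with the potential
            u = np.fft.ifft(kin * np.fft.fft(u))         # full kinetic step in Fourier space
            u = half_V * u                               # second potential half step
        print("mass after propagation:", np.trapz(np.abs(u) ** 2, x))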

    Scalar Auxiliary Variable/Lagrange multiplier based pseudospectral schemes for the dynamics of nonlinear Schrödinger/Gross-Pitaevskii equations

    In this paper, based on the Scalar Auxiliary Variable (SAV) approach and a newly proposed Lagrange multiplier (LagM) approach originally constructed for gradient flows, we propose two linearly implicit pseudo-spectral schemes for simulating the dynamics of general nonlinear Schrödinger/Gross-Pitaevskii equations. Both schemes are of spectral/second-order accuracy in the spatial/temporal direction. The SAV based scheme preserves a modified total energy and approximates the mass to third order (with respect to the time step), while the LagM based scheme preserves the mass and the original total energy exactly. A nonlinear algebraic system has to be solved at every time step for the LagM based scheme, hence the SAV scheme is usually more efficient than the LagM one. On the other hand, the LagM scheme may outperform the SAV one in the sense that it conserves the original total energy and mass and usually admits smaller errors. Ample numerical results are presented to show the effectiveness, accuracy and performance of the proposed schemes.
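    For orientation, the generic SAV reformulation for i \partial_t u = -\Delta u + V u + \beta |u|^2 u can be sketched as follows; this is a schematic of the general approach, and the paper's precise schemes, constants and the LagM variant differ in detail. With the energy E(u) = \int ( |\nabla u|^2 + V|u|^2 + \tfrac{\beta}{2}|u|^4 )\,dx, set E_1(u) = \tfrac{\beta}{2}\int |u|^4\,dx and introduce the scalar variable r(t) = \sqrt{E_1(u) + C_0} for a constant C_0 > 0, giving the equivalent system

        i \partial_t u = -\Delta u + V u + \frac{r}{\sqrt{E_1(u) + C_0}}\, \beta |u|^2 u,
        \frac{dr}{dt} = \frac{1}{\sqrt{E_1(u) + C_0}}\, \mathrm{Re} \int \beta |u|^2\, \bar{u}\, \partial_t u \, dx.

    This is the structure that allows a linearly implicit discretisation to conserve the modified energy \int ( |\nabla u|^2 + V|u|^2 )\,dx + r^2, which equals E + C_0 at the continuous level.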

    Mathematical and Numerical Aspects of Dynamical System Analysis

    From the Preface: This is the fourteenth time that the conference "Dynamical Systems: Theory and Applications" has gathered a large group of outstanding scientists and engineers who deal with widely understood problems of theoretical and applied dynamics. Organization of the conference would not have been possible without the great effort of the staff of the Department of Automation, Biomechanics and Mechatronics. The conference was held under the patronage of the Committee of Mechanics of the Polish Academy of Sciences and the Ministry of Science and Higher Education of Poland. It is a great pleasure that our invitation has been accepted by a record number of people in the history of our conference, including good colleagues and friends as well as a large group of researchers and scientists who decided to participate in the conference for the first time. With pride and satisfaction we welcomed over 180 participants from 31 countries all over the world. They decided to share the results of their research and many years of experience in the discipline of dynamical systems by submitting many very interesting papers. This year, the DSTA Conference Proceedings were split into three volumes entitled "Dynamical Systems" with the respective subtitles: Vibration, Control and Stability of Dynamical Systems; Mathematical and Numerical Aspects of Dynamical System Analysis; and Engineering Dynamics and Life Sciences. Additionally, two volumes of Springer Proceedings in Mathematics and Statistics, entitled "Dynamical Systems in Theoretical Perspective" and "Dynamical Systems in Applications", will also be published.

    Numerical solution of Y'' = F(X,Y) with particular reference to the radial Schrödinger equation

    Many theoretical treatments of quantum-mechanical scattering processes require the numerical solution of a set of second order ordinary differential equations of special form (with the first derivative absent). The methods used to solve such sets of equations are generally based on step-by-step methods for solving a single second order differential equation over a fixed mesh. For example, Chandra (1973) has published a computer program which uses de Vogelaere's method to solve the differential equations arising in a close-coupling formulation of quantum mechanical scattering problems. Chandra's program makes no attempt to monitor the local truncation error and leaves the choice of steplength strategy entirely to the user. Our aim is to improve on existing implementations of de Vogelaere's method for a single second order equation by incorporating a method of truncation error estimation and an automatic mesh-selection facility. Estimates of the truncation error in de Vogelaere's method are established, together with an upper bound for the local truncation error; the interval of absolute stability is found to be [-2,0] and it is shown that the global truncation error is of order h^4, where h is the steplength. In addition, the characteristics of a method due to Raptis and Allison are investigated. A numerical comparison of computer programs which incorporate the methods of de Vogelaere, Numerov, Raptis-Allison and Adams-Bashforth/Adams-Moulton, with automatic error control, is performed to determine which program gives the most reliable and efficient solution of the single-channel radial Schrödinger equation. A modification of Chandra's program is provided which performs the numerical integration of a set of coupled second order homogeneous differential equations using de Vogelaere's method with an automatic error control.
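    For reference, the Numerov scheme that features in the comparison above can be sketched in a few lines of Python for equations of the form y'' = g(r) y. This is an illustrative implementation with a hypothetical hydrogen-like test case in atomic units, not the thesis's de Vogelaere code or its error-control machinery.

        import numpy as np

        def numerov(g, r, y0, y1):
            # March y'' = g(r) y across the uniform grid r, given y at r[0] and r[1].
            h2 = (r[1] - r[0]) ** 2
            y = np.empty_like(r)
            y[0], y[1] = y0, y1
            c = 1.0 - h2 * g / 12.0              # Numerov weight at every grid point
            for n in range(1, len(r) - 1):
                y[n + 1] = (2.0 * (1.0 + 5.0 * h2 * g[n] / 12.0) * y[n]
                            - c[n - 1] * y[n - 1]) / c[n + 1]
            return y

        # Hypothetical test: hydrogen radial equation, l = 0, atomic units, at the
        # exact ground-state energy E = -1/2, where g(r) = -2 (E + 1/r).
        r = np.linspace(1e-6, 10.0, 4001)
        g = -2.0 * (-0.5 + 1.0 / r)
        u = numerov(g, r, y0=0.0, y1=1e-6)
        u = u / np.sqrt(np.trapz(u**2, r))               # normalise the numerical solution
        print(np.max(np.abs(u - 2.0 * r * np.exp(-r))))  # deviation from exact 2 r e^{-r}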