
    A delay differential model of ENSO variability: Parametric instability and the distribution of extremes

    We consider a delay differential equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. We perform stability analyses of the model in the three-dimensional space of its physically relevant parameters. Our results illustrate the roles of these three parameters: the strength of the seasonal forcing b, the atmosphere-ocean coupling κ, and the propagation period τ of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are separated by a sharp neutral curve in the (b, τ) plane at constant κ. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling κ increases. In the unstable regime, spontaneous transitions occur in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values, for purely periodic, seasonal forcing. The model reproduces the Devil's bleachers characterizing other ENSO models, such as nonlinear, coupled systems of partial differential equations; some of the features of this behavior have been documented in general circulation models, as well as in observations. We expect, therefore, similar behavior in much more detailed and realistic models, where it is harder to describe its causes as completely. (Comment: 22 pages, 9 figures)
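
A minimal sketch of what such a delayed-oscillator model looks like in practice, assuming the commonly used form dh/dt = -tanh(κ h(t-τ)) + b cos(2πt) for the thermocline-depth anomaly h; the parameter values, step size and constant initial history below are illustrative assumptions, not the paper's settings.

import math

def integrate_enso_dde(b=1.0, kappa=11.0, tau=0.5, t_end=100.0, dt=1e-3, h0=1.0):
    # dh/dt = -tanh(kappa * h(t - tau)) + b * cos(2*pi*t)
    n_delay = max(1, round(tau / dt))        # delay expressed in time steps
    history = [h0] * (n_delay + 1)           # constant initial history on [-tau, 0]
    t, h = 0.0, h0
    ts, hs = [t], [h]
    while t < t_end:
        h_delayed = history[0]               # oldest stored value ~ h(t - tau)
        dh = -math.tanh(kappa * h_delayed) + b * math.cos(2.0 * math.pi * t)
        h += dt * dh                         # forward-Euler step
        t += dt
        history.append(h)                    # slide the delay window forward
        history.pop(0)
        ts.append(t)
        hs.append(h)
    return ts, hs

if __name__ == "__main__":
    ts, hs = integrate_enso_dde()
    print("final thermocline anomaly:", hs[-1])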

    Exploring efficient numerical methods for differential equations

    Numerical analysis is a way of solving advanced mathematical problems on a computer, and it is widely used by scientists and engineers. A major advantage of numerical analysis is that a numerical answer can be obtained even when a problem has no "analytical" solution. Results from numerical analysis are an approximation, which can be made as accurate as desired. The analysis of errors in numerical methods is a critically important part of the study of numerical analysis. Hence, in this research the error must be computed, since it is a way to measure the efficiency of the numerical methods developed. Numerical methods require highly tedious and repetitive computations that can only practically be done using a computer; hence, in this research, computer programs are written for the implementation of the numerical methods. In the early part of the related research the programming language used was Fortran; subsequently more and more programs used the C programming language. Computations can now also be carried out using software such as MATLAB, MATHEMATICA and MAPLE. Many physical problems modelled by ordinary differential equations (ODEs) have eigenvalues whose magnitudes vary greatly, and such systems are commonly known as stiff systems. Stiff systems usually contain a transient solution, that is, a solution which varies rapidly at the beginning of the integration. This phase is referred to as the transient phase, during which accuracy rather than stability restricts the stepsize of the numerical method used. Thus, the structure of the solutions generally suggests applying methods for non-stiff equations in the transient phase and methods for stiff equations during the steady-state phase, so that computational costs can be reduced. Consequently, in this research we developed embedded Runge-Kutta methods for solving stiff differential equations so that variable stepsize codes can be used in their implementation. We have also included intervalwise partitioning, whereby the system is first treated as non-stiff and solved using the method with simple iterations; once stiffness is detected, the system is solved using the same method, but with Newton iterations. By using variable stepsize codes and intervalwise partitioning, we have been able to reduce the computational costs. With the aim of increasing the computational efficiency of the Runge-Kutta methods, we have also developed higher-order methods with fewer stages or function evaluations. The approach is an extension of the classical Runge-Kutta method in which the approximation at the current point is based on the information at the current internal stage as well as the previous internal stage. This is the idea underlying the construction of Improved Runge-Kutta methods, and the resulting methods give better accuracy. Usually, higher order ordinary differential equations are solved by converting them into a system of first order ODEs and applying numerical methods suitable for first order ODEs. However, it is more efficient, in terms of accuracy, number of function evaluations and computational time, if the higher order ODEs can be solved directly, without being converted to a system of first order ODEs. In this research we developed numerical methods, particularly Runge-Kutta type methods, which can directly solve special third order and fourth order ODEs.
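
A minimal sketch of the mechanism behind embedded Runge-Kutta variable-stepsize codes, using the simple embedded Heun/Euler pair rather than the thesis's own methods: the difference between the second- and first-order solutions estimates the local error, and a standard controller accepts or rejects the step against a tolerance. The test problem and controller constants are illustrative assumptions.

import math

def embedded_heun_euler(f, t0, y0, t_end, tol=1e-6, h0=1e-3):
    t, y, h = t0, y0, h0
    ts, ys = [t], [y]
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + 0.5 * h * (k1 + k2)     # 2nd-order (Heun) solution
        err = abs(0.5 * h * (k2 - k1))       # local error estimate vs 1st-order Euler
        if err <= tol:                       # accept the step
            t, y = t + h, y_high
            ts.append(t)
            ys.append(y)
        # standard controller: grow/shrink the step with a safety factor
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-14)) ** 0.5))
    return ts, ys

# usage on a mildly stiff scalar test problem y' = -50 (y - cos t), y(0) = 0
ts, ys = embedded_heun_euler(lambda t, y: -50.0 * (y - math.cos(t)), 0.0, 0.0, 2.0)
print(len(ts) - 1, "accepted steps; y(2) ~", ys[-1])
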
A special second order ODE is an ODE that does not depend on the first derivative. The solutions of this type of ODE often exhibit a pronounced oscillatory character, and it is well known that it is difficult to obtain accurate numerical results when the ODEs are oscillatory in nature. To address this problem, much research has focused on developing methods which have high algebraic order, reduced phase-lag or dispersion, and reduced dissipation. Phase-lag is the angle between the true and approximate solution, while dissipation is the difference between the approximate solution and the standard cyclic solution. If a method has high algebraic order together with high order of dispersion and dissipation, then the numerical solutions obtained will be very accurate. Hence, in this research we have developed numerical methods, specifically hybrid methods, which have all the above-mentioned properties. If the solutions are oscillatory in nature, they will have components which are trigonometric functions, that is, sine and cosine functions. In order to obtain accurate numerical solutions we therefore phase-fitted the methods using trigonometric functions. In this research, it is proven that trigonometrically fitting the hybrid methods and applying them to solve oscillatory delay differential equations results in more accurate numerical results. These are the highlights of my research journey, though a lot of work has also been done on developing numerical methods which are multistep in nature for solving higher order ODEs, as well as on methods for solving fuzzy differential equations and partial differential equations, which are not covered here.
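
An illustrative sketch only, not one of the thesis's hybrid or trigonometrically fitted methods: the classical two-step Stormer method for a special second-order ODE y'' = f(t, y), applied to the oscillatory test equation y'' = -omega^2 y. Comparing against the exact solution cos(omega t) makes the accumulated phase error of a low-order, non-fitted method visible, which is exactly the kind of error the phase-fitted methods are designed to reduce. The step size and frequency below are arbitrary assumptions.

import math

def stormer(f, t0, y0, dy0, h, n_steps):
    # start-up value y_1 from a Taylor expansion: y1 ~ y0 + h*y'0 + h^2/2 * f(t0, y0)
    ys = [y0, y0 + h * dy0 + 0.5 * h * h * f(t0, y0)]
    for n in range(1, n_steps):
        t_n = t0 + n * h
        # two-step recurrence: y_{n+1} = 2 y_n - y_{n-1} + h^2 f(t_n, y_n)
        ys.append(2.0 * ys[n] - ys[n - 1] + h * h * f(t_n, ys[n]))
    return ys

omega, h, n_steps = 10.0, 0.01, 1000
ys = stormer(lambda t, y: -omega * omega * y, 0.0, 1.0, 0.0, h, n_steps)
print("numerical y(10):", ys[-1], "  exact:", math.cos(omega * n_steps * h))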

    Efficient simulation of chromatographic separation processes

    This work presents the development and testing of an efficient, high-resolution algorithm for the solution of equilibrium and non-equilibrium chromatographic problems, aimed at producing high-fidelity predictions with a minimal increase in computational cost. The method couples a high-order WENO scheme, adapted for use on non-uniform grids, with a piecewise adaptive grid (PAG) method to reduce runtime while accurately resolving the sharp gradients observed in the processes under investigation. Application of the method to a series of benchmark chromatographic test cases, in which an increasing number of components are included over short and long spatial domains and which contain shocks, shows that the method accurately resolves the discontinuities and that the use of the PAG method reduces the CPU runtime by up to 90%, without degradation of the solution, relative to an equivalent uniform grid.
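
A sketch of the classical fifth-order WENO reconstruction (Jiang-Shu) on a uniform grid, reconstructing the left-biased value at the interface x_{i+1/2} from five cell averages. The work above adapts this idea to non-uniform, piecewise-adaptive grids; the uniform-grid coefficients here only illustrate the nonlinear-weighting mechanism that keeps sharp fronts free of spurious oscillations.

def weno5_left(v_m2, v_m1, v_0, v_p1, v_p2, eps=1e-6):
    # candidate third-order reconstructions on the three sub-stencils
    q0 = (2.0 * v_m2 - 7.0 * v_m1 + 11.0 * v_0) / 6.0
    q1 = (-v_m1 + 5.0 * v_0 + 2.0 * v_p1) / 6.0
    q2 = (2.0 * v_0 + 5.0 * v_p1 - v_p2) / 6.0
    # smoothness indicators
    b0 = 13.0 / 12.0 * (v_m2 - 2.0 * v_m1 + v_0) ** 2 + 0.25 * (v_m2 - 4.0 * v_m1 + 3.0 * v_0) ** 2
    b1 = 13.0 / 12.0 * (v_m1 - 2.0 * v_0 + v_p1) ** 2 + 0.25 * (v_m1 - v_p1) ** 2
    b2 = 13.0 / 12.0 * (v_0 - 2.0 * v_p1 + v_p2) ** 2 + 0.25 * (3.0 * v_0 - 4.0 * v_p1 + v_p2) ** 2
    # nonlinear weights built from the ideal linear weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    s = a0 + a1 + a2
    return (a0 * q0 + a1 * q1 + a2 * q2) / s

# near a discontinuity the weights fall back to the smoothest sub-stencil
print(weno5_left(1.0, 1.0, 1.0, 0.0, 0.0))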

    Approximation of differential equations by a new variational technique, and applications

    This thesis is devoted to the study and approximation of systems of differential equations based on an analysis of a certain error functional associated, in a natural way, with the original problem. We prove that in seeking to minimize the error by using standard descent schemes, the procedure can never get stuck in local minima, but will always and steadily decrease the error until reaching the original solution. One main step in the procedure relies on a very particular linearization of the problem; in this sense it is like a globally convergent Newton-type method. We concentrate on the approximation of stiff systems of ODEs, DDEs, DAEs and Hamiltonian systems. In all these problems we need to use implicit schemes. We believe that this approach can be used in a systematic way to examine other situations and other types of equations. (Universidad Politécnica de Cartagena)
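
An illustrative sketch only, not the thesis algorithm: discretise an error functional of the form E(u) = 1/2 ∫ |u'(t) - f(t, u(t))|^2 dt on a uniform grid and minimise the resulting nonlinear least-squares problem over the interior trajectory values. Here scipy's least_squares (a safeguarded Gauss-Newton/trust-region solver, i.e. a Newton-type linearisation) stands in for the descent scheme analysed in the thesis; the test problem, grid size and initial guess are arbitrary assumptions.

import numpy as np
from scipy.optimize import least_squares

def f(t, u):                        # a mildly stiff scalar test problem
    return -20.0 * (u - np.cos(t))

t0, t_end, u0, n = 0.0, 2.0, 1.0, 200
t = np.linspace(t0, t_end, n + 1)
h = t[1] - t[0]

def residuals(u_interior):
    u = np.concatenate(([u0], u_interior))      # impose the initial condition
    um = 0.5 * (u[:-1] + u[1:])                 # midpoint values of u
    tm = 0.5 * (t[:-1] + t[1:])                 # midpoint times
    return (u[1:] - u[:-1]) / h - f(tm, um)     # discrete version of u' - f(t, u)

sol = least_squares(residuals, np.full(n, u0))  # minimise the discretised functional
print("residual norm:", np.linalg.norm(sol.fun), " u(2) ~", sol.x[-1])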