19,928 research outputs found

    Analytical solution of linear ordinary differential equations by differential transfer matrix method

    We report a new analytical method for the exact solution of homogeneous linear ordinary differential equations of arbitrary order with variable coefficients. The method is based on the definition of jump transfer matrices and their extension into limiting differential form. The approach reduces the n-th-order differential equation to a system of n first-order linear differential equations. The full analytical solution is then found by the perturbation technique. An important feature of the presented method is that it deals with the evolution of independent solutions rather than their derivatives. We prove the validity of the method by direct substitution of the solution into the original differential equation. We discuss the general properties of differential transfer matrices and present several analytical examples showing the applicability of the method. We show that the Abel–Liouville–Ostrogradski theorem can easily be recovered through this approach.
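The reduction the abstract describes, rewriting an n-th-order scalar equation as a system of n first-order equations, can be sketched numerically. This is the standard companion-matrix reduction, not the authors' differential transfer matrix construction itself; the helper name `companion_system` and the use of SciPy's generic integrator are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def companion_system(coeffs):
    """Return f(t, y) for y^(n) + a_{n-1}(t) y^(n-1) + ... + a_0(t) y = 0,
    where coeffs(t) -> [a_0(t), ..., a_{n-1}(t)].  The state vector is
    y = (y, y', ..., y^(n-1)): each component's derivative is the next
    component, and the last derivative comes from the ODE itself."""
    def f(t, y):
        a = np.asarray(coeffs(t))
        dy = np.empty_like(y)
        dy[:-1] = y[1:]          # shift: (y)' = y', (y')' = y'', ...
        dy[-1] = -np.dot(a, y)   # y^(n) = -(a_0 y + ... + a_{n-1} y^(n-1))
        return dy
    return f

# y'' + y = 0 with y(0) = 1, y'(0) = 0 has the exact solution cos(t).
f = companion_system(lambda t: [1.0, 0.0])
sol = solve_ivp(f, (0.0, np.pi), [1.0, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[0, -1])  # ≈ cos(pi) = -1
```

The same reduction works with genuinely variable coefficients, since `coeffs` is re-evaluated at every time step.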

    Steady and Stable: Numerical Investigations of Nonlinear Partial Differential Equations

    Excerpt: Mathematics is a language which can describe patterns in everyday life as well as abstract concepts existing only in our minds. Patterns exist in data, functions, and sets constructed around a common theme, but the most tangible patterns are visual. Visual demonstrations can help undergraduate students connect to abstract concepts in advanced mathematical courses. The study of partial differential equations, in particular, benefits from numerical analysis and simulation.

    The Magnus expansion and some of its applications

    The approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When the problem is formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting in which to build up approximate exponential representations of the solution of the system. It provides a power-series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial resummation of infinitely many terms, with the important additional property of preserving, at any order, certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations, and related non-perturbative expansions. Second, to provide a bridge with its implementation as a generator of special-purpose numerical integration methods, a field of intense activity during the last decade. Third, to illustrate with examples the kind of results one can expect from the Magnus expansion in comparison with those from both perturbative schemes and standard numerical integrators. We conclude with a review of the wide range of physical applications of the Magnus expansion found in the literature.
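As a minimal illustration of how the Magnus expansion generates numerical integrators: truncating the expansion after its first term and approximating the integral of A(t) by the midpoint rule yields the second-order "exponential midpoint" method. The sketch below assumes NumPy/SciPy; the function names are illustrative, not from the review. For a coefficient matrix A(t) = t·J with constant J, the matrices commute at all times, so the first Magnus term is structurally exact and the midpoint quadrature is also exact for this linear-in-t integrand.

```python
import numpy as np
from scipy.linalg import expm

def magnus1_step(A, t, h, y):
    """One step of the exponential-midpoint method: the first Magnus term
    Omega_1 = int_t^{t+h} A(s) ds is approximated by h * A(t + h/2),
    and the step is y_{k+1} = expm(Omega_1) @ y_k."""
    return expm(h * A(t + 0.5 * h)) @ y

def magnus1_solve(A, t0, t1, y0, n_steps):
    t, y = t0, np.array(y0, dtype=float)
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        y = magnus1_step(A, t, h, y)
        t += h
    return y

# A(t) = t * J with constant J: all A(t) commute, so the exact solution
# is y(t) = expm((t^2 / 2) J) y0, a rotation by angle t^2 / 2.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = magnus1_solve(lambda t: t * J, 0.0, 1.0, [1.0, 0.0], 50)
# Exact value at t = 1: [cos(0.5), -sin(0.5)]
```

Note the structure-preservation property mentioned in the abstract: because each step is a matrix exponential, a skew-symmetric A(t) produces an orthogonal update at every step, so the norm of y is conserved exactly, unlike with a generic Runge-Kutta integrator.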

    Magnus' expansion as an approximation tool for ordinary differential equations

    Thesis (M.S.), University of Alaska Fairbanks, 2005. Magnus' expansion approximates the solution of a linear, nonconstant-coefficient system of ordinary differential equations (ODEs) as the exponential of an infinite series of integrals of commutators of the matrix-valued coefficient function. It generalizes a standard technique for solving first-order, scalar, linear ODEs. However, much about the convergence of Magnus' expansion and its efficient computation is not known. This thesis describes in detail the derivation of Magnus' expansion and reviews Iserles' ordering for efficient calculation. Convergence of the expansion is explored and known convergence estimates are applied. Finally, Magnus' expansion is applied to several numerical examples, keeping track of convergence as it depends on parameters. These examples demonstrate the failure of current convergence estimates to correctly account for the degree of commutativity of the matrix-valued coefficient function.
    Contents: Introduction -- Motivation: systems arising from machining applications -- Geometric integration -- General theory of ordinary differential equations -- Existence and uniqueness of solutions -- Fundamental solutions -- Classical methods for approximating a fundamental solution: Picard iteration -- Hausdorff's equation -- Derivation of Hausdorff's equation for ... -- Solving the linear operator equation ... -- Magnus' expansion -- Estimates for convergence of Magnus' expansion -- Examples -- The Mathieu example -- A non-commutative example -- A Frenet example -- Conclusions -- List of references -- Index

    A new approach for solving nonlinear Thomas-Fermi equation based on fractional order of rational Bessel functions

    In this paper, a collocation method based on fractional-order rational Bessel functions (FRBC) is introduced to solve the Thomas-Fermi equation, which is defined on a semi-infinite domain, has a singularity at x = 0, and has a boundary condition at infinity. We solve the problem on the semi-infinite domain without any truncation or transformation of the domain of the problem to a finite domain. This approach first obtains a sequence of linear differential equations by the quasilinearization method (QLM) and then solves each of them by the FRBC method. To illustrate the reliability of this work, we compare the numerical results of the present method with some well-known results in order to show that the new method is accurate, efficient, and applicable.
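The quasilinearization step the abstract relies on (linearize the nonlinearity about the current iterate, then solve the resulting linear equation, repeating until convergence) can be shown on a toy problem. The sketch below applies QLM to u' = -u², not to the Thomas-Fermi equation itself, and solves each linear stage with a generic SciPy integrator rather than the FRBC collocation of the paper; the grid size and iteration count are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Quasilinearization (Newton-Kantorovich) demo on the toy IVP
#   u' = -u^2,  u(0) = 1,  exact solution u(t) = 1 / (1 + t).
# Linearizing f(u) = -u^2 about the current iterate u_k gives
#   u_{k+1}' = f(u_k) + f'(u_k)(u_{k+1} - u_k) = u_k^2 - 2 u_k u_{k+1},
# which is LINEAR in the unknown u_{k+1}, as in the abstract's scheme.
ts = np.linspace(0.0, 1.0, 201)
u = np.ones_like(ts)               # initial guess u_0(t) = 1
for _ in range(6):                 # QLM iterations converge quadratically
    uk = lambda t: np.interp(t, ts, u)         # current iterate on the grid
    rhs = lambda t, v: uk(t) ** 2 - 2.0 * uk(t) * v
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0], t_eval=ts, rtol=1e-9, atol=1e-12)
    u = sol.y[0]                   # next iterate u_{k+1}

exact = 1.0 / (1.0 + ts)
```

Each pass solves only a linear ODE, which is what makes a linear spectral solver such as the paper's FRBC collocation applicable at every iteration.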