
    Model of the telegraph line and its numerical solution

    This paper deals with a model of the telegraph line that consists of a system of ordinary differential equations, rather than the partial differential telegraph equation. The numerical solution is based on an original mathematical method that uses the Taylor series for solving ordinary differential equations with initial conditions (initial value problems) in a non-traditional way. Systems of ordinary differential equations are solved using the variable-order, variable-step-size Modern Taylor Series Method, which is based on a recurrent calculation of the Taylor series terms for each time interval. The second part of the paper presents the solution of linear problems that come from the model of the telegraph line. All experiments were performed in MATLAB using a newly developed linear solver based on the Modern Taylor Series Method. The linear solver was compared with state-of-the-art solvers in MATLAB and SPICE software.
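The core of the method described above is the recurrent computation of Taylor terms. A minimal sketch of that idea for a linear system y' = Ay, where each term is obtained from the previous one via DY(k+1) = h/(k+1) · A·DY(k) and the order grows until the terms fall below a tolerance (the matrix A, step size and tolerance below are illustrative choices, not the paper's telegraph-line system):

```python
import math

def taylor_step(A, y, h, tol=1e-12, max_order=40):
    """One integration step: y(t+h) = sum_k DY(k), with the Taylor terms
    built recurrently as DY(0) = y, DY(k+1) = h/(k+1) * A @ DY(k).
    The loop stops once the newest term drops below tol (variable order)."""
    n = len(y)
    term = list(y)
    result = list(y)
    for k in range(max_order):
        term = [h / (k + 1) * sum(A[i][j] * term[j] for j in range(n))
                for i in range(n)]
        result = [r + t for r, t in zip(result, term)]
        if max(abs(t) for t in term) < tol:
            break
    return result

# Demo on a 2x2 linear system y' = A y (a harmonic oscillator standing in
# for a linear circuit model); exact solution: y1 = cos t, y2 = -sin t.
A = [[0.0, 1.0], [-1.0, 0.0]]
y = [1.0, 0.0]
h, steps = 0.1, 10
for _ in range(steps):
    y = taylor_step(A, y, h)
```

Because the order adapts per step, accuracy near machine precision is reached even with a modest step size, which is the main selling point of Taylor-series integrators for linear problems.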

    Direct solution of fourth order ordinary differential equations using a one step hybrid block method of order five

    In this article, a power series of order eight is adopted as a basis function to develop a one-step hybrid block method with three off-step points for solving general fourth-order ordinary differential equations. The strategy employed in developing this method is to interpolate the power series at x_n and all off-step points and to collocate its fourth derivative at all points in the selected interval. The method derived is proven to be consistent, zero-stable and convergent with order five. Taylor's series is used to supply the starting values for the implementation of the method, while the performance of the method is tested by solving linear and non-linear problems.
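As a baseline for what such methods compute, a general fourth-order ODE y'''' = f(x, y, y', y'', y''') can be reduced to a first-order system and integrated directly. This sketch uses classical RK4 rather than the article's hybrid block method, on the hypothetical test problem y'''' = y with y = y' = y'' = y''' = 1 at x = 0, whose exact solution is e^x:

```python
import math

def rk4_system(f, y0, x0, x1, n):
    """Classical fourth-order Runge-Kutta on a first-order system y' = f(x, y)."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

# y'''' = y rewritten as u' = (u2, u3, u4, u1); components are y, y', y'', y'''.
f = lambda x, u: [u[1], u[2], u[3], u[0]]
u = rk4_system(f, [1.0, 1.0, 1.0, 1.0], 0.0, 1.0, 100)
```

The appeal of direct hybrid block methods is precisely that they avoid this reduction to a first-order system, treating the fourth derivative directly.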

    Generalized differential transformation method for solving two-interval Weber equation subject to transmission conditions

    The main goal of this study is to adapt the classical differential transformation method to solve new types of boundary value problems. The advantage of this method lies in its simplicity, since there is no need for discretization, perturbation or linearization of the differential equation being solved. It is an efficient technique for obtaining series solutions of both linear and nonlinear differential equations, and it differs from the classical Taylor's series method, which requires the calculation of the values of higher derivatives of the given function. The differential transformation method is designed for solving single-interval problems, and it is not clear how to apply it to multi-interval problems. In this paper we adapt the classical differential transformation method to solve boundary value problems for two-interval differential equations. To substantiate the proposed new technique, a boundary value problem was solved for the Weber equation given on two non-intersecting segments with a common end, on which the left and right solutions were connected by two additional transmission conditions.
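The single-interval method that the paper generalizes turns an ODE into a recurrence on the series coefficients Y(k) = y^(k)(0)/k!. A minimal sketch on the toy problem y' = y, y(0) = 1 (not the paper's two-interval Weber problem): since y' transforms to (k+1)Y(k+1), the ODE becomes Y(k+1) = Y(k)/(k+1), and the truncated series recovers e^x:

```python
import math

def dtm_exponential(N, x):
    """Classical (single-interval) differential transformation method for
    y' = y, y(0) = 1.  The transform Y(k) = y^(k)(0)/k! maps y' to
    (k+1) Y(k+1), so the ODE becomes the recurrence Y(k+1) = Y(k)/(k+1).
    The truncated series sum_k Y(k) x^k approximates y(x) = e^x."""
    Y = [1.0]                      # Y(0) = y(0) from the initial condition
    for k in range(N):
        Y.append(Y[k] / (k + 1))   # from (k+1) Y(k+1) = Y(k)
    return sum(Yk * x**k for k, Yk in enumerate(Y))

approx = dtm_exponential(20, 1.0)
```

Note how the recurrence supplies the series coefficients without ever evaluating higher derivatives symbolically, which is the simplicity the abstract refers to. The two-interval extension additionally stitches two such series together through the transmission conditions.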

    Introduction

    By imposing some conditions on the discrete dynamical system generated by p-step discretization methods, we find two classes of difference equations of order p which are suitable for solving ordinary differential equations. These generate, in particular, two nonlinear methods which turn out to be A-stable according to the current definitions; the first of them is a two-step method of second order, the second is a three-step method of third order. Both are particularly suitable for stiff differential equations and whenever the interval of integration is very large.

    Numerically Approximating Parabolic PDEs using Deep Learning

    In this thesis, we demonstrate the use of machine learning in numerically solving both linear and non-linear parabolic partial differential equations. By using deep learning, rather than more traditional, established numerical methods (for example, Monte Carlo sampling) to calculate numeric solutions to such problems, we can tackle even very high dimensional problems, potentially overcoming the curse of dimensionality. This happens when the computational complexity of a problem grows exponentially with the number of dimensions. In Chapter 1, we describe the derivation of the computational problem needed to apply the deep learning method in the case of the linear Kolmogorov PDE. We start with an introduction to a few core concepts in Stochastic Analysis, particularly Stochastic Differential Equations, and define the Kolmogorov Backward Equation. We describe how the Feynman-Kac theorem means that the solution to the linear Kolmogorov PDE is a conditional expectation, and therefore how we can turn the numerical approximation of solving such a PDE into a minimisation. Chapter 2 discusses the key ideas behind the terminology deep learning; specifically, what a neural network is and how we can apply this to solve the minimisation problem from Chapter 1. We describe the key features of a neural network, the training process, and how parameters can be learned through a gradient descent based optimisation. We summarise the numerical method in Algorithm 1. In Chapter 3, we implement a neural network and train it to solve a 100-dimensional linear Black-Scholes PDE with underlying geometric Brownian motion, and similarly with correlated Brownian motion. We also illustrate an example with a non-linear auxiliary Itô process: the Stochastic Lorenz Equation. 
We additionally compute a solution to the geometric Brownian motion problem in one dimension, and compare the accuracy of the solution found by the neural network with that found by two other numerical methods, Monte Carlo sampling and finite differences, as well as with the solution given by the implicit formula. In two dimensions, the solution of the geometric Brownian motion problem is compared against a solution obtained by Monte Carlo sampling, which shows that the neural network approximation falls within the 99% confidence interval of the Monte Carlo estimate. We also investigate the impact of the frequency of re-sampling of training data and of the batch size on the rate of convergence of the neural network. Chapter 4 describes the derivation of the equivalent minimisation problem for solving a Kolmogorov PDE with non-linear coefficients, where we discretise the PDE in time and derive an approximate Feynman-Kac representation on each time step. Chapter 5 demonstrates the method on an example of a non-linear Black-Scholes PDE and a Hamilton-Jacobi-Bellman equation. The numerical examples are based on the code by Beck et al. in their papers "Solving the Kolmogorov PDE by means of deep learning" and "Deep splitting method for parabolic PDEs", and are written in the Julia programming language, using the Flux library for machine learning in Julia. The code used to implement the method can be found at https://github.com/julia-sand/pde_appro
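The Feynman-Kac link exploited above — the PDE solution equals a (discounted) conditional expectation of a payoff of the terminal value of an SDE — can be illustrated without any neural network. This sketch prices a one-dimensional Black-Scholes call by sampling GBM endpoints and compares the Monte Carlo estimate against the closed-form formula; the parameter values are illustrative, not those used in the thesis:

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes call price (the 'implicit formula')."""
    Phi = lambda d: 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def mc_call(S0, K, r, sigma, T, n_paths, seed=0):
    """Feynman-Kac in action: the PDE solution equals the discounted
    expectation E[max(S_T - K, 0)], estimated by sampling GBM endpoints
    S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        Z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - sigma**2 / 2) * T + sigma * math.sqrt(T) * Z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

The deep-learning approach replaces the pointwise expectation with a network u_θ(x) trained to minimise E|φ(X_T) - u_θ(X_0)|² over randomly sampled starting points, which is what lets it scale to hundreds of dimensions where Monte Carlo must be rerun at every query point.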

    A new approach for solving nonlinear Thomas-Fermi equation based on fractional order of rational Bessel functions

    In this paper, the fractional order of rational Bessel functions collocation method (FRBC) is introduced to solve the Thomas-Fermi equation, which is defined on the semi-infinite domain, has a singularity at x = 0, and has its boundary condition at infinity. We solve the problem on the semi-infinite domain without any domain truncation or transformation of the domain of the problem to a finite domain. This approach first obtains a sequence of linear differential equations by using the quasilinearization method (QLM), then solves each of them by the FRBC method at every iteration. To illustrate the reliability of this work, we compare the numerical results of the present method with some well-known results in order to show that the new method is accurate, efficient and applicable.