181 research outputs found

    Robust Exponential Runge-Kutta Embedded Pairs

    Exponential integrators are explicit methods for solving ordinary differential equations that treat linear behaviour exactly. The stiff order conditions for exponential integrators derived in a Banach space framework by Hochbruck and Ostermann are solved symbolically by expressing the Runge-Kutta weights as unknown linear combinations of phi functions. Of particular interest are embedded exponential pairs that efficiently generate both a high- and a low-order estimate, allowing for dynamic adjustment of the time step. A key requirement is that the pair be robust: if the nonlinear source function has nonzero total time derivatives, the order of the low-order estimate should never exceed its design value. Robust exponential Runge-Kutta (3,2) and (4,3) embedded pairs that are well suited to initial value problems with a dominant linearity are constructed. (24 pages, 8 figures. The Mathematica scripts mentioned in the paper can be found at https://github.com/stiffode/expint)
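
    As a rough illustration of the embedded-pair idea described above, and not of the paper's derived (3,2) or (4,3) pairs, the sketch below pairs exponential Euler (order 1) with the two-stage ETD2RK method (order 2) for u' = L u + N(u) with a diagonal linear part; the difference between the two estimates drives the step-size controller. The phi-function formulas are standard; the test problem and all names are illustrative assumptions.

        import numpy as np

        def phi1(z):
            """phi_1(z) = (e^z - 1)/z, with phi_1(0) = 1 (elementwise, z a 1-D array)."""
            out = np.ones_like(z)
            nz = z != 0
            out[nz] = np.expm1(z[nz]) / z[nz]
            return out

        def phi2(z):
            """phi_2(z) = (e^z - 1 - z)/z^2, with phi_2(0) = 1/2."""
            out = np.full_like(z, 0.5)
            nz = z != 0
            out[nz] = (np.expm1(z[nz]) - z[nz]) / z[nz] ** 2
            return out

        def embedded_exprk_step(u, h, L, N):
            """One step of a (2,1) exponential pair for u' = L*u + N(u), L diagonal (1-D array)."""
            z = h * L
            E, p1, p2 = np.exp(z), phi1(z), phi2(z)
            Nu = N(u)
            u_low = E * u + h * p1 * Nu                  # exponential Euler stage (order 1)
            u_high = u_low + h * p2 * (N(u_low) - Nu)    # ETD2RK correction (order 2)
            return u_high, np.linalg.norm(u_high - u_low)

        # Illustrative stiff test problem: dominant linearity L = -50, nonlinear source cos(u)
        L = np.array([-50.0])
        N = lambda u: np.cos(u)
        u, t, h, tol = np.array([1.0]), 0.0, 0.1, 1e-6
        while t < 1.0:
            u_new, err = embedded_exprk_step(u, h, L, N)
            if err <= tol:                               # accept the step
                u, t = u_new, t + h
            h = min(0.9 * h * (tol / max(err, 1e-16)) ** 0.5, 1.0 - t)
            if h <= 0.0:
                break
        print("u(1) ~", u[0])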

    Global error analysis and inertial manifold reduction

    Four types of global error for initial value problems are considered in a common framework. They include classical forward error analysis and shadowing error analysis, together with extensions of both that allow rescaling of time. To determine the amplification of the local error that bounds the global error, we present a linear analysis similar in spirit to condition number estimation for linear systems of equations. We combine these ideas with techniques for dimension reduction of differential equations via a boundary value formulation of numerical inertial manifold reduction. These global error concepts are exercised on the Lorenz equations and on inertial manifold reductions of the Kuramoto-Sivashinsky equation to illustrate their utility.
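
    As a small concrete illustration of the first of these error types (classical forward error), and not of the paper's linear amplification analysis, the sketch below compares a loose-tolerance trajectory of the Lorenz equations against a tight-tolerance reference computed at the same times; the solver settings and time window are illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """Classical Lorenz system."""
            return [sigma * (x[1] - x[0]),
                    x[0] * (rho - x[2]) - x[1],
                    x[0] * x[1] - beta * x[2]]

        t_eval = np.linspace(0.0, 5.0, 501)
        x0 = [1.0, 1.0, 1.0]

        coarse = solve_ivp(lorenz, (0.0, 5.0), x0, t_eval=t_eval, rtol=1e-4, atol=1e-6)
        ref = solve_ivp(lorenz, (0.0, 5.0), x0, t_eval=t_eval, rtol=1e-12, atol=1e-12)

        # Forward global error e(t) = ||x_coarse(t) - x_ref(t)||; on a chaotic system it
        # grows roughly exponentially, which motivates the gentler shadowing and
        # time-rescaled error concepts considered in the paper.
        err = np.linalg.norm(coarse.y - ref.y, axis=0)
        print("max forward error on [0, 5]:", err.max())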

    Dissipative numerical schemes on Riemannian manifolds with applications to gradient flows

    This paper concerns an extension of discrete gradient methods to finite-dimensional Riemannian manifolds, termed discrete Riemannian gradients, and their application to dissipative ordinary differential equations. This includes Riemannian gradient flow systems, which occur naturally in optimization problems. The Itoh-Abe discrete gradient is formulated and applied to gradient systems, yielding a derivative-free optimization algorithm. The algorithm is tested on two eigenvalue problems and two problems from manifold-valued imaging: InSAR denoising and DTI denoising. (Post-revision version. To appear in SIAM Journal on Scientific Computing.)
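
    For intuition, here is a minimal Euclidean (flat-space) sketch of the Itoh-Abe discrete gradient method as a derivative-free descent scheme for the gradient flow x' = -grad V(x): each coordinate is updated in turn by solving a scalar equation that uses only evaluations of V. The Riemannian construction in the paper replaces these coordinate directions with a frame on the manifold; the objective, step size, and scalar solver below are illustrative assumptions.

        import numpy as np

        def itoh_abe_dg_step(V, x, tau, delta=1e-4, iters=30):
            """One Itoh-Abe discrete gradient step: sequential scalar solves, no derivatives of V."""
            y = x.copy()
            for i in range(len(x)):
                xi, V_old = y[i], V(y)

                def F(s):
                    # Discrete-gradient equation in coordinate i:
                    #   (s - xi)/tau + (V(..., s, ...) - V(..., xi, ...))/(s - xi) = 0
                    d = s - xi if s != xi else 1e-12
                    y[i] = s
                    val = (s - xi) / tau + (V(y) - V_old) / d
                    y[i] = xi
                    return val

                s0, s1 = xi - delta, xi + delta          # secant iteration for the scalar root
                f0, f1 = F(s0), F(s1)
                for _ in range(iters):
                    if abs(f1 - f0) < 1e-300 or abs(f1) < 1e-12:
                        break
                    s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
                    f1 = F(s1)
                y[i] = s1                                # accept the coordinate update
            return y

        # Example: derivative-free minimisation of a least-squares objective V(x) = 0.5*||A x - b||^2
        rng = np.random.default_rng(0)
        A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
        V = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
        x = np.zeros(5)
        for _ in range(100):
            x = itoh_abe_dg_step(V, x, tau=0.1)
        print("V after 100 discrete gradient steps:", V(x))   # V decreases monotonically (dissipativity)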

    Order reduction methods for solving large-scale differential matrix Riccati equations

    We consider the numerical solution of large-scale symmetric differential matrix Riccati equations. Under certain hypotheses on the data, reduced-order methods have recently emerged as a promising class of solution strategies: they form low-rank approximations to the sought-after solution at selected timesteps. We show that great computational and memory savings are obtained by a reduction process onto rational Krylov subspaces, as opposed to current approaches. By specifically addressing the solution of the reduced differential equation and reliable stopping criteria, we are able to obtain accurate final approximations at low memory and computational cost. This is achieved by a two-phase strategy that separately enhances the accuracy of the algebraic approximation and of the time integration. The new method allows us to numerically solve much larger problems than those treated in the current literature. Numerical experiments on benchmark problems illustrate the effectiveness of the procedure with respect to existing solvers.
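
    To make the reduction idea concrete, the sketch below applies the generic projection framework for the symmetric differential Riccati equation X' = A^T X + X A - X B B^T X + C^T C, X(0) = 0: build an orthonormal basis V of a Krylov-type subspace generated by A^T and C^T (a plain block Krylov space here, rather than the rational Krylov spaces advocated in the paper), integrate the small projected Riccati equation, and lift X(t) ~ V Y(t) V^T. The test data, subspace dimension, and tolerances are illustrative assumptions, and no stopping criterion is implemented.

        import numpy as np
        from scipy.integrate import solve_ivp

        def block_krylov_basis(A, C, m):
            """Orthonormal basis of span{C^T, A^T C^T, ..., (A^T)^(m-1) C^T} via QR."""
            blocks, W = [], C.T
            for _ in range(m):
                blocks.append(W)
                W = A.T @ W
            V, _ = np.linalg.qr(np.hstack(blocks))
            return V

        def solve_reduced_dre(Am, Bm, Cm, t_span, Y0):
            """Integrate the projected (small, dense) Riccati equation with a standard ODE solver."""
            k = Am.shape[0]
            def rhs(t, y):
                Y = y.reshape(k, k)
                dY = Am.T @ Y + Y @ Am - Y @ Bm @ Bm.T @ Y + Cm.T @ Cm
                return dY.ravel()
            sol = solve_ivp(rhs, t_span, Y0.ravel(), rtol=1e-8, atol=1e-10)
            return sol.y[:, -1].reshape(k, k)

        # Illustrative data: a stable random A, low-rank B (2 inputs) and C (1 output)
        n = 200
        rng = np.random.default_rng(1)
        A = rng.normal(size=(n, n)) / np.sqrt(n) - 2.0 * np.eye(n)
        B, C = rng.normal(size=(n, 2)), rng.normal(size=(1, n))

        V = block_krylov_basis(A, C, m=15)                # reduction space, dimension 15
        Am, Bm, Cm = V.T @ A @ V, V.T @ B, C @ V
        Y_T = solve_reduced_dre(Am, Bm, Cm, (0.0, 1.0), np.zeros((V.shape[1], V.shape[1])))
        X_T = V @ Y_T @ V.T                               # low-rank (rank <= 15) approximation of X(1)
        print("Frobenius norm of the approximation:", np.linalg.norm(X_T))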

    Generalised Langevin equation: asymptotic properties and numerical analysis

    In this thesis we concentrate on instances of the generalised Langevin equation (GLE) that can be represented in Markovian form in an extended phase space. We extend previous results on the geometric ergodicity of this class of GLEs using Lyapunov techniques, which allows us to conclude ergodicity for a large class of GLEs relevant to molecular dynamics applications. The main body of the thesis concerns the numerical discretisation of the GLE in the extended phase space representation. We generalise numerical discretisation schemes which have been previously proposed for the underdamped Langevin equation and which are based on a decomposition of the vector field into a Hamiltonian part and a linear SDE. Certain desirable properties regarding the accuracy of configurational averages of these schemes are inherited in the GLE context. We also rigorously prove geometric ergodicity on bounded domains by showing that a uniform minorisation condition and a uniform Lyapunov condition are satisfied for sufficiently small timestep size. We show that the discretisation schemes which we propose behave consistently in the white-noise and overdamped limits, and hence provide a family of universal integrators for Langevin dynamics. Finally, we consider multiple-time-stepping schemes that make use of a decomposition of the fluctuation-dissipation term into a reversible and a non-reversible part. These schemes are designed to efficiently integrate instances of the GLE whose Markovian representation involves a large number of auxiliary variables or a configuration-dependent fluctuation-dissipation term. We also consider an application of dynamics based on the GLE in the context of large-scale Bayesian inference, as an extension of previously proposed adaptive thermostat methods. In these methods the gradient of the log posterior density is evaluated only on a subset (minibatch) of the whole dataset, randomly selected at each timestep. Incorporating a memory kernel in the adaptive thermostat formulation ensures that time-correlated gradient noise is dissipated in accordance with the fluctuation-dissipation theorem. This allows us to relax the requirement of using i.i.d. minibatches and to explore a variety of minibatch sampling approaches.
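
    As a minimal sketch of the splitting approach described above, and not of the thesis' own schemes, the code below integrates a quasi-Markovian GLE with a single auxiliary variable z per degree of freedom, dq = p dt, dp = -V'(q) dt + lam z dt, dz = -lam p dt - alpha z dt + sqrt(2 alpha / beta) dW, by combining Hamiltonian half-kicks and half-drifts with an exact solve of the linear Ornstein-Uhlenbeck SDE in (p, z), in the spirit of BAOAB-type integrators. The kernel parameters, potential, and all names are illustrative assumptions.

        import numpy as np
        from scipy.linalg import expm, cholesky

        def gle_baoab_setup(h, lam, alpha, beta):
            """Precompute the exact OU propagator and noise Cholesky factor for step size h."""
            M = np.array([[0.0, lam], [-lam, -alpha]])     # linear drift acting on (p, z)
            E = expm(M * h)
            Sigma = (np.eye(2) - E @ E.T) / beta           # transition covariance (stationary covariance = I/beta)
            return E, cholesky(Sigma, lower=True)

        def gle_baoab_step(q, p, z, h, grad_V, E, Lnoise, rng):
            p -= 0.5 * h * grad_V(q)                       # B: half kick
            q += 0.5 * h * p                               # A: half drift
            pz = E @ np.vstack([p, z]) + Lnoise @ rng.normal(size=(2, q.size))   # O: exact OU update
            p, z = pz[0], pz[1]
            q += 0.5 * h * p                               # A: half drift
            p -= 0.5 * h * grad_V(q)                       # B: half kick
            return q, p, z

        # Example: sample a 1-D double-well potential V(q) = (q^2 - 1)^2
        grad_V = lambda q: 4.0 * q * (q ** 2 - 1.0)
        h, lam, alpha, beta = 0.05, 1.0, 2.0, 1.0
        E, Lnoise = gle_baoab_setup(h, lam, alpha, beta)
        rng = np.random.default_rng(2)
        q, p, z = np.zeros(1), np.zeros(1), np.zeros(1)
        samples = []
        for _ in range(20000):
            q, p, z = gle_baoab_step(q, p, z, h, grad_V, E, Lnoise, rng)
            samples.append(q[0])
        print("trajectory average of q^2:", np.mean(np.square(samples)))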