
    On Adaptive Multiple-Shooting Method for Stochastic Multi-Point Boundary Value Problems

    This paper presents an adaptive multiple-shooting method to solve stochastic multi-point boundary value problems. The heuristic to choose the shooting points is based on separating the effects of the drift and diffusion terms and comparing the corresponding solution components with a pre-specified initial approximation. Having obtained the mesh points, we solve the underlying stochastic differential equation on each shooting interval with a first-order strongly-convergent stochastic Runge-Kutta method. We illustrate the effectiveness of this approach on one-dimensional and two-dimensional test problems and compare our results with other non-adaptive alternative techniques proposed in the literature. Comment: 18 pages, 2 figures
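    The specific first-order strongly-convergent stochastic Runge-Kutta scheme used in the paper is not reproduced in the abstract. As an illustrative stand-in with the same strong order for scalar SDEs, a minimal Milstein step can be sketched as follows (the drift, diffusion, and test equation here are hypothetical examples, not taken from the paper):

    ```python
    import numpy as np

    def milstein(mu, sigma, dsigma, x0, t0, t1, n, rng):
        """Strong order-1 Milstein scheme for the scalar SDE
        dX = mu(X) dt + sigma(X) dW on [t0, t1] with n uniform steps."""
        dt = (t1 - t0) / n
        x = x0
        for _ in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))
            # Euler-Maruyama step plus the Milstein correction term
            x = (x + mu(x) * dt + sigma(x) * dw
                 + 0.5 * sigma(x) * dsigma(x) * (dw**2 - dt))
        return x

    # Hypothetical test equation: geometric Brownian motion
    # dX = 0.05 X dt + 0.2 X dW, X(0) = 1, on [0, 1]
    rng = np.random.default_rng(0)
    xT = milstein(lambda x: 0.05 * x, lambda x: 0.2 * x,
                  lambda x: 0.2, 1.0, 0.0, 1.0, 1000, rng)
    ```

    In a multiple-shooting setting, a scheme like this would be applied independently on each shooting interval, with the interval endpoints matched by the outer boundary-value solver.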

    Adaptive Weak Approximation of Diffusions with Jumps

    This work develops Monte Carlo Euler adaptive time stepping methods for the weak approximation problem of jump diffusion driven stochastic differential equations. The main result is the derivation of a new expansion for the computational error, with a computable leading order term in a posteriori form, based on stochastic flows and discrete dual backward problems, which extends the results in [STZ]. These expansions lead to efficient and accurate computation of error estimates. Adaptive algorithms for either stochastic time steps or quasi-deterministic time steps are described. Numerical examples show the performance of the proposed error approximation and of the described adaptive time-stepping methods. Comment: 27 pages

    Lower Error Bounds for Strong Approximation of Scalar SDEs with non-Lipschitzian Coefficients

    We study pathwise approximation of scalar stochastic differential equations at a single time point or globally in time by means of methods that are based on finitely many observations of the driving Brownian motion. We prove lower error bounds in terms of the average number of evaluations of the driving Brownian motion that hold for every such method under rather mild assumptions on the coefficients of the equation. The underlying simple idea of our analysis is as follows: the lower error bounds known for equations with coefficients that have sufficient regularity globally in space should still apply in the case of coefficients that have this regularity in space only locally, in a small neighborhood of the initial value. Our results apply to a wide variety of equations with coefficients that are not globally Lipschitz continuous in space, including Cox-Ingersoll-Ross processes, equations with superlinearly growing coefficients, and equations with discontinuous coefficients. In many of these cases the resulting lower error bounds even turn out to be sharp.

    Towards Automatic Global Error Control: Computable Weak Error Expansion for the Tau-Leap Method

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for the numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie algorithm, or the stochastic simulation algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with a computable leading order term.
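    The tau-leap discretization the abstract refers to can be sketched in a few lines; this is a generic textbook version applied to a hypothetical birth-death process, not the authors' implementation or their error estimator:

    ```python
    import numpy as np

    def tau_leap(x0, rates, stoich, tau, T, rng):
        """Tau-leap simulation of a pure jump process: over each step of
        length tau, each reaction channel j fires Poisson(a_j(x) * tau)
        times, with the state frozen at the start of the step."""
        x, t = float(x0), 0.0
        while t < T:
            a = rates(x)                      # propensities a_j(x)
            fires = rng.poisson(a * tau)      # firings per channel this step
            x = max(x + stoich @ fires, 0.0)  # update state, keep nonnegative
            t += tau
        return x

    # Hypothetical birth-death process: birth rate 10, death rate 0.1 * x,
    # so the mean steady-state population is around 100.
    rng = np.random.default_rng(1)
    rates = lambda x: np.array([10.0, 0.1 * x])
    stoich = np.array([1.0, -1.0])  # birth adds one, death removes one
    x_end = tau_leap(5.0, rates, stoich, 0.05, 200.0, rng)
    ```

    Exact simulation (the Gillespie algorithm) instead draws one jump at a time; the paper's a priori estimate compares the work of the two approaches across propensity regimes.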

    A Separation-based Approach to Data-based Control for Large-Scale Partially Observed Systems

    This paper studies the partially observed stochastic optimal control problem for systems whose state dynamics are governed by partial differential equations (PDEs), which leads to an extremely large-scale problem. First, an open-loop deterministic trajectory optimization problem is solved using a black-box simulation model of the dynamical system. Next, a Linear Quadratic Gaussian (LQG) controller is designed for the nominal trajectory-dependent linearized system, which is identified using input-output experimental data consisting of the impulse responses of the optimized nominal system. A computational nonlinear heat example is used to illustrate the performance of the proposed approach. Comment: arXiv admin note: text overlap with arXiv:1705.09761, arXiv:1707.0309

    The optimal free knot spline approximation of stochastic differential equations with additive noise

    In this paper we analyse the pathwise approximation of stochastic differential equations by polynomial splines with free knots. The pathwise distance between the solution and its approximation is measured globally on the unit interval in the L∞-norm, and we study the expectation of this distance. For equations with additive noise we obtain sharp lower and upper bounds for the minimal error in the class of arbitrary spline approximation methods which use k free knots. The optimal order is achieved by an approximation method X̂_k^†, which combines an Euler scheme on a coarse grid with an optimal spline approximation of the Brownian motion W with k free knots. Comment: arXiv admin note: text overlap with arXiv:1306.445

    Stochastic Gradient Descent as Approximate Bayesian Inference

    Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results. (1) We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions. (2) We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models. (3) We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly. (4) We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates. Finally (5), we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler. Comment: 35 pages, published version (JMLR 2017)
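    A minimal sketch of the Markov-chain view behind point (1): running SGD with a fixed learning rate on a toy Gaussian mean-estimation problem and treating the iterates after burn-in as approximate posterior samples. The model, learning rate, and batch size here are illustrative choices, not the paper's tuned values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.normal(1.0, 1.0, size=1000)  # data; the posterior over the mean
                                         # is approximately N(mean(y), 1/1000)

    def constant_sgd_samples(y, eta, batch, iters, rng):
        """Run SGD with a constant learning rate on the squared loss for a
        Gaussian mean; the iterates form a Markov chain whose stationary
        distribution can be matched to the posterior by tuning eta."""
        theta = 0.0
        samples = []
        for k in range(iters):
            idx = rng.integers(0, len(y), size=batch)
            # Unbiased stochastic gradient of sum_i (theta - y_i)^2 / 2
            grad = len(y) / batch * np.sum(theta - y[idx])
            theta -= eta * grad
            if k > iters // 2:           # discard burn-in, keep the rest
                samples.append(theta)
        return np.array(samples)

    s = constant_sgd_samples(y, eta=1e-4, batch=32, iters=20000, rng=rng)
    ```

    The chain's stationary mean sits near the data mean, while its spread is controlled by the learning rate and minibatch noise; the paper shows how to choose these so the stationary distribution best matches the posterior in KL divergence.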

    Numerical low-rank approximation of matrix differential equations

    The efficient numerical integration of large-scale matrix differential equations is a topical problem in numerical analysis and of great importance in many applications. Standard numerical methods applied to such problems require an undue amount of computing time and memory, in general. Based on a dynamical low-rank approximation of the solution, a new splitting integrator is proposed for a quite general class of stiff matrix differential equations. This class comprises differential Lyapunov and differential Riccati equations that arise from spatial discretizations of partial differential equations. The proposed integrator handles stiffness in an efficient way, and it preserves the symmetry and positive semidefiniteness of solutions of differential Lyapunov equations. Numerical examples that illustrate the benefits of this new method are given. In particular, numerical results for the efficient simulation of the weather phenomenon El Niño are presented.
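    The splitting integrator proposed in the paper is not reproduced in the abstract. As a simpler illustration of the dynamical low-rank idea, the following sketch integrates a toy differential Lyapunov equation with explicit Euler steps followed by rank truncation, a related but distinct "step-truncation" scheme; all sizes and coefficients are hypothetical:

    ```python
    import numpy as np

    def truncate(A, r):
        """Best rank-r approximation of A via truncated SVD."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    def step_truncation_integrate(F, A0, r, dt, n):
        """Integrate dA/dt = F(A) with explicit Euler steps, retracting
        the iterate back to the rank-r manifold after every step."""
        A = truncate(A0, r)
        for _ in range(n):
            A = truncate(A + dt * F(A), r)
        return A

    # Toy differential Lyapunov equation dA/dt = L A + A L^T + Q
    n_dim = 20
    L = -np.eye(n_dim) + 0.1 * np.diag(np.ones(n_dim - 1), 1)  # stable L
    Q = np.outer(np.ones(n_dim), np.ones(n_dim)) / n_dim       # rank-1 source
    F = lambda A: L @ A + A @ L.T + Q
    A_end = step_truncation_integrate(F, np.zeros((n_dim, n_dim)),
                                      r=5, dt=0.01, n=200)
    ```

    Note the contrast with the paper's approach: explicit Euler forces a stiffness-limited step size, whereas the proposed splitting integrator is designed to handle stiffness efficiently while preserving symmetry and positive semidefiniteness.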

    An efficient, globally convergent method for optimization under uncertainty using adaptive model reduction and sparse grids

    This work introduces a new method to efficiently solve optimization problems constrained by partial differential equations (PDEs) with uncertain coefficients. The method leverages two sources of inexactness that trade accuracy for speed: (1) stochastic collocation based on dimension-adaptive sparse grids (SGs), which approximates the stochastic objective function with a limited number of quadrature nodes, and (2) projection-based reduced-order models (ROMs), which generate efficient approximations to PDE solutions. These two sources of inexactness lead to inexact objective function and gradient evaluations, which are managed by a trust-region method that guarantees global convergence by adaptively refining the sparse grid and reduced-order model until a proposed error indicator drops below a tolerance specified by trust-region convergence theory. A key feature of the proposed method is that the error indicator, which accounts for errors incurred by both the sparse grid and the reduced-order model, must be only an asymptotic error bound, i.e., a bound that holds up to an arbitrary constant that need not be computed. This enables the method to be applicable to a wide range of problems, including those where sharp, computable error bounds are not available; this distinguishes the proposed method from previous works. Numerical experiments performed on a model problem from optimal flow control under uncertainty verify global convergence of the method and demonstrate the method's ability to outperform previously proposed alternatives. Comment: 27 pages, 6 figures, 1 table

    Model Error in Data Assimilation

    This chapter provides various perspectives on an important challenge in data assimilation: model error. While the overall goal is to understand the implications of model error of any type in data assimilation, we emphasize the effect of model error from unresolved scales. In particular, connections to related subjects that appear under different names in applied mathematics, such as the Mori-Zwanzig formalism and the averaging method, are discussed in the hope that the existing methods become more accessible and can eventually be used appropriately. We classify existing methods into two groups: statistical methods, which directly estimate the low-order model error statistics, and stochastic parameterizations, which implicitly estimate all statistics by imposing stochastic models beyond the traditional unbiased white-noise Gaussian processes. We provide theory to justify why stochastic parameterization, as one of the main themes of this book, is an adequate tool for mitigating model error in data assimilation. Finally, we discuss the challenges of extending this approach to general applications and provide an alternative nonparametric approach. Comment: This note is prepared for a chapter in "Nonlinear and Stochastic Climate Dynamics", Eds. C.L.E. Franzke and T.J. O'Kane, Cambridge University Press