
    Convergence Rates of Gaussian ODE Filters

    A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss–Markov process X, which is then iteratively conditioned on information about ẋ. This article establishes worst-case local convergence rates of order q+1 for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order q in the case of q = 1 and an integrated Brownian motion prior, and analyses how inaccurate information on ẋ coming from approximate evaluations of f affects these rates. Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments, which might indicate their generalizability to q ∈ {2, 3, …}.
    Comment: 26 pages, 5 figures
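    The q = 1 case with an integrated Brownian motion prior can be sketched as an ordinary Kalman predict/update loop on the state (x, ẋ), conditioned at each step on the zeroth-order residual ẋ − f(x) = 0. The following minimal Python sketch is illustrative only (the function name and the diffusion parameter q2 are assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_ode_filter(f, x0, t_span, h, q2=1.0):
    """Toy Gaussian ODE filter, q = 1: integrated Brownian motion prior
    on (x, x'), conditioned on the residual x' - f(x) = 0 at each step."""
    A = np.array([[1.0, h], [0.0, 1.0]])                      # IBM transition
    Q = q2 * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])  # process noise
    H = np.array([[0.0, 1.0]])                                # observe x'
    m = np.array([x0, f(x0)])                                 # initial mean
    P = np.zeros((2, 2))                                      # exact init
    n = int(round((t_span[1] - t_span[0]) / h))
    ts = np.linspace(t_span[0], t_span[1], n + 1)
    xs = [m[0]]
    for _ in ts[1:]:
        m = A @ m                       # predict mean
        P = A @ P @ A.T + Q             # predict covariance
        z = f(m[0]) - H @ m             # residual: f at predicted x
        S = H @ P @ H.T                 # innovation covariance
        K = P @ H.T / S                 # Kalman gain
        m = m + (K * z).ravel()         # update mean
        P = P - K @ H @ P               # update covariance
        xs.append(m[0])
    return ts, np.array(xs)
```

On a scalar test problem such as ẋ = −x the recovered trajectory tracks the exact solution at the first-order rate the abstract describes for q = 1.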

    Probabilistic ODE Solvers with Runge-Kutta Means

    Runge-Kutta methods are the classic family of solvers for ordinary differential equations (ODEs), and the basis for the state of the art. Like most numerical methods, they return point estimates. We construct a family of probabilistic numerical methods that instead return a Gauss-Markov process defining a probability distribution over the ODE solution. In contrast to prior work, we construct this family such that posterior means match the outputs of the Runge-Kutta family exactly, thus inheriting their proven good properties. Remaining degrees of freedom not identified by the match to Runge-Kutta are chosen such that the posterior probability measure fits the observed structure of the ODE. Our results shed light on the structure of Runge-Kutta solvers from a new direction, provide a richer, probabilistic output, have low computational cost, and raise new research questions.
    Comment: 18 pages (9 page conference paper, plus supplements); appears in Advances in Neural Information Processing Systems (NIPS), 2014
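    For concreteness, the classical baseline discussed above returns only a point estimate per step, with no attached uncertainty. A minimal, self-contained fourth-order Runge-Kutta sketch (purely illustrative; not the paper's code, which matches specific lower-order RK families in posterior mean):

```python
import numpy as np

def rk4_step(f, t, x, h):
    # One classical fourth-order Runge-Kutta step: a single point estimate,
    # with no quantification of the remaining truncation error.
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_solve(f, x0, t_span, h):
    # Fixed-step integration from t_span[0] to t_span[1].
    t, x = t_span[0], x0
    while t < t_span[1] - 1e-12:
        x = rk4_step(f, t, x, h)
        t += h
    return x
```

The probabilistic construction in the paper augments exactly this kind of point output with a Gauss-Markov posterior over the solution.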

    The adaptive patched cubature filter and its implementation

    There are numerous contexts in which one wishes to describe the state of a randomly evolving system. Effective solutions combine models that quantify the underlying uncertainty with available observational data to form scientifically reasonable estimates of the uncertainty in the system state. Stochastic differential equations are often used to model the underlying system mathematically. The Kusuoka-Lyons-Victoir (KLV) approach is a higher-order particle method for approximating the weak solution of a stochastic differential equation; it uses a weighted set of scenarios to approximate the evolving probability distribution to a high order of accuracy. The algorithm can be performed by integrating along a number of carefully selected bounded-variation paths. Iterated application of the KLV method tends to increase the number of particles. This can be addressed and, together with local dynamic recombination, which simplifies the support of the discrete measure without harming the accuracy of the approximation, makes the KLV method suitable for the filtering problem in contexts where one wishes to maintain an accurate description of the ever-evolving conditioned measure. In addition to the alternating application of the KLV method and recombination, we exploit the smooth nature of the likelihood function and the high-order accuracy of the approximations to move some of the particles immediately to the next observation time and to build into the algorithm a form of automatic high-order adaptive importance sampling.
    Comment: to appear in Communications in Mathematical Sciences. arXiv admin note: substantial text overlap with arXiv:1311.675
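    The recombination idea — shrinking the support of a weighted discrete measure while preserving its low-order moments — can be illustrated in one dimension with a deliberately simplified sketch. The function below is a toy stand-in (real KLV recombination preserves moments to much higher degree and operates on multi-dimensional supports):

```python
import numpy as np

def recombine_two_point(points, weights):
    """Toy 1-D 'recombination': collapse a weighted particle cloud onto a
    symmetric two-point measure with the same total mass, mean, and
    variance (so moments of degree 0, 1, 2, and 3 about the mean match)."""
    x = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    mass = w.sum()
    mean = (w @ x) / mass
    std = np.sqrt((w @ (x - mean) ** 2) / mass)
    # two equally weighted points at mean +/- one standard deviation
    new_points = np.array([mean - std, mean + std])
    new_weights = np.array([mass / 2, mass / 2])
    return new_points, new_weights
```

Any higher-order scheme replaces the two-point target with a larger support chosen so that higher moments (or polynomial integrals) are preserved as well.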