
    Iterated Posterior Linearization Smoother

    This note considers the problem of Bayesian smoothing in nonlinear state-space models with additive noise using Gaussian approximations. Sigma-point approximations to the general Gaussian Rauch-Tung-Striebel smoother are widely used methods for tackling this problem. These algorithms perform statistical linear regression (SLR) of the nonlinear functions considering only the previous measurements. We argue that SLR should be done taking all measurements into account. We propose the iterated posterior linearization smoother (IPLS), an iterated algorithm that performs SLR of the nonlinear functions with respect to the current posterior approximation. The algorithm is demonstrated to outperform conventional Gaussian nonlinear smoothers in two numerical examples.
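
    To make the SLR step concrete, here is a minimal Monte Carlo sketch of statistical linear regression of a nonlinear function with respect to a Gaussian density; the paper itself works with sigma-point approximations, but the moment-matching formulas for the affine fit (A, b, Omega) are the same. All names below are illustrative.

```python
import numpy as np

def slr(f, m, P, n_samples=10_000, rng=None):
    """Statistical linear regression of f with respect to N(m, P).

    Returns (A, b, Omega) such that f(x) is approximated as
    A x + b + e with e ~ N(0, Omega), matching the first two
    moments under the given Gaussian.
    """
    rng = np.random.default_rng(rng)
    X = rng.multivariate_normal(m, P, size=n_samples)   # samples of x
    Z = np.apply_along_axis(f, 1, X)                    # f at each sample
    z_bar = Z.mean(axis=0)                              # E[f(x)]
    Psi = (X - m).T @ (Z - z_bar) / n_samples           # Cov[x, f(x)]
    Phi = (Z - z_bar).T @ (Z - z_bar) / n_samples       # Cov[f(x)]
    A = Psi.T @ np.linalg.inv(P)                        # regression matrix
    b = z_bar - A @ m                                   # offset
    Omega = Phi - A @ P @ A.T                           # linearization error cov
    return A, b, Omega
```

    In the IPLS, this regression is iterated with (m, P) taken from the current smoothed posterior approximation rather than from the prediction alone.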

    Iterated Filters for Nonlinear Transition Models

    A new class of iterated linearization-based nonlinear filters, dubbed dynamically iterated filters, is presented. Contrary to regular iterated filters such as the iterated extended Kalman filter (IEKF), the iterated unscented Kalman filter (IUKF), and the iterated posterior linearization filter (IPLF), dynamically iterated filters also take nonlinearities in the transition model into account. The general filtering algorithm is shown to be, in essence, an iterated Rauch-Tung-Striebel smoother applied locally over one time step. Three distinct versions of the dynamically iterated filters are investigated in particular: analogues of the IEKF, IUKF, and IPLF. The developed algorithms are evaluated on 25 different noise configurations of a tracking problem with a nonlinear transition model and a linear measurement model, a scenario where conventional iterated filters are not useful. Even in this "simple" scenario, the dynamically iterated filters are shown to have superior root mean-squared error performance compared with their respective baselines, the EKF and UKF. In particular, even though the EKF diverges in 22 out of 25 configurations, the dynamically iterated EKF remains stable in 20 out of 25 scenarios, diverging only under high noise. Comment: 8 pages. Accepted to IEEE International Conference on Information Fusion 2023 (FUSION 2023). Copyright 2023 IEEE.
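
    To illustrate the "locally iterated RTS smoother" interpretation, the following is a rough sketch of one time step of a dynamically iterated EKF in the spirit described above, not the paper's exact algorithm: the transition is relinearized around a one-step smoothed estimate of the previous state, while the measurement is relinearized IEKF-style around the current iterate. Function names and the iteration count are illustrative assumptions.

```python
import numpy as np

def dyn_iter_ekf_step(m, P, y, f, F_jac, h, H_jac, Q, R, n_iter=5):
    """One time step of a dynamically iterated EKF (illustrative sketch).

    (m, P): previous filtered moments; y: current measurement;
    f, h: transition and measurement functions with Jacobians F_jac, H_jac.
    """
    m_lin = m.copy()                       # linearization point for f
    m_x = f(m)                             # initial iterate for the new state
    for _ in range(n_iter):
        F = F_jac(m_lin)
        m_pred = f(m_lin) + F @ (m - m_lin)                   # affine prediction
        P_pred = F @ P @ F.T + Q
        H = H_jac(m_x)
        S = H @ P_pred @ H.T + R                              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)                   # Kalman gain
        m_x = m_pred + K @ (y - h(m_x) - H @ (m_pred - m_x))  # IEKF-style update
        P_x = P_pred - K @ S @ K.T
        G = P @ F.T @ np.linalg.inv(P_pred)                   # one-step smoother gain
        m_lin = m + G @ (m_x - m_pred)                        # smoothed previous state
    return m_x, P_x
```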

    Levenberg-Marquardt and Line-Search Extended Kalman Smoothers

    The aim of this article is to present Levenberg–Marquardt and line-search extensions of the classical iterated extended Kalman smoother (IEKS), which has previously been shown to be equivalent to the Gauss–Newton method. The algorithms are derived by rewriting each method's steps in forms that can be efficiently implemented using modified EKS iterations. The resulting algorithms are experimentally shown to have superior convergence properties over the classical IEKS.
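
    The Gauss–Newton connection is easiest to see in batch form, where the smoothing MAP objective is a nonlinear least-squares problem in the stacked state trajectory. The sketch below shows generic Levenberg–Marquardt and line-search steps on such stacked residuals; it is an illustration of the underlying optimization moves under assumed inputs, not the article's efficient EKS-based implementation, which avoids forming the Jacobian explicitly.

```python
import numpy as np

def lm_step(r, J, x, lam):
    """One Levenberg-Marquardt step on stacked (whitened) residuals r with
    Jacobian J at x. lam = 0 recovers the Gauss-Newton step, which is what
    the classical IEKS realizes through EKS recursions."""
    Hd = J.T @ J + lam * np.eye(x.size)        # damped normal equations
    return x + np.linalg.solve(Hd, -J.T @ r)

def line_search_step(cost, r, J, x, alphas=(1.0, 0.5, 0.25, 0.125)):
    """Backtracking line search along the Gauss-Newton direction."""
    dx = np.linalg.solve(J.T @ J, -J.T @ r)
    for a in alphas:
        if cost(x + a * dx) < cost(x):         # accept first improving step
            return x + a * dx
    return x                                   # fall back to the current iterate
```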

    A Probabilistic State Space Model for Joint Inference from Differential Equations and Data

    Mechanistic models with differential equations are a key component of scientific applications of machine learning. Inference in such models is usually computationally demanding, because it involves repeatedly solving the differential equation. The main problem here is that the numerical solver is hard to combine with standard inference techniques. Recent work in probabilistic numerics has developed a new class of solvers for ordinary differential equations (ODEs) that phrase the solution process directly in terms of Bayesian filtering. We here show that this allows such methods to be combined very directly, with conceptual and numerical ease, with latent force models in the ODE itself. It then becomes possible to perform approximate Bayesian inference on the latent force as well as the ODE solution in a single, linear-complexity pass of an extended Kalman filter/smoother, that is, at the cost of computing a single ODE solution. We demonstrate the expressiveness and performance of the algorithm by training, among others, a non-parametric SIRD model on data from the COVID-19 outbreak. Comment: 12 pages (+ 5 pages appendix), 7 figures. In: Advances in Neural Information Processing Systems (NeurIPS 2021).
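
    The filtering view of ODE solving that the paper builds on can be sketched in a few lines for a scalar ODE x' = f(x): put an integrated Wiener process prior on [x, x'] and condition at each grid point on the pseudo-measurement 0 = x' - f(x). The code below is a minimal EK0-style illustration with hypothetical names (the Jacobian of f is ignored in the measurement matrix); it is not the paper's implementation, which additionally infers a latent force.

```python
import numpy as np

def ek0_solve(f, x0, t0, t1, h, q=1.0):
    """Probabilistic ODE solver as a Kalman filter (scalar ODE, EK0 sketch)."""
    A = np.array([[1.0, h], [0.0, 1.0]])          # IWP(1) transition for [x, x']
    Q = q * np.array([[h**3 / 3, h**2 / 2],
                      [h**2 / 2, h       ]])      # process noise
    H = np.array([[0.0, 1.0]])                    # "observe" the derivative x'
    m = np.array([x0, f(x0)])                     # exact initial derivative
    P = np.zeros((2, 2))
    ts, xs = [t0], [m[0]]
    t = t0
    while t < t1 - 1e-12:
        m, P = A @ m, A @ P @ A.T + Q             # predict
        z = m[1] - f(m[0])                        # residual of the ODE law
        S = (H @ P @ H.T).item()                  # innovation variance
        K = (P @ H.T / S).ravel()                 # Kalman gain
        m = m - K * z                             # condition on 0 = x' - f(x)
        P = P - np.outer(K, H @ P)
        t += h
        ts.append(t); xs.append(m[0])
    return np.array(ts), np.array(xs)
```

    For instance, ek0_solve(lambda x: -x, 1.0, 0.0, 5.0, 0.01) should track exp(-t) closely; augmenting the state with a latent force and running an extended Kalman filter/smoother over it gives the joint inference scheme the abstract describes.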

    Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation

    In this paper, we present the optimization formulation of the Kalman filtering and smoothing problems, and use this perspective to develop a variety of extensions and applications. We first formulate classic Kalman smoothing as a least squares problem, highlight its special structure, and show that the classic filtering and smoothing algorithms are equivalent to a particular algorithm for solving this problem. Once this equivalence is established, we present extensions of Kalman smoothing to systems with nonlinear process and measurement models, systems with linear and nonlinear inequality constraints, systems with outliers in the measurements or sudden changes in the state, and systems where the sparsity of the state sequence must be accounted for. All extensions preserve the computational efficiency of the classic algorithms, and most of the extensions are illustrated with numerical examples, which are part of an open-source Kalman smoothing Matlab/Octave package. Comment: 46 pages, 11 figures.
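
    The least-squares formulation is easy to state explicitly for the linear-Gaussian case. The sketch below, with illustrative names, stacks the whitened prior, process, and measurement residuals into one dense least-squares problem whose minimizer coincides with the RTS smoother mean; the classic recursions amount to exploiting the block-tridiagonal structure of this system instead of forming it densely.

```python
import numpy as np

def batch_smooth(A, Q, H, R, m0, P0, ys):
    """Linear-Gaussian MAP smoothing posed as one stacked least-squares problem."""
    n, T = A.shape[0], len(ys)
    Li0 = np.linalg.inv(np.linalg.cholesky(P0))   # whitening factors
    LiQ = np.linalg.inv(np.linalg.cholesky(Q))
    LiR = np.linalg.inv(np.linalg.cholesky(R))
    rows, rhs = [], []
    r = np.zeros((n, n * T)); r[:, :n] = Li0      # prior: Li0 (x_0 - m0)
    rows.append(r); rhs.append(Li0 @ m0)
    for k in range(1, T):                         # process: LiQ (x_k - A x_{k-1})
        r = np.zeros((n, n * T))
        r[:, n * (k - 1):n * k] = -LiQ @ A
        r[:, n * k:n * (k + 1)] = LiQ
        rows.append(r); rhs.append(np.zeros(n))
    for k in range(T):                            # measurements: LiR (y_k - H x_k)
        r = np.zeros((H.shape[0], n * T))
        r[:, n * k:n * (k + 1)] = LiR @ H
        rows.append(r); rhs.append(LiR @ ys[k])
    M, b = np.vstack(rows), np.concatenate(rhs)
    x = np.linalg.lstsq(M, b, rcond=None)[0]      # MAP trajectory
    return x.reshape(T, n)
```

    Robust and sparse variants of the kind the paper develops can then be obtained by swapping the squared norms for, e.g., Huber or l1 penalties on the same residuals.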

    Maximum likelihood estimation of time series models: the Kalman filter and beyond

    The purpose of this chapter is to provide a comprehensive treatment of likelihood inference for state space models. These are a class of time series models relating an observable time series to quantities called states, which are characterized by a simple temporal dependence structure, typically a first-order Markov process. The states sometimes have a substantive interpretation. Key estimation problems in economics concern latent variables such as the output gap, potential output, the non-accelerating-inflation rate of unemployment (NAIRU), core inflation, and so forth. Time-varying volatility, which is quintessential to finance, is an important feature in macroeconomics as well. In the multivariate framework, relevant features can be common to different series, meaning that the driving forces of a particular feature and/or the transmission mechanism are the same. The objective of this chapter is to review the Kalman filter and discuss maximum likelihood inference, starting from the linear Gaussian case and covering the extensions to nonlinear and non-Gaussian frameworks.
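
    In the linear Gaussian case, the likelihood such a chapter builds on follows from the prediction error decomposition: the Kalman filter's innovations are independent Gaussians, so the log-likelihood accumulates over one filter pass. A minimal sketch, assuming a time-invariant model with a measurement at every step and illustrative names:

```python
import numpy as np

def kalman_loglik(A, Q, H, R, m0, P0, ys):
    """Log-likelihood of a linear-Gaussian state-space model via the
    prediction error decomposition; maximizing over the model matrices
    (e.g. with a numerical optimizer) yields maximum likelihood estimates."""
    m, P, ll = m0, P0, 0.0
    for y in ys:
        m, P = A @ m, A @ P @ A.T + Q                  # predict
        e = y - H @ m                                  # innovation
        S = H @ P @ H.T + R                            # innovation covariance
        ll -= 0.5 * (e @ np.linalg.solve(S, e)
                     + np.linalg.slogdet(2 * np.pi * S)[1])
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        m, P = m + K @ e, P - K @ S @ K.T              # update
    return ll
```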