    Maximum likelihood smoothing estimation in state-space models: An incomplete-information based approach

    This paper revisits the classical works of Rauch (1963) and Rauch et al. (1965) and develops a novel method for maximum likelihood (ML) smoothing estimation from incomplete information/data of stochastic state-space systems. Score functions and conditional observed information matrices of incomplete data are introduced and their distributional identities are established. Using these identities, the ML smoother $\widehat{x}_{k\vert n}^s = \argmax_{x_k} \log f(x_k, \widehat{x}_{k+1\vert n}^s, y_{0:n}\vert\theta)$, $k \leq n-1$, is presented. The result shows that the ML smoother gives an estimate of the state $x_k$ with closer adherence to the log-likelihood and smaller standard errors than the ML state estimator $\widehat{x}_k = \argmax_{x_k} \log f(x_k, y_{0:k}\vert\theta)$, with $\widehat{x}_{n\vert n}^s = \widehat{x}_n$. Recursive estimation is given in terms of an EM-gradient-particle algorithm, which extends the work of \cite{Lange} to ML smoothing estimation. The algorithm has an explicit iteration update, which the EM algorithm for smoothing of \cite{Ramadan} lacks. A sequential Monte Carlo method is developed for the evaluation of the score function and the observed information matrices. A recursive equation for the covariance matrix of the estimation error is derived to calculate the standard errors. In the case of linear systems, the method shows that the Rauch-Tung-Striebel (RTS) smoother is a fully efficient smoothing state estimator whose covariance matrix coincides with the Cramér-Rao lower bound, the inverse of the expected information matrix. Furthermore, the RTS smoother coincides with the Kalman filter but has a smaller covariance matrix. Numerical studies are performed, confirming the accuracy of the main results.
    Comment: 3 figures
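    For the linear-Gaussian case discussed above, the ML smoother reduces to the classical RTS recursion: a Kalman forward pass followed by a backward correction. The following minimal NumPy sketch illustrates that recursion; the model matrices A, H, Q, R and the priors x0, P0 are illustrative placeholders, not quantities from the paper.

```python
# Minimal sketch of a Kalman filter followed by a Rauch-Tung-Striebel
# (RTS) backward pass for a linear-Gaussian state-space model:
#   x_k = A x_{k-1} + w_k,  w_k ~ N(0, Q)
#   y_k = H x_k     + v_k,  v_k ~ N(0, R)
# All matrix names are illustrative placeholders, not from the paper.
import numpy as np

def kalman_rts(y, A, H, Q, R, x0, P0):
    n, dx = len(y), x0.size
    xf = np.zeros((n, dx)); Pf = np.zeros((n, dx, dx))  # filtered
    xp = np.zeros((n, dx)); Pp = np.zeros((n, dx, dx))  # predicted
    x, P = x0, P0
    for k in range(n):
        # predict one step ahead
        x, P = A @ x, A @ P @ A.T + Q
        xp[k], Pp[k] = x, P
        # update with observation y_k
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y[k] - H @ x)
        P = P - K @ H @ P
        xf[k], Pf[k] = x, P
    # RTS backward smoothing pass
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(n - 2, -1, -1):
        G = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])  # smoother gain
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
    return xs, Ps
```

    The smoothed covariances Ps[k] produced by the backward pass are the quantities the paper identifies, for linear systems, with the Cramér-Rao lower bound; they are never larger than the filtered covariances Pf[k].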

    Random finite sets in multi-target tracking - efficient sequential MCMC implementation

    Over the last few decades, multi-target tracking (MTT) has proved to be a challenging and attractive research topic. MTT applications span a wide variety of disciplines, including robotics, radar/sonar surveillance, computer vision and biomedical research. The primary focus of this dissertation is to develop an effective and efficient multi-target tracking algorithm dealing with an unknown and time-varying number of targets. The emerging and promising Random Finite Set (RFS) framework provides a rigorous foundation for optimal Bayes multi-target tracking. In contrast to traditional approaches, the collection of individual targets is treated as a set-valued state. The intent of this dissertation is two-fold: first, to assert that the RFS framework is not only a natural, elegant and rigorous foundation, but also leads to practical, efficient and reliable algorithms for Bayesian multi-target tracking, and second, to provide several novel RFS-based tracking algorithms suited to the specific Track-Before-Detect (TBD) surveillance application. One main contribution of this dissertation is a rigorous derivation and practical implementation of a novel algorithm well suited to multi-target tracking problems for a given cardinality. The proposed Interacting Population-based MCMC-PF algorithm makes use of several Metropolis-Hastings samplers running in parallel, which interact through genetic variation. Another key contribution concerns the design and implementation of two novel algorithms to handle a varying number of targets. The first approach exploits Reversible Jumps. The second approach is built upon the concepts of labeled RFSs and multiple cardinality hypotheses. The performance of the proposed algorithms is demonstrated in practical scenarios and shown to significantly outperform the conventional multi-target PF in terms of track accuracy and consistency. The final contribution seeks to exploit external information to increase the performance of the surveillance system. In multi-target scenarios, kinematic constraints from the interaction of targets with their environment or other targets can restrict target motion. Such motion constraint information is integrated by using a fixed-lag smoothing procedure, named the Knowledge-Based Fixed-Lag Smoother (KB-Smoother). The proposed IP-MCMC-PF/KB-Smoother combination yields enhanced tracking performance.
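    To make the MCMC-PF idea concrete, the sketch below shows a single generic Metropolis-Hastings move on a joint multi-target state of fixed cardinality. It is a simplified illustration, not the dissertation's IP-MCMC-PF; the functions log_likelihood and motion_logpdf are hypothetical placeholders for the application-specific TBD likelihood and the target motion model.

```python
# Minimal sketch of one random-walk Metropolis-Hastings move inside an
# MCMC-based particle filter for a fixed number of targets.
# `log_likelihood` and `motion_logpdf` are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def mh_move(state, prev_state, z, log_likelihood, motion_logpdf, step=0.1):
    """One MH step on the joint multi-target state.

    state      : (n_targets, state_dim) current sample x_k
    prev_state : (n_targets, state_dim) sample x_{k-1} it descends from
    z          : the measurement (e.g. a raw sensor frame in TBD)
    """
    proposal = state + step * rng.standard_normal(state.shape)
    # Target density is proportional to p(z | x_k) * p(x_k | x_{k-1});
    # the random-walk proposal is symmetric, so it cancels in the ratio.
    log_accept = (log_likelihood(z, proposal) + motion_logpdf(proposal, prev_state)
                  - log_likelihood(z, state) - motion_logpdf(state, prev_state))
    if np.log(rng.random()) < log_accept:
        return proposal, True
    return state, False
```

    In an interacting-population setting such as the one described above, several such chains would run in parallel and occasionally exchange or recombine states, by analogy with genetic variation.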

    An ensemble framework for time delay synchronisation

    Synchronisation theory is based on a method that tries to synchronise a model with the true evolution of a system via the observations. In practice, an extra term is added to the model equations that hampers the growth of instabilities transversal to the synchronisation manifold; there is therefore a very close connection between synchronisation and data assimilation. Recently, synchronisation with time-delayed observations has been proposed, in which observations at future times are used to help synchronise a system that does not synchronise using only present observations, with remarkable success. Unfortunately, these schemes are limited to small-dimensional problems. In this paper, we lift that restriction by proposing an ensemble-based synchronisation scheme. Tests were performed using the Lorenz96 model for 20-, 100- and 1000-dimensional systems. Results show global synchronisation errors stabilising at values at least an order of magnitude lower than the observation errors, suggesting that the scheme is a promising tool to steer model states to the truth. While this framework is not a complete data assimilation method, we develop this methodology as a potential choice for the proposal density in a more comprehensive data assimilation method, such as a fully nonlinear particle filter.
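    As a rough illustration of the synchronisation idea described above, the sketch below adds a nudging term to the Lorenz96 model that pulls the observed components of the model state toward the observations. The forcing F and coupling strength K are illustrative values, not taken from the paper, and the ensemble-based and time-delay aspects of the proposed scheme are omitted.

```python
# Minimal sketch of synchronisation (nudging) in the Lorenz96 model:
# the model is driven toward the observations by an extra coupling
# term K * (y_obs - x) on the observed components. F and K are
# illustrative values, not from the paper.
import numpy as np

def lorenz96_tendency(x, F=8.0):
    # d x_i / dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def synchronised_step(x, y_obs, obs_idx, dt=0.01, K=5.0):
    """One Euler step of the nudged model.

    x       : model state (length-N array)
    y_obs   : observed values at the indices in obs_idx
    obs_idx : which components of the state are observed
    """
    dx = lorenz96_tendency(x)
    dx[obs_idx] += K * (y_obs - x[obs_idx])  # synchronisation term
    return x + dt * dx
```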

    Nonlinear data assimilation using synchronisation in a particle filter

    Current data assimilation methodologies still face problems in strongly nonlinear systems. Particle filters are a promising solution, providing a representation of the model probability density function (pdf) by a discrete set of particles. To allow a particle filter to work in high-dimensional systems, the freedom in choosing the proposal density is a useful tool to explore. A potential choice of proposal density comes from synchronisation theory, in which one tries to synchronise the model with the true evolution of a system using one-way coupling via the observations, by adding an extra term to the model equations that controls the growth of instabilities transversal to the synchronisation manifold. Efficient synchronisation is possible in low-dimensional systems, but these schemes are not well suited to high-dimensional settings. The first part of this thesis introduces a new scheme, ensemble-based synchronisation, that can handle high-dimensional systems. A detailed description of the formulation is presented and extensive experiments with the nonlinear Lorenz96 model are performed. Successful results are obtained, and an analysis of the usefulness of the scheme is made, motivating a powerful combination with a particle filter. In the second part, the ensemble synchronisation scheme is used as a proposal density in two different particle filters: the Implicit Equal-Weights Particle Filter and the Equivalent-Weights Particle Filter. Both methodologies avoid filter degeneracy by construction. The proposed formulation and its implementation are described in detail. Tests using the Lorenz96 model for a 1000-dimensional system show qualitatively reasonable results, where particles follow the truth, both for observed and unobserved variables. Further tests in the 2-D barotropic vorticity model were also performed for a grid of up to 16,384 variables, again showing successful results, with estimated errors consistent with the true errors. The behaviour of the two schemes is described and their advantages and issues exposed, as this is the first comparison made between the two filters. Overall, the results suggest that the combination of ensemble synchronisation with a particle filter is a promising solution for high-dimensional nonlinear problems in the geosciences, connecting the synchronisation field to data assimilation in a very direct way.
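    When a particle filter draws particles from such a synchronisation-based proposal rather than from the model transition density, the importance weights must compensate for the difference. A minimal sketch of that standard weight update is given below; the log-density functions are hypothetical placeholders, and the equal-weights constructions of the two filters named above are not reproduced here.

```python
# Minimal sketch of the importance-weight update when particles are
# drawn from a proposal q(x_k | x_{k-1}, y_k) instead of the model
# transition density. All log-density functions are placeholders.
import numpy as np

def update_log_weights(logw, x_new, x_prev, y,
                       log_trans, log_obs, log_proposal):
    """log w_k = log w_{k-1} + log p(y|x) + log p(x|x_prev) - log q(x|x_prev,y)"""
    logw = (logw + log_obs(y, x_new) + log_trans(x_new, x_prev)
            - log_proposal(x_new, x_prev, y))
    return logw - np.max(logw)  # stabilise before exponentiating

def effective_sample_size(logw):
    # Degeneracy diagnostic: resample when the ESS drops too low.
    w = np.exp(logw - np.max(logw))
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)
```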

    Long-Term Localization for Self-Driving Cars

    Long-term localization is hard due to changing conditions, while relative localization within time sequences is much easier. To achieve long-term localization in a sequential setting, such as for self-driving cars, relative localization should be used to the fullest extent, whenever possible. This thesis presents solutions and insights both for long-term sequential visual localization and for localization using global navigation satellite systems (GNSS), that push us closer to the goal of accurate and reliable localization for self-driving cars. It addresses the question: how can accurate and robust, yet cost-effective, long-term localization for self-driving cars be achieved? Starting from this question, the thesis explores how existing sensor suites for advanced driver-assistance systems (ADAS) can be used most efficiently, and how landmarks in maps can be recognized and used for localization even after severe changes in appearance. The findings show that:
    * State-of-the-art ADAS sensors are insufficient to meet the requirements for localization of a self-driving car in less than ideal conditions. GNSS and visual localization are identified as areas to improve.
    * Highly accurate relative localization with no convergence delay is possible by using time-relative GNSS observations with a single-band receiver and no base stations.
    * Sequential semantic localization is identified as a promising focus point for further research, based on a benchmark study comparing state-of-the-art visual localization methods in challenging autonomous driving scenarios, including day-to-night and seasonal changes.
    * A novel sequential semantic localization algorithm improves accuracy while significantly reducing map size compared to traditional methods based on matching of local image features.
    * Improvements for semantic segmentation in challenging conditions can be made efficiently by automatically generating pixel correspondences between images from a multitude of conditions and enforcing a consistency constraint during training.
    * A segmentation algorithm with automatically defined and more fine-grained classes improves localization performance.
    * The performance advantage seen in single-image localization for modern local image features, when compared to traditional ones, is all but erased when considering sequential data with odometry, thus encouraging future research to focus more on sequential localization rather than pure single-image localization.