
    Cooperative multi-sensor tracking of vulnerable road users in the presence of missing detections

    This paper presents a vulnerable road user (VRU) tracking algorithm capable of handling noisy and missing detections from heterogeneous sensors. We propose a cooperative fusion algorithm that matches and reinforces radar and camera detections using their proximity and positional uncertainty. The belief in the existence and position of objects is then maximized by temporal integration of the fused detections in a multi-object tracker. By switching between observation models, the tracker adapts to the detection noise characteristics, making it robust to individual sensor failures. The main novelty of this paper is an improved imputation sampling function for updating the state when detections are missing. The proposed function uses a likelihood without association that is conditioned on the sensor information instead of the sensor model. The benefits of the proposed solution are two-fold: first, particle updates become computationally tractable; second, the problem of imputing samples from a state that is predicted without an associated detection is bypassed. Experimental evaluation shows a significant improvement in both detection and tracking performance over multiple control algorithms. In low-light situations, the cooperative fusion outperforms intermediate fusion by as much as 30%, while the gains in tracking performance are most significant in complex traffic scenes.
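    The matching-and-reinforcement step lends itself to a short illustration. The following is a minimal sketch, not the paper's exact formulation: the function name `fuse_detections`, the 3-sigma Mahalanobis gate, and the information-form (precision-weighted) fusion are our assumptions about how proximity and positional uncertainty could drive the matching.

```python
import numpy as np

def fuse_detections(radar, camera, gate=3.0):
    """Match radar and camera detections by Mahalanobis proximity and
    fuse matched pairs with precision-weighted averaging (a sketch, not
    the paper's algorithm).

    radar, camera: lists of (mean 2-vector, 2x2 covariance) tuples.
    Returns fused (mean, covariance) pairs; unmatched detections pass through.
    """
    fused, used = [], set()
    for zr, Pr in radar:
        best, best_d = None, gate
        for j, (zc, Pc) in enumerate(camera):
            if j in used:
                continue
            S = Pr + Pc  # combined positional uncertainty of the pair
            d = np.sqrt((zr - zc) @ np.linalg.solve(S, zr - zc))
            if d < best_d:
                best, best_d = j, d
        if best is None:
            fused.append((zr, Pr))  # radar-only detection passes through
        else:
            zc, Pc = camera[best]
            used.add(best)
            # information-form fusion: precision-weighted mean reinforces
            # the matched pair and tightens its covariance
            Ir, Ic = np.linalg.inv(Pr), np.linalg.inv(Pc)
            P = np.linalg.inv(Ir + Ic)
            z = P @ (Ir @ zr + Ic @ zc)
            fused.append((z, P))
    fused += [c for j, c in enumerate(camera) if j not in used]
    return fused
```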

    Multiple-imputation-particle-filtering for Uncertainty Characterization in Battery State-of-Charge Estimation Problems with Missing Measurement Data: Performance Analysis and Impact on Prognostic Algorithms

    The implementation of particle-filtering-based algorithms for state estimation often has to deal with missing observations. An efficient design requires an appropriate methodology for real-time uncertainty characterization within the estimation process, incorporating knowledge from other available sources of information. This article analyzes this problem and presents preliminary results for a multiple imputation strategy that improves the performance of particle-filtering-based state-of-charge (SOC) estimators for lithium-ion (Li-Ion) battery cells. The proposed uncertainty characterization scheme is tested and validated in a case study where the state-space model requires both voltage and discharge current measurements to estimate the SOC. A sudden disconnection of the battery voltage sensor is assumed to cause a significant loss of data. Results show that the multiple-imputation particle filter allows reasonable characterization of uncertainty bounds for state estimates, even while the voltage sensor remains disconnected. Furthermore, once voltage measurements become available again, the uncertainty bounds adjust to levels comparable to the case where no data were lost. As state estimates are used as initial conditions for battery end-of-discharge (EoD) prognosis modules, we also study how these multiple-imputation algorithms affect the quality of EoD estimates.
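    To make the multiple-imputation update concrete, here is a minimal sketch of a single measurement step. The function name `mipf_update`, the Gaussian voltage-noise model, and the way imputations are drawn from the particle-predictive distribution are our assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mipf_update(particles, weights, voltage, predict_v, sigma_v, n_imp=10):
    """One measurement update of a multiple-imputation particle filter
    (illustrative sketch).

    particles: (N,) array of SOC hypotheses; weights: (N,) normalized;
    predict_v maps SOC -> expected terminal voltage. When the voltage
    reading is missing (None), each particle's weight is averaged over
    n_imp imputed voltages drawn from the particle-predictive
    distribution instead of a single observed value.
    """
    v_hat = predict_v(particles)
    if voltage is not None:
        like = np.exp(-0.5 * ((voltage - v_hat) / sigma_v) ** 2)
    else:
        # impute plausible voltages from the predictive measurement
        # distribution implied by the current particle cloud
        imputed = (rng.choice(v_hat, size=n_imp, p=weights)
                   + sigma_v * rng.standard_normal(n_imp))
        like = np.exp(
            -0.5 * ((imputed[None, :] - v_hat[:, None]) / sigma_v) ** 2
        ).mean(axis=1)
    w = weights * like
    return w / w.sum()
```

    Averaging the likelihood over several imputations is what keeps the uncertainty bounds honest while the sensor is disconnected: no single imputed value is allowed to sharpen the posterior artificially.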

    On particle filters applied to electricity load forecasting

    We are interested in the online prediction of electricity load, within the Bayesian framework of dynamic models. We offer a review of sequential Monte Carlo methods and provide the calculations needed for the derivation of so-called particle filters. We also discuss the practical issues arising from their use, and some of the variants proposed in the literature to deal with them, giving detailed algorithms whenever possible for easy implementation. We propose an additional step to make basic particle filters more robust to outlying observations. Finally, we use such a particle filter to estimate a state-space model that includes exogenous variables in order to forecast the electricity load for the customers of the French electricity company Électricité de France, and discuss the various results obtained.
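    The robustification step against outlying observations could take many forms; one common choice is to replace the Gaussian likelihood with a heavy-tailed one. The sketch below assumes that mechanism and a toy load model with a temperature covariate; `particle_filter_step`, the `beta` sensitivity, and the Student-t likelihood are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(x, w, y, temp, q=0.05, r=0.5, nu=4.0):
    """One bootstrap-filter step for a toy load model with an exogenous
    temperature covariate: load_t = x_t + beta * temp_t + noise.
    A Student-t likelihood (nu degrees of freedom) down-weights outlying
    readings instead of letting them dominate the particle weights."""
    beta = -0.1                              # assumed temperature sensitivity
    x = x + q * rng.standard_normal(x.size)  # random-walk state transition
    resid = (y - (x + beta * temp)) / r
    w = w * (1.0 + resid**2 / nu) ** (-(nu + 1) / 2)  # unnormalized t pdf
    w = w / w.sum()
    # multinomial resampling when the effective sample size degenerates
    if 1.0 / np.sum(w**2) < x.size / 2:
        idx = rng.choice(x.size, size=x.size, p=w)
        x, w = x[idx], np.full(x.size, 1.0 / x.size)
    return x, w
```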

    Particle filtering in high-dimensional chaotic systems

    We present an efficient particle filtering algorithm for multiscale systems, adapted to simple atmospheric dynamics models which are inherently chaotic. Particle filters represent the posterior conditional distribution of the state variables by a collection of particles, which evolves and adapts recursively as new information becomes available. The difference between the estimated state and the true state of the system constitutes the error in specifying or forecasting the state, which is amplified in chaotic systems that have a number of positive Lyapunov exponents. The purpose of the present paper is to show that the homogenization method developed in Imkeller et al. (2011), which is applicable to high-dimensional multiscale filtering problems, can be used together with importance sampling and control methods as a basic and flexible tool for the construction of the proposal density inherent in particle filtering. Finally, we apply the general homogenized particle filtering algorithm developed here to the Lorenz'96 atmospheric model, which mimics mid-latitude atmospheric dynamics with microscopic convective processes.
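    For reference, the Lorenz'96 dynamics used as the test bed are simple to state; below is a single forward-Euler step (the integration scheme and step size are our choices; the equations themselves are the standard Lorenz'96 model).

```python
import numpy as np

def lorenz96_step(x, F=8.0, dt=0.01):
    """One forward-Euler step of the Lorenz'96 model
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    with cyclic indexing over the state vector x."""
    d = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * d
```

    With the canonical forcing F = 8 the model is chaotic, which is exactly what stresses the proposal density: small errors in the particles' states grow along the positive Lyapunov directions.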

    Complex Data Imputation by Auto-Encoders and Convolutional Neural Networks—A Case Study on Genome Gap-Filling

    Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have proposed novel, interesting solutions that have been applied in a variety of fields. Over the same period, the successful results achieved by deep learning techniques have opened the way to their application to difficult problems where human skill cannot provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques were not designed to tackle "complex data", that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Specifically, they often need critical parameters to be set manually, or exploit complex architectures and/or training phases that make their computational load impracticable. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
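    As a rough illustration of the encoder-decoder imputation idea (not the authors' architecture), the following sketch shows a masked-reconstruction autoencoder: train it on windows with artificially masked positions, then fill real gaps with the reconstruction at the masked indices. The class name `GapFillAE`, the layer sizes, and the zero-masking scheme are all assumptions.

```python
import torch
import torch.nn as nn

class GapFillAE(nn.Module):
    """Minimal denoising-autoencoder imputer (illustrative sketch)."""
    def __init__(self, width):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(width, 64), nn.ReLU(),
                                 nn.Linear(64, 16), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                 nn.Linear(64, width))

    def forward(self, x, mask):
        # zero out the gap positions before encoding; the decoder must
        # reconstruct the full window, gaps included
        return self.dec(self.enc(x * mask))

def impute(model, x, mask):
    """Keep observed entries; replace masked ones with the reconstruction."""
    with torch.no_grad():
        recon = model(x, mask)
    return torch.where(mask.bool(), x, recon)
```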

    Single camera pose estimation using Bayesian filtering and Kinect motion priors

    Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant, as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin-colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind, and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach. Related to Burke and Lasenby, AMDO 2014. Code sample: https://github.com/mgb45/SignerBodyPose Video: https://www.youtube.com/watch?v=dJMTSo7-uF
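    The mixture-of-discrete-Ornstein-Uhlenbeck transition can be sketched in a few lines. The function `ou_mixture_step` and the drift/noise parameters below are our illustrative choices, with the component means standing in for the GMM pose modes learned offline.

```python
import numpy as np

rng = np.random.default_rng(2)

def ou_mixture_step(x, comp, means, alpha=0.1, sigma=0.02):
    """Transition of a mixture of discrete Ornstein-Uhlenbeck processes:
    given the active mixture component `comp`, the pose vector x takes a
    random-walk step that drifts toward that component's mean pose.

    x: (d,) pose vector; means: (K, d) GMM component means."""
    mu = means[comp]
    return x + alpha * (mu - x) + sigma * rng.standard_normal(x.shape)
```

    Because the step is linear-Gaussian conditional on the active component, the continuous part can be handled analytically, which is what makes the Rao-Blackwellised mixture Kalman filter applicable.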

    A new algorithm for prognostics using subset simulation

    This work presents an efficient computational framework for prognostics that combines particle-filter-based prognostics principles with the technique of Subset Simulation, first developed in S.K. Au and J.L. Beck [Probabilistic Engrg. Mech., 16 (2001), pp. 263-277]; the combined method is named PFP-SubSim. The idea behind the PFP-SubSim algorithm is to split the multi-step-ahead predicted trajectories into multiple branches of selected samples at various stages of the process, which correspond to increasingly close approximations of the critical threshold. Following theoretical development, discussion, and an illustrative example to demonstrate its efficacy, we report on experience using the algorithm to predict end-of-life and remaining useful life in the challenging application of fatigue damage propagation in carbon-fibre composite coupons using structural health monitoring data. Results show that the PFP-SubSim algorithm outperforms the traditional particle-filter-based prognostics approach in terms of computational efficiency, while achieving the same or better accuracy in the prognostics estimates. It is also shown that the PFP-SubSim algorithm is most efficient when dealing with rare-event simulation.
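    A schematic of the branching idea (not the published algorithm) might look as follows: propagate predicted trajectories, keep the fraction that crossed the next intermediate damage level, and re-branch those survivors before continuing toward the critical threshold. Here `propagate`, `damage`, the evenly spaced intermediate levels, and the resampling scheme are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

def pfp_subsim(x0, propagate, damage, threshold, levels=4, n=200):
    """Schematic of subset-simulation branching for prognostics.

    x0: (d,) initial damage-state vector; propagate: maps one state
    forward over a prediction horizon; damage: maps an (n, d) batch of
    states to an (n,) damage measure. Returns a rough estimate of the
    probability of reaching the critical threshold as the product of
    the conditional level probabilities."""
    interm = np.linspace(threshold / levels, threshold, levels)
    x = np.repeat(np.atleast_1d(x0)[None, :], n, axis=0)
    p = 1.0
    for b in interm:
        x = np.array([propagate(xi) for xi in x])
        alive = damage(x) >= b               # trajectories past this level
        p *= alive.mean()                    # conditional level probability
        if not alive.any():
            return 0.0
        # re-branch: clone survivors (with replacement) to keep n samples
        x = x[rng.choice(np.flatnonzero(alive), size=n)]
    return p
```

    The efficiency gain over a plain particle-filter prediction comes from the re-branching: samples are concentrated on the trajectories that matter for the rare event, rather than wasted on those that never approach the threshold.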