
    Online Drift Compensation for Chemical Sensors Using Estimation Theory

    Sensor drift from slowly changing environmental conditions and other instabilities can greatly degrade a chemical sensor's performance, resulting in poor identification and analyte quantification. In the present work, estimation theory (i.e., various forms of the Kalman filter) is used for online compensation of baseline drift in the response of chemical sensors. Two different cases, which depend on the knowledge of the characteristics of the sensor system, are studied. First, an unknown input is considered, which represents the practical case of analyte detection and quantification. Then, the more general case, in which the sensor parameters and the input are both unknown, is studied. The techniques are applied to simulated sensor data, for which the true baseline and response are known, and to actual liquid-phase SH-SAW sensor data measured during the detection of organophosphates. It is shown that the technique is capable of estimating the baseline signal and recovering the true sensor signal due only to the presence of the analyte. This is true even when the baseline drift changes rate or direction during the detection process or when the analyte is not completely flushed from the system.
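    A minimal illustration of the idea (not the authors' exact algorithm): a linear Kalman filter with a local-linear-trend state [baseline, drift rate] tracks the slow baseline, a simple innovation gate keeps the analyte response (the unknown input) from corrupting the baseline estimate, and the drift-compensated signal is the measurement minus the estimated baseline. All signals, noise levels, and the gate threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulated sensor data (illustrative assumptions) ---
n = 600
t = np.arange(n)
baseline_true = 0.02 * t + 2.0 * np.sin(t / 150.0)        # slow baseline drift
analyte = np.where((t > 250) & (t < 400), 5.0, 0.0)        # analyte exposure pulse
y = baseline_true + analyte + rng.normal(0.0, 0.1, n)      # noisy sensor output

# --- Kalman filter with state x = [baseline, drift_rate] ---
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # local-linear-trend dynamics
H = np.array([[1.0, 0.0]])               # only the baseline level is observed
Q = np.diag([1e-4, 1e-6])                # process noise (assumed)
R = np.array([[0.1**2]])                 # measurement noise (assumed)

x = np.array([y[0], 0.0])
P = np.eye(2)
compensated = np.zeros(n)

for k in range(n):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # innovation and its variance
    innov = y[k] - (H @ x)[0]
    S = (H @ P @ H.T + R)[0, 0]
    # gate: skip the update when the innovation is implausibly large,
    # i.e. when the analyte (the unknown input) is present
    if innov**2 / S < 9.0:               # ~3-sigma gate (assumed threshold)
        K = (P @ H.T) / S
        x = x + K[:, 0] * innov
        P = (np.eye(2) - K @ H) @ P
    compensated[k] = y[k] - x[0]          # drift-compensated signal

print("peak recovered analyte response:", compensated.max().round(2))
```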

    Diffusion Maps Kalman Filter for a Class of Systems with Gradient Flows

    In this paper, we propose a non-parametric method for state estimation of high-dimensional nonlinear stochastic dynamical systems, which evolve according to gradient flows with isotropic diffusion. We combine diffusion maps, a manifold learning technique, with a linear Kalman filter and with concepts from Koopman operator theory. More concretely, using diffusion maps, we construct data-driven virtual state coordinates, which linearize the system model. Based on these coordinates, we devise a data-driven framework for state estimation using the Kalman filter. We demonstrate the strengths of our method with respect to both parametric and non-parametric algorithms in three tracking problems. In particular, applying the approach to actual recordings of hippocampal neural activity in rodents directly yields a representation of the position of the animals. We show that the proposed method outperforms competing non-parametric algorithms in the examined stochastic problem formulations. Additionally, we obtain results comparable to classical parametric algorithms, which, in contrast to our method, are equipped with model knowledge.
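    The sketch below illustrates the general recipe rather than the paper's exact pipeline: a diffusion-map embedding of the high-dimensional measurements supplies data-driven "virtual" coordinates, and an ordinary linear Kalman filter with a random-walk model is then run in those coordinates. The kernel scale, noise levels, and the synthetic latent trajectory are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_coords(X, eps, n_coords=2):
    """Leading non-trivial eigenvectors of the row-normalized Gaussian kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / eps)
    P /= P.sum(axis=1, keepdims=True)
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    return V.real[:, order[1:n_coords + 1]]   # drop the trivial constant eigenvector

# latent 1-D drifting state observed through a nonlinear high-dimensional map
n = 300
theta = np.cumsum(0.05 + 0.02 * rng.standard_normal(n))           # latent trajectory
X = np.c_[np.cos(theta), np.sin(theta), 0.5 * np.cos(2 * theta)]   # 3-D "sensor" data
X += 0.02 * rng.standard_normal(X.shape)

Z = diffusion_coords(X, eps=0.5, n_coords=2)   # data-driven virtual coordinates

# linear Kalman filter with a random-walk model in the embedding space
F, H = np.eye(2), np.eye(2)
Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(2)      # assumed noise levels
x, P = Z[0].copy(), np.eye(2)
Z_hat = np.zeros_like(Z)
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (Z[k] - H @ x)                 # update with the embedded measurement
    P = (np.eye(2) - K @ H) @ P
    Z_hat[k] = x

print("increment variance, raw vs. filtered embedding:",
      np.var(np.diff(Z, axis=0)).round(5), np.var(np.diff(Z_hat, axis=0)).round(5))
```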

    Sequential Bayesian inference for static parameters in dynamic state space models

    A method for sequential Bayesian inference of the static parameters of a dynamic state space model is proposed. The method is based on the observation that many dynamic state space models have a relatively small number of static parameters (or hyper-parameters), so that in principle the posterior can be computed and stored on a discrete grid of practical size which can be tracked dynamically. Furthermore, the approach can use any existing methodology that computes the filtering and prediction distributions of the state process; the Kalman filter and its extensions to non-linear/non-Gaussian settings are used in this paper. The approach is illustrated with several applications: a linear Gaussian model, a binomial model, a stochastic volatility model, and the highly non-linear univariate non-stationary growth model. Performance is compared to both existing on-line and off-line methods.
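    A minimal sketch of the grid idea for a scalar linear-Gaussian model x_k = a*x_{k-1} + w_k, y_k = x_k + v_k, where the AR coefficient a is the single unknown static parameter: one Kalman filter runs per grid point, and the grid posterior over a is updated with each filter's one-step predictive likelihood. The model and all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate data from the "true" model (assumed values)
a_true, q, r, n = 0.9, 0.1, 0.2, 400
x, ys = 0.0, []
for _ in range(n):
    x = a_true * x + np.sqrt(q) * rng.standard_normal()
    ys.append(x + np.sqrt(r) * rng.standard_normal())

# grid over the static parameter with a uniform prior
grid = np.linspace(0.5, 0.99, 50)
log_post = np.zeros_like(grid)

# one scalar Kalman filter (mean m, variance P) per grid point
m = np.zeros_like(grid)
P = np.ones_like(grid)

for y in ys:
    # predict under each candidate parameter value
    m_pred = grid * m
    P_pred = grid**2 * P + q
    # predictive likelihood p(y_k | y_{1:k-1}, a) from the innovation
    S = P_pred + r
    log_post += -0.5 * (np.log(2 * np.pi * S) + (y - m_pred) ** 2 / S)
    log_post -= log_post.max()          # keep the grid posterior well scaled
    # Kalman update for each grid point
    K = P_pred / S
    m = m_pred + K * (y - m_pred)
    P = (1.0 - K) * P_pred

post = np.exp(log_post)
post /= post.sum()
print("posterior mean of a:", round(float((grid * post).sum()), 3))
```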

    Online Natural Gradient as a Kalman Filter

    We cast Amari's natural gradient in statistical learning as a specific case of Kalman filtering. Namely, applying an extended Kalman filter to estimate a fixed unknown parameter of a probabilistic model from a series of observations is rigorously equivalent to estimating this parameter via an online stochastic natural gradient descent on the log-likelihood of the observations. In the i.i.d. case, this relation is a consequence of the "information filter" phrasing of the extended Kalman filter. In the recurrent (state space, non-i.i.d.) case, we prove that the joint Kalman filter over states and parameters is a natural gradient on top of real-time recurrent learning (RTRL), a classical algorithm to train recurrent models. This exact algebraic correspondence provides relevant interpretations for natural gradient hyperparameters such as learning rates or initialization and regularization of the Fisher information matrix.
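    A minimal sketch of the static-parameter case under an assumed logistic observation model: the fixed parameter vector is treated as the EKF state with identity dynamics and no process noise, and each observation update then plays the role of an online natural-gradient-like step on the log-likelihood. The data-generating setup and prior covariance below are assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# stream of (u, y) observations from an assumed logistic model
d, n = 3, 2000
theta_true = np.array([1.0, -2.0, 0.5])
U = rng.standard_normal((n, d))
Y = (rng.random(n) < sigmoid(U @ theta_true)).astype(float)

# EKF for a static parameter: x_k = x_{k-1}, y_k ~ Bernoulli(sigmoid(u_k @ x_k))
theta = np.zeros(d)               # parameter estimate (the EKF state)
P = 10.0 * np.eye(d)              # prior covariance (plays the role of an inverse Fisher / step size)

for u, y in zip(U, Y):
    p = sigmoid(u @ theta)
    H = p * (1.0 - p) * u         # Jacobian of the expected observation w.r.t. theta
    R = max(p * (1.0 - p), 1e-6)  # Bernoulli observation variance
    S = H @ P @ H + R             # innovation variance (scalar)
    K = P @ H / S                 # Kalman gain
    theta = theta + K * (y - p)   # update = preconditioned (natural-gradient-like) step
    P = P - np.outer(K, H @ P)

print("true theta:", theta_true, "EKF estimate:", theta.round(2))
```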

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods related to DKF. The applications of DKF are also discussed and explained separately. A comparison of different approaches is briefly carried out. Contemporary research directions are also highlighted, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.