
    Joint Smoothing, Tracking, and Forecasting Based on Continuous-Time Target Trajectory Fitting

    We present a continuous-time state estimation framework that unifies the traditionally separate tasks of smoothing, tracking, and forecasting (STF) for a class of targets subject to smooth motion processes, e.g., targets that move with nearly constant acceleration or are affected by only insignificant noise. Fundamentally different from the conventional Markov transition formulation, the state process is modeled by a continuous trajectory function of time (FoT), and the STF problem is formulated as an online data-fitting problem whose goal is to find the trajectory FoT that best fits the observations in a sliding time window. The state of the target, whether past (smoothing), current (filtering), or near-future (forecasting), can then be inferred from the FoT. Our framework relaxes the need for stringent statistical modeling of the target motion in real time and is applicable to a broad range of significant real-world targets, such as passenger aircraft and ships, which move on scheduled, (segmented) smooth paths but about whose real-time movement, and even about the sensors observing them, little statistical knowledge is available. In addition, the proposed STF framework inherits the advantages of data fitting in accommodating arbitrary sensor revisit times, target maneuvering, and missed detections. The proposed method is compared with state-of-the-art estimators in both maneuvering and non-maneuvering target scenarios. Comment: 16 pages, 8 figures, 5 tables, 80 references; code available.
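
    The sliding-window fitting idea can be illustrated with a minimal sketch: fit a low-order polynomial trajectory to windowed observations, then evaluate it in the past, at the present, or slightly ahead. This is an illustrative toy under a nearly-constant-acceleration assumption; the function name and scenario are mine, not the paper's code:

```python
import numpy as np

def fit_fot(ts, zs, degree=2):
    """Fit a polynomial trajectory function of time (FoT) to a window of observations."""
    return np.polynomial.Polynomial.fit(ts, zs, deg=degree)

# Synthetic nearly-constant-acceleration target: z(t) = 1 + 2t + 0.5t^2 + noise
rng = np.random.default_rng(0)
ts = np.linspace(0.0, 5.0, 26)      # sample times in the sliding window
zs = 1.0 + 2.0 * ts + 0.5 * ts**2 + rng.normal(0.0, 0.05, ts.size)

fot = fit_fot(ts, zs)
smoothed = fot(2.5)   # past state    -> smoothing   (true value 9.125)
current = fot(5.0)    # current state -> filtering   (true value 23.5)
forecast = fot(5.5)   # near future   -> forecasting (true value 27.125)
```

    Because the fit only needs (time, observation) pairs, arbitrary and non-uniform revisit times are handled for free, which is the point the abstract makes about data fitting.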

    Bayesian inference of time varying parameters in autoregressive processes

    In the first-order autoregressive process AR(1), a homogeneous correlated time series $u_t$ is recursively constructed as $u_t = q\,u_{t-1} + \sigma\,\epsilon_t$, using random Gaussian deviates $\epsilon_t$ and fixed values for the correlation coefficient $q$ and the noise amplitude $\sigma$. To model temporally heterogeneous time series, the coefficients $q_t$ and $\sigma_t$ can themselves be regarded as time-dependent variables, leading to the time-varying autoregressive process TVAR(1). We assume here that the time series $u_t$ is known and attempt to infer the temporal evolution of the 'superstatistical' parameters $q_t$ and $\sigma_t$. We present a sequential Bayesian method of inference, which is conceptually related to the hidden Markov model but takes into account the direct statistical dependence of successively measured variables $u_t$. The method requires almost no prior knowledge about the temporal dynamics of $q_t$ and $\sigma_t$ and can handle gradual and abrupt changes of these superparameters simultaneously. We compare our method with a maximum-likelihood estimate based on a sliding window and show that it is superior over a wide range of window sizes.
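
    A minimal numerical illustration of the TVAR(1) setting: simulate $u_t$ with an abrupt change in $q_t$, then recover $q$ with the sliding-window maximum-likelihood baseline the abstract compares against (for AR(1), the ML estimate of $q$ reduces to lag-1 least squares). The window placement and change point are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 4000
q_t = np.where(np.arange(T) < 2000, 0.3, 0.8)   # abrupt change of the superparameter
u = np.zeros(T)
for t in range(1, T):
    u[t] = q_t[t] * u[t - 1] + rng.normal()     # TVAR(1) with sigma_t = 1

def ml_q(window):
    """Sliding-window ML estimate of q for AR(1): lag-1 least squares."""
    x, y = window[:-1], window[1:]
    return float(x @ y / (x @ x))

q_early = ml_q(u[500:1500])    # window inside the q = 0.3 regime
q_late = ml_q(u[2500:3500])    # window inside the q = 0.8 regime
```

    The ML baseline works well inside each regime but blurs the change point over a full window length, which is the weakness the sequential Bayesian method is designed to avoid.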

    Combining Generative and Discriminative Models for Hybrid Inference

    A graphical model is a structured representation of the data-generating process. The traditional way to reason over random variables is to perform inference in this graphical model. However, in many cases the generative model is only a poor approximation of the much more complex true data-generating process, leading to suboptimal estimation. The subtleties of the generative process are, however, captured in the data itself, and we can 'learn to infer', that is, learn a direct mapping from observations to explanatory latent variables. In this work we propose a hybrid model that combines graphical inference with a learned inverse model, which we structure as a graph neural network, while the iterative algorithm as a whole is formulated as a recurrent neural network. By using cross-validation we can automatically balance the amount of work performed by graphical inference versus learned inference. We apply our ideas to the Kalman filter, a Gaussian hidden Markov model for time sequences, and show, among other things, that our model can estimate the trajectory of a noisy chaotic Lorenz attractor much more accurately than either learned or graphical inference run in isolation.
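
    For reference, the graphical-inference side of such a hybrid is the classical Kalman filter. A minimal predict/update cycle, written as a generic sketch (the matrix names follow the usual textbook convention, not the paper's code):

```python
import numpy as np

def kalman_step(m, P, z, F, Q, H, R):
    """One predict/update cycle of the Kalman filter (inference in a Gaussian HMM)."""
    # Predict through the linear dynamics
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Update with the new observation z
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_new = m_pred + K @ (z - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# 1D constant-position model: repeated observations of 1.0 pull the estimate to 1
F = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.1]])
m, P = np.array([0.0]), np.array([[1.0]])
for z in [1.0, 1.0, 1.0]:
    m, P = kalman_step(m, P, np.array([z]), F, Q, H, R)
```

    In the hybrid scheme described above, a learned network would refine the estimates this recursion produces whenever the assumed linear Gaussian model is only approximate.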

    Learning Hidden Markov Models for Linear Gaussian Systems with Applications to Event-based State Estimation

    This work approximates a linear Gaussian system with a finite-state hidden Markov model (HMM), which proves useful in solving sophisticated event-based state estimation problems. An indirect modeling approach is developed, wherein a state-space model (SSM) is first identified for the Gaussian system and the SSM is then used as an emulator for learning an HMM. In the proposed method, the training data for the HMM are obtained from the data generated by the SSM through a quantization mapping. Parameter learning algorithms are designed to learn the parameters of the HMM by exploiting its periodic structural characteristics. The convergence and asymptotic properties of the proposed algorithms are analyzed. The HMM learned using the proposed algorithms is applied to event-triggered state estimation, and numerical results on model learning and state estimation demonstrate the validity of the proposed algorithms. Comment: The manuscript is under review by a journal.
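
    The indirect SSM-to-HMM idea can be sketched roughly as follows: simulate a scalar linear Gaussian SSM, quantize its state trajectory into a finite alphabet, and estimate an HMM transition matrix by transition counting. This is a simplified illustration; the quantile-based binning and the scalar model are my assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)
# Scalar linear Gaussian SSM emulator: x_{t+1} = a * x_t + w_t
a, q = 0.9, 0.1
x = np.zeros(20000)
for t in range(1, x.size):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))

# Quantization mapping: partition the state space into K equal-mass bins
K = 8
edges = np.quantile(x, np.linspace(0, 1, K + 1)[1:-1])
s = np.digitize(x, edges)                  # discrete state sequence for the HMM

# Estimate the HMM transition matrix by counting observed transitions
A = np.zeros((K, K))
for i, j in zip(s[:-1], s[1:]):
    A[i, j] += 1
A /= A.sum(axis=1, keepdims=True)          # row-normalize to probabilities
```

    The strong diagonal dominance of the estimated matrix reflects the persistence of the underlying Gaussian state, which the finite-state HMM then approximates.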

    Dynamic Filtering of Time-Varying Sparse Signals via l1 Minimization

    Despite the importance of sparse signal models and the increasing prevalence of high-dimensional streaming data, there are relatively few algorithms for dynamic filtering of time-varying sparse signals, and fewer still that provide strong performance guarantees. This paper examines two algorithms for dynamic filtering of sparse signals that are based on efficient l1 optimization methods. We first present an analysis of a simple algorithm (BPDN-DF) that works well when the system dynamics are known exactly. We then introduce a novel second algorithm (RWL1-DF) that is more computationally complex than BPDN-DF but performs better in practice, especially when the system dynamics model is inaccurate. Robustness to model inaccuracy is achieved by using a hierarchical probabilistic data model and propagating higher-order statistics from the previous estimate (akin to Kalman filtering) in the sparse inference process. We demonstrate the properties of these algorithms on both simulated data and natural video sequences. Taken together, the algorithms presented in this paper represent the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals, as well as state-of-the-art performance in this emerging application. Comment: 26 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1208.032
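
    A BPDN-DF-style objective can be sketched with a plain ISTA solver: a quadratic data-fidelity term, an l1 penalty, and a quadratic term tying the estimate to the dynamics-propagated previous estimate. The parameter values, the use of ISTA, and the zero prediction for the first frame are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bpdn_df(y, A, x_pred, lam=0.05, kappa=0.1, iters=500):
    """ISTA for: min_x 0.5||y - Ax||^2 + lam*||x||_1 + 0.5*kappa*||x - x_pred||^2."""
    L = np.linalg.norm(A, 2) ** 2 + kappa   # Lipschitz constant of the smooth part
    x = x_pred.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + kappa * (x - x_pred)
        x = soft(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(3)
n, m, k = 100, 40, 5                        # sparse signal, compressive measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true + rng.normal(0, 0.01, m)
x_hat = bpdn_df(y, A, x_pred=np.zeros(n))   # zero prediction for the first frame
```

    In a streaming setting, `x_pred` would be the previous estimate pushed through the (assumed) dynamics, so the quadratic coupling term plays the role of the Kalman prediction.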

    Supervised and Unsupervised Speech Enhancement Using Nonnegative Matrix Factorization

    Reducing the interference noise in a monaural noisy speech signal has been a challenging task for many years. Compared to traditional unsupervised speech enhancement methods, e.g., Wiener filtering, supervised approaches, such as algorithms based on hidden Markov models (HMMs), lead to higher-quality enhanced speech signals. However, the main practical difficulty of these approaches is that a model must be trained a priori for each noise type. In this paper, we investigate a new class of supervised speech denoising algorithms using nonnegative matrix factorization (NMF). We propose a novel speech enhancement method based on a Bayesian formulation of NMF (BNMF). To circumvent the mismatch problem between the training and testing stages, we propose two solutions. First, we use an HMM in combination with BNMF (BNMF-HMM) to derive a minimum mean square error (MMSE) estimator for the speech signal with no information about the underlying noise type. Second, we suggest a scheme to learn the required noise BNMF model online, which is then used to develop an unsupervised speech enhancement system. Extensive experiments are carried out to investigate the performance of the proposed methods under different conditions. Moreover, we compare the performance of the developed algorithms with state-of-the-art speech enhancement schemes using various objective measures. Our simulations show that the proposed BNMF-based methods substantially outperform the competing algorithms.
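
    At the core of all NMF-based enhancement is the factorization V ≈ WH of a nonnegative spectrogram into spectral bases W and activations H. A minimal sketch using the classical Frobenius-norm multiplicative updates (not the Bayesian NMF of the paper) on a toy low-rank matrix:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~ W @ H, all nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                               # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram" that is exactly rank 2, so the factorization can fit it closely
rng = np.random.default_rng(4)
V = rng.random((30, 2)) @ rng.random((2, 50))
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    In enhancement, separate bases would be trained for speech and noise, and the clean speech is reconstructed from the speech part of the activations.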

    Earthquake Forecasting Based on Data Assimilation: Sequential Monte Carlo Methods for Renewal Processes

    In meteorology, engineering, and computer science, data assimilation is routinely employed as the optimal way to combine noisy observations with prior model information to obtain better estimates of a state, and thus better forecasts, than can be achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic, pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, to models of characteristic earthquakes, and to recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating arbitrary posterior distributions. We perform extensive numerical simulations to demonstrate the feasibility and benefits of forecasting earthquakes based on data assimilation. In particular, we show that forecasts based on the Optimal Sampling Importance Resampling (OSIR) particle filter are significantly better than those of a benchmark forecast that ignores uncertainties in the observed event times. We use the marginal data likelihood, a measure of the explanatory power of a model in the presence of data errors, to estimate parameters and compare models. Comment: 55 pages, 15 figures.
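
    The sequential Monte Carlo machinery can be illustrated with a basic bootstrap SIR particle filter (a simpler relative of the OSIR filter used in the paper), applied here to a toy Gaussian random-walk model rather than a renewal process:

```python
import numpy as np

def particle_filter(zs, n_particles=2000, q=0.1, r=0.5, seed=0):
    """Bootstrap sequential importance resampling (SIR) particle filter
    for x_t = x_{t-1} + N(0, q), z_t = x_t + N(0, r)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                  # initial particle cloud
    means = []
    for z in zs:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)   # propagate particles
        w = np.exp(-0.5 * (z - x) ** 2 / r)                # Gaussian likelihood weights
        w /= w.sum()
        means.append(float(w @ x))                         # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)    # resample by weight
        x = x[idx]
    return np.array(means)

# Track a slowly drifting state observed in heavy noise
rng = np.random.default_rng(5)
true = np.cumsum(rng.normal(0, np.sqrt(0.1), 50))
zs = true + rng.normal(0, np.sqrt(0.5), 50)
est = particle_filter(zs)
```

    Because the weights can use any likelihood, the same recursion accommodates the non-Gaussian event-time statistics that motivate the paper.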

    Kalman filter with impulse noised outliers: A robust sequential algorithm to filter data with a large number of outliers

    Impulse-noise outliers are data points that differ significantly from other observations. They are generally removed from the data set through local regression or a Kalman filter algorithm. However, these methods, and their generalizations, are not well suited when the number of outliers is of the same order as the number of low-noise data points. In this article, we propose a new model for impulse-noise outliers based on simple latent linear Gaussian processes, as in the Kalman filter. We present a fast forward-backward algorithm that filters and smooths sequential data and also detects these outliers. We compare the robustness and efficiency of this algorithm with classical methods. Finally, we apply the method to a real data set from a Walk-over-Weighing system containing around 60% outliers. For this application, we further develop an (explicit) EM algorithm to calibrate some of the algorithm's parameters.
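
    The innovation-gating idea behind many robust Kalman variants can be sketched as follows: flag an observation as an impulse-noise outlier when its innovation exceeds a few standard deviations, and fall back to a predict-only step. This is a generic illustration, not the paper's forward-backward algorithm:

```python
import numpy as np

def robust_kalman_1d(zs, q=0.01, r=0.1, gate=3.0):
    """Scalar Kalman filter that flags and skips impulse-noise outliers
    whenever the innovation exceeds `gate` standard deviations."""
    m, P = 0.0, 1.0
    est, outliers = [], []
    for z in zs:
        P_pred = P + q
        S = P_pred + r                        # innovation variance
        nu = z - m                            # innovation
        if nu * nu > gate * gate * S:         # gated: treat observation as outlier
            outliers.append(True)
            P = P_pred                        # predict-only step (skip the update)
        else:
            outliers.append(False)
            K = P_pred / S
            m = m + K * nu
            P = (1 - K) * P_pred
        est.append(m)
    return np.array(est), np.array(outliers)

rng = np.random.default_rng(6)
zs = rng.normal(0.0, np.sqrt(0.1), 200)       # true state is constant at 0
zs[::5] += 10.0                               # ~20% large impulse outliers
est, flagged = robust_kalman_1d(zs)
```

    Gating works at a 20% contamination rate in this toy; the paper's contribution is a model that remains reliable even near 60% outliers, where simple gating degrades.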

    Partially Linear Estimation with Application to Sparse Signal Recovery From Measurement Pairs

    We address the problem of estimating a random vector X from two sets of measurements, Y and Z, such that the estimator is linear in Y. We show that the partially linear minimum mean squared error (PLMMSE) estimator does not require knowing the joint distribution of X and Y in full, but rather only its second-order moments. This renders it of potential interest in various applications. We further show that the PLMMSE method is minimax-optimal among all estimators that depend solely on the second-order statistics of X and Y. We demonstrate our approach in the context of recovering a signal, which is sparse in a unitary dictionary, from noisy observations of it and of a filtered version of it. We show that in this setting PLMMSE estimation has a clear computational advantage, while its performance is comparable to state-of-the-art algorithms. We apply our approach in both static and dynamic estimation applications. In the former category, we treat the problem of image enhancement from blurred/noisy image pairs, where we show that PLMMSE estimation performs only slightly worse than state-of-the-art algorithms while running an order of magnitude faster. In the dynamic setting, we provide a recursive implementation of the estimator and demonstrate its utility in the context of tracking maneuvering targets from position and acceleration measurements. Comment: 13 pages, 5 figures.
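
    The second-order-moments property can be checked numerically in the fully linear special case: the LMMSE estimator of X from Y is built from the cross-covariance and the covariance of Y alone, with no other distributional knowledge. A small sketch using sample moments; the toy linear observation model is my assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy linear model: y = H x + v, used only to generate data
n, m, N = 3, 2, 50000
H = rng.normal(0, 1, (m, n))
X = rng.normal(0, 1, (N, n))
V = rng.normal(0, 0.1, (N, m))
Y = X @ H.T + V

# LMMSE estimator built purely from (sample) second-order statistics of X and Y
Cxy = (X - X.mean(0)).T @ (Y - Y.mean(0)) / N   # cross-covariance, shape (n, m)
Cyy = np.cov(Y.T, bias=True)                    # covariance of Y, shape (m, m)
W = Cxy @ np.linalg.inv(Cyy)
X_hat = X.mean(0) + (Y - Y.mean(0)) @ W.T

mse = np.mean((X_hat - X) ** 2)                 # below the prior variance of 1
```

    With m = 2 observations of an n = 3 state, the estimator resolves only a two-dimensional projection, so the residual MSE stays around one third of the prior variance; PLMMSE extends this moment-only construction to estimators that are linear in Y but arbitrary in Z.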