
    A Bayesian framework for optimal motion planning with uncertainty

    Modeling robot motion planning with uncertainty in a Bayesian framework leads to a computationally intractable stochastic control problem. We seek hypotheses that can justify a separate implementation of control, localization, and planning. In the end, we reduce the stochastic control problem to path planning in the extended space of poses × covariances; the transitions between states are modeled through the use of the Fisher information matrix. In this framework, we consider two problems: minimizing the execution time, and minimizing the final covariance subject to an upper bound on the execution time. Two correct and complete algorithms are presented. The first is a direct extension of classical graph-search algorithms to the extended space. The second is a back-projection algorithm: uncertainty constraints are propagated backward from the goal towards the start state.
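
    The reduction makes a standard shortest-path search applicable once each graph node carries both a pose and a covariance. Below is a minimal sketch of that idea, not the paper's algorithm: the grid world, the beacon-style fisher_information(), the noise magnitudes, and the trace-based uncertainty bound are all invented for illustration.

        # Illustrative sketch only: a Dijkstra-style search over (pose, covariance)
        # states. All models and magnitudes here are hypothetical stand-ins.
        import heapq
        import numpy as np

        MOTION_NOISE = 0.01 * np.eye(2)  # assumed covariance added by each move

        def fisher_information(pose):
            # Hypothetical sensor model: localization is most informative
            # near a single beacon at the origin.
            dist2 = pose[0] ** 2 + pose[1] ** 2
            return np.eye(2) / (1.0 + dist2)

        def propagate(cov, pose):
            # Predict (add motion noise), then apply the information available
            # at the new pose: Sigma' = ((Sigma + Q)^-1 + I_F(pose))^-1.
            predicted = cov + MOTION_NOISE
            return np.linalg.inv(np.linalg.inv(predicted) + fisher_information(pose))

        def plan(start, goal, cov0, cov_bound, grid_size=10):
            """Minimize execution time subject to trace(covariance) <= cov_bound."""
            frontier = [(0, start, cov0.tobytes())]  # (time, pose, covariance)
            best = {}
            while frontier:
                t, pose, cov_bytes = heapq.heappop(frontier)
                cov = np.frombuffer(cov_bytes).reshape(2, 2)
                if pose == goal:
                    return t, cov
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (pose[0] + dx, pose[1] + dy)
                    if not (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size):
                        continue
                    ncov = propagate(cov, nxt)
                    if np.trace(ncov) > cov_bound:  # uncertainty constraint
                        continue
                    key = (nxt, round(np.trace(ncov), 3))  # coarse dominance check
                    if best.get(key, np.inf) <= t + 1:
                        continue
                    best[key] = t + 1
                    heapq.heappush(frontier, (t + 1, nxt, ncov.tobytes()))
            return None  # no path satisfies the covariance bound

        print(plan((0, 0), (9, 9), 0.1 * np.eye(2), cov_bound=1.0))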

    A Tractable State-Space Model for Symmetric Positive-Definite Matrices

    Bayesian analysis of state-space models includes computing the posterior distribution of the system's parameters as well as filtering, smoothing, and predicting the system's latent states. When the latent states wander around $\mathbb{R}^n$, there are several well-known modeling components and computational tools that may be profitably combined to achieve these tasks. However, there are scenarios, such as tracking an object in a video or tracking the covariance matrix of financial asset returns, where the latent states are restricted to a curve within $\mathbb{R}^n$ and these models and tools do not immediately apply. Within this constrained setting, most work has focused on filtering, and less attention has been paid to the other aspects of Bayesian state-space inference, which tend to be more challenging. To that end, we present a state-space model whose latent states take values on the manifold of symmetric positive-definite matrices and for which one may easily compute the posterior distribution of the latent states and the system's parameters, in addition to filtered distributions and one-step-ahead predictions. Deploying the model within the context of finance, we show how one can use realized covariance matrices as data to predict latent time-varying covariance matrices. This approach outperforms factor stochastic volatility.
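
    To make the constrained setting concrete, the sketch below shows one generic way of filtering SPD-valued states: map each matrix off the manifold with the matrix logarithm, smooth in the resulting vector space, and map back with the matrix exponential so that every estimate is SPD by construction. This log-Euclidean random-walk smoother is a crude stand-in, not the paper's model; all parameter values are arbitrary.

        # Illustrative sketch only: a log-Euclidean random-walk smoother for
        # SPD-valued latent states, NOT the paper's model.
        import numpy as np

        def sym_logm(spd):
            # Matrix logarithm of an SPD matrix via its eigendecomposition.
            w, v = np.linalg.eigh(spd)
            return v @ np.diag(np.log(w)) @ v.T

        def sym_expm(sym):
            # Matrix exponential of a symmetric matrix (always returns SPD).
            w, v = np.linalg.eigh(sym)
            return v @ np.diag(np.exp(w)) @ v.T

        def spd_filter(realized_covs, smoothing=0.8):
            """Filter a sequence of realized covariance matrices.

            Exponential smoothing in log-space stands in for a full Kalman
            update; under a random-walk latent state, the one-step-ahead
            prediction is just the current filtered value.
            """
            state = sym_logm(realized_covs[0])
            filtered = [sym_expm(state)]
            for rc in realized_covs[1:]:
                state = smoothing * state + (1.0 - smoothing) * sym_logm(rc)
                filtered.append(sym_expm(state))
            return filtered, sym_expm(state)

        # Toy usage: synthetic 2x2 realized covariances from a fixed true matrix.
        rng = np.random.default_rng(0)
        true_cov = np.array([[1.0, 0.3], [0.3, 0.5]])
        data = [np.cov(rng.multivariate_normal(np.zeros(2), true_cov, 50).T)
                for _ in range(20)]
        filtered, prediction = spd_filter(data)
        print(np.round(prediction, 3))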

    Moment matching versus Bayesian estimation: Backward-looking behaviour in the new-Keynesian three-equations model

    The paper considers an elementary New-Keynesian three-equations model and contrasts its Bayesian estimation with the results from the method of moments (MM), which seeks to match the model-generated second moments of inflation, output, and the interest rate to their empirical counterparts. Special emphasis is placed on the degree of backward-looking behaviour in the Phillips curve. While, in line with much of the literature, it plays only a marginal role in the Bayesian estimations, MM yields values of the price indexation parameter close to or even at its maximal value of one. These results are worth noticing since the matching thus achieved is entirely satisfactory. The matching of some special (and even better) versions of the model is econometrically evaluated by a model comparison test.

    Keywords: inflation persistence, autocovariance profiles, goodness-of-fit, model comparison
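
    The moment-matching idea can be illustrated in miniature. In the hypothetical sketch below, a one-parameter AR(1) process stands in for the far richer three-equations model: its persistence parameter rho plays the role of the price indexation parameter, and the objective matches model-implied autocovariances of an inflation-like series to their empirical counterparts.

        # Illustrative sketch only: moment matching in miniature; every model
        # and value here is a hypothetical stand-in for the paper's setup.
        import numpy as np
        from scipy.optimize import minimize_scalar

        MAX_LAG = 8

        def empirical_autocovs(x, max_lag=MAX_LAG):
            x = x - x.mean()
            n = len(x)
            return np.array([x[:n - k] @ x[k:] / n for k in range(max_lag + 1)])

        def model_autocovs(rho, sigma=1.0, max_lag=MAX_LAG):
            # Closed form for an AR(1): gamma_k = sigma^2 * rho^k / (1 - rho^2).
            return sigma ** 2 * rho ** np.arange(max_lag + 1) / (1.0 - rho ** 2)

        def mm_loss(rho, target):
            # Distance between model-implied and empirical autocovariance profiles.
            return np.sum((model_autocovs(rho) - target) ** 2)

        # "Empirical" data: an inflation-like series simulated with true rho = 0.9.
        rng = np.random.default_rng(1)
        x = np.zeros(2000)
        for t in range(1, len(x)):
            x[t] = 0.9 * x[t - 1] + rng.standard_normal()

        result = minimize_scalar(mm_loss, args=(empirical_autocovs(x),),
                                 bounds=(0.0, 0.99), method="bounded")
        print(f"MM estimate of rho: {result.x:.3f}")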

    A Unifying review of linear gaussian models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
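
    The single basic generative model in question is the linear Gaussian state-space model x_{t+1} = A x_t + w_t, y_t = C x_t + v_t, with w_t ~ N(0, Q) and v_t ~ N(0, R). A minimal sketch of how constraining (A, Q, R) recovers the special cases follows; the dimensions and parameter values are arbitrary illustrations.

        # A minimal sketch of the shared linear Gaussian generative model.
        import numpy as np

        def sample_lgm(A, C, Q, R, T, rng):
            """Draw T observations from x' = Ax + w, y = Cx + v."""
            k, p = A.shape[0], C.shape[0]
            x = rng.multivariate_normal(np.zeros(k), Q)  # initial state
            ys = []
            for _ in range(T):
                ys.append(C @ x + rng.multivariate_normal(np.zeros(p), R))
                x = A @ x + rng.multivariate_normal(np.zeros(k), Q)
            return np.array(ys)

        rng = np.random.default_rng(0)
        k, p = 2, 5
        C = rng.standard_normal((p, k))

        # Factor analysis: no dynamics (A = 0), unit state noise, diagonal R.
        y_fa = sample_lgm(np.zeros((k, k)), C, np.eye(k),
                          np.diag(rng.uniform(0.1, 1.0, p)), T=100, rng=rng)

        # PCA arises in the limit of vanishing isotropic observation noise.
        y_pca = sample_lgm(np.zeros((k, k)), C, np.eye(k),
                           1e-6 * np.eye(p), T=100, rng=rng)

        # Kalman filter model (linear dynamical system): nontrivial dynamics A.
        A = np.array([[0.99, 0.1], [-0.1, 0.99]])
        y_lds = sample_lgm(A, C, 0.01 * np.eye(k), 0.1 * np.eye(p), T=100, rng=rng)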