
    Bayesian topology identification of linear dynamic networks

    In networks of dynamic systems, one challenge is to identify the interconnection structure on the basis of measured signals. Inspired by the Bayesian approach of [1], we explore a Bayesian model selection method for identifying the connectivity of networks of transfer functions without the need to estimate the dynamics. The algorithm employs a Bayesian measure and a forward-backward search. To obtain the Bayesian measure, the impulse responses of the network modules are modeled as Gaussian processes, and the hyperparameters are estimated by marginal likelihood maximization using the expectation-maximization algorithm. Numerical results demonstrate the effectiveness of the method.
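    The core ingredient described above, modeling an impulse response as a Gaussian process and choosing hyperparameters by marginal likelihood maximization, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the TC ("tuned/correlated") kernel, the grid search (in place of EM), and all dimensions and noise levels are assumptions made here for demonstration.

```python
import numpy as np

# TC kernel, a common GP prior for stable impulse responses:
# K[i, j] = c * lam**max(i, j)  (decay rate lam in (0, 1)).
def tc_kernel(n, c, lam):
    i = np.arange(1, n + 1)
    return c * lam ** np.maximum.outer(i, i)

# Log marginal likelihood of y under  y = U g + e,  g ~ N(0, K),
# e ~ N(0, s2 I), i.e.  y ~ N(0, U K U^T + s2 I).
def log_marglik(y, U, K, s2):
    S = U @ K @ U.T + s2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + y @ np.linalg.solve(S, y)
                   + len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
n, N = 20, 100
g_true = 0.8 ** np.arange(1, n + 1)                 # a stable impulse response
u = rng.standard_normal(N + n)
U = np.array([u[t:t + n][::-1] for t in range(N)])  # convolution regressors
y = U @ g_true + 0.1 * rng.standard_normal(N)

# Hyperparameter selection by maximizing the marginal likelihood
# (grid search here; the paper uses EM for the same objective).
best = max((log_marglik(y, U, tc_kernel(n, 1.0, lam), 0.01), lam)
           for lam in (0.3, 0.5, 0.7, 0.8, 0.9))
print(best[1])  # decay hyperparameter selected by the evidence
```

    The selected decay rate tracks how fast the true impulse response decays, which is the sense in which the marginal likelihood "tunes" the prior to the data.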

    On transient dynamics, off-equilibrium behaviour and identification in blended multiple model structures

    The use of multiple-model techniques has been reported in a variety of control and signal-processing applications. However, several theoretical analyses have recently appeared that outline fundamental limitations of these techniques in certain domains of application. In particular, the identifiability and interpretability of local linear model parameters in transient operating regimes are shown to be limited. Some modifications to the basic paradigm are suggested that overcome a number of these problems. As an alternative to parametric identification of blended multiple-model structures, nonparametric Gaussian process priors are suggested as a means of providing local models, and the results are compared with a multiple-model approach in a Monte Carlo simulation on simulated vehicle dynamics data.
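    The alternative proposed above, a nonparametric Gaussian process prior in place of blended local linear models, amounts to standard GP regression. The sketch below uses an RBF kernel and a toy sinusoidal signal; the kernel choice, length scale, noise level, and data are assumptions for illustration, not those of the cited study.

```python
import numpy as np

# Squared-exponential (RBF) kernel between two 1-D input sets.
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 4, 30)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(30)   # noisy toy signal

s2 = 0.01                                            # observation noise variance
xs = np.linspace(0, 4, 5)                            # test inputs
Kxx = rbf(x, x) + s2 * np.eye(30)

# GP posterior mean and pointwise variance at the test inputs.
mean = rbf(xs, x) @ np.linalg.solve(Kxx, y)
var = rbf(xs, xs).diagonal() - np.einsum(
    'ij,ji->i', rbf(xs, x), np.linalg.solve(Kxx, rbf(x, xs)))
print(np.round(mean, 2))
```

    Unlike a blend of local linear models, the GP yields a single smooth predictor with a pointwise uncertainty (`var`) that grows away from the data, which is what makes it attractive in transient operating regimes.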

    A unifying review of linear Gaussian models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and by introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
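    The single generative model referred to above is the linear Gaussian state-space model x_{t+1} = A x_t + w_t, y_t = C x_t + v_t with Gaussian noises w_t ~ N(0, Q), v_t ~ N(0, R); the named methods arise as restrictions of (A, Q, R). The sketch below samples from the factor-analysis special case (A = 0, R diagonal) and checks the implied observation covariance; all dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
k, p, T = 2, 5, 200                     # latent dim, observed dim, samples

A = np.zeros((k, k))                    # A = 0: no dynamics -> static model
C = rng.standard_normal((p, k))         # loading matrix
Q = np.eye(k)                           # unit latent covariance
R = np.diag(rng.uniform(0.1, 0.5, p))   # diagonal noise -> factor analysis
                                        # (R = s2*I would give "sensible" PCA,
                                        #  nonzero A the Kalman filter model)

# Sample from  x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t.
x = np.zeros(k)
Y = []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(k), Q)
    Y.append(C @ x + rng.multivariate_normal(np.zeros(p), R))
Y = np.asarray(Y)

# Factor analysis implies cov(y) = C C^T + R; compare with the sample cov.
emp = np.cov(Y.T)
model = C @ C.T + R
print(np.round(np.abs(emp - model).mean(), 2))
```

    The same code with a nonzero `A` generates correlated sequences, which is exactly the move from factor analysis to the Kalman filter model that the review's unification makes explicit.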