Bayesian Nonparametric Inference of Switching Linear Dynamical Systems
Many complex dynamical phenomena can be effectively modeled by a system that
switches among a set of conditionally linear dynamical modes. We consider two
such models: the switching linear dynamical system (SLDS) and the switching
vector autoregressive (VAR) process. Our Bayesian nonparametric approach
utilizes a hierarchical Dirichlet process prior to learn an unknown number of
persistent, smooth dynamical modes. We additionally employ automatic relevance
determination to infer a sparse set of dynamic dependencies allowing us to
learn SLDS with varying state dimension or switching VAR processes with varying
autoregressive order. We develop a sampling algorithm that combines a truncated
approximation to the Dirichlet process with efficient joint sampling of the
mode and state sequences. The utility and flexibility of our model are
demonstrated on synthetic data, sequences of dancing honey bees, the IBOVESPA
stock index, and a maneuvering target tracking application.
Comment: 50 pages, 7 figures
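The abstract above describes models that switch among conditionally linear dynamical modes. A minimal sketch of the generative side of an SLDS is below; the two transition matrices, the sticky mode-transition probabilities, and the noise scale are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

# Minimal sketch: simulating a switching linear dynamical system (SLDS).
# A discrete mode sequence z_t follows a Markov chain; given z_t, the
# continuous state x_t evolves under that mode's linear dynamics.

rng = np.random.default_rng(0)

# Two hypothetical dynamical modes (assumed for illustration).
A = [np.array([[0.99, -0.05], [0.05, 0.99]]),   # mode 0: slow rotation
     np.array([[0.90, 0.00], [0.00, 0.90]])]    # mode 1: contraction
P = np.array([[0.95, 0.05],                     # sticky transitions favor
              [0.05, 0.95]])                    # persistent modes

T = 200
z = np.zeros(T, dtype=int)       # discrete mode sequence
x = np.zeros((T, 2))             # continuous state sequence
x[0] = rng.normal(size=2)
for t in range(1, T):
    z[t] = rng.choice(2, p=P[z[t - 1]])             # sample next mode
    x[t] = A[z[t]] @ x[t - 1] + 0.01 * rng.normal(size=2)
```

The sticky diagonal of `P` mimics the persistence that the paper's hierarchical Dirichlet process prior encourages; inference over the number of modes is the paper's contribution and is not sketched here.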
A Unifying Review of Linear Gaussian Models
Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
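The basic generative model the review unifies is linear-Gaussian: observations are a linear map of latent causes plus Gaussian noise, and the choice of noise covariance distinguishes the variants (diagonal gives factor analysis; spherical gives sensible PCA). A small sketch of that generative structure, with an assumed loading matrix, dimensions, and noise level:

```python
import numpy as np

# Sketch of the static linear-Gaussian generative model x = C z + v:
# latent z ~ N(0, I), noise v ~ N(0, R). A spherical R = eps * I
# corresponds to the sensible-PCA case discussed in the review.
# C, the dimensions, and eps are illustrative assumptions.

rng = np.random.default_rng(1)
k, p, n = 2, 5, 1000
C = rng.normal(size=(p, k))      # loading matrix (assumed)
eps = 0.1                        # spherical observation-noise variance

Z = rng.normal(size=(n, k))      # latent causes
X = Z @ C.T + np.sqrt(eps) * rng.normal(size=(n, p))

# The sample covariance of X approaches C C^T + eps * I as n grows.
S = np.cov(X, rowvar=False)
```

Swapping the spherical noise for a diagonal covariance in the same sampler yields factor analysis; that single switch is the kind of unification the review makes explicit.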
Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables
Understanding nonlinear dynamical systems (NLDSs) is challenging in a variety
of engineering and scientific fields. Dynamic mode decomposition (DMD), which
is a numerical algorithm for the spectral analysis of Koopman operators, has
been attracting attention as a way of obtaining global modal descriptions of
NLDSs without requiring explicit prior knowledge. However, since existing DMD
algorithms are in principle formulated based on the concatenation of scalar
observables, they are not directly applicable to data with dependent structures
among observables, which take, for example, the form of a sequence of graphs.
In this paper, we formulate Koopman spectral analysis for NLDSs with structures
among observables and propose an estimation algorithm for this problem. This
method can extract and visualize the underlying low-dimensional global dynamics
of NLDSs with structures among observables from data, which can be useful in
understanding the underlying dynamics of such NLDSs. To this end, we first
formulate the problem of estimating spectra of the Koopman operator defined in
vector-valued reproducing kernel Hilbert spaces, and then develop an estimation
procedure for this problem by reformulating tensor-based DMD. As a special case
of our method, we propose a method named Graph DMD, a numerical
algorithm for Koopman spectral analysis of graph dynamical systems using a
sequence of adjacency matrices. We investigate the empirical performance of our
method by using synthetic and real-world data.
Comment: 34 pages with 4 figures, Published in Neural Networks, 201
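The paper generalizes DMD from concatenated scalar observables to vector-valued RKHSs. For context, the scalar-observable baseline it builds on can be sketched in a few lines: standard (exact) DMD fits a linear operator to snapshot pairs and reads off Koopman spectra from its eigenvalues. The toy rotation dynamics below are an illustrative assumption.

```python
import numpy as np

# Sketch of standard (exact) DMD on snapshot data, the baseline that the
# paper extends to observables with dependent structure.

rng = np.random.default_rng(2)
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])  # rotation dynamics

# Build a trajectory and split it into snapshot pairs (x_t, x_{t+1}).
T = 50
X = np.zeros((2, T))
X[:, 0] = [1.0, 0.0]
for t in range(1, T):
    X[:, t] = A_true @ X[:, t - 1]
X0, X1 = X[:, :-1], X[:, 1:]

# DMD operator: least-squares fit X1 ≈ A X0 via the pseudoinverse.
A_dmd = X1 @ np.linalg.pinv(X0)
eigvals = np.linalg.eigvals(A_dmd)   # DMD (Koopman) eigenvalues
```

For this noise-free linear system the fitted operator recovers the true dynamics, so the DMD eigenvalues lie on the unit circle at angles ±theta; the paper's contribution is making this spectral analysis well-defined when each observable is itself structured, e.g. an adjacency matrix.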