System Identification for Nonlinear Control Using Neural Networks
An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent requirements on the approximating technique.
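A minimal sketch of the on-line identification idea, assuming a generic one-hidden-layer approximator whose weights are treated as the state of an extended Kalman filter; the network structure, noise levels, and all function names below are illustrative, not the controller or spline approximator described above.

    import numpy as np

    # Sketch: train a small feedforward approximator on line with an extended
    # Kalman filter by treating its weights as the filter state (random walk)
    # and the network output as the measurement. Illustrative only.

    def net(w, x, n_hidden=5):
        """One-hidden-layer approximator; w packs both weight layers."""
        W1 = w[:n_hidden * (x.size + 1)].reshape(n_hidden, x.size + 1)
        W2 = w[n_hidden * (x.size + 1):].reshape(1, n_hidden + 1)
        h = np.tanh(W1 @ np.append(x, 1.0))
        return (W2 @ np.append(h, 1.0))[0]

    def jacobian(w, x, eps=1e-6):
        """Numerical Jacobian of the network output with respect to the weights."""
        J = np.zeros(w.size)
        for i in range(w.size):
            dw = np.zeros(w.size)
            dw[i] = eps
            J[i] = (net(w + dw, x) - net(w - dw, x)) / (2 * eps)
        return J

    def ekf_step(w, P, x, y, q=1e-5, r=1e-2):
        """One EKF update: weights are the state, the network output the measurement."""
        P = P + q * np.eye(w.size)      # random-walk process noise on the weights
        H = jacobian(w, x)              # linearised measurement model
        S = H @ P @ H + r               # innovation variance (scalar output)
        K = P @ H / S                   # Kalman gain
        w = w + K * (y - net(w, x))     # correct the weights with the innovation
        P = P - np.outer(K, H @ P)
        return w, P

    # Illustrative usage on scalar input-output samples:
    n_w = 5 * 2 + 6                     # weight count for x.size == 1, n_hidden == 5
    w, P = np.zeros(n_w), np.eye(n_w)
    for x, y in [(np.array([0.1]), 0.2), (np.array([0.3]), 0.5)]:
        w, P = ekf_step(w, P, x, y)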
A Class of Logistic Functions for Approximating State-Inclusive Koopman Operators
An outstanding challenge in nonlinear systems theory is identification or
learning of a given nonlinear system's Koopman operator directly from data or
models. Advances in extended dynamic mode decomposition approaches and machine
learning methods have enabled data-driven discovery of Koopman operators, for
both continuous and discrete-time systems. Since Koopman operators are often
infinite-dimensional, they are approximated in practice using
finite-dimensional systems. The fidelity and convergence of a given
finite-dimensional Koopman approximation is a subject of ongoing research. In
this paper we introduce a class of Koopman observable functions that confer an
approximate closure property on their corresponding finite-dimensional
approximations of the Koopman operator. We derive error bounds for the fidelity
of this class of observable functions, as well as identify two key learning
parameters which can be used to tune performance. We illustrate our approach on
two classical nonlinear system models: the Van der Pol oscillator and the bistable toggle switch.
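A minimal extended dynamic mode decomposition sketch with a state-inclusive logistic dictionary, assuming an Euler-discretised Van der Pol system and arbitrarily chosen centres and steepness; these stand in for, but are not, the paper's observable class or its tuned learning parameters.

    import numpy as np

    # Sketch: lift state snapshots with logistic observables and fit a
    # finite-dimensional Koopman matrix by least squares (EDMD). Illustrative.

    def logistic_dictionary(x, centres, steepness=5.0):
        """State-inclusive dictionary: [1, x, elementwise logistic bumps]."""
        feats = [np.ones(1), x]
        for c in centres:
            feats.append(1.0 / (1.0 + np.exp(-steepness * (x - c))))
        return np.concatenate(feats)

    def edmd(X, Y, centres):
        """Least-squares K with Psi(x_next) ≈ K Psi(x) over snapshot pairs (X, Y)."""
        PsiX = np.stack([logistic_dictionary(x, centres) for x in X], axis=1)
        PsiY = np.stack([logistic_dictionary(y, centres) for y in Y], axis=1)
        A, *_ = np.linalg.lstsq(PsiX.T, PsiY.T, rcond=None)
        return A.T                      # K such that K @ Psi(x_k) ≈ Psi(x_{k+1})

    def vdp_step(x, dt=0.01, mu=1.0):
        """Euler-discretised Van der Pol oscillator (illustrative test system)."""
        return x + dt * np.array([x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]])

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(500, 2))           # sampled states
    Y = np.array([vdp_step(x) for x in X])          # their one-step images
    centres = rng.uniform(-2, 2, size=(10, 2))      # dictionary centres (a tuning knob)
    K = edmd(X, Y, centres)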
The Hitchhiker's Guide to Nonlinear Filtering
Nonlinear filtering is the problem of online estimation of a dynamic hidden
variable from incoming data and has vast applications in different fields,
ranging from engineering and machine learning to economics and the natural
sciences. We start our review of the theory of nonlinear filtering from the
simplest `filtering' task we can think of, namely static Bayesian inference.
From there we continue our journey through discrete-time models, which are
usually encountered in machine learning, and then generalize to and further
emphasize continuous-time filtering theory. The idea of changing the
probability measure connects and elucidates several aspects of the theory, such
as the parallels between the discrete- and continuous-time problems and between
different observation models. Furthermore, it gives insight into the
construction of particle filtering algorithms. This tutorial is targeted at
scientists and engineers and should serve as an introduction to the main ideas
of nonlinear filtering, and as a segue to more advanced and specialized literature.
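A minimal bootstrap particle filter sketch, the kind of algorithm whose construction the change-of-measure viewpoint clarifies; the scalar random-walk transition and Gaussian observation likelihood below are illustrative assumptions, not a model taken from the tutorial.

    import numpy as np

    # Sketch: bootstrap particle filter (propagate, reweight, resample) for a
    # generic discrete-time state-space model. Illustrative model choices only.

    def bootstrap_pf(observations, n_particles=1000, sig_x=0.1, sig_y=0.5):
        rng = np.random.default_rng(0)
        particles = rng.normal(0.0, 1.0, n_particles)        # sample from the prior
        estimates = []
        for y in observations:
            # propagate through the (assumed) random-walk transition density
            particles = particles + rng.normal(0.0, sig_x, n_particles)
            # reweight by the (assumed) Gaussian observation likelihood
            logw = -0.5 * ((y - particles) / sig_y) ** 2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimates.append(np.sum(w * particles))           # posterior-mean estimate
            # multinomial resampling to avoid weight degeneracy
            idx = rng.choice(n_particles, n_particles, p=w)
            particles = particles[idx]
        return np.array(estimates)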
Derivative observations in Gaussian Process models of dynamic systems
Gaussian processes provide an approach to nonparametric modelling which allows a straightforward combination of function and derivative observations in an empirical model. This is of particular importance in the identification of nonlinear dynamic systems from experimental data. 1) It allows us to combine derivative information, and its associated uncertainty, with normal function observations in the learning and inference process. This derivative information can be in the form of priors specified by an expert or identified from perturbation data close to equilibrium. 2) It allows a seamless fusion of multiple local linear models in a consistent manner, inferring consistent models and ensuring that integrability constraints are met. 3) It dramatically improves the computational efficiency of Gaussian process models for dynamic system identification by summarising large quantities of near-equilibrium data with a handful of linearisations, reducing the training set size, traditionally a problem for Gaussian process models.
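A minimal sketch of conditioning a Gaussian process jointly on function observations and derivative observations, using a one-dimensional squared-exponential kernel; the data, hyperparameters, and helper names are illustrative assumptions rather than the authors' experimental setup.

    import numpy as np

    # Sketch: joint GP over function values and derivatives. Differentiating the
    # squared-exponential kernel gives the required cross-covariances. Illustrative.

    def k_ff(a, b, s2=1.0, l=1.0):
        return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / l ** 2)

    def k_df(a, b, s2=1.0, l=1.0):
        # cov(df/dx at a, f at b) = d/da k(a, b)
        return -(a[:, None] - b[None, :]) / l ** 2 * k_ff(a, b, s2, l)

    def k_dd(a, b, s2=1.0, l=1.0):
        # cov(df/dx at a, df/dx at b) = d^2/(da db) k(a, b)
        d = a[:, None] - b[None, :]
        return (1.0 / l ** 2 - d ** 2 / l ** 4) * k_ff(a, b, s2, l)

    def gp_posterior_mean(x_f, y_f, x_d, y_d, x_star, noise=1e-6):
        """Posterior mean at x_star given function values y_f and derivatives y_d."""
        K = np.block([[k_ff(x_f, x_f), k_df(x_d, x_f).T],
                      [k_df(x_d, x_f), k_dd(x_d, x_d)]])
        K += noise * np.eye(K.shape[0])
        k_star = np.hstack([k_ff(x_star, x_f), k_df(x_d, x_star).T])
        y = np.concatenate([y_f, y_d])
        return k_star @ np.linalg.solve(K, y)

    # Example: a few function observations plus one derivative observation near x = 0.
    x_f = np.array([-2.0, 1.0, 2.0]); y_f = np.sin(x_f)
    x_d = np.array([0.0]);            y_d = np.cos(x_d)
    print(gp_posterior_mean(x_f, y_f, x_d, y_d, np.linspace(-2, 2, 5)))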
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an over-simplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g. of speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e. recognition) of human kinematics.
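A minimal sketch of the recognition idea: filter the hidden state of a generative tanh RNN with prediction-error corrections; the linear readout and the fixed update gain below stand in for the derived Bayesian update equations and are illustrative assumptions.

    import numpy as np

    # Sketch: a generative RNN produces latent dynamics and noisy observations;
    # a 'recognizing' pass decodes the latent state by exchanging predictions
    # and prediction errors. Fixed-gain correction used for illustration only.

    rng = np.random.default_rng(0)
    n_hidden, n_obs, T = 8, 2, 200
    W = rng.normal(0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_hidden))   # recurrent weights
    C = rng.normal(0, 1.0, (n_obs, n_hidden))                          # linear readout

    def generate(T):
        """Generative RNN: latent tanh dynamics plus noisy linear observations."""
        x, X, Y = np.zeros(n_hidden), [], []
        for _ in range(T):
            x = np.tanh(W @ x + rng.normal(0, 0.1, n_hidden))
            X.append(x)
            Y.append(C @ x + rng.normal(0, 0.1, n_obs))
        return np.array(X), np.array(Y)

    def recognize(Y, gain=0.2):
        """Decode latent states from observations via prediction-error messages."""
        x_hat, X_hat = np.zeros(n_hidden), []
        for y in Y:
            x_pred = np.tanh(W @ x_hat)         # prediction from the generative model
            eps = y - C @ x_pred                # prediction error on the observation
            x_hat = x_pred + gain * C.T @ eps   # correct the latent estimate
            X_hat.append(x_hat)
        return np.array(X_hat)

    X_true, Y_obs = generate(T)
    X_dec = recognize(Y_obs)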
Simultaneous Learning of Nonlinear Manifold and Dynamical Models for High-dimensional Time Series
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship: the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows the dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework against competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance. (National Science Foundation IIS 0308213, IIS 0329009, CNS 0202067)
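A minimal sketch of the piecewise-linear idea: partition the high-dimensional series into regions, fit a local linear (PCA) projection per region, and fit linear dynamics in each region's low-dimensional coordinates; the k-means partition and least-squares fits below are illustrative stand-ins for the paper's coupled graphical-model inference and learning.

    import numpy as np

    # Sketch: piecewise-linear manifold plus piecewise-linear dynamics. Illustrative.

    def kmeans(Y, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centres = Y[rng.choice(len(Y), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((Y[:, None] - centres[None]) ** 2).sum(-1), axis=1)
            centres = np.array([Y[labels == j].mean(axis=0) if np.any(labels == j)
                                else centres[j] for j in range(k)])
        return labels, centres

    def fit_piecewise_linear(Y, k=3, d=2):
        """Per-region PCA projection plus per-region linear dynamics in d dimensions."""
        labels, centres = kmeans(Y, k)
        models = []
        for j in range(k):
            Yj = Y[labels == j] - centres[j]
            _, _, Vt = np.linalg.svd(Yj, full_matrices=False)
            P = Vt[:d]                               # local linear manifold (PCA basis)
            X = (Y - centres[j]) @ P.T               # low-dimensional coordinates
            mask = labels[:-1] == j                  # transitions that start in region j
            A_row, *_ = np.linalg.lstsq(X[:-1][mask], X[1:][mask], rcond=None)
            models.append((P, centres[j], A_row.T))  # x_{t+1} ≈ A_row.T @ x_t in region j
        return labels, models

    # Illustrative usage on a synthetic high-dimensional series:
    Y = np.cumsum(np.random.default_rng(1).normal(size=(300, 20)), axis=0)
    labels, models = fit_piecewise_linear(Y)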