Identification of Non-linear Nonautonomous State Space Systems from Input-Output Measurements
This paper presents a method to determine a nonlinear state-space model from a finite number of measurements of the inputs and outputs. The method is based on embedding theory for nonlinear systems, and can be viewed as an extension of the subspace identification method for linear systems. The paper describes the underlying theory and provides some guidelines for using the method in practice. To illustrate its use, the identification method was applied to a second-order nonlinear system.
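A minimal sketch of the embedding idea described above, not the paper's exact algorithm: past input-output samples are stacked as a reconstructed state, and a nonlinear one-step-ahead predictor is fitted to that state by least squares. The lag, the polynomial regressor, and the synthetic second-order system below are illustrative assumptions; only numpy is used.

```python
import numpy as np

def build_embedding(u, y, lag):
    """Stack `lag`+1 past inputs and outputs as a reconstructed (embedded) state."""
    rows, targets = [], []
    for k in range(lag, len(y) - 1):
        rows.append(np.concatenate([y[k - lag:k + 1], u[k - lag:k + 1]]))
        targets.append(y[k + 1])
    return np.array(rows), np.array(targets)

def fit_polynomial_predictor(Z, y_next, degree=2):
    """Fit a polynomial one-step-ahead predictor by linear least squares."""
    # Elementwise powers of the embedding up to `degree` (no cross terms),
    # a cheap stand-in for a general nonlinear regressor.
    Phi = np.hstack([Z**d for d in range(1, degree + 1)] + [np.ones((len(Z), 1))])
    theta, *_ = np.linalg.lstsq(Phi, y_next, rcond=None)
    return theta, Phi

# Hypothetical data: a second-order nonlinear system driven by random input.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 2] + 0.1 * y[k - 1]**2 + u[k - 1]

Z, y_next = build_embedding(u, y, lag=2)
theta, Phi = fit_polynomial_predictor(Z, y_next)
print("one-step prediction RMS error:", np.sqrt(np.mean((Phi @ theta - y_next)**2)))
```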
Bilinear State Space Systems for Nonlinear Dynamical Modelling
We discuss the identification of multiple-input, multiple-output, discrete-time bilinear state space systems. We consider two identification problems. In the first case, the input to the system is a measurable white noise sequence. We show that it is possible to identify the system by solving a nonlinear optimization problem. The number of parameters in this optimization problem can be reduced by exploiting the principle of separable least squares. A subspace-based algorithm can be used to generate initial estimates for this nonlinear identification procedure. In the second case, the input to the system is not measurable. This makes it a much more difficult identification problem than the case with known inputs. At present, we can only solve this problem for a certain class of single-input, single-output bilinear state space systems, namely bilinear systems in phase-variable form.
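A minimal sketch of the separable least-squares idea for the measurable-input case, under stated assumptions rather than as the paper's implementation: for a single-input bilinear model x[k+1] = A x[k] + u[k] N x[k] + B u[k], y[k] = C x[k], the output map C enters linearly once the states are simulated, so it is eliminated by an inner linear least-squares fit and only (A, N, B) remain in the outer nonlinear search. The state dimension, the synthetic data, and the perturbed starting point (standing in for a subspace-based initial estimate) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

n = 2  # assumed state dimension

def simulate_states(params, u):
    A = params[:n * n].reshape(n, n)
    N = params[n * n:2 * n * n].reshape(n, n)
    B = params[2 * n * n:].reshape(n, 1)
    x = np.zeros((len(u), n))
    for k in range(len(u) - 1):
        x[k + 1] = A @ x[k] + u[k] * (N @ x[k]) + B.ravel() * u[k]
    return x

def residuals(params, u, y):
    X = simulate_states(params, u)
    # Inner (separable) step: the best C for these states by linear least squares.
    C, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ C - y

# Hypothetical data from a known bilinear system with white-noise input.
rng = np.random.default_rng(1)
u = rng.standard_normal(300)
true = np.concatenate([[0.6, 0.1, 0.0, 0.5], [0.05, 0.0, 0.0, 0.05], [1.0, 0.5]])
y = simulate_states(true, u) @ np.array([1.0, 0.0])

p0 = true + 0.1 * rng.standard_normal(true.shape)   # stand-in for a subspace initial estimate
sol = least_squares(residuals, p0, args=(u, y))      # outer search over (A, N, B) only
print("output RMS fit error:", np.sqrt(np.mean(sol.fun**2)))
```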
An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search
We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space. The output error identification problem is discussed, and its extension to maximum likelihood identification is explained. We show that the maximum likelihood framework yields parameter errors that converge to the Cramér-Rao bound. Furthermore, the implementation is shown to be fast and able to handle problems with large sample sizes.
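A minimal sketch of the output-error problem solved by a local gradient search, assuming a flat parameter vector rather than the paper's local parameterization: the model x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k] is simulated, and the simulated-minus-measured output residual is minimized. The state order, synthetic data, and perturbed starting point (standing in for a subspace initial estimate) are illustrative assumptions; scipy's generic least-squares solver replaces the tailored implementation.

```python
import numpy as np
from scipy.optimize import least_squares

n = 2  # assumed state order

def unpack(theta):
    A = theta[:n * n].reshape(n, n)
    B = theta[n * n:n * n + n].reshape(n, 1)
    C = theta[n * n + n:n * n + 2 * n].reshape(1, n)
    D = theta[-1:].reshape(1, 1)
    return A, B, C, D

def simulate(theta, u):
    A, B, C, D = unpack(theta)
    x = np.zeros((n, 1))
    y = np.zeros(len(u))
    for k, uk in enumerate(u):
        y[k] = (C @ x + D * uk).item()
        x = A @ x + B * uk
    return y

def output_error(theta, u, y_meas):
    # Output-error residual: simulated output minus measured output.
    return simulate(theta, u) - y_meas

# Hypothetical data: a stable second-order system with additive output noise.
rng = np.random.default_rng(2)
u = rng.standard_normal(400)
true = np.array([0.7, 0.2, -0.1, 0.5,  1.0, 0.0,  1.0, 1.0,  0.1])
y_meas = simulate(true, u) + 0.01 * rng.standard_normal(400)

theta0 = true + 0.05 * rng.standard_normal(true.shape)  # stand-in for a subspace initial estimate
sol = least_squares(output_error, theta0, args=(u, y_meas))  # local gradient-based search
print("output-error RMS after fit:", np.sqrt(np.mean(sol.fun**2)))
```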